Man-machine interaction method and device, storage medium and electrical equipment

Document No.: 1337225  Publication date: 2020-07-17  Views: 8  Language: Chinese

Reading note: This publication, 一种人机交互方法、装置、存储介质及电器设备 (Man-machine interaction method and device, storage medium and electrical equipment), was created by Liu Kang, Wang Zi, Li Baoshui and Wang Jin on 2020-02-24. Its main content is as follows: The invention discloses a human-computer interaction method and device, a storage medium and an electrical device. The method comprises: recognizing, through a binocular camera, gesture information of a user in the environment of the electrical device; determining a control instruction for the electrical device according to that gesture information; and controlling the electrical device to execute the control instruction, so as to realize human-computer interaction between the electrical device and the user in its environment. The scheme of the invention can solve the problem that the human-computer interaction of intelligent devices is not diversified enough, and achieves the effect of making such interaction more diversified.

1. A human-computer interaction method, comprising:

recognizing gesture information of a user in the environment to which the electrical equipment belongs through a binocular camera;

determining a control instruction of the electrical equipment according to gesture information of a user in the environment to which the electrical equipment belongs;

and controlling the electrical equipment to execute the control instruction according to the control instruction of the electrical equipment so as to realize human-computer interaction between the electrical equipment and a user in the environment to which the electrical equipment belongs.

2. The human-computer interaction method according to claim 1, wherein recognizing gesture information of the user in the environment to which the electrical equipment belongs through the binocular camera comprises:

positioning a user in the environment to which the electrical equipment belongs by using a binocular camera to obtain face information of the user in the environment to which the electrical equipment belongs;

and recognizing, with the binocular camera and according to the face information of the user in the environment to which the electrical equipment belongs, the gesture information of the user at the position information of the user in the environment to which the electrical equipment belongs.

3. The human-computer interaction method of claim 2, wherein positioning a user in an environment to which the electrical device belongs by using a binocular camera comprises:

acquiring whole-body image information of a user in the environment to which the electrical equipment belongs, wherein the whole-body image information is acquired by a binocular camera;

identifying and locating the user in the environment to which the electrical equipment belongs according to the whole-body image information of the user, to obtain the position information of the user in the environment to which the electrical equipment belongs;

further acquiring a face image of the user in the environment of the electrical equipment, which is acquired by the binocular camera, according to the position information of the user in the environment of the electrical equipment;

and determining, according to the face image of the user in the environment to which the electrical equipment belongs, the face information of the user at the position information of the user in the environment to which the electrical equipment belongs.

4. The human-computer interaction method according to claim 2 or 3, wherein recognizing gesture information of the user at the position information of the user in the environment to which the electrical equipment belongs by using a binocular camera comprises:

determining whether the face of the user in the environment of the electrical equipment faces the electrical equipment or not according to the face information of the user at the position information of the user in the environment of the electrical equipment;

if the face of the user in the environment of the electrical equipment faces the electrical equipment, acquiring a limb image of the user in the environment of the electrical equipment;

and determining the limb information of the user in the environment to which the electrical equipment belongs according to the limb image of the user, and taking the limb information as the gesture information of the user in the environment to which the electrical equipment belongs.

5. The human-computer interaction method according to any one of claims 1 to 3, wherein the gesture information comprises: category information of the gesture, direction information of the gesture, and/or depth information of the gesture.

6. The human-computer interaction method according to any one of claims 1 to 3, further comprising:

outputting gesture information of a user in the environment to which the electrical equipment belongs, a control instruction of the electrical equipment, an execution process of the control instruction by the electrical equipment, and/or an execution result of the control instruction by the electrical equipment; wherein the outputting comprises: displaying on a display screen of the electrical equipment, sending to a set client, and/or broadcasting through a voice system of the electrical equipment.

7. A human-computer interaction device, comprising:

the identification unit is used for identifying gesture information of a user in the environment to which the electrical equipment belongs through the binocular camera;

the determining unit is used for determining a control instruction of the electrical equipment according to gesture information of a user in the environment to which the electrical equipment belongs;

and the control unit is used for controlling the electrical equipment to execute the control instruction according to the control instruction of the electrical equipment so as to realize the man-machine interaction between the electrical equipment and a user in the environment to which the electrical equipment belongs.

8. The human-computer interaction device of claim 7, wherein the recognition unit recognizes the gesture information of the user in the environment to which the electrical equipment belongs through a binocular camera, and comprises:

positioning a user in the environment to which the electrical equipment belongs by using a binocular camera to obtain face information of the user in the environment to which the electrical equipment belongs;

and recognizing, with the binocular camera and according to the face information of the user in the environment to which the electrical equipment belongs, the gesture information of the user at the position information of the user in the environment to which the electrical equipment belongs.

9. The human-computer interaction device of claim 8, wherein the identification unit uses a binocular camera to locate the user in the environment to which the electrical equipment belongs, and comprises:

acquiring whole-body image information of a user in the environment to which the electrical equipment belongs, wherein the whole-body image information is acquired by a binocular camera;

identifying and locating the user in the environment to which the electrical equipment belongs according to the whole-body image information of the user, to obtain the position information of the user in the environment to which the electrical equipment belongs;

further acquiring a face image of the user in the environment of the electrical equipment, which is acquired by the binocular camera, according to the position information of the user in the environment of the electrical equipment;

and determining, according to the face image of the user in the environment to which the electrical equipment belongs, the face information of the user at the position information of the user in the environment to which the electrical equipment belongs.

10. The human-computer interaction device according to claim 8 or 9, wherein the recognition unit recognizes the gesture information of the user at the position information of the user in the environment to which the electrical equipment belongs by using a binocular camera, and comprises:

determining whether the face of the user in the environment of the electrical equipment faces the electrical equipment or not according to the face information of the user at the position information of the user in the environment of the electrical equipment;

if the face of the user in the environment of the electrical equipment faces the electrical equipment, acquiring a limb image of the user in the environment of the electrical equipment;

and determining the limb information of the user in the environment to which the electrical equipment belongs according to the limb image of the user, and taking the limb information as the gesture information of the user in the environment to which the electrical equipment belongs.

11. The human-computer interaction device of any one of claims 7 to 9, wherein the gesture information comprises: category information of the gesture, direction information of the gesture, and/or depth information of the gesture.

12. A human-computer interaction device according to any one of claims 7 to 9, further comprising:

the control unit is further used for outputting gesture information of a user in the environment to which the electrical equipment belongs, a control instruction of the electrical equipment, an execution process of the control instruction by the electrical equipment, and/or an execution result of the control instruction by the electrical equipment; wherein the outputting comprises: displaying on a display screen of the electrical equipment, sending to a set client, and/or broadcasting through a voice system of the electrical equipment.

13. An electrical device, comprising: a human-computer interaction device as claimed in any one of claims 7 to 12;

alternatively, it comprises:

a processor for executing a plurality of instructions;

a memory to store a plurality of instructions;

wherein the plurality of instructions are to be stored by the memory, and loaded and executed by the processor, so as to carry out the human-computer interaction method of any one of claims 1 to 6.

14. A storage medium having a plurality of instructions stored therein; the plurality of instructions are to be loaded by a processor to perform the human-computer interaction method of any one of claims 1 to 6.

Technical Field

The invention belongs to the technical field of smart homes, and particularly relates to a human-computer interaction method, a human-computer interaction device, a storage medium and electrical equipment, in particular to a human-computer interaction method and device, a storage medium and electrical equipment for an intelligent device based on binocular-vision gesture control.

Background

Intelligent devices are developing rapidly, and voice-only control of an intelligent device can no longer meet users' demands; the main reason is that its human-computer interaction is not diversified enough.

The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.

Disclosure of Invention

The invention aims to provide a human-computer interaction method, a human-computer interaction device, a storage medium and electrical equipment, so as to solve the problem that the human-computer interaction of intelligent devices is not diversified enough, thereby making the human-computer interaction of intelligent devices more diversified.

The invention provides a man-machine interaction method, which comprises the following steps: recognizing gesture information of a user in the environment of the electrical equipment through the binocular camera; determining a control instruction of the electrical equipment according to gesture information of a user in the environment to which the electrical equipment belongs; and controlling the electrical equipment to execute the control instruction according to the control instruction of the electrical equipment so as to realize human-computer interaction between the electrical equipment and a user in the environment to which the electrical equipment belongs.
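The three steps above can be sketched as a minimal pipeline. The following Python sketch is purely illustrative and not part of the invention; the gesture names and the command table are hypothetical examples:

```python
# Illustrative sketch of the claimed pipeline: recognize gesture ->
# determine control instruction -> execute it on the device.
# Gesture names and the command table below are hypothetical examples.

GESTURE_TO_COMMAND = {
    "palm_push": "power_on",
    "swipe_left": "previous_mode",
    "swipe_right": "next_mode",
}

def determine_instruction(gesture):
    """Map recognized gesture information to a device control instruction."""
    return GESTURE_TO_COMMAND.get(gesture)

def execute(instruction):
    """Stand-in for the electrical equipment executing the instruction."""
    return "executed:" + instruction

def interact(gesture):
    """Full interaction loop; unrecognized gestures are ignored (None)."""
    instruction = determine_instruction(gesture)
    return execute(instruction) if instruction is not None else None
```

In a real device, `determine_instruction` would be backed by the gesture model library the specification mentions rather than a fixed dictionary.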

Optionally, recognizing the gesture information of the user in the environment to which the electrical equipment belongs through the binocular camera includes: locating the user in the environment to which the electrical equipment belongs with the binocular camera to obtain the face information of the user; and recognizing, with the binocular camera and according to the face information of the user, the gesture information of the user at the position information of the user in the environment to which the electrical equipment belongs.

Optionally, locating the user in the environment to which the electrical equipment belongs with the binocular camera includes: acquiring whole-body image information of the user, acquired by the binocular camera; identifying and locating the user according to the whole-body image information, to obtain the position information of the user in the environment to which the electrical equipment belongs; further acquiring, according to the position information of the user, a face image of the user acquired by the binocular camera; and determining, according to the face image, the face information of the user at the position information of the user in the environment to which the electrical equipment belongs.

Optionally, recognizing the gesture information of the user at the position information of the user in the environment to which the electrical equipment belongs with the binocular camera includes: determining, according to the face information of the user at the position information, whether the face of the user faces the electrical equipment; if the face of the user faces the electrical equipment, acquiring a limb image of the user; and determining the limb information of the user according to the limb image, and taking the limb information as the gesture information of the user in the environment to which the electrical equipment belongs.

Optionally, the gesture information includes: category information of the gesture, direction information of the gesture, and/or depth information of the gesture.

Optionally, the method further comprises: outputting the gesture information of the user in the environment to which the electrical equipment belongs, the control instruction of the electrical equipment, an execution process of the control instruction by the electrical equipment, and/or an execution result of the control instruction by the electrical equipment; wherein the outputting comprises: displaying on a display screen of the electrical equipment, sending to a set client, and/or broadcasting through a voice system of the electrical equipment.

In another aspect, the invention provides a human-computer interaction device, including: the identification unit is used for identifying gesture information of a user in the environment to which the electrical equipment belongs through the binocular camera; the determining unit is used for determining a control instruction of the electrical equipment according to gesture information of a user in the environment to which the electrical equipment belongs; and the control unit is used for controlling the electrical equipment to execute the control instruction according to the control instruction of the electrical equipment so as to realize the man-machine interaction between the electrical equipment and a user in the environment to which the electrical equipment belongs.

Optionally, the recognition unit recognizing the gesture information of the user in the environment to which the electrical equipment belongs through the binocular camera includes: locating the user in the environment to which the electrical equipment belongs with the binocular camera to obtain the face information of the user; and recognizing, with the binocular camera and according to the face information of the user, the gesture information of the user at the position information of the user in the environment to which the electrical equipment belongs.

Optionally, the recognition unit locating the user in the environment to which the electrical equipment belongs with the binocular camera includes: acquiring whole-body image information of the user, acquired by the binocular camera; identifying and locating the user according to the whole-body image information, to obtain the position information of the user in the environment to which the electrical equipment belongs; further acquiring, according to the position information of the user, a face image of the user acquired by the binocular camera; and determining, according to the face image, the face information of the user at the position information of the user in the environment to which the electrical equipment belongs.

Optionally, the recognition unit recognizing the gesture information of the user at the position information of the user in the environment to which the electrical equipment belongs with the binocular camera includes: determining, according to the face information of the user at the position information, whether the face of the user faces the electrical equipment; if the face of the user faces the electrical equipment, acquiring a limb image of the user; and determining the limb information of the user according to the limb image, and taking the limb information as the gesture information of the user in the environment to which the electrical equipment belongs.

Optionally, the gesture information includes: category information of the gesture, direction information of the gesture, and/or depth information of the gesture.

Optionally, the control unit is further used for outputting the gesture information of the user in the environment to which the electrical equipment belongs, the control instruction of the electrical equipment, an execution process of the control instruction by the electrical equipment, and/or an execution result of the control instruction by the electrical equipment; wherein the outputting comprises: displaying on a display screen of the electrical equipment, sending to a set client, and/or broadcasting through a voice system of the electrical equipment.

In accordance with another aspect of the present invention, there is provided an electrical apparatus, including: the human-computer interaction device is described above.

In accordance with the above method, a further aspect of the present invention provides a storage medium in which a plurality of instructions are stored; the instructions are to be loaded by a processor to execute the above human-computer interaction method.

In accordance with the above method, another aspect of the present invention provides an electrical device, including: a processor for executing a plurality of instructions; and a memory for storing the plurality of instructions; wherein the instructions are stored by the memory, and loaded and executed by the processor to carry out the above human-computer interaction method.

According to the scheme of the invention, the binocular camera is used to locate the user's position, and gesture recognition is performed at the located position to realize human-computer interaction, so that control accuracy can be ensured.

Furthermore, according to the scheme of the invention, locating the user with the binocular camera and performing gesture recognition at the located position realizes visual control that assists the voice system, which can enhance the user experience.

Furthermore, according to the scheme of the invention, the change of depth information during the user's operation is extracted, which adds depth-based gesture control and improves the diversity of interaction.

Furthermore, according to the scheme of the invention, the voice system can output speech during gesture control to enrich the human-computer interaction experience and vividly feed back the running state of the device, improving the user's experience.

Furthermore, according to the scheme of the invention, the binocular camera locates the user's position and the related control information is displayed on the display screen in association with the user, which can ensure control accuracy and improve the user's efficiency and experience.

Therefore, according to the scheme of the invention, the binocular camera locates the user's position, the user's operation gestures at the located position are recognized, and the obtained recognition information is converted into corresponding control instructions to realize human-computer interaction; this solves the problem that the human-computer interaction of intelligent devices is not diversified enough, and makes it more diversified.

Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.

The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.

Drawings

FIG. 1 is a flowchart illustrating a human-computer interaction method according to an embodiment of the present invention;

FIG. 2 is a schematic flow chart illustrating an embodiment of identifying gesture information of a user in an environment to which an electrical device belongs by using a binocular camera in the method of the present invention;

FIG. 3 is a schematic flow chart illustrating an embodiment of positioning a user in an environment to which an electrical device belongs by using a binocular camera in the method of the present invention;

FIG. 4 is a schematic flow chart illustrating an embodiment of the method of the present invention in which a binocular camera is used to identify gesture information of a user at position information of the user in an environment to which the electrical device belongs;

FIG. 5 is a schematic structural diagram of a human-computer interaction device according to an embodiment of the present invention;

FIG. 6 is a system diagram of an embodiment of an electrical device of the present invention;

FIG. 7 is a schematic diagram of a human-computer interaction control flow of an embodiment of an electrical device according to the present invention;

FIG. 8 is a schematic view of the positioning principle of the binocular camera according to an embodiment of the electrical apparatus of the present invention.

The reference numbers in the embodiments of the present invention are as follows, in combination with the accompanying drawings:

102-an identification unit; 104-a determination unit; 106-control unit.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

According to an embodiment of the present invention, a human-computer interaction method is provided, as shown in fig. 1, which is a schematic flow chart of an embodiment of the method of the present invention. The man-machine interaction method can comprise the following steps: step S110 to step S130.

In step S110, gesture information of the user in the environment to which the electrical apparatus belongs is recognized by the binocular camera.

For example: after the system is started, the user can choose whether to enable depth gestures. Once enabled, the binocular camera starts working immediately and collects images of the environment; the binocular camera transmits the image information to the main controller of the electrical equipment for image processing, from which the gesture information is obtained.

The gesture information may include: category information of the gesture, direction information of the gesture, and/or depth information of the gesture.

For example: the binocular camera can be used to locate the gesture information for human gesture control, which may include information such as the category, direction and depth of the gesture; this is compatible with the voice system and realizes gesture control by the user. Unlike conventional gestures collected by a monocular camera, the gesture information may include depth information. With the binocular camera, the change of depth information during the user's operation can be extracted, improving the accuracy and diversity of gesture control.
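To illustrate how a change of depth information during an operation might be turned into a gesture category, here is a minimal sketch; the threshold value and the push/pull labels are assumptions for illustration, not taken from the patent:

```python
def classify_depth_motion(depths, threshold=0.05):
    """Classify longitudinal hand motion from a sequence of hand-depth
    samples in metres (e.g. one sample per stereo frame).
    A net move toward the camera reads as "push", away as "pull"."""
    if len(depths) < 2:
        return "none"
    delta = depths[-1] - depths[0]
    if delta < -threshold:
        return "push"  # depth decreased: hand moved toward the camera
    if delta > threshold:
        return "pull"  # depth increased: hand moved away
    return "none"
```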

For example: the gesture recognition may include hand-shape recognition and spatial-position recognition. Based on depth information, a gesture in which the user's hand moves longitudinally in space can be recognized, for example to realize a key-press operation; the pressing effect can be displayed on the UI (user interface), enhancing the user's operation experience. In addition, a grabbing gesture recognized from depth information can be used for interface switching, file transfer, and the like.
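The key-press example can be pictured as a pattern in the depth track alone: a dip toward the camera followed by a return near the starting depth. A hedged sketch, with the dip size chosen arbitrarily for illustration:

```python
def detect_press(depths, dip=0.04):
    """Detect a press gesture in a hand-depth track (metres): the hand
    moves at least `dip` toward the camera, then returns near baseline."""
    if len(depths) < 3:
        return False
    baseline = depths[0]
    pressed_deep_enough = (baseline - min(depths)) > dip
    returned_to_baseline = abs(depths[-1] - baseline) < dip / 2
    return pressed_deep_enough and returned_to_baseline
```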

Therefore, the diversity and convenience of gesture control can be improved through various forms of gesture information.

Optionally, in step S110, a specific process of recognizing gesture information of the user in the environment to which the electrical apparatus belongs through the binocular camera may be referred to the following exemplary description.

With reference to the flowchart of fig. 2 showing an embodiment of the method for recognizing gesture information of a user in an environment to which an electrical apparatus belongs by using a binocular camera, a specific process of recognizing gesture information of a user in an environment to which an electrical apparatus belongs by using a binocular camera in step S110 is further described, which may include: step S210 and step S220.

Step S210, positioning a user in the environment to which the electrical equipment belongs by using the binocular camera to obtain face information of the user in the environment to which the electrical equipment belongs.

More optionally, in step S210, a specific process of positioning the user in the environment to which the electrical apparatus belongs by using a binocular camera to obtain the face information of the user in the environment to which the electrical apparatus belongs may be referred to as the following exemplary description.

The following further describes, with reference to a schematic flow chart of an embodiment of the method of the present invention shown in fig. 3, where a binocular camera is used to locate a user in an environment to which the electrical equipment belongs, a specific process of locating a user in an environment to which the electrical equipment belongs in step S210 with a binocular camera, and the specific process may include: step S310 to step S340.

Step S310, acquiring the whole-body image information of the user in the environment of the electrical equipment, acquired by the binocular camera.

Step S320, identifying and locating the user in the environment to which the electrical apparatus belongs according to the whole-body image information of the user, to obtain the position information of the user in the environment to which the electrical apparatus belongs.

Step S330, further acquiring the face image of the user in the environment of the electrical equipment acquired by the binocular camera according to the position information of the user in the environment of the electrical equipment.

Step S340 is to determine the face information of the user at the position information of the user in the environment to which the electrical equipment belongs according to the face image of the user in the environment to which the electrical equipment belongs.

For example: the binocular camera switches between working modes. When no user is present, its image processing only performs person detection, and whether the user intends a gesture operation is confirmed by judging the orientation of the user's face. Once the user is confirmed to be in the control state, the gesture recognition area in the image is located based on the user's face information, which improves recognition efficiency. That is to say, the binocular camera is used to locate the user's position, and gesture control is realized on that basis.

For example: the binocular camera can be used to locate the user's position; face recognition is first needed to find that position, and the related control information is then displayed on the display screen in association with it.

The binocular camera works much like a pair of human eyes: distance is determined mainly by computing the parallax between the two images. That is, the binocular camera does not need to know what an object is in order to measure its distance. For example: the change of the hand's position in space (its three-dimensional coordinates) is tracked, depth information is calculated and compared with a pre-installed gesture model library, and control information is output. The result of this judgment can optionally be fed back to the display interface (selectable by the user).
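
The parallax calculation mentioned above follows the standard stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two lenses, and d the disparity between the left and right images. The focal length and baseline values below are illustrative, not values from this specification.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard pinhole stereo relation: Z = f * B / d.
    focal_px     - focal length in pixels
    baseline_m   - distance between the two lenses, in metres
    disparity_px - horizontal pixel shift of the same point
                   between the left and right images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 6 cm baseline.
# A 35 px disparity then corresponds to a point 1.2 m away.
z = depth_from_disparity(700.0, 0.06, 35.0)
```

Note the inverse relation: a nearer hand produces a larger disparity, which is why depth changes during a gesture are directly recoverable from the two images.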

Thus, by performing face recognition and localization with the binocular camera, the user's face information and position information can be acquired accurately, providing precise localization for gesture recognition and helping to improve its accuracy.

Step S220, recognizing, with the binocular camera, the gesture information of the user at the user's position in the environment to which the electrical equipment belongs, according to the user's face information.

For example: the binocular camera collects images of the environment and transmits the image information to the main controller of the electrical equipment for image processing; the user's position in the environment is identified for localization, the user's face is recognized, and the image area containing the gesture is then located and recognized.

For example: the user's position is first located by face recognition; starting from the face position, the search can then be shifted, in camera pixel coordinates, to the region where the user's limbs are expected, which speeds up gesture recognition. Moreover, locating by face recognition effectively avoids false triggering by other people in the environment.

Thus, by locating and recognizing the user's face information with the binocular camera and then performing gesture recognition based on the face position, inaccurate recognition and misoperation can be avoided and control accuracy improved.

More optionally, in step S220, the specific process of recognizing, with the binocular camera, the gesture information of the user at the user's position in the environment to which the electrical equipment belongs may be as in the following exemplary description.

The specific process in step S220 of recognizing, with the binocular camera, the gesture information of the user at the user's position in the environment to which the electrical equipment belongs is further described below with reference to the flowchart of fig. 4, which shows an embodiment of the method of the invention; the process may include: step S410 to step S430.

Step S410, determining, according to the face information of the user at the user's position in the environment to which the electrical equipment belongs, whether the user's face is oriented toward the electrical equipment.

Step S420, if the face of the user in the environment to which the electrical equipment belongs is oriented toward the electrical equipment, acquiring the limb image of that user.

Step S430, determining the limb information of the user in the environment to which the electrical apparatus belongs according to the limb image of the user in the environment to which the electrical apparatus belongs, so as to determine the limb information of the user in the environment to which the electrical apparatus belongs as the gesture information of the user in the environment to which the electrical apparatus belongs.

For example: the binocular camera collects images of the environment, identifies and locates the user's position, recognizes the user's face, and then locates and recognizes the image area containing the gesture. When the user's face in the image is oriented toward the equipment, the user's gesture is recognized. Specifically, the binocular camera stays in a face-detection state, waiting for a user to appear in the operation position (the operation position is the area in front of the camera in which depth recognition is possible; the user is reminded of this on first use). When the user's face is found in the operation area and its orientation roughly faces the screen, the user is judged to be the manipulator and gesture recognition is carried out.
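
The "approximately facing the screen" gate described above can be sketched as follows. The yaw and pitch angles would come from a head-pose estimator, and the 30-degree tolerance is an illustrative assumption, not a value from this specification.

```python
FACING_TOLERANCE_DEG = 30.0  # illustrative tolerance for "roughly facing"

def is_manipulator(face_yaw_deg, face_pitch_deg, in_operation_area):
    """The user is treated as the manipulator only when they stand in
    the operation area AND their face roughly points at the device."""
    return (in_operation_area
            and abs(face_yaw_deg) <= FACING_TOLERANCE_DEG
            and abs(face_pitch_deg) <= FACING_TOLERANCE_DEG)

def process_frame(face_yaw_deg, face_pitch_deg, in_operation_area):
    """Gesture recognition runs only for a confirmed manipulator,
    which avoids false triggers from bystanders."""
    if not is_manipulator(face_yaw_deg, face_pitch_deg, in_operation_area):
        return "waiting"        # stay in the face-detection state
    return "recognize_gesture"  # proceed to limb/gesture recognition

state = process_frame(face_yaw_deg=10.0, face_pitch_deg=-5.0, in_operation_area=True)
```

Someone looking away, or outside the operation area, keeps the system in its waiting state, matching the false-trigger avoidance described above.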

For example: the equipment uses the binocular camera for image recognition; the image recognition first completes person localization and judges whether the current user is in control, and only when it confirms that the user is indeed operating the equipment does it proceed to match the user's control gesture.

Therefore, the limb gesture is recognized on the basis of accurately localized face recognition, and only when the user is determined to intend a gesture operation; this makes gesture control more reliable and accurate.

In step S120, the control instruction of the electrical equipment is determined according to the gesture information of the user in the environment to which the electrical equipment belongs. For example: based on a set correspondence between set gesture information and set control instructions, the set control instruction whose set gesture information matches the user's gesture information is looked up; that instruction is determined as the control instruction of the electrical equipment corresponding to the user's gesture information.
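
The correspondence lookup in step S120 amounts to a table from set gestures to set control instructions. The gesture names mirror the basic function gestures listed in this document (home, return, confirm, cancel, page control); the instruction identifiers are illustrative.

```python
# Set correspondence between set gesture information and set control
# instructions (illustrative identifiers).
GESTURE_TO_COMMAND = {
    "home":        "GO_HOME",
    "return":      "GO_BACK",
    "confirm":     "CONFIRM",
    "cancel":      "CANCEL",
    "swipe_left":  "PAGE_NEXT",
    "swipe_right": "PAGE_PREV",
}

def control_instruction_for(gesture):
    """Step S120: return the control instruction whose set gesture
    matches the recognized gesture, or None when nothing matches
    (in which case the equipment simply does nothing)."""
    return GESTURE_TO_COMMAND.get(gesture)

cmd = control_instruction_for("confirm")
```

An unmatched gesture yields no instruction, so stray hand movements that happen to be detected do not drive the equipment.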

In step S130, the electrical apparatus is controlled to execute the control instruction according to the control instruction of the electrical apparatus, so as to implement human-computer interaction between the electrical apparatus and a user in an environment to which the electrical apparatus belongs.

For example: the gesture information is converted into a corresponding control instruction, the main controller completes the corresponding control, the voice system cooperates to complete the actual interaction, and the display screen shows the gesture recognition result to strengthen control feedback. The control instructions may include basic function gestures such as home, return, screen page control, confirm and cancel, which are easy to recognize and suffice for the basic operation of the equipment. Because the binocular camera can recognize depth, the reliability of gesture recognition is improved.

For example: using the binocular camera for gesture recognition allows the user's gesture information to be captured much more completely, ensuring control accuracy; combined with gesture operation, faster and more intuitive control can be realized, improving user experience.

Therefore, performing gesture recognition with the binocular camera captures the user's gesture information much more completely, ensures control accuracy and user experience, and makes the human-computer interaction between the electrical equipment and the user more natural and diversified.

In an alternative embodiment, the method may further include: the process of outputting the human-computer interaction process may specifically include: and outputting gesture information of a user in the environment to which the electrical equipment belongs, a control instruction of the electrical equipment, an execution process of the control instruction executed by the electrical equipment and/or an execution result of the control instruction executed by the electrical equipment. Wherein, the output may include: and displaying on a display screen of the electrical equipment, sending to a set client, and/or broadcasting through a voice system of the electrical equipment.
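
The output step above fans the same message out over the selected channels (display screen, set client, and/or voice system). A minimal sketch, in which the channel functions are hypothetical stand-ins that merely record messages:

```python
def make_outputs(log):
    """Build the three output channels described above; each one here
    just records (channel, message) for demonstration purposes."""
    return {
        "display": lambda msg: log.append(("display", msg)),
        "client":  lambda msg: log.append(("client", msg)),
        "voice":   lambda msg: log.append(("voice", msg)),
    }

def report(outputs, channels, message):
    """Send one human-computer-interaction message (gesture info,
    instruction, execution process or result) over each selected channel."""
    for name in channels:
        outputs[name](message)

log = []
outputs = make_outputs(log)
report(outputs, ["display", "voice"], "Opened music for you")
```

In a real device the display lambda would draw on the screen and the voice lambda would drive the speech system; the fan-out structure is the point here.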

For example: gesture control based on binocular vision is compatible with voice control. Specifically, using the binocular camera for gesture recognition allows the user's gesture information to be captured much more completely, ensuring control accuracy; an auxiliary voice system enables intuitive control, enhances user experience and improves usage efficiency.

For example: after the user's gesture is found and recognized, the controller converts the gesture information into corresponding control parameters and completes the control, the voice system cooperates to complete the human-computer interaction, and the display screen completes the state display. That is to say, voice operation can still be used alongside control based on gestures recognized by the binocular camera; when gesture operation is used, the user is prompted under certain feedback conditions, which enriches the interaction experience.

For example: the display screen can show the recognized control information while the system executes the control instruction, and the resulting output information is shown on the screen once the control is finished. The voice system can produce voice output during gesture control to enrich the human-computer interaction experience, and the running state of the equipment can be fed back vividly.

For example: voice interaction can be added in the gesture operation, the device feeds back to the user through voice, and the user is prompted to carry out related operation. The user can realize the opening or closing of gesture operation by using voice so as to meet the actual requirement of the user. The gesture operation is combined with the display screen, and the display screen can also perform display feedback when the gesture operation is used under the condition of displaying a page so as to prompt a user whether the identification operation is in accordance with expectation. For example: the control system can complete control feedback and report control result information by matching with a voice system, such as: open music for you, close XXX for you, operate successfully, etc.

Therefore, output such as display and voice is provided on the basis of gesture control, so that the user can learn the working condition of the electrical equipment at any time, making use more convenient and more user-friendly.

Extensive experimental verification shows that, with the technical scheme of this embodiment, the user's position is located with the binocular camera and gesture recognition is performed based on that position to realize human-computer interaction, which ensures control accuracy.

According to the embodiment of the invention, the invention further provides a man-machine interaction device corresponding to the man-machine interaction method. Referring to fig. 5, a schematic diagram of an embodiment of the apparatus of the present invention is shown. The human-computer interaction device can comprise: a recognition unit 102, a determination unit 104 and a control unit 106.

In an alternative example, the recognition unit 102 may be configured to recognize gesture information of a user in an environment where the electrical apparatus belongs through a binocular camera. The specific function and processing of the recognition unit 102 are referred to in step S110.

For example: after the system starts, the user can select whether to enable depth gestures; once enabled, the binocular camera starts working immediately, collects images of the environment, and transmits the image information to the main controller of the electrical equipment for image processing to obtain the gesture information.

The gesture information may include: category information of the gesture, direction information of the gesture, and/or depth information of the gesture.

For example: the binocular camera can be used to locate the gesture information used for gesture control, which may include the category, direction, depth and other information of the gesture, and is compatible with a voice system to realize the user's gesture control. The gesture information may include depth-information gestures rather than only the conventional gestures collected by a monocular camera. With the binocular camera, the change of depth information during the user's operation can be extracted, improving the accuracy and diversity of gesture control.

For example: gesture recognition may include hand-shape recognition and spatial-position recognition. A user gesture recognized from depth information can move longitudinally in space, for example to realize a key-press operation; the UI can display the pressing effect, enhancing the operating experience. In addition, a grab gesture recognized from depth information can be used for interface switching, file transfer and the like.
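
A longitudinal "press" of the kind just described can be detected from the hand's depth track alone: the depth dips toward the camera and then returns. A minimal sketch; the 0.10 m travel threshold is an illustrative assumption.

```python
PRESS_TRAVEL_M = 0.10  # illustrative minimum press travel, in metres

def detect_press(depth_samples):
    """depth_samples: hand depth (metres) over time, as measured by the
    binocular camera. A press is a dip of at least PRESS_TRAVEL_M below
    the starting depth, followed by a return toward the start."""
    if len(depth_samples) < 3:
        return False
    start = depth_samples[0]
    nearest = min(depth_samples)
    # the hand must come back out by at least half the travel
    returned = depth_samples[-1] >= nearest + PRESS_TRAVEL_M / 2
    return (start - nearest) >= PRESS_TRAVEL_M and returned

# Hand at 1.0 m moves in to 0.85 m and back out: counts as a press.
pressed = detect_press([1.00, 0.95, 0.85, 0.92, 0.99])
```

A monocular camera cannot produce the `depth_samples` series at all, which is exactly the capability gap the binocular setup closes.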

Therefore, the diversity and convenience of gesture control can be improved through various forms of gesture information.

Optionally, the recognizing unit 102 recognizes gesture information of the user in the environment to which the electrical device belongs through a binocular camera, and may include:

the identification unit 102 may be further configured to position a user in the environment to which the electrical apparatus belongs by using a binocular camera, so as to obtain face information of the user in the environment to which the electrical apparatus belongs. The specific function and processing of the recognition unit 102 are also referred to in step S210.

More optionally, the identifying unit 102 may use a binocular camera to locate the user in the environment to which the electrical equipment belongs, so as to obtain the face information of the user in the environment to which the electrical equipment belongs, and the locating may include:

the identification unit 102 may be further configured to obtain the whole image information of the user in the environment to which the electrical equipment belongs, which is collected by the binocular camera. The specific function and processing of the recognition unit 102 are also referred to in step S310.

The identifying unit 102 may be further configured to identify and locate a user in the environment to which the electrical apparatus belongs according to the overall image information of the user in the environment to which the electrical apparatus belongs, so as to obtain the position information of the user in the environment to which the electrical apparatus belongs. The specific function and processing of the recognition unit 102 are also referred to in step S320.

The identification unit 102 may be further configured to further obtain, according to the position information of the user in the environment to which the electrical equipment belongs, a face image of the user in the environment to which the electrical equipment belongs, which is acquired by the binocular camera. The specific function and processing of the recognition unit 102 are also referred to in step S330.

The recognition unit 102 may be further configured to determine, according to the face image of the user in the environment to which the electrical equipment belongs, the face information of the user at the user's position in that environment. The specific function and processing of the recognition unit 102 are also referred to in step S340.

The recognition unit 102 may be further configured to recognize gesture information of the user at the position information of the user in the environment to which the electrical equipment belongs, by using a binocular camera according to the face information of the user in the environment to which the electrical equipment belongs. The specific function and processing of the recognition unit 102 are also referred to in step S220.

More optionally, the recognizing unit 102 recognizes the gesture information of the user at the position information of the user in the environment to which the electrical device belongs by using a binocular camera, and may include:

the identification unit 102 may be further configured to determine whether the face of the user in the environment to which the electrical apparatus belongs faces the electrical apparatus according to the face information of the user at the position information of the user in the environment to which the electrical apparatus belongs. The specific function and processing of the recognition unit 102 are also referred to in step S410.

The identification unit 102 may be further configured to acquire a limb image of the user in the environment to which the electrical device belongs if the face of the user in the environment to which the electrical device belongs faces the electrical device. The specific function and processing of the recognition unit 102 are also referred to in step S420.

The identification unit 102 may be further configured to determine, according to the limb image of the user in the environment to which the electrical apparatus belongs, limb information of the user in the environment to which the electrical apparatus belongs, so as to determine the limb information of the user in the environment to which the electrical apparatus belongs as gesture information of the user in the environment to which the electrical apparatus belongs. The specific function and processing of the recognition unit 102 are also referred to in step S430.

In an optional example, the determining unit 104 may be configured to determine a control instruction of the electrical apparatus according to gesture information of a user in an environment to which the electrical apparatus belongs. For example: and according to the corresponding relation between the set gesture information and the set control instruction, determining the set control instruction corresponding to the set gesture information which is the same as the gesture information of the user in the environment to which the electrical equipment belongs in the corresponding relation, and determining the control instruction of the electrical equipment corresponding to the gesture information of the user in the environment to which the electrical equipment belongs. The specific function and processing of the determination unit 104 are referred to in step S120.

In an optional example, the control unit 106 may be configured to control the electrical apparatus to execute the control instruction according to the control instruction of the electrical apparatus, so as to implement human-computer interaction between the electrical apparatus and a user in an environment to which the electrical apparatus belongs. The specific function and processing of the control unit 106 are shown in step S130.

In an optional embodiment, the apparatus may further output the human-computer interaction process. Specifically, the control unit 106 may be further configured to output the gesture information of the user in the environment to which the electrical equipment belongs, the control instruction of the electrical equipment, the execution process of the control instruction by the electrical equipment, and/or the execution result of the control instruction by the electrical equipment. The output may include: displaying on a display screen of the electrical equipment, sending to a set client, and/or broadcasting through a voice system of the electrical equipment.

Since the processes and functions implemented by the apparatus of this embodiment substantially correspond to the embodiments, principles and examples of the method shown in fig. 1 to 4, the description of this embodiment is not detailed, and reference may be made to the related descriptions in the foregoing embodiments, which are not repeated herein.

Extensive testing shows that, with the technical scheme of the invention, the user's position is located with the binocular camera and gesture recognition is performed based on that position to realize human-computer interaction; assisted by a voice system, intuitive control is achieved and user experience is enhanced.

According to the embodiment of the invention, the electrical equipment corresponding to the human-computer interaction device is also provided. The electric device may include: the human-computer interaction device is described above.

Adding a camera and a display screen greatly enriches the human-computer interaction of intelligent equipment; in an actual application environment, however, a certain spatial distance exists between the person and the equipment, so the equipment cannot always be operated by touch directly.

In an optional embodiment, the scheme of the invention provides intelligent equipment with gesture control based on binocular vision. Aimed at least at the problem that operation by voice alone is too limited, gesture recognition with a binocular camera allows the user's gesture information to be captured much more completely and ensures control accuracy; it can enhance user experience, assist the intuitive control of the voice system, and improve usage efficiency. Thus the scheme of the invention can work with gesture operation to realize faster and more intuitive control and improve user experience.

The scheme of the invention can use the binocular camera to locate the gesture information used for gesture control, which may include the category, direction, depth and other information of the gesture, and is compatible with a voice system to realize the user's gesture control. The gesture information may include depth-information gestures rather than only the conventional gestures collected by a monocular camera.

For example: the included gesture categories comprise basic control gestures such as home, return, confirm and cancel; gesture direction and depth refer to the in-plane translation and depth change of the current operation in space. Combined with the basic gestures, these enable more complex control methods (such as interface dragging and UI layer changing).
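
The combination just described — a basic gesture plus its direction or depth change — can be sketched as a small interpreter. The gesture names, pixel threshold and depth threshold are illustrative assumptions.

```python
def interpret(gesture, dx_px, dy_px, d_depth_m):
    """Combine a basic hand shape with its motion.
    gesture   - recognized hand shape (e.g. "grab", "open_palm")
    dx_px/dy_px - in-plane translation of the hand, in pixels
    d_depth_m - change in hand depth in metres (negative = toward camera)
    """
    # grab + in-plane translation -> interface dragging
    if gesture == "grab" and (abs(dx_px) > 40 or abs(dy_px) > 40):
        return ("drag", dx_px, dy_px)
    # open palm pushed toward the screen -> press (depth gesture)
    if gesture == "open_palm" and d_depth_m < -0.10:
        return ("press", 0, 0)
    return ("none", 0, 0)

action = interpret("grab", dx_px=120, dy_px=-10, d_depth_m=0.0)
```

The depth branch is the part a monocular setup cannot supply; the drag branch alone would work with either camera type.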

Optionally, in the scheme of the invention, the purpose of using the binocular camera is to extract the change of depth information while the user operates, so as to increase the diversity of gesture manipulation. Conventional gesture recognition mainly uses a single camera to detect hand shape and planar displacement in the image; longitudinal depth information cannot be extracted, so the user experience is limited.

Specifically, the scheme of the invention can adopt a binocular camera to locate the position of the user and display the related control information on the display screen accordingly.

In fact, before this locating, the position of the user needs to be found through face recognition. Based on the face position in the camera image, attention can be shifted to the pixel region of the relevant limb, which improves gesture recognition efficiency. Moreover, locating by face recognition effectively avoids false triggering by other people in the environment.
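The face-based localization of the gesture search region can be sketched as deriving a region of interest from the face bounding box. This is an assumed illustration: the widening factors (3x face width, 2.5x face height) and the function name `gesture_roi` are not figures from the source.

```python
def gesture_roi(face_box, img_w, img_h):
    """Derive a gesture search region from a detected face bounding box.

    face_box = (x, y, w, h) in pixels. The hands of a facing user typically
    appear below and to the sides of the face, so the search window is a
    widened region starting at the face; the scale factors are assumptions.
    """
    x, y, w, h = face_box
    rx = max(0, int(x - w))           # widen by one face width on each side
    ry = y                            # start at the top of the face
    rw = min(img_w - rx, 3 * w)       # clamp to the image bounds
    rh = min(img_h - ry, int(2.5 * h))
    return rx, ry, rw, rh

print(gesture_roi((300, 100, 80, 80), 640, 480))  # (220, 100, 240, 200)
```

Restricting gesture detection to this window is what yields the efficiency gain described above: the full frame is searched only for faces, and the (more expensive) hand analysis runs on a small crop.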

Further optionally, in the scheme of the invention, after the user's gesture is found and recognized, the controller converts the gesture information into corresponding control parameters and completes the control; the voice system cooperates to complete the human-computer interaction, and the display screen shows the state. That is to say, while controlling through gestures recognized by the binocular camera, the scheme of the invention can cooperate with voice operation and prompt the user under certain feedback conditions during gesture operation, increasing the interactive experience.

The corresponding control parameters or control instructions may include: home, return, screen page control, confirm, cancel, etc.

In particular, a gesture recognized with depth information can move longitudinally in space, such as a key-pressing operation, and the UI interface can display the pressing effect, enhancing the user's operating experience. In addition, a grab gesture recognized from depth information can be used for interface switching, file transfer, and the like.
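The "press" interaction described above amounts to detecting a push toward the screen followed by a return, from the sequence of hand depth values over recent frames. The following is a minimal sketch under assumed names and thresholds; the source does not specify the detection method.

```python
def detect_press(depths, threshold=0.03):
    """Detect a 'key press' from a sequence of hand depth values (metres).

    A press is modelled as the hand moving toward the camera by at least
    `threshold` metres and then moving back out again. `threshold` is a
    hypothetical tuning value.
    """
    if len(depths) < 3:
        return False
    nearest = min(depths)
    i = depths.index(nearest)
    moved_in = depths[0] - nearest >= threshold    # pushed toward screen
    moved_out = depths[-1] - nearest >= threshold  # returned afterwards
    return 0 < i < len(depths) - 1 and moved_in and moved_out

print(detect_press([0.50, 0.46, 0.44, 0.47, 0.50]))  # True
print(detect_press([0.50, 0.50, 0.50]))              # False
```

Note that this kind of detection is exactly what a monocular camera cannot do: without the depth values produced by the binocular pair, the longitudinal push is invisible.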

In an alternative embodiment, reference may be made to the examples shown in fig. 6 to 8 to illustrate a specific implementation process of the scheme of the present invention.

The scheme of the invention mainly belongs to the field of smart home, in particular to human-computer interaction of intelligent equipment. It is based on binocular-vision gesture control and is compatible with voice control. The equipment uses a binocular camera for image recognition, which must first locate the person and judge whether the current user has control. When image recognition confirms that a user is operating the equipment and matches the user's control gesture, the display screen shows the recognized control information, the system executes the control instruction, and the result information is displayed on the screen after the control is completed. During gesture control the voice system can output speech to enrich the human-computer interaction experience and give vivid feedback on the running state of the equipment.

Referring to the examples shown in fig. 6 and 7, in the solution of the present invention, the process of human-computer interaction of the intelligent device based on binocular vision gesture control may include:

Step 1: the camera collects images of the environment. Of course, after the system is started the user can choose whether to enable depth gestures; once enabled, the camera starts working immediately.

Specifically, gesture control in the scheme of the invention is based on a binocular camera; the camera transmits the image information to the main controller for image processing.

Step 2: recognize and locate the position of the user in the environment, recognize the face, and locate the image area for gesture recognition.

Specifically, the binocular camera switches between working modes: when no user is present, its image processing only performs person detection, and whether the user intends gesture operation is confirmed by judging the orientation of the user's face. After the user is confirmed to be in the control state, the gesture recognition area in the image is located based on the user's face information, which improves recognition efficiency. That is to say, the scheme of the invention uses the binocular camera to locate the user's position and thereby realize gesture control.

A binocular camera works like a pair of human eyes: distance is determined mainly by computing the parallax between the two images. That is, the binocular camera does not need to know what the object is; it can measure its distance by calculation alone.

Referring to fig. 8, an algorithm for locating the position of the user with the binocular camera may include:

The focal length f and the baseline b are known parameters of the camera; the aim is to find the distance of the object point P from the camera centre, i.e. the depth value z. For P at horizontal position x, projected at horizontal pixel coordinates X1 in the left image and X2 in the right image, the triangle-similarity principle gives:

X1 = f · x / z and X2 = f · (x − b) / z

From the above two formulas: z = f · b / d, where d = X1 − X2. That is, the relationship between the real object depth z and the parallax d = X1 − X2 is obtained; as long as X1 and X2 are found, the depth value of the object can be measured from this relation. By establishing this additional coordinate dimension, positions in all three coordinates are obtained and the gesture position can be accurately determined.
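The depth-from-disparity relation above can be computed directly. A minimal sketch; the example parameter values (focal length in pixels, baseline in metres) are illustrative, not figures from the source.

```python
def depth_from_disparity(f: float, b: float, x1: float, x2: float) -> float:
    """Depth z = f * b / d with disparity d = x1 - x2.

    f: focal length in pixels, b: baseline between the two cameras (metres),
    x1/x2: horizontal pixel coordinates of the same point in the left and
    right images, following the similar-triangles relation of fig. 8.
    """
    d = x1 - x2
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f * b / d

# Example: 700 px focal length, 6 cm baseline, 30 px disparity -> 1.4 m
print(depth_from_disparity(700.0, 0.06, 330.0, 300.0))  # 1.4
```

As the formula shows, depth resolution degrades with distance (a fixed one-pixel disparity error costs more metres far away), which is one reason the scheme defines a bounded operating area in front of the camera.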

Step 3: when the user's face in the image is oriented toward the equipment, recognize the user's gesture.

Specifically, the binocular camera stays in a face-detection state, waiting for a user to appear in the operating position (the operating position is the area in front of the camera in which depth recognition can be performed; the user is reminded of this on first use). When a face is found in the operating area and its orientation roughly faces the screen, the person is judged to be the manipulator and gesture recognition begins.
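The gating decision described above (a face, roughly facing the screen, inside the operating area) can be written as a single predicate. The thresholds and names here are assumptions for illustration; the source does not give concrete angle or range values.

```python
def is_operator(face_detected: bool, yaw_deg: float, depth_m: float,
                max_yaw: float = 25.0, max_range: float = 2.5) -> bool:
    """Decide whether the detected person counts as the manipulator.

    Gesture recognition only starts when a face is found within the
    operating range and is roughly facing the screen. max_yaw and
    max_range are hypothetical tuning values.
    """
    return face_detected and abs(yaw_deg) <= max_yaw and depth_m <= max_range

print(is_operator(True, 10.0, 1.2))  # True  -> start gesture recognition
print(is_operator(True, 60.0, 1.2))  # False -> face turned away
print(is_operator(True, 10.0, 4.0))  # False -> outside the operating area
```

This predicate is also what suppresses false triggering: hands moving in the background never reach the gesture recognizer unless their owner is facing the device at operating distance.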

For example: the gesture recognition may include hand shape recognition and spatial position recognition.

Step 4: convert the gesture information into the corresponding control instruction; the main controller completes the corresponding control, the voice system cooperates to complete the actual interaction, and the display screen shows the gesture recognition result to strengthen control feedback.

For example: and (3) changing the position (three-dimensional coordinates) in the space, calculating depth information, comparing the depth information with a pre-installed gesture model library, and outputting control information. The result of the judgment can be selectively fed back to the display interface (the user can select).

Optionally, the control instructions may include basic function gestures such as home, return, screen page control, confirm, and cancel, which are easy to recognize and can complete the basic operation of the device. Recognizing depth with the binocular camera improves the reliability of gesture recognition.

Optionally, voice interaction may be added to gesture operation: the device gives feedback to the user through voice and prompts the user to perform the related operation. The user can enable or disable gesture operation by voice, to meet actual needs.

Optionally, gesture operation is combined with the display screen: while a page is displayed, the screen can also give visual feedback during gesture operation, prompting the user whether the recognized operation is the expected one.

For example: in cooperation with the voice system, the control system can complete control feedback and report the control result, such as: "music opened for you", "XXX closed for you", "operation successful", and so on.

Since the processes and functions implemented by the electrical equipment of this embodiment substantially correspond to the embodiments, principles, and examples of the apparatus shown in fig. 5, they are not detailed here; refer to the related descriptions in the foregoing embodiments.

Extensive testing shows that with the technical scheme of the invention, the binocular camera locates the user's position and gesture recognition is performed based on that position, realizing human-computer interaction; the change of depth information during user operation is extracted, enriching gesture control and improving the diversity of interaction.

According to an embodiment of the invention, a storage medium corresponding to the human-computer interaction method is also provided. The storage medium stores a plurality of instructions; the instructions are loaded and executed by a processor to perform the human-computer interaction method.

Since the processing and functions implemented by the storage medium of this embodiment substantially correspond to the embodiments, principles, and examples of the methods shown in fig. 1 to fig. 4, they are not detailed here; refer to the related descriptions in the foregoing embodiments.

Extensive testing shows that with the technical scheme of the invention, the binocular camera locates the user's position and gesture recognition is performed based on that position, realizing human-computer interaction; the voice system can output speech during gesture control to enrich the interaction experience and give vivid feedback on the running state of the equipment, improving the user's experience.

According to the embodiment of the invention, electrical equipment corresponding to the human-computer interaction method is also provided. The electrical equipment may include: a processor for executing a plurality of instructions; and a memory storing the plurality of instructions, which are loaded from the memory and executed by the processor to perform the human-computer interaction method.

Since the processes and functions implemented by the electrical equipment of this embodiment substantially correspond to the embodiments, principles, and examples of the methods shown in fig. 1 to fig. 4, they are not detailed here; refer to the related descriptions in the foregoing embodiments.

Extensive testing shows that with the technical scheme of the invention, the binocular camera locates the user's position and the related control information is displayed on the display screen accordingly, which ensures control accuracy and improves the user's use efficiency and experience.

In summary, those skilled in the art will readily understand that the advantageous modes described above can be freely combined and superimposed without conflict.

The above description is only an example of the present invention and is not intended to limit it; various modifications and variations will be apparent to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall be included in the scope of the claims of the present invention.
