Human-computer interaction method and human-computer interaction device

Document No.: 602788  Publication date: 2021-05-04

Note: This technology, "Human-computer interaction method and human-computer interaction device", was created by 彭帅华 and 武昊 on 2021-01-04. Its main content is as follows: when a user interacts with an object device, the user makes a gesture motion with the hand that holds a mobile terminal. The user's gesture motion information is detected by an optical sensor of the object device, while a motion sensor of the mobile terminal detects the trajectory along which the terminal moves together with the user's hand, i.e., the terminal motion trajectory information. It is then judged whether the gesture motion information matches the terminal motion trajectory information, and when they match, a corresponding first control is executed. Because the mobile terminal moves together with the user's hand, its motion trajectory information corresponds uniquely to the gesture motion information of that hand; whether a gesture motion is valid can therefore be judged reliably by checking whether the gesture motion information matches the terminal trajectory information, interference from the gesture motions of bystanders is avoided, and effective human-computer interaction is achieved.

1. A human-computer interaction method, comprising:

acquiring motion trajectory information of a mobile terminal, wherein the motion trajectory information is obtained through a motion sensor of the mobile terminal;

acquiring first gesture motion information of a user, wherein the first gesture motion information is obtained through an optical sensor of an object device interacting with the user; and

when the first gesture motion information matches the motion trajectory information, executing a first control, wherein the first control comprises control executed according to a control instruction corresponding to the first gesture motion information.

2. The human-computer interaction method according to claim 1, wherein

the first gesture motion information comprises gesture motion form information and gesture motion time information,

the motion trajectory information comprises motion trajectory form information and motion trajectory time information, and the method further comprises:

determining that the first gesture motion information matches the motion trajectory information according to a similarity between the gesture motion form information and the motion trajectory form information and a consistency between the gesture motion time information and the motion trajectory time information.

3. The human-computer interaction method according to claim 1 or 2, further comprising:

recognizing, by means of the optical sensor, a user corresponding to the first gesture motion information; and

when the first gesture motion information matches the motion trajectory information, authenticating the user corresponding to the first gesture motion information as a valid user.

4. The human-computer interaction method according to claim 3, further comprising:

acquiring, through the optical sensor, second gesture motion information of the valid user, the second gesture motion information being later in time than the first gesture motion information, wherein

the first control comprises control executed according to a control instruction corresponding to the second gesture motion information.

5. The human-computer interaction method according to claim 3, wherein

the object device is a vehicle having a display, and the first control comprises displaying, on the display, an environment image including the valid user, wherein the valid user is highlighted in the environment image.

6. The human-computer interaction method according to claim 3, wherein

the object device is a vehicle, and the first control comprises: causing the vehicle to autonomously move toward the valid user.

7. The human-computer interaction method according to any one of claims 1 to 6, wherein

the acquiring of the first gesture motion information is performed on condition that a predetermined operation has been performed on the mobile terminal.

8. The human-computer interaction method according to any one of claims 1 to 7, wherein

the acquiring of the first gesture motion information comprises:

acquiring position information of the mobile terminal from the mobile terminal; and

adjusting the optical sensor according to the position information so that the mobile terminal is located within a detection range of the optical sensor.

9. The human-computer interaction method according to any one of claims 1 to 8, wherein

the acquiring of the first gesture motion information comprises:

sending, to the mobile terminal, information requesting that a first gesture motion be performed, when the motion trajectory information is acquired but no gesture motion information is acquired within a predetermined time.

10. The human-computer interaction method according to any one of claims 1 to 9, further comprising:

authenticating validity of an ID of the mobile terminal, wherein

the acquiring of the motion trajectory information of the mobile terminal comprises acquiring the motion trajectory information of a mobile terminal whose ID is valid.

11. A human-computer interaction device, applied to an object device interacting with a user, comprising:

a terminal trajectory acquisition module, configured to acquire motion trajectory information of a mobile terminal, the motion trajectory information being obtained through a motion sensor of the mobile terminal;

a gesture motion acquisition module, configured to acquire first gesture motion information of the user, the first gesture motion information being obtained through an optical sensor of the object device; and

a control execution module, configured to execute a first control when the first gesture motion information matches the motion trajectory information, wherein the first control comprises control executed according to a control instruction corresponding to the first gesture motion information.

12. The human-computer interaction device according to claim 11, wherein

the first gesture motion information comprises gesture motion form information and gesture motion time information,

the motion trajectory information comprises motion trajectory form information and motion trajectory time information, and the human-computer interaction device further comprises:

a gesture matching module, configured to determine that the first gesture motion information matches the motion trajectory information according to a similarity between the gesture motion form information and the motion trajectory form information and a consistency between the gesture motion time information and the motion trajectory time information.

13. The human-computer interaction device according to claim 11 or 12, wherein

the gesture motion acquisition module comprises a user recognition unit configured to recognize, by means of the optical sensor, a user corresponding to the first gesture motion information, and

the human-computer interaction device further comprises a user authentication module, configured to authenticate the user corresponding to the first gesture motion information as a valid user when the first gesture motion information matches the motion trajectory information.

14. The human-computer interaction device according to claim 13, wherein

the gesture motion acquisition module acquires, through the optical sensor, second gesture motion information of the valid user, the second gesture motion information being later in time than the first gesture motion information, and

the first control comprises control executed according to a control instruction corresponding to the second gesture motion information.

15. The human-computer interaction device according to claim 13, wherein

the object device is a vehicle having a display, and the first control comprises displaying, on the display, an environment image including the valid user, wherein the valid user is highlighted in the environment image.

16. The human-computer interaction device according to claim 13, wherein

the object device is a vehicle, and the first control comprises: causing the vehicle to autonomously move toward the valid user.

17. The human-computer interaction device according to any one of claims 11 to 16, wherein

the acquiring of the first gesture motion information is performed on condition that a predetermined operation has been performed on the mobile terminal.

18. The human-computer interaction device according to any one of claims 11 to 17, further comprising:

a terminal position acquisition unit, configured to acquire position information of the mobile terminal from the mobile terminal; and

an optical sensor actuation control unit, configured to adjust the optical sensor according to the position information so that the mobile terminal is located within a detection range of the optical sensor.

19. The human-computer interaction device according to any one of claims 11 to 18, wherein

information requesting that a first gesture motion be performed is sent to the mobile terminal when the motion trajectory information is acquired but no gesture motion information is acquired within a predetermined time.

20. The human-computer interaction device according to any one of claims 11 to 19, further comprising

a terminal ID authentication module, configured to authenticate validity of an ID of the mobile terminal, wherein

the terminal trajectory acquisition module is configured to acquire the motion trajectory information of a mobile terminal whose ID is valid.

Technical Field

The application relates to a human-computer interaction method and a human-computer interaction device.

Background

In the prior art, there are technologies in which a user interacts with a target device through a contactless (air) operation such as a gesture motion. For example, a user standing outside a vehicle interacts with the vehicle, as the target device, through gesture motions to start the vehicle in advance or to direct it to reverse into a parking space.

In this case, in order to avoid unauthorized control, the validity of the gesture motion or of the user identity must be authenticated. For example, when the valid user makes a gesture motion, other people (invalid users) may be nearby and may also make gesture motions at substantially the same time. The object device then has difficulty determining which gesture motion is valid or which user is the valid user, so effective human-computer interaction cannot be achieved. A technology for realizing effective human-computer interaction is therefore needed.

Disclosure of Invention

In view of the above, an object of the present application is to provide a technology capable of realizing effective human-computer interaction.

In order to achieve the above object, a first aspect of the present application provides a human-computer interaction method, including: acquiring motion trajectory information of a mobile terminal, wherein the motion trajectory information is obtained through a motion sensor of the mobile terminal; acquiring first gesture motion information of a user, wherein the first gesture motion information is obtained through an optical sensor of an object device interacting with the user; and when the first gesture motion information matches the motion trajectory information, executing a first control, wherein the first control comprises control executed according to a control instruction corresponding to the first gesture motion information.

With this human-computer interaction method, when the user interacts with the object device, the user makes a gesture motion with the hand (or arm) holding the mobile terminal. The user's gesture motion information (i.e., the first gesture motion information) is detected by the optical sensor, while the motion trajectory information of the mobile terminal moving together with the user's hand (i.e., the terminal motion trajectory information) is detected by the motion sensor of the mobile terminal. When the first gesture motion information matches the terminal motion trajectory information, the corresponding first control is executed.

Because the mobile terminal moves together with the user's hand, its motion trajectory information corresponds uniquely to the gesture motion information of that hand. Executing the first control only when the gesture motion information matches the terminal trajectory information therefore prevents control from being executed in response to the gesture motion of an invalid user, so that effective human-computer interaction can be realized.

As an alternative approach different from the present application, the validity of the gesture motion or of the user identity could conceivably be authenticated using face recognition. Face recognition has drawbacks, however: a user may be unwilling to undergo face recognition for reasons such as privacy protection, and when the user is far from the target device (for example, performing a contactless operation on a vehicle from several tens of meters away), the accuracy and reliability of face recognition decrease and authentication cannot be performed effectively.

With the human-computer interaction method of the first aspect of the present application, the first control is executed only when the gesture motion information matches the terminal motion trajectory information, so control triggered by the gesture motion of an invalid user can be prevented, effective human-computer interaction can be realized even without face recognition, and problems such as invasion of privacy caused by face recognition can be avoided.

The phrase "even without face recognition" used herein means only that the technique of the present application differs from face recognition; it does not mean that face recognition is excluded, and the technique of the present application can be combined with face recognition where appropriate.

As a possible implementation manner of the first aspect of the present application, the first gesture motion information includes gesture motion form information and gesture motion time information, the motion trajectory information includes motion trajectory form information and motion trajectory time information, and the method further includes: determining whether the first gesture motion information matches the motion trajectory information according to the similarity between the gesture motion form information and the motion trajectory form information and the consistency between the gesture motion time information and the motion trajectory time information.

With this human-computer interaction method, the matching judgment is based not only on the similarity between the gesture motion form information and the motion trajectory form information, but also on the consistency between the gesture motion time information and the motion trajectory time information, so the gesture motion that matches the trajectory of the mobile terminal can be recognized reliably and interference from the gesture motions of invalid users is further avoided.

With this human-computer interaction method, whether a gesture motion is valid can be judged according to whether the first gesture motion information matches the motion trajectory information, which determines whether the control instruction corresponding to the gesture motion is executed; erroneous control can therefore be reliably avoided.
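As an illustration only (the application does not prescribe a specific matching algorithm), the following Python sketch compares the two trajectories by shape similarity and by agreement of their time windows, assuming both trajectories have already been projected into a common 2-D plane. The function names, the resampling-based distance measure, and the thresholds are assumptions chosen for this sketch, not part of the claimed method.

```python
import numpy as np

def resample(points: np.ndarray, n: int = 64) -> np.ndarray:
    """Resample a 2-D trajectory of shape [m, 2] to n evenly spaced points."""
    t_old = np.linspace(0.0, 1.0, len(points))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, points[:, i]) for i in range(2)], axis=1)

def normalize(points: np.ndarray) -> np.ndarray:
    """Remove translation and scale so only the shape of the trajectory remains."""
    p = points - points.mean(axis=0)
    scale = np.linalg.norm(p, axis=1).max()
    return p / scale if scale > 0 else p

def form_similarity(gesture_xy: np.ndarray, terminal_xy: np.ndarray) -> float:
    """Similarity in (0, 1] between the camera-observed hand path and the terminal path."""
    a = normalize(resample(gesture_xy))
    b = normalize(resample(terminal_xy))
    return 1.0 / (1.0 + float(np.mean(np.linalg.norm(a - b, axis=1))))

def time_consistent(gesture_window, terminal_window, tol_s: float = 0.5) -> bool:
    """The two (start, end) time windows must begin and end at roughly the same moments."""
    return (abs(gesture_window[0] - terminal_window[0]) <= tol_s and
            abs(gesture_window[1] - terminal_window[1]) <= tol_s)

def is_match(gesture_xy, gesture_window, terminal_xy, terminal_window,
             sim_threshold: float = 0.7) -> bool:
    # Both conditions of the implementation above: form similarity and time consistency.
    return (form_similarity(gesture_xy, terminal_xy) >= sim_threshold and
            time_consistent(gesture_window, terminal_window))
```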

As a possible implementation manner of the first aspect of the present application, the method further includes: recognizing, by means of the optical sensor, the user corresponding to the first gesture motion information, and, when the first gesture motion information matches the motion trajectory information, authenticating the user corresponding to the first gesture motion information as a valid user.

In this case, second gesture motion information of the valid user may be acquired through the optical sensor, the second gesture motion information being later in time than the first gesture motion information, and the first control may include control executed according to a control instruction corresponding to the second gesture motion information.

With this human-computer interaction method, after the user has been authenticated as a valid user, gesture motions made later (second gesture motions) are also considered valid and need not be compared against the terminal motion trajectory. The user therefore no longer has to make gesture motions with the hand holding the mobile terminal, which reduces the user's operation burden.
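A minimal sketch of this session logic is given below: the first gesture must match the terminal trajectory, and once its user is authenticated as valid, later gestures attributed to the same optically tracked user are accepted directly. The class, its method names, and the `matcher`/`executor` interfaces are assumptions for illustration.

```python
class GestureSession:
    """Tracks which optically identified user has been authenticated as the valid user."""

    def __init__(self, matcher, executor):
        self.matcher = matcher      # e.g. a wrapper around the matching sketch shown earlier
        self.executor = executor    # maps a recognized gesture to a control action
        self.valid_user_id = None   # identifier assigned by the optical tracker

    def on_first_gesture(self, user_id, gesture, terminal_trajectory) -> bool:
        """First gesture: only executed if it matches the terminal motion trajectory."""
        if self.matcher(gesture, terminal_trajectory):
            self.valid_user_id = user_id       # authenticate this user as valid
            self.executor(gesture["command"])  # first control
            return True
        return False

    def on_later_gesture(self, user_id, gesture) -> bool:
        """Second and later gestures of the valid user need no trajectory comparison."""
        if user_id is not None and user_id == self.valid_user_id:
            self.executor(gesture["command"])
            return True
        return False
```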

As a possible implementation manner of the first aspect of the present application, the object device is a vehicle having a display, and the first control may be displaying, on the display, an environment image including the valid user, wherein the valid user is highlighted in the environment image.

With this human-computer interaction method, the valid user is highlighted on the vehicle's display, which helps the driver find the user quickly.

As a possible implementation manner of the first aspect of the present application, the object device is a vehicle, and the first control includes: causing the vehicle to autonomously move toward the valid user.

As a possible implementation manner of the first aspect of the present application, the acquiring of the first gesture motion information is performed on condition that a predetermined operation has been performed on the mobile terminal.

With this human-computer interaction method, the gesture-acquisition function is activated only when the predetermined operation has been performed on the mobile terminal, which prevents gesture acquisition from being activated against the user's intention and reduces power consumption.

As a possible implementation manner of the first aspect of the present application, the acquiring of the first gesture motion information includes: acquiring position information of the mobile terminal from the mobile terminal; and adjusting the optical sensor according to the position information so that the mobile terminal is located within the detection range of the optical sensor.

In this way, because the optical sensor is adjusted according to the position information of the mobile terminal, the user and the user's gesture motion can be reliably detected.

As a possible implementation manner of the first aspect of the present application, the acquiring of the first gesture motion information includes: sending, to the mobile terminal, information requesting that a first gesture motion be performed when the motion trajectory information is acquired but no gesture motion information is acquired within a predetermined time.

Thus, when the optical sensor fails to recognize the user's gesture motion, for example because the user is standing in a concealed position or is occluded, the user can be prompted to make the gesture motion again.
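One way this prompt could be realized is a simple timeout check on the object-device side, as sketched below. The timeout value, the `gesture_detected` callback, and the `send_to_terminal` message format are assumptions for illustration.

```python
import time

GESTURE_TIMEOUT_S = 5.0  # the "predetermined time"; value chosen only for illustration

def wait_for_gesture(gesture_detected, send_to_terminal) -> bool:
    """After the terminal motion trajectory has been received, wait for the optical
    sensor to recognize a gesture; if none is seen in time, ask the terminal to
    prompt the user to repeat the gesture motion."""
    deadline = time.monotonic() + GESTURE_TIMEOUT_S
    while time.monotonic() < deadline:
        if gesture_detected():          # polls the optical-sensor pipeline
            return True
        time.sleep(0.1)
    send_to_terminal({"type": "request_gesture",
                      "text": "Please repeat the gesture while facing the vehicle."})
    return False
```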

As a possible implementation manner of the first aspect of the present application, the method further includes: authenticating validity of the ID of the mobile terminal, wherein the acquiring of the motion trajectory information of the mobile terminal includes acquiring the motion trajectory information of a mobile terminal whose ID is valid.

Thus, unauthorized control can be avoided more reliably.

In addition, to achieve the above object, a second aspect of the present application relates to a human-computer interaction device applied to an object device interacting with a user, including: a terminal trajectory acquisition module, configured to acquire motion trajectory information of a mobile terminal, the motion trajectory information being obtained through a motion sensor of the mobile terminal; a gesture motion acquisition module, configured to acquire first gesture motion information of the user, the first gesture motion information being obtained through an optical sensor of the object device; and a control execution module, configured to execute a first control when the first gesture motion information matches the motion trajectory information, wherein the first control includes control executed according to a control instruction corresponding to the first gesture motion information.

With this human-computer interaction device, when the user interacts with the object device, the user makes a gesture motion with the hand (or arm) holding the mobile terminal. The user's gesture motion information (i.e., the first gesture motion information) is detected by the optical sensor, while the motion trajectory information of the mobile terminal moving together with the user's hand (i.e., the terminal motion trajectory information) is detected by the motion sensor of the mobile terminal. The first gesture motion information is then compared with the terminal motion trajectory information to judge whether they match, and when they match, the corresponding first control is executed, the first control including control executed according to the control instruction corresponding to the first gesture motion information.

Because the mobile terminal moves together with the user's hand, its motion trajectory information corresponds uniquely to the gesture motion information of that hand. Whether a gesture motion is valid can therefore be judged reliably by checking whether the gesture motion information matches the terminal trajectory information, interference from the gesture motions of invalid users is avoided, and effective human-computer interaction can be realized.

Therefore, with this human-computer interaction device, effective human-computer interaction can be realized even without face recognition, and problems such as invasion of privacy caused by face recognition can be avoided.

As a possible implementation manner of the second aspect of the present application, the first gesture motion information includes gesture motion form information and gesture motion time information, the motion trajectory information includes motion trajectory form information and motion trajectory time information, and the human-computer interaction device further includes a gesture matching module, configured to determine whether the first gesture motion information matches the motion trajectory information according to the similarity between the gesture motion form information and the motion trajectory form information and the consistency between the gesture motion time information and the motion trajectory time information.

As a possible implementation manner of the second aspect of the present application, the gesture motion acquisition module includes a user recognition unit that recognizes, by means of the optical sensor, the user corresponding to the first gesture motion information,

and the human-computer interaction device further includes a user authentication module, configured to authenticate the user corresponding to the first gesture motion information as a valid user when the first gesture motion information matches the motion trajectory information.

As a possible implementation manner of the second aspect of the present application, the gesture motion acquisition module acquires, through the optical sensor, second gesture motion information of the valid user, the second gesture motion information being later in time than the first gesture motion information, and the first control includes control executed according to a control instruction corresponding to the second gesture motion information.

As a possible implementation manner of the second aspect of the present application, the object device is a vehicle having a display, and the first control includes displaying, on the display, an environment image including the valid user, wherein the valid user is highlighted in the environment image.

As a possible implementation manner of the second aspect of the present application, the object device is a vehicle, and the first control includes: causing the vehicle to autonomously move toward the valid user.

As a possible implementation manner of the second aspect of the present application, the acquiring of the first gesture motion information is performed on condition that a predetermined operation has been performed on the mobile terminal.

As a possible implementation manner of the second aspect of the present application, the human-computer interaction device further includes: a terminal position acquisition unit, configured to acquire position information of the mobile terminal from the mobile terminal; and an optical sensor actuation control unit, configured to adjust the optical sensor according to the position information so that the mobile terminal is located within a detection range of the optical sensor.

As a possible implementation manner of the second aspect of the present application, information requesting that a first gesture motion be performed is sent to the mobile terminal when the motion trajectory information is acquired but no gesture motion information is acquired within a predetermined time.

As a possible implementation manner of the second aspect of the present application, the human-computer interaction device further includes a terminal ID authentication module, configured to authenticate validity of the ID of the mobile terminal, and the terminal trajectory acquisition module is configured to acquire the motion trajectory information of a mobile terminal whose ID is valid.

In addition, to achieve the above object, a third aspect of the present application relates to a control method of a vehicle having an optical sensor, including: acquiring motion trajectory information of a mobile terminal, wherein the motion trajectory information is obtained through a motion sensor of the mobile terminal; acquiring first gesture motion information of a user, wherein the first gesture motion information is obtained through the optical sensor; and when the first gesture motion information matches the motion trajectory information, executing a first control, wherein the first control includes control executed according to a control instruction corresponding to the first gesture motion information.

With the vehicle control method described above, when the user interacts with the vehicle, the user makes a gesture motion with the hand (or arm) holding the mobile terminal. The user's gesture motion information (i.e., the first gesture motion information) is detected by the optical sensor of the vehicle, while the motion trajectory information of the mobile terminal moving together with the user's hand, i.e., the terminal motion trajectory information, is detected by the motion sensor of the mobile terminal. The first gesture motion information is then compared with the terminal motion trajectory information to judge whether they match, and when they are judged to match, the corresponding first control is executed.

Because the mobile terminal moves together with the user's hand, its motion trajectory information corresponds uniquely to the gesture motion information of that hand. Whether a gesture motion is valid can therefore be judged reliably by checking whether the gesture motion information matches the terminal trajectory information, interference from the gesture motions of invalid users is avoided, and effective human-computer interaction can be realized.

Therefore, with this vehicle control method, effective human-computer interaction can be realized even without face recognition, and problems such as invasion of privacy caused by face recognition can be avoided.

As a possible implementation manner of the third aspect of the present application, the first gesture motion information includes gesture motion form information and gesture motion time information, the motion trajectory information includes motion trajectory form information and motion trajectory time information, and the method further includes: determining that the first gesture motion information matches the motion trajectory information according to the similarity between the gesture motion form information and the motion trajectory form information and the consistency between the gesture motion time information and the motion trajectory time information.

As a possible implementation manner of the third aspect of the present application, the method further includes: recognizing, by means of the optical sensor, the user corresponding to the first gesture motion information, and authenticating the user corresponding to the first gesture motion information as a valid user when the first gesture motion information is judged to match the motion trajectory information.

As a possible implementation manner of the third aspect of the present application, the method further includes: acquiring, through the optical sensor, second gesture motion information of the valid user, wherein the second gesture motion information is later in time than the first gesture motion information, and the first control includes control executed according to a control instruction corresponding to the second gesture motion information.

As a possible implementation manner of the third aspect of the present application, the vehicle has a display, and the first control includes displaying, on the display, an environment image including the valid user, wherein the valid user is highlighted in the environment image.

As a possible implementation manner of the third aspect of the present application, the first control includes: causing the vehicle to autonomously move toward the valid user.

As a possible implementation manner of the third aspect of the present application, the acquiring of the first gesture motion information is performed on condition that a predetermined operation has been performed on the mobile terminal.

As a possible implementation manner of the third aspect of the present application, the acquiring of the first gesture motion information includes: the vehicle acquiring position information of the mobile terminal from the mobile terminal; and the vehicle adjusting the optical sensor according to the position information so that the mobile terminal is located within the detection range of the optical sensor.

As a possible implementation manner of the third aspect of the present application, the acquiring of the first gesture motion information includes: sending, to the mobile terminal, information requesting that a first gesture motion be performed when the motion trajectory information is acquired but no gesture motion information is acquired within a predetermined time.

As a possible implementation manner of the third aspect of the present application, the method further includes: authenticating validity of the ID of the mobile terminal, wherein the acquiring of the motion trajectory information of the mobile terminal includes acquiring the motion trajectory information of a mobile terminal whose ID is valid.

In addition, to achieve the above object, a fourth aspect of the present application provides a vehicle control apparatus for a vehicle having an optical sensor, comprising: a terminal trajectory acquisition module, configured to acquire motion trajectory information of a mobile terminal, the motion trajectory information being obtained through a motion sensor of the mobile terminal; a gesture motion acquisition module, configured to acquire first gesture motion information of a user, the first gesture motion information being obtained through the optical sensor; and a control execution module, configured to execute a first control when the first gesture motion information matches the motion trajectory information, wherein the first control includes control executed according to a control instruction corresponding to the first gesture motion information.

With the vehicle control apparatus described above, when a user interacts with the vehicle, the user makes a gesture motion with the hand (or arm) holding the mobile terminal. The user's gesture motion information (i.e., the first gesture motion information) is detected by an optical sensor of the vehicle, such as a camera, a millimeter-wave radar, or a lidar, while the terminal motion trajectory information, i.e., the motion trajectory information of the mobile terminal moving together with the user's hand, is detected by the motion sensor of the mobile terminal. The first gesture motion information is then compared with the terminal motion trajectory information to judge whether they match, and when they are judged to match, the corresponding first control is executed.

Because the mobile terminal moves together with the user's hand, its motion trajectory information corresponds uniquely to the gesture motion information of that hand. Whether a gesture motion is valid can therefore be judged reliably by checking whether the gesture motion information matches the terminal trajectory information, interference from the gesture motions of invalid users is avoided, and effective human-computer interaction can be realized.

Therefore, with this vehicle control apparatus, effective human-computer interaction can be realized even without face recognition, and problems such as invasion of privacy caused by face recognition can be avoided.

As a possible implementation manner of the fourth aspect of the present application, the first gesture motion information includes gesture motion form information and gesture motion time information, the motion trajectory information includes motion trajectory form information and motion trajectory time information, and the apparatus further includes a gesture matching module that determines whether the first gesture motion information matches the motion trajectory information according to the similarity between the gesture motion form information and the motion trajectory form information and the consistency between the gesture motion time information and the motion trajectory time information.

As a possible implementation manner of the fourth aspect of the present application, the gesture motion acquisition module includes a user recognition unit that recognizes, by means of the optical sensor, the user corresponding to the first gesture motion information; the vehicle control apparatus further includes a user authentication module, and when the gesture matching module determines that the first gesture motion information matches the motion trajectory information, the user authentication module authenticates the user corresponding to the first gesture motion information as a valid user.

As a possible implementation manner of the fourth aspect of the present application, the gesture motion acquisition module acquires, through the optical sensor, second gesture motion information of the valid user, the second gesture motion information being later in time than the first gesture motion information, and the first control includes control executed according to a control instruction corresponding to the second gesture motion information.

As a possible implementation manner of the fourth aspect of the present application, the vehicle has a display, and the first control includes displaying, on the display, an environment image including the valid user, wherein the valid user is highlighted in the environment image.

As a possible implementation manner of the fourth aspect of the present application, the first control includes: causing the vehicle to autonomously move toward the valid user.

As a possible implementation manner of the fourth aspect of the present application, the acquiring of the first gesture motion information is performed on condition that a predetermined operation has been performed on the mobile terminal.

As a possible implementation manner of the fourth aspect of the present application, the vehicle control apparatus further includes: a terminal position acquisition unit, configured to acquire position information of the mobile terminal from the mobile terminal; and an optical sensor actuation control unit, configured to adjust the optical sensor according to the position information so that the mobile terminal is located within a detection range of the optical sensor.

As a possible implementation manner of the fourth aspect of the present application, information requesting that a first gesture motion be performed is sent to the mobile terminal when the motion trajectory information is acquired but no gesture motion information is acquired within a predetermined time.

As a possible implementation manner of the fourth aspect of the present application, the vehicle control apparatus further includes a terminal ID authentication module, configured to authenticate validity of the ID of the mobile terminal, and the terminal trajectory acquisition module is configured to acquire the motion trajectory information of a mobile terminal whose ID is valid.

In addition, a fifth aspect of the present application provides a human-computer interaction device, which includes a processor and a memory, wherein the memory stores program instructions that, when executed by the processor, cause the processor to perform the method of any one of the implementations of the first aspect.

A sixth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a computer, cause the computer to perform the method of any one of the implementations of the first aspect.

A seventh aspect of the present application provides a computer program that, when executed by a computer, causes the computer to perform the method of any one of the implementations of the first aspect.

An eighth aspect of the present application provides a vehicle control apparatus, comprising a processor and a memory, the memory having stored therein program instructions that, when executed by the processor, cause the processor to perform the method of any one of the implementations of the third aspect.

A ninth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a computer, cause the computer to perform the method of any one of the implementations of the third aspect.

A tenth aspect of the present application provides a computer program that, when executed by a computer, causes the computer to perform the method of any one of the implementations of the third aspect.

With the technical solutions of the present application, when a user interacts with an object device such as a vehicle, the user makes a gesture motion with the hand (or arm) holding the mobile terminal. The user's gesture motion information (i.e., the first gesture motion information) is detected by an optical sensor of the object device, while the motion trajectory information of the mobile terminal moving together with the user's hand (i.e., the terminal motion trajectory information) is detected by a motion sensor of the mobile terminal. The first gesture motion information is then compared with the terminal motion trajectory information to judge whether they match, and when they are judged to match, the corresponding first control is executed.

Because the mobile terminal moves together with the user's hand, its motion trajectory information corresponds uniquely to the gesture motion information of that hand. Whether a gesture motion is valid can therefore be judged reliably by checking whether the gesture motion information matches the terminal trajectory information, interference from the gesture motions of invalid users is avoided, and effective human-computer interaction can be realized.

Therefore, with the technical solutions of the present application, effective human-computer interaction can be realized even without face recognition, and problems such as invasion of privacy caused by face recognition can be avoided.

Drawings

FIG. 1 is an explanatory diagram of a scenario of contactless control of a vehicle according to an embodiment of the present application;

FIG. 2 is a block diagram of a vehicle according to an embodiment of the present application;

FIG. 3 is a block diagram of a smartphone according to an embodiment of the present application;

FIG. 4 is an explanatory diagram of a process in which a user interacts with a vehicle via a smartphone according to an embodiment of the present application;

FIG. 5A is a flowchart of a vehicle-side process according to an embodiment of the present application;

FIG. 5B is an explanatory diagram of the details of the gesture motion and terminal trajectory matching process in FIG. 5A;

FIG. 6 is a flowchart of a smartphone-side process according to an embodiment of the present application;

FIG. 7 is a schematic illustration for explaining an orientation detection technique according to an embodiment of the present application;

FIG. 8A shows an example of a display screen of a smartphone according to an embodiment of the present application;

FIG. 8B shows an example of a display screen of a smartphone according to an embodiment of the present application;

FIG. 8C shows an example of a display screen of a smartphone according to an embodiment of the present application;

FIG. 9 is an explanatory diagram of a ride-hailing scenario according to an embodiment of the present application;

FIG. 10 is a block diagram of a vehicle according to an embodiment of the present application;

FIG. 11 is a block diagram of a cloud server according to an embodiment of the present application;

FIG. 12 is an explanatory diagram of a process in which a user reserves a vehicle through a smartphone according to an embodiment of the present application;

FIG. 13 is an explanatory diagram of a vehicle identifying a user in the vicinity of the user's pick-up location according to an embodiment of the present application;

FIG. 14 shows an example of an image captured by the onboard camera in the scene shown in FIG. 13;

FIG. 15 is an explanatory diagram of a process in which a user interacts with a vehicle via a smartphone in the vicinity of the user's boarding location according to an embodiment of the present application;

FIG. 16 is a flowchart of the process performed on the vehicle side during the interaction process shown in FIG. 15;

FIG. 17 is a flowchart of the process performed on the cloud-server side during the interaction process shown in FIG. 15;

FIG. 18 is an explanatory diagram of a car-booking mode according to an embodiment of the present application;

FIG. 19 is an explanatory diagram of a modification of the interaction process shown in FIG. 15;

FIG. 20 is an explanatory diagram of a scenario in which a food delivery robot delivers food according to an embodiment of the present application;

FIG. 21 is an explanatory diagram of a scenario of contactless control of a smart TV according to an embodiment of the present application;

FIG. 22 is an explanatory diagram of a method, disclosed in this specification, by which a vehicle identifies the specific location of a user;

FIG. 23 is an explanatory diagram of a process, disclosed in this specification, in which a user controls a vehicle by voice.

Detailed Description

Next, the technical solutions of the embodiments of the present application are described.

In the following description, expressions such as "first" and "second" are used only to distinguish between similar items and do not indicate importance or order.

The embodiments of the present application provide a human-computer interaction technology for implementing interaction between a user (the "human" in human-computer interaction) and an object device (the "machine" in human-computer interaction). For effective interaction, the user holds a mobile terminal, and a communication connection is established between the mobile terminal and the object device, or between both of them and a common third-party device (e.g., a server). The user then makes a gesture motion with the hand (or arm) holding the mobile terminal. The mobile terminal detects the motion trajectory along which it moves together with the user's hand, while an optical sensor of the object device (e.g., a camera, a millimeter-wave radar, or a lidar) detects the user's gesture motion. The motion trajectory information of the mobile terminal is then compared with the gesture motion information of the user to judge whether the two match, and when they are judged to match, a corresponding control (referred to as the first control) is executed.

In this human-computer interaction technology, a gesture motion is in essence determined to be "valid" on the condition that it matches the motion trajectory of the mobile terminal. Because the mobile terminal moves together with the user's hand, its motion trajectory information corresponds uniquely to the gesture motion information of that hand. Whether a gesture motion is valid can therefore be judged reliably by checking whether the gesture motion information matches the terminal trajectory information, interference from the gesture motions of bystanders is avoided, and effective human-computer interaction can be realized.
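As a compact illustration of this decision flow on the object-device side, the sketch below gates execution of the control on terminal ID validity, receipt of the terminal trajectory, and a successful match. The `terminal`, `optical_sensor`, and `match` interfaces are assumptions for this sketch only.

```python
def handle_interaction(terminal, optical_sensor, match, execute) -> bool:
    """End-to-end gating: only a gesture that matches the terminal trajectory is acted on."""
    if not terminal.id_is_valid():
        return False                               # ignore unauthenticated terminals

    trajectory = terminal.receive_trajectory()     # IMU path plus its time window
    gesture = optical_sensor.detect_gesture()      # hand path, time window, and command

    if gesture is not None and match(gesture, trajectory):
        execute(gesture["command"])                # the "first control"
        return True
    return False
```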

Here, the object device interacting with the person may be a vehicle, a robot, a smart TV, or the like; the mobile terminal may be a smartphone, a wearable device, an electronic car key, a remote controller, or the like. The corresponding response of the object device may be, for example, executing the control instruction represented by the gesture motion. The user in the image may also be authenticated as a valid user through image recognition processing; in that case, when the object device is a moving body such as a vehicle or a mobile robot, it may, for example, be controlled to move toward the user according to the result of the image recognition processing.

Here, the expression "holding the mobile terminal" means only that the mobile terminal moves together with the user's hand; it does not limit the form of the fingers when holding the mobile terminal.

In addition, in order to implement the above-described human-computer interaction technology, as described in detail later, a human-computer interaction method, a human-computer interaction apparatus, a vehicle control method, a vehicle control apparatus, a vehicle, a mobile terminal control method, a mobile terminal control apparatus, a mobile terminal, a server, a computing device, a computer-readable storage medium, a computer program, and the like are provided in the embodiments of the present application.

Hereinafter, a plurality of embodiments of the present application will be described in detail with reference to the accompanying drawings.

[Embodiment 1]

This embodiment relates to a method for contactlessly controlling a vehicle through gesture motions.

First, an interactive scenario of the present embodiment is schematically described with reference to fig. 1.

As shown in fig. 1, in the present embodiment, an example of the person in the human-computer interaction is a user 300, an example of the object device is a vehicle 100, and an example of the mobile terminal is a smartphone 200. Specifically, the vehicle 100 is parked in a parking space 601 of a parking lot, and the user 300 intends to perform contactless control of the vehicle 100 through gesture motions. The user 300, holding the smartphone 200, enters the Bluetooth or UWB signal range of the vehicle 100, the smartphone 200 and the vehicle 100 initiate a Bluetooth or UWB connection, and after the vehicle 100 successfully authenticates the ID (Identification) of the smartphone 200, the two establish a connection. The user 300 then performs a predetermined operation on the smartphone 200 indicating that the user 300 intends to have the vehicle 100 activate its contactless control function; the smartphone 200 transmits to the vehicle 100 an instruction requesting that the contactless control function be activated, and also transmits terminal position information indicating its own position.

After receiving the instruction sent by the smartphone 200, the vehicle 100 activates a rotatable camera (not shown in fig. 1) and turns the camera toward the direction of the smartphone 200 so that the smartphone 200 is located within the detection range of the camera, completing the preparation for the gesture recognition function. The vehicle 100 also transmits to the smartphone 200 information indicating that the onboard camera has turned toward the location of the smartphone 200 and/or that the gesture recognition function has been activated. On receiving this information, the smartphone 200 displays a prompt message on its display screen to notify the user 300 that the camera of the vehicle 100 has turned toward the user's location and/or that the gesture recognition function of the vehicle 100 has been activated.

After viewing the prompt message, the user 300 makes a predetermined gesture motion with the hand (or arm) holding the smartphone 200, the predetermined gesture motion corresponding to a control instruction. The correspondence between predetermined gesture motions and control instructions is known to the user 300 in advance.

At this time, on the one hand, the smartphone 200 detects its own motion trajectory through a built-in motion sensor capable of detecting the movement of the smartphone 200, and transmits the detected motion trajectory and trajectory time information to the vehicle 100 through a wireless communication method such as Bluetooth, Wi-Fi, UWB, or infrared. The trajectory time information indicates the time at which the smartphone 200 produced the motion trajectory. Examples of the motion sensor include an acceleration sensor and a gyro sensor.
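To make the terminal-side detection concrete, the sketch below accumulates linear-acceleration samples into a rough motion trajectory together with its time window and packages both for transmission. The double-integration approach, the assumption that gravity has already been removed, and the message fields are illustrative simplifications; a real implementation would fuse gyroscope data and correct for drift.

```python
import numpy as np

def trajectory_from_accel(accel: np.ndarray, timestamps: np.ndarray) -> dict:
    """Integrate linear acceleration (shape [n, 3], gravity removed) twice to
    approximate the handset's motion trajectory, and attach its time window."""
    dt = np.diff(timestamps, prepend=timestamps[0])[:, None]   # per-sample intervals
    velocity = np.cumsum(accel * dt, axis=0)
    position = np.cumsum(velocity * dt, axis=0)
    return {
        "trajectory": position.tolist(),        # motion trajectory form information
        "start_time": float(timestamps[0]),     # motion trajectory time information
        "end_time": float(timestamps[-1]),
    }
```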

On the other hand, the vehicle 100 detects the gesture motion of the user 300 through an optical sensor such as a camera. The gesture motion information of the user 300 detected by the camera or the like is then compared with the motion trajectory information of the smartphone 200 received via wireless communication to judge whether the two match (described in detail later), and if they match, the vehicle 100 executes the control instruction corresponding to the gesture motion of the user 300.

The following describes a structure of the vehicle 100 with reference to fig. 2.

As shown in fig. 2, the vehicle 100 has a vehicle control device 10. The vehicle 100 also has a camera 20, a communication device 30, and a navigation device 40. In addition, the vehicle 100 has a power system 50, a steering system 60, and a braking system 70. Further, in the present embodiment, the vehicle 100 also has a camera actuation device 80. The vehicle 100 includes components other than those listed here, but their description is omitted.

The camera 20 is used for detecting the environment outside the vehicle, and one or more cameras may be provided. In the present embodiment, the camera 20 is a rotatable camera whose orientation, and therefore detection range, can be changed by the camera actuation device 80. The camera 20 is an example of an external environment sensor; a lidar, a millimeter-wave radar, or the like may also be provided to detect the environment outside the vehicle. The camera, the lidar, and the millimeter-wave radar are examples of the optical sensor used in the present application to detect the user's gesture motion.

The communication device 30 can perform wireless communication with external objects (not shown), which may include, for example, a base station, a cloud server, a mobile terminal (e.g., a smartphone), a roadside device, and another vehicle.

The navigation device 40 typically has a GNSS (Global Navigation Satellite System) receiver and a map database, which are not shown. The navigation device 40 can determine the position of the vehicle 100 from satellite signals received by the GNSS receiver, can generate a route to a destination from map information in the map database, and can provide information on the route to the vehicle control device 10. The navigation device 40 may further include an IMU (Inertial Measurement Unit) and perform positioning by fusing the information of the GNSS receiver with that of the IMU.

The power system 50 includes a drive ECU (not shown) and a drive source (not shown). The drive ECU controls the driving force (torque) of the vehicle 100 by controlling the drive source. Examples of the drive source include an engine and a drive motor. The drive ECU can control the drive source in accordance with the driver's operation of the accelerator pedal, thereby controlling the driving force. The drive ECU can also control the drive source, and thus the driving force, in accordance with an instruction transmitted from the vehicle control device 10. The driving force of the drive source is transmitted to the wheels (not shown) via a transmission (not shown) or the like to drive the vehicle 100.

The steering system 60 includes a steering ECU, namely an EPS (Electric Power Steering) ECU, and an EPS motor, neither of which is shown. The steering ECU can control the EPS motor in accordance with the driver's operation of the steering wheel, thereby controlling the orientation of the wheels (specifically, the steered wheels). The steering ECU can also control the orientation of the wheels by controlling the EPS motor in accordance with an instruction transmitted from the vehicle control device 10. In addition, steering may be performed by changing the torque distribution or the braking-force distribution to the left and right wheels.

The brake system 70 includes a brake ECU not shown and a brake mechanism not shown. The brake mechanism operates the brake member by a brake motor, a hydraulic mechanism, and the like. The brake ECU can control the brake mechanism in accordance with the operation of the brake pedal by the driver, and can control the braking force. The brake ECU can also control the braking mechanism in accordance with a command transmitted from the vehicle control device 10, and can control the braking force. In the case where the vehicle 100 is an electric vehicle or a hybrid vehicle, the brake system 70 may further include an energy recovery brake mechanism.

The vehicle control apparatus 10 may be implemented by one ECU (Electronic Control Unit) or by a combination of a plurality of ECUs. An ECU is a computing device including a processor, a memory, and a communication interface connected via an internal bus; the memory stores program instructions which, when executed by the processor, function as the corresponding functional modules and functional units. The functional modules comprise a gesture action acquisition module 11, a gesture matching module 12, an automatic driving control module 13, a terminal ID authentication module 14, a terminal track acquisition module 15, an instruction identification module 16 and a user authentication module 17. That is, the vehicle control device 10 implements these functional modules and/or functional units by executing a program (software) on a processor; however, the vehicle control device 10 may also implement all or part of them by hardware such as an LSI (Large Scale Integrated circuit) or an ASIC (Application Specific Integrated Circuit), or by a combination of software and hardware.

The terminal ID authentication module 14 is configured to authenticate the validity of the ID of a mobile terminal and thereby authenticate the mobile terminal itself. For example, for the owner's smartphone, the terminal ID authentication module 14 authenticates its ID as valid and authenticates it as a valid terminal. The terminal ID authentication module 14 may also authenticate the authority of the mobile terminal: for example, the owner's smartphone is authenticated as having the highest authority and may execute all controls, whereas a smartphone of the owner's family member is authenticated as having restricted authority, being allowed to execute certain controls, such as turning on the air conditioner, and being restricted from executing others, such as controlling the vehicle 100 to travel.

The gesture motion acquisition module 11 is configured to obtain gesture motion information indicating a gesture motion, and includes a terminal position acquisition unit 11a, a camera actuation control unit 11b, a gesture motion recognition unit 11c, a user recognition unit 11d, and an information generation unit 11e.

The terminal position acquisition unit 11a is configured to acquire terminal position information, which is position information of a mobile terminal (e.g., a smartphone) whose ID is authenticated to be valid.

The camera actuation control unit 11b is configured to calculate an adjustment amount of the camera 20 according to the position information of the mobile terminal, i.e., the terminal position information, and the current orientation of the camera 20, and cause the camera actuation device 80 to actuate the camera 20 according to the adjustment amount, so that the position of the mobile terminal is within the detection range of the camera 20. The camera actuation control unit 11b corresponds to an optical sensor actuation control unit in the present application.

The gesture motion recognition unit 11c is configured to recognize a gesture motion of the user from the images captured by the camera 20 and obtain gesture motion information. In this embodiment, the gesture motion information includes gesture motion form information and gesture motion time information: the gesture motion form information represents the form of the gesture motion, and the gesture motion time information represents the time at which the gesture motion is made, which may be a time period from the start time to the end time of the gesture motion.

The user recognition unit 11d is used for recognizing the user according to the image captured by the camera 20. Here, the gesture recognition unit 11c and the user recognition unit 11d may be integrated into one unit, and the user and the gesture thereof are recognized, so that the processing efficiency can be improved. The information generating unit 11e is configured to generate information to be sent to the mobile terminal, which includes information indicating that "the camera is turned on" or "the gesture recognition function is activated" and information for requesting the user to make a gesture action again, as described later.

The terminal trajectory acquisition module 15 is configured to receive, from the mobile terminal through the communication device 30, terminal trajectory information indicating the motion trajectory of the mobile terminal. In this embodiment, the terminal trajectory information includes trajectory form information indicating the form of the motion trajectory and trajectory time information indicating the time at which the motion trajectory is made, which may be a time period from the start time to the end time of the motion trajectory. As a modification, the terminal trajectory information may include only the trajectory form information.

The gesture matching module 12 is configured to perform matching processing on the gesture motion information obtained by the gesture motion acquisition module 11 and the motion trajectory information obtained by the terminal trajectory acquisition module 15, that is, to determine whether the gesture motion information matches the motion trajectory information. In this embodiment, the gesture matching module 12 includes a form similarity determination unit 12a and a time consistency determination unit 12b.

The form similarity determination unit 12a is configured to determine whether the form of the gesture motion is similar to the form of the motion trajectory of the mobile terminal; for example, the two are determined to be similar when the degree of similarity between them is equal to or greater than a predetermined similarity threshold. The form similarity determination unit 12a may compare the motion trajectory and the gesture motion with a preset template to determine whether the form of the gesture motion is similar to the form of the motion trajectory of the mobile terminal. Alternatively, the matching determination may be made by a trained trajectory matching model. The trajectory matching model may be obtained by training a CNN (Convolutional Neural Network) model or an MLP (Multi-Layer Perceptron) model, using as samples the motion trajectories of the intelligent terminal collected while users perform predetermined gesture motions with the hand holding the intelligent terminal and the corresponding gesture motions of the users collected by the camera.
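
The following is a minimal sketch of one way such a form similarity could be computed, assuming both the gesture motion seen by the camera and the terminal motion trajectory have already been reduced to sequences of 2D points; the resampling, normalization, function names, and the 0.8 threshold are illustrative assumptions, not the implementation fixed by this application.

```python
import numpy as np

def normalize_trajectory(points, num_samples=64):
    """Resample a polyline of (x, y) points to a fixed length, then
    remove translation and scale so that only the shape remains."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])   # cumulative arc length
    if s[-1] == 0:
        return np.zeros((num_samples, 2))
    t = np.linspace(0.0, s[-1], num_samples)
    resampled = np.column_stack([np.interp(t, s, pts[:, 0]),
                                 np.interp(t, s, pts[:, 1])])
    resampled -= resampled.mean(axis=0)           # remove translation
    scale = np.linalg.norm(resampled) or 1.0
    return resampled / scale                      # remove scale

def form_similarity(gesture_pts, terminal_pts):
    """Cosine-style similarity in [-1, 1] between the gesture form and
    the terminal motion-trajectory form."""
    a = normalize_trajectory(gesture_pts).ravel()
    b = normalize_trajectory(terminal_pts).ravel()
    return float(np.dot(a, b))

# The form similarity determination unit could then compare the score
# with a predetermined similarity threshold, e.g. 0.8 (assumed value).
SIMILARITY_THRESHOLD = 0.8
```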

The time consistency determination unit 12b is configured to determine whether the time of the gesture motion is consistent with the time of the motion trajectory; for example, the two are determined to be consistent when the degree of coincidence (overlap) between the time of the gesture motion and the time of the motion trajectory is equal to or greater than a predetermined consistency threshold.
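
As an illustration, the time consistency could be expressed as an overlap ratio between the two time periods; the particular ratio used below and the 0.8 threshold are assumptions for the sketch, since this application does not fix how the degree of coincidence is computed.

```python
def time_consistency(gesture_start, gesture_end, track_start, track_end):
    """Overlap between the gesture-motion time period and the terminal
    motion-trajectory time period, as a ratio in [0, 1] (overlap length
    divided by the length of the shorter period)."""
    overlap = min(gesture_end, track_end) - max(gesture_start, track_start)
    shorter = min(gesture_end - gesture_start, track_end - track_start)
    if shorter <= 0:
        return 0.0
    return max(0.0, overlap) / shorter

# Example (timestamps in seconds): the two periods overlap for 1.8 s of a
# 2.0 s gesture, giving a consistency of 0.9, above an assumed threshold.
CONSISTENCY_THRESHOLD = 0.8
print(time_consistency(10.0, 12.0, 10.2, 12.5))  # -> 0.9
```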

In this embodiment, when the determination result of the form similarity determination unit 12a is "similar" and the determination result of the time consistency determination unit 12b is "consistent", the gesture matching module 12 determines that the gesture motion information matches the motion trajectory information.

The command recognition module 16 is configured to recognize a control command represented by the gesture motion, for example, the control command represented by the gesture motion may be recognized according to a preset correspondence table between a gesture motion template and the control command.

The user authentication module 17 is configured to authenticate the user corresponding to the gesture motion that matches the motion trajectory of the mobile terminal as a valid user. The instruction recognition module 16 may also recognize control instructions represented by gesture motions of a user who has been authenticated as valid by the user authentication module 17. It should be noted that the user authentication module 17 authenticates the "user" appearing in the information obtained by the sensor (in this embodiment, the image captured by the camera 20), whereas the terminal ID authentication module 14 authenticates the terminal ID; the two authentications are different from each other.

The automatic driving control module 13 is used to control the autonomous traveling (autonomous movement) of the vehicle 100, and includes an action planning unit 13a and a travel control unit 13b. The automatic driving control module 13 is an example of a control execution module in the present application.

The action planning unit 13a calculates a target trajectory from the vehicle 100 to a destination, determines the traveling state of the vehicle 100 on the basis of the external environment information detected by an optical sensor such as the camera 20, updates the target trajectory, and determines the various actions of the vehicle 100. The route calculated by the navigation device 40 is a rough route; in contrast, the target trajectory calculated by the action planning unit 13a additionally contains relatively detailed content for controlling the acceleration, deceleration, and steering of the vehicle 100 along that rough route.

The travel control unit 13b generates control commands to be sent to the power system 50, the steering system 60, and the brake system 70 so as to control these systems in accordance with the action plan provided by the action planning unit 13a, thereby causing the vehicle 100 to travel in accordance with the action plan.

The related structure of the smartphone 200 will be described with reference to fig. 3.

As shown in fig. 3, the smartphone 200 includes a processor 110 and an internal memory 190, and further includes a wireless communication module 120, a speaker 131, a receiver 132, a microphone 133, a display screen 140, a camera 150, physical keys 160, a gyro sensor 171, an acceleration sensor 172, a magnetic sensor 173, a touch sensor 174, and a positioning device 180. Note that the smartphone 200 includes other components in addition to these, but their description is omitted here.

Processor 110 may include one or more processing units. For example: the processor 110 may include one or any combination of an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a flight controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, or a neural Network Processor (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.

A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses repeatedly; if the processor 110 needs to reuse the instructions or data, it can fetch them directly from this memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.

For one embodiment, processor 110 may include one or more interfaces. The interface may include one or any combination of an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, or a Universal Serial Bus (USB) interface.

The internal memory 190 may be used to store computer-executable program code, which includes instructions. The internal memory 190 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, a phonebook, etc.) created during use of the portable device, and the like. In addition, the internal memory 190 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications and data processing of the smartphone 200 by executing instructions stored in the internal memory 190 and/or instructions stored in a memory provided in the processor.

The wireless communication module 120 is configured to implement the wireless communication functions of the smartphone 200, which typically include cellular communication such as 2G/3G/4G/5G and may further include wireless local area network (WLAN) (e.g., a Wi-Fi network), ultra wide band (UWB), Bluetooth (registered trademark), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication functions.

The audio module comprises the speaker 131, the receiver 132, and the microphone 133. The speaker 131 is used for sound playback; the receiver 132, also called an earpiece, is likewise used for sound playback in many cases; and the microphone 133 is used to pick up the user's voice.

The display screen 140 is used to provide an image or video display function, and in addition, as a typical example, the display screen 140 is constructed as a touch screen, i.e., a touch sensor 174 is integrated therein, so that a user can perform a desired operation by touching the display screen 140.

The camera 150 is used to provide a function of taking an image or video, and typically may include a front camera and a rear camera.

The physical keys 160 include, for example, a power key, a volume adjustment key, and the like.

The gyro sensor 171 may be used to determine the posture of the smartphone 200 during movement. In some embodiments, the angular velocity of the portable device in a preset coordinate system may be determined by the gyro sensor 171.

The acceleration sensor 172 may detect the direction and magnitude of the acceleration of the portable device, and can detect the magnitude and direction of gravity when the portable device is stationary. It may also be used to recognize the posture of the portable device, and is applied in applications such as pedometers.

The magnetic sensor 173 is a device that converts changes in the magnetic properties of a sensitive element, caused by external factors such as a magnetic field, a current, stress or strain, temperature, or light, into an electrical signal, thereby detecting the corresponding physical quantity. In some embodiments, the angles between the portable device and the four directions of east, south, west, and north can be measured by the magnetic sensor.

The positioning device 180 may provide the smartphone 200 with a positioning function by receiving signals of a global navigation satellite system.

Referring to fig. 4, the interaction process of the user interacting with the vehicle according to the present embodiment will be systematically described.

As shown in fig. 4, in step S1, the user 300 walks into the parking lot with the smartphone 200 in hand, and enters the coverage of the in-vehicle wireless network of the vehicle 100, such as bluetooth, Wi-Fi, or UWB (Ultra Wide Band). In the present embodiment, the user 300 is the owner of the vehicle 100, and the smartphone 200 held by the user is previously bound to the vehicle 100. Therefore, when the smartphone 200 comes within the connection range of the in-vehicle wireless network of the vehicle 100, the smartphone 200 automatically establishes a wireless connection with the vehicle 100.

Thereafter, the smartphone 200 uses a direction detection technique based on Bluetooth, Wi-Fi, or UWB to monitor whether it is pointed at the vehicle 100. If the user 300 points the smartphone 200 at the vehicle 100, this indicates that the user 300 wants to perform air-separation (contactless) control on the vehicle 100. Therefore, by determining whether the smartphone 200 is pointed at the vehicle 100, the smartphone 200 can determine whether the user 300 intends to perform air-separation control on the vehicle 100.

Here, "the smartphone 200 points at the vehicle 100" may be "rear-pointing at the vehicle", for example, a straight line perpendicular to the rear of the smartphone 200 intersects the vehicle 100; the "head of the vehicle" may be directed, and for example, an extension L1 (see fig. 1) of the smartphone 200 in the longitudinal direction may intersect the vehicle 100.

Referring to fig. 7, an orientation detection technique used by the smartphone 200 to detect whether it is pointed at the vehicle 100 will be briefly described.

Specifically, as shown in fig. 7, signals are transmitted between device A and device B through their antennas. Since signal strength gradually attenuates with propagation distance, the receiver can receive the signals transmitted by the transmitter through a plurality of antennas (four antennas N1-N4 in fig. 7) and, from the strengths of the different received signals, calculate the differences in their transmission times, thereby calculating the azimuth angle α and the distance L of device B with respect to device A. Using such an orientation detection technology, the smartphone 200 can detect the azimuth angle α between itself and the vehicle 100 and then, in combination with its own coordinate system preset at the factory, determine whether its back surface or its top points at the vehicle 100.
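
A minimal sketch of the final pointing decision is shown below, assuming the azimuth of the vehicle has already been obtained in the phone's own coordinate frame; the reference-axis convention, the angular tolerance, and the range limit are assumptions for illustration, not values given in this application.

```python
def is_pointing_at_vehicle(azimuth_deg, distance_m,
                           axis_azimuth_deg=0.0,
                           angle_tol_deg=15.0, max_range_m=30.0):
    """Decide whether the phone points at the vehicle.

    azimuth_deg:      azimuth of the vehicle relative to the phone, as
                      estimated by the multi-antenna measurement (alpha).
    axis_azimuth_deg: azimuth of the chosen reference axis in the same
                      frame, e.g. the back-surface normal or the long
                      axis L1 (assumed convention).
    """
    diff = abs((azimuth_deg - axis_azimuth_deg + 180.0) % 360.0 - 180.0)
    return diff <= angle_tol_deg and distance_m <= max_range_m

# e.g. vehicle detected 8 degrees off the back normal, 12 m away -> True
print(is_pointing_at_vehicle(8.0, 12.0))
```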

In the present embodiment, by determining whether the smartphone 200 is pointed at the vehicle 100, it can be determined whether the user 300 intends to perform air-separation control on the vehicle 100. Therefore, the user is prompted that the smartphone 200 is connected to the vehicle 100 only when it is detected that the smartphone 200 is pointed at the vehicle 100, which avoids annoying the user with useless prompts when the user has no intention of performing air-separation control on the vehicle 100.

As a modification, before detecting whether the smartphone 200 is pointed at the vehicle 100, it may first be detected, using the gyro sensor 171, the acceleration sensor 172, the magnetic sensor 173, and the like of the smartphone 200, whether the motion trajectory of the smartphone 200 follows a preset trajectory, for example the smartphone 200 changing from a horizontal posture to a vertical posture; only when the motion trajectory of the smartphone 200 satisfies the preset trajectory is it then detected whether the smartphone 200 is pointed at the vehicle 100. This avoids starting the pointing detection as soon as the smartphone 200 automatically connects to the vehicle 100, which reduces power consumption, avoids false triggering when the smartphone 200 happens to be pointed at the vehicle 100 after the automatic connection although the user 300 has no such intention, and improves the accuracy with which the user's intention is confirmed. A sketch of such a preset-trajectory check is given below.
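
The sketch below detects the horizontal-to-vertical change mentioned above from gravity-dominated accelerometer samples; the device axis convention and the 0.8·g threshold are assumptions, and a real implementation would also use the gyro and magnetic sensors as described.

```python
def posture_from_accel(accel_xyz, g=9.81):
    """Classify the phone posture from one accelerometer sample
    (assumed device axes: x right, y up along the long side, z out of
    the screen)."""
    ax, ay, az = accel_xyz
    if abs(az) > 0.8 * g:
        return "horizontal"   # lying flat, screen up or down
    if abs(ay) > 0.8 * g:
        return "vertical"     # held upright along its long axis
    return "other"

def matches_preset_trajectory(accel_samples):
    """True if the sample sequence shows a horizontal-to-vertical
    change, the example 'preset trajectory' described above."""
    postures = [posture_from_accel(a) for a in accel_samples]
    if "horizontal" not in postures:
        return False
    first_h = postures.index("horizontal")
    return "vertical" in postures[first_h + 1:]
```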

When the smartphone 200 detects that it is pointed at the vehicle 100, the process proceeds to step S2.

In step S2, as shown in fig. 8A, the smartphone 200 displays a prompt message 140a on the display screen 140 (e.g., on a lock screen interface) to prompt the user 300 that the pointed vehicle 100 has been identified and the wireless connection has been established, and the vehicle 100 can be remotely controlled by the smartphone 200. As a modification, the user may be prompted by emitting a voice through the speaker 131, or may be prompted by vibrating the smartphone 200.

As a modification, step S1 may be performed without "detecting whether the smartphone 200 is pointed at the vehicle 100": after the smartphone 200 establishes a wireless connection with the vehicle 100, the process may proceed directly to step S2 to display the prompt information 140a on the display screen 140 of the smartphone 200 or to issue a voice prompt through the speaker 131. As described above, however, in the present embodiment the prompt information 140a is displayed on condition that the smartphone 200 is detected as being pointed at the vehicle 100, so that the display of the prompt information 140a better matches the intention of the user 300 and the user 300 is not annoyed.

After the presentation information 140a is displayed, the flow proceeds to step S3.

In step S3, the smartphone 200 sets the operation target of the physical key 160 included therein as the vehicle 100, defines, for example, a long-press of the power key for 3 seconds as a request for activating the air-separation control function of the vehicle 100, and monitors whether or not a preset operation such as a long-press of the power key for 3 seconds is received.

In addition, instead of long-pressing the power key for 3 seconds, the preset operation may be clicking a corresponding virtual operation key on the operation interface of the vehicle control APP, where the virtual operation key is used to "activate the vehicle air-separation control function".

As a modification, in step S3 the operation target of the physical key 160 need not be set to the vehicle 100. In this case, when the user 300 performs a slide operation on the prompt information 140a, an operation interface of a vehicle control APP (Application) containing a virtual key for "activating the vehicle air-separation control function" may pop up on the display screen 140.

When receiving a preset operation such as a long press of the power key for 3 seconds by the user, the flow proceeds to step S4.

In step S4, the smartphone 200 issues an instruction to the vehicle 100 requesting activation of the air-separation control function, and the location information (i.e., terminal location information) and the ID information (i.e., terminal ID information) of the smartphone 200 may be transmitted simultaneously.

As a modification, steps S3 and S4 may be omitted; in that case, the smartphone 200 automatically transmits an instruction indicating "activate the vehicle air-separation control function" to the vehicle 100 when the prompt information 140a is displayed in step S2, without the user 300 long-pressing the power key for 3 seconds. In the present embodiment, steps S3 and S4 are employed so that erroneous operation can be prevented: the instruction indicating "activate the vehicle air-separation control function" is transmitted to the vehicle 100 only after the user's intention has been confirmed by the preset operation (the long press of the power key), which prevents the vehicle 100 from erroneously activating the air-separation control function and reduces power consumption.

When the smartphone 200 issues the instruction indicating "activate the vehicle air-separation control function", the vehicle 100 receives, in step S10, the instruction together with the terminal position information and the terminal ID information transmitted with it, and the vehicle 100 then authenticates the user's identity and/or authority on the basis of the terminal ID information. Since the smartphone 200 is the phone of the owner of the vehicle 100, the vehicle 100 authenticates the ID of the smartphone 200 as a valid ID in step S10. After the terminal ID is authenticated as valid, the flow proceeds to step S20.

In step S20, the vehicle 100 turns on the rotatable camera 20, adjusts the orientation of the camera 20 according to the terminal position information transmitted from the smartphone 200, turns the camera 20 to the direction in which the smartphone 200 is located, that is, the direction in which the user is located, and activates the gesture recognition function.

As a modification, the camera 20 may be a fixed-angle camera, and in this case, the user needs to stand within the detection range of the camera 20 to perform a gesture motion. In addition, the present embodiment is described by taking a camera as an example, but other optical sensors capable of recognizing gesture actions may be used, for example, a millimeter wave radar.

The vehicle 100 completes the adjustment of the camera 20 (i.e., completes the preparation work for gesture recognition), and after the gesture recognition function is activated, the process proceeds to step S30.

In step S30, the vehicle 100 transmits information that the camera and/or the gesture motion recognition function has been activated to the smartphone 200.

In step S40, the smartphone 200 receives the message, and as shown in fig. 8B, displays a prompt message on the display screen 140 to let the user know that the gesture motion recognition function of the vehicle 100 is activated, so as to prompt the user 300 that a gesture motion can be made in the direction of the vehicle 100. The presentation may be performed by emitting a sound through the speaker 131 or by generating a vibration in the smartphone 200.

Then, the user 300 performs a predetermined gesture motion with the hand (or arm) holding the smartphone 200, the predetermined gesture motion being a gesture motion corresponding to a control command; for example, waving the hand twice indicates summoning the vehicle 100 to travel to the user's location.

At this time, on the one hand, in step S50, the smartphone 200 detects the movement trajectory of the smartphone 200.

On the other hand, in step S60, the vehicle 100 detects the gesture motion of the user 300 by the camera 20, and obtains gesture motion information. Optionally, the gesture motion information generated by the vehicle 100 includes time information for making the gesture motion, and the time information may be time period information from the start time to the end time of the gesture motion. In addition, the gesture motion made by the user 300 with the hand holding the smartphone 200 in the present embodiment corresponds to the first gesture motion in the present application, and accordingly, the gesture motion information about the first gesture motion obtained by the vehicle 100 through the camera 20 at this time corresponds to the first gesture motion information in the present application. The time information for performing the first gesture corresponds to "information on the first time for performing the first gesture" in the present application.

In addition, in step S70, which follows step S50, the smartphone 200 transmits terminal trajectory information indicating the motion trajectory of the smartphone 200 to the vehicle 100. Optionally, the time information of the detected motion trajectory of the smartphone 200 is attached at the same time, that is, time information indicating when the motion trajectory of the smartphone 200 was generated is also transmitted to the vehicle 100. The time information may be time period information from the start time to the end time of the motion trajectory of the smartphone 200. Here, the "time information of the motion trajectory of the smartphone 200" corresponds to the "information on the second time at which the motion trajectory is generated" in the present application.

In step S80, the vehicle 100 compares the received motion trajectory information of the smartphone 200 with the detected gesture motion information of the user 300, and determines whether the gesture motion of the user 300 matches the motion trajectory of the smartphone 200. A preset template may be used for comparison with the motion trajectory and the gesture motion in order to determine whether the gesture motion is similar to the motion trajectory of the mobile terminal. Alternatively, the matching determination may be made by a trained trajectory matching model. The trajectory matching model may be obtained by training a CNN (Convolutional Neural Network) model or an MLP (Multi-Layer Perceptron) model, using as samples the motion trajectories of the smartphone and the user gesture motions collected by the camera while users perform predetermined gesture motions with the hand holding the smartphone.
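
As an illustration of the learned alternative, the sketch below shows a toy MLP matcher in PyTorch; the framework, the network architecture, the fixed-length input representation, and the training details are all assumptions for the sketch and are not prescribed by this application.

```python
import torch
import torch.nn as nn

class TrajectoryMatchMLP(nn.Module):
    """Toy MLP matcher: input is the concatenation of two trajectories,
    each resampled to 64 (x, y) points; output is a match probability."""
    def __init__(self, num_points=64):
        super().__init__()
        in_dim = 2 * num_points * 2   # two trajectories, (x, y) each
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, gesture_traj, terminal_traj):
        x = torch.cat([gesture_traj.flatten(1), terminal_traj.flatten(1)], dim=1)
        return self.net(x)

# Training-loop sketch: pairs collected while users make gestures with the
# hand holding the terminal are positives (label 1); mismatched pairs are
# negatives (label 0).
model = TrajectoryMatchMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def train_step(gesture_batch, terminal_batch, labels):
    optimizer.zero_grad()
    pred = model(gesture_batch, terminal_batch).squeeze(1)
    loss = loss_fn(pred, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```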

In this embodiment, the similarity between the form of the motion trajectory and the form of the user's gesture motion is compared, and the consistency between the time information of the motion trajectory and the time information of the gesture motion is also compared, in order to decide whether the two match. A gesture motion whose form similarity is equal to or greater than a first similarity threshold and whose time-information consistency is equal to or greater than a first consistency threshold may be determined to match the motion trajectory. When there are a plurality of gesture motions whose form similarity with the motion trajectory is equal to or greater than the predetermined threshold, the gesture motion whose time information matches best is selected as the one matching the motion trajectory.

As a modification, it may be determined whether or not "match" only based on the similarity between the form of the motion trajectory and the form of the user gesture.

If it is determined that the gesture motion matches the terminal trajectory, the process proceeds to step S90.

In step S90, the vehicle 100 executes the control command corresponding to the gesture motion. For example, if waving the hand twice is preset to correspond to summoning the vehicle to the user's location, the vehicle 100 is started and its automatic driving function is activated, so that the vehicle 100 is controlled to travel to the location of the user 300 by means of the automatic driving function.

The overall process of the interaction of the user 300 with the vehicle 100 through the smartphone 200 is explained above, and the processing flow on the vehicle 100 side and the processing flow on the smartphone 200 side are respectively explained in detail below to describe the present embodiment in more detail.

First, an example of the processing flow on the vehicle 100 side will be described with reference to fig. 5A. The processing flow shown in fig. 5A is executed by the control device 10 included in the vehicle 100.

As shown in fig. 5A, in step S10 the control device 10 determines whether an instruction to activate the air-separation control function has been received from a valid terminal. Specifically, the control device 10 monitors, via the communication device 30, whether an instruction requesting activation of the air-separation control function (including the terminal ID information and the terminal position information) has been received from a mobile terminal. When such an instruction is received, the terminal ID authentication module 14 determines, on the basis of the terminal ID, whether the mobile terminal is a valid terminal. If it is determined to be a valid terminal, the flow proceeds to step S20; if not, the flow returns to continue monitoring. In the present embodiment, when the smartphone 200 issues the above instruction to the vehicle 100, the terminal ID authentication module 14 authenticates the smartphone 200 as a valid terminal; the following description takes this case as an example.

When the smartphone 200 is authenticated as a valid terminal, the terminal position acquisition unit 11a acquires the terminal position information of the smartphone 200.

In step S20, the control device 10 activates the camera 20 and determines, on the basis of the terminal position information, whether the terminal position is within the detection range of the camera 20. When the terminal position is not within the detection range, the camera actuation control unit 11b adjusts the orientation of the camera 20 via the camera actuation device 80 so that the terminal position falls within the detection range of the camera 20.

Then, in step S30, information indicating "camera is activated" or "gesture recognition function is activated" is generated by the information generation unit 11e, and the control device 10 transmits the information to the smartphone 200 via the communication device 30.

Then, in step S32, it is determined whether the terminal trajectory acquisition module 15 has acquired, via the communication device 30, the terminal trajectory information transmitted from the smartphone 200. When the terminal trajectory information has been acquired, the process proceeds to step S60; when it has not, the process proceeds to step S34. In step S34, it is determined whether a first predetermined time has elapsed since step S30, i.e., since the information "camera is turned on" or "gesture recognition function is activated" was transmitted to the mobile terminal. If the first predetermined time has not elapsed, the process returns to step S32 to continue monitoring; if it has elapsed, the process proceeds to step S62. In step S62, it is determined whether a second predetermined time, which is longer than the first predetermined time, has elapsed since step S30; when the second predetermined time has elapsed, the process ends. When the second predetermined time has not elapsed, the process proceeds to step S64, where the information generation unit 11e generates information requesting the user to perform a gesture motion, the control device 10 transmits this information to the mobile terminal via the communication device 30, and the process then returns to step S32 to continue monitoring whether the terminal trajectory information has been acquired. At this time, as shown in fig. 8C, the mobile terminal prompts the user, for example by a display, to perform a gesture motion.

In step S60, it is determined whether a gesture motion has been recognized by the gesture motion recognition unit 11c on the basis of the detection information of the camera 20. In the present embodiment, the gesture motion recognition unit 11c is activated after step S20 and continuously performs image processing or the like on the image information acquired by the camera 20 in order to recognize gesture motions. As a modification, however, after it is determined in step S32 that the terminal trajectory information has been acquired (yes in step S32), the time period from the start time to the end time of the terminal's motion may be obtained from the time information contained in the terminal trajectory information, a time range for image recognition of the image information acquired by the camera 20 may be set according to this time period, and only the image information within that time range may be processed to obtain the gesture motion information. This reduces the computation load of the gesture motion recognition unit 11c and reduces power consumption. At the same time, since gesture motions made outside the time range (for example, gesture motions of users other than the user 300, referred to as invalid users) are not acquired, the number of acquired gesture motions is reduced, which also reduces the computation load of the gesture matching module 12 and increases the processing speed.
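
A minimal sketch of the time-window restriction in this modification is given below; the frame representation and the 0.5 s padding margin are assumptions for illustration.

```python
def frames_in_terminal_window(frames, track_start, track_end, margin=0.5):
    """Keep only camera frames whose timestamps fall inside the terminal
    motion time period (padded by an assumed 0.5 s margin), so that
    gesture recognition is run only on that range.

    frames: iterable of (timestamp, image) pairs.
    """
    lo, hi = track_start - margin, track_end + margin
    return [(t, img) for (t, img) in frames if lo <= t <= hi]
```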

If it is determined in step S60 that the gesture motion has been acquired, the process proceeds to step S80, if the gesture motion has not been acquired, the process proceeds to step S62, and if it is determined in step S62 that the second predetermined time has not elapsed, information requesting the user to perform the gesture motion is transmitted to the mobile terminal, and then the process returns to step S32 to continue monitoring whether or not the terminal trajectory information has been acquired.

In step S80, it is determined whether the acquired gesture motion information matches the terminal trajectory information. Here, all the acquired pieces of gesture motion information are checked one by one against the terminal trajectory information. That is, there may be other users (referred to as invalid users) near the user 300 who also perform gesture motions, in which case the gesture motion recognition unit 11c recognizes not only the gesture motion of the user 300 but also the gesture motions of the invalid users; each recognized gesture motion is then checked against the terminal trajectory information. It should be understood that, when only one piece of gesture motion information has been acquired, only that piece is checked.

The specific determination process in step S80 will be described later with reference to fig. 5B.

After step S80, the process proceeds to step S88, and it is determined whether there is a gesture motion matching the motion trajectory of the smartphone 200. When it is determined that there is such a gesture motion, the process proceeds to step S90, where the control command corresponding to that gesture motion is acquired and executed; specifically, the instruction recognition module 16 recognizes the control command corresponding to the successfully matched gesture motion, and the control device 10 then executes that control command. In the present embodiment, the gesture motion is taken to indicate, as an example, "drive to the location of the user who made the gesture motion"; in this case the vehicle 100 continuously tracks the user 300 who made the gesture motion, recognized by the user recognition unit 11d, by using the detection information of the camera 20, and the automatic driving control module 13 controls the vehicle 100 to travel until it reaches the location of the user 300.

After step S90, the present process ends.

When it is determined in step S88 that the matching is unsuccessful, the process proceeds to step S62, where it is determined whether or not the second predetermined time has elapsed, and if it is determined in step S62 that the second predetermined time has not elapsed, information requesting the user to perform a gesture operation is sent to the mobile terminal, and then the process returns to step S32 to continue monitoring whether or not the terminal trajectory information is acquired.

The details of the "gesture motion and terminal trajectory matching process" performed in step S80 will be described below with reference to fig. 5B. As shown in fig. 5B, in step S81, the time coincidence determination unit 12B determines whether or not the time of the gesture movement coincides with the time of the terminal trajectory based on the time information in the gesture movement information and the time information in the terminal trajectory information, and determines that the time of the gesture movement coincides with the time of the terminal trajectory when, for example, the coincidence degree (overlap degree) between the time of the gesture movement and the time of the terminal trajectory is equal to or higher than a predetermined coincidence degree threshold value.

If the determination result at step S81 is "match", the process proceeds to step S82, and if the determination result is "mismatch", the process proceeds to step S84, and it is determined that the gesture motion does not match the terminal trajectory.

In step S82, the form similarity determination unit 12a determines whether the form of the gesture operation is similar to the form of the terminal trajectory, based on the gesture operation form information in the gesture operation information and the trajectory form information in the terminal trajectory information, and determines that the two are similar when, for example, the similarity between the two is equal to or greater than a predetermined similarity threshold. If the determination result in step S82 is "similar", the process proceeds to step S83, where it is determined that the gesture matches the terminal trajectory, and if the determination result in step S82 is "dissimilar", the process proceeds to step S84, where it is determined that the gesture does not match the terminal trajectory.

After steps S83 and S84, the process proceeds to step S85, where the determination result is output. Fig. 5B shows the processing flow for determining whether one gesture motion matches the terminal trajectory; when there are a plurality of gesture motions, this processing flow is executed for each of them. However, in the determination result that is finally output, at most one gesture motion can match the terminal trajectory, that is, only one gesture motion can be reported as successfully matched in the result output in step S85, never several. For example, when it is determined in steps S81 and S82 that several gesture motions have a time coincidence with the terminal trajectory equal to or greater than the predetermined coincidence threshold and a form similarity equal to or greater than the predetermined similarity threshold, further processing determines which of these gesture motions has the highest time coincidence with the terminal trajectory, or the highest form similarity with it, and that gesture motion is taken as the final gesture motion successfully matched with the terminal trajectory.
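
The sketch below mirrors this flow for a list of candidate gestures, reusing the time_consistency and form_similarity helpers sketched earlier; the dict-based data representation and the threshold values are assumptions for illustration.

```python
def match_gestures_to_trajectory(gestures, track,
                                 consistency_threshold=0.8,
                                 similarity_threshold=0.8):
    """Check time consistency first (step S81), then form similarity
    (step S82), and return at most one gesture as the final match.

    Each gesture and the terminal track are dicts with 'points',
    'start' and 'end' keys (an assumed representation).
    """
    candidates = []
    for g in gestures:
        consistency = time_consistency(g["start"], g["end"],
                                       track["start"], track["end"])
        if consistency < consistency_threshold:
            continue                      # S81 -> S84: no match
        similarity = form_similarity(g["points"], track["points"])
        if similarity < similarity_threshold:
            continue                      # S82 -> S84: no match
        candidates.append((consistency, similarity, g))

    if not candidates:
        return None
    # If several gestures pass both checks, keep the one with the highest
    # time consistency (form similarity as tie-breaker), so that only one
    # gesture is reported as matching the terminal trajectory.
    candidates.sort(key=lambda c: (c[0], c[1]), reverse=True)
    return candidates[0][2]
```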

An example of the processing flow on the smartphone 200 side is described below with reference to fig. 6. This processing flow is executed by the processor 110 included in the smartphone 200, and the start of this processing flow may be conditioned on the smartphone 200 successfully connecting to the vehicle 100 by bluetooth, UWB, Wi-Fi, or the like.

As shown in fig. 6, in step S1 the processor 110 monitors whether the smartphone 200 is pointed at the vehicle 100, i.e., whether pointing at the vehicle 100 is detected. As described above, the change of the smartphone from an initial posture to a posture in which its back points at the vehicle may be detected by using the acceleration sensor and/or the gyro sensor of the smartphone together with Bluetooth/UWB/Wi-Fi orientation and positioning technology, so as to determine whether the smartphone is pointing at the vehicle.

When it is detected that the smartphone 200 points to the vehicle 100, the process proceeds to step S2, and a prompt is displayed on the display screen 140 of the smartphone 200 to notify the user that the smartphone 200 has identified the pointed vehicle 100 and successfully connected thereto, so that the user knows that the vehicle 100 can be controlled by the smartphone 200.

Then, the process proceeds to step S3, where it is monitored whether the user performs a predetermined operation on smartphone 200, which indicates that the user wants vehicle 100 to activate the air-separation control function, and may be, for example, pressing the power key for a long time (e.g., 3 seconds).

Upon receiving the predetermined operation performed by the user in step S3, the smartphone 200 transmits a control instruction requesting activation of the air-separation control function to the vehicle 100 through the wireless communication module 120.

Then, the process proceeds to step S39, where the feedback information transmitted from the vehicle 100 is monitored; upon receiving information from the vehicle 100 indicating that the air-separation control function has been activated, the process proceeds to step S40.

In step S40, a message is displayed on the display screen 140 to prompt the user that the vehicle 100 has activated the air-separation control function.

Then, the process proceeds to step S50, where the motion trajectory of the smartphone 200 is detected from the sensor information of the acceleration sensor 172 and/or the gyro sensor 171. When the motion trajectory of the smartphone 200 has been detected and the motion trajectory information obtained, the process proceeds to step S70, and the motion trajectory information is transmitted to the vehicle 100 through the wireless communication module 120. As described above, the motion trajectory information is used for comparison with the gesture motion information; therefore, to improve the reliability of the comparison result, the motion trajectory information may be limited to the trajectory generated by the smartphone 200 after the user performs a predetermined operation on it. For example, detection of the motion trajectory of the smartphone 200 may start when the user issues the voice command "start" and stop when the user issues the voice command "end", and the motion trajectory information generated during the period from "start" to "end" is then transmitted to the vehicle 100.
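
A minimal sketch of this smartphone-side recording is shown below, assuming the trajectory is represented simply as the buffered sensor samples plus the start and end times; reading the real sensors, deriving a geometric trajectory from them, and the actual message format are outside the sketch.

```python
import time

class TerminalTrajectoryRecorder:
    """Buffer accelerometer/gyroscope samples between a 'start' and an
    'end' event (e.g. the voice commands mentioned above), then package
    them with the time period for transmission to the vehicle."""
    def __init__(self):
        self.samples = []
        self.start_time = None
        self.end_time = None

    def on_start(self):
        self.samples.clear()
        self.start_time = time.time()
        self.end_time = None

    def on_sensor_sample(self, accel_xyz, gyro_xyz):
        # record only while a trajectory is being captured
        if self.start_time is not None and self.end_time is None:
            self.samples.append((time.time(), accel_xyz, gyro_xyz))

    def on_end(self):
        self.end_time = time.time()

    def build_trajectory_message(self):
        """Terminal trajectory information: the form (raw samples or a
        trajectory derived from them) plus the time period."""
        return {
            "samples": list(self.samples),
            "start": self.start_time,
            "end": self.end_time,
        }
```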

After step S70, the process proceeds to step S71, where it is determined whether or not the third predetermined time has elapsed, and if the third predetermined time has elapsed, the present process flow is ended, and if the third predetermined time has not elapsed, the process proceeds to step S72.

In step S72, it is monitored whether request information has been received from the vehicle 100. Specifically, the vehicle 100 may sometimes fail to accurately recognize the user's gesture motion; in that case, the vehicle 100 sends request information to the smartphone 200 to ask the user to make the gesture motion again (step S64 in fig. 5A). Upon receiving the request information, the process proceeds to step S73, and information is displayed on the display screen 140, as shown in fig. 8C, to prompt the user to make the gesture motion again with the hand holding the smartphone 200.

Thereafter, the process returns to step S71, and the determination of whether the third predetermined time has elapsed is repeated; once the third predetermined time has elapsed, the present processing flow ends.

In the above-described embodiment, when the vehicle 100 detects a gesture motion, the gesture motion is compared with the motion trajectory of the smartphone 200 held by the user 300 to determine whether the two match, and the control instruction corresponding to the gesture motion is executed only when they are determined to match. Therefore, even if another user near the user 300 makes a predetermined gesture motion corresponding to a control instruction, the vehicle 100 does not respond erroneously to that gesture motion. In other words, because execution of the control instruction is conditioned on the gesture motion matching the motion trajectory of the smartphone 200, the vehicle 100 can reliably recognize a valid gesture motion even without performing face recognition, and, from the viewpoint of human-computer interaction, effective interaction between the user 300 and the vehicle 100 can be achieved without face recognition.

In the above description, the vehicle 100 adjusts the orientation of the rotatable camera 20 so that the smartphone 200 or the user 300 falls within the detection range of the camera 20. As another embodiment, a plurality of cameras 20 with different orientations, that is, different detection ranges, may be arranged on the vehicle 100, and it may be determined from the position of the smartphone 200 which camera or cameras have the smartphone 200 within their detection range, so that the gesture motion of the user 300 is recognized using the detection information of the corresponding camera(s).

In the above description, the control command for driving the vehicle 100 to the user is indicated by the gesture operation performed by the user 300, but the present embodiment can be applied to other control commands such as a control command for unlocking a door, a control command for opening an air conditioner, and the like. In this case, the control for traveling to the position where the user 300 is present, the control for unlocking the door, and the control for opening the air conditioner, which are executed on the vehicle 100 side, are all examples of the first control in the present application.

In the above description, the user recognition function and the gesture motion recognition function of the vehicle 100 can be integrated into one unit, that is, a combined user and gesture motion recognition unit; as another embodiment, the user recognition unit and the gesture motion recognition unit may be provided separately.

In the above description, the user 300 performs a gesture motion indicating a corresponding control command with the hand holding the smartphone 200, and the vehicle 100 determines whether to execute the control command indicated by the gesture motion by matching the gesture motion against the motion trajectory of the smartphone 200. As a modification, however, after the user 300 makes a gesture motion with the hand holding the smartphone 200 for the first time, the vehicle 100 may determine whether that gesture motion matches the motion trajectory of the smartphone 200 and, upon determining that they match, authenticate the user 300 who made the gesture motion as a valid user; thereafter the vehicle continuously identifies the valid user by means of a visual tracking technique and executes the control commands indicated by that valid user's gesture motions. In this way, the user 300 only needs to hold the smartphone 200 and make the gesture motion with the hand holding it for the first gesture motion; subsequent air-separation operations no longer need to be performed while holding the smartphone 200, which improves the convenience of the air-separation operation.

The gesture motion performed by the user 300 holding the smartphone 200 corresponds to the first gesture motion in the present application, and accordingly, the gesture motion information on the first gesture motion obtained by the vehicle 100 from the camera 20 or the like corresponds to the first gesture motion information in the present application. After the user 300 is authenticated as a valid user, the gesture motion performed by the user corresponds to the second gesture motion in the present application, and accordingly, the gesture motion information on the second gesture motion obtained by the vehicle 100 from the camera 20 or the like corresponds to the second gesture motion information in the present application.

In the above description, the smartphone 200 is described as an example of the mobile terminal, but the present application is not limited to this. Specifically, other mobile terminals that can detect their own motion trajectory and establish a communication connection with the vehicle 100 may be used instead of the smartphone 200, such as a wearable device (e.g., a smart watch) or a smart car key. In that case, the wearable device or smart car key incorporates sensors such as an acceleration sensor and a gyro sensor to detect its own motion trajectory, and a communication module such as Bluetooth so that it can communicate with the vehicle 100 and transmit its motion trajectory information to the vehicle 100.

As is apparent from the above description, the present embodiment provides a human-computer interaction method for realizing interaction between a user and a vehicle. It also provides the vehicle control device 10 related to this method, the vehicle control method executed by the vehicle control device 10 (fig. 5A), an on-vehicle computer device serving as the vehicle control device 10, a computer-readable storage medium provided in the computer device, and a computer program stored in the computer-readable storage medium which, when executed by a processor, functions as the vehicle control device 10 and executes the method flow shown in fig. 5A. Since the vehicle control device is also an example of the human-computer interaction device, the embodiment also provides a human-computer interaction device. In addition, the processing shown in figs. 5A and 5B can also be regarded as gesture recognition processing for determining whether a gesture motion recognized by the optical sensor is a valid gesture motion of the user, so the embodiment can also be said to provide a gesture recognition method and a gesture recognition apparatus.

[ example 2 ]

The following describes example 2 of the present application.

The embodiment relates to a method for calling a vehicle through gesture actions of a user.

Specifically, in this embodiment, referring to fig. 9, the user 301 operates taxi-hailing software on the smartphone 201 to reserve an unmanned taxi (Robotaxi) through the cloud server 400. The smartphone 201 sends its own location information to the cloud server 400 through the communication network; the cloud server 400 selects the vehicle 101 as the unmanned taxi after scheduling processing and sends the location information of the smartphone 201 to the vehicle 101 through the communication network, and the vehicle 101 travels toward the user 301 according to that location information. Upon reaching the vicinity of the user 301 (e.g., within 100 meters or some tens of meters), the vehicle 101 may wish to know the precise location of the user 301 in order to provide a more considerate service, such as stopping exactly beside the user 301. However, the location transmitted by the smartphone 201 may have an offset, so the vehicle 101 cannot obtain the accurate location of the user 301 from the location transmitted by the smartphone 201.

For this reason, in the present embodiment, the vehicle 101 sends a message to the smartphone 201 of the user 301 through the cloud server 400, requesting the user 301 to perform a predetermined or arbitrary gesture motion with the hand holding the smartphone 201. The user 301 then makes a gesture motion with the hand holding the smartphone 201. At this time, on the one hand, the smartphone 201 obtains, by detection, terminal trajectory information indicating its own motion trajectory and sends it to the cloud server 400; on the other hand, the vehicle 101 obtains, through the detection information of the onboard camera or the like, gesture motion information indicating the gesture motion of the user 301 and sends it to the cloud server 400. The cloud server 400 then compares the motion trajectory information received from the smartphone 201 with the gesture motion information received from the vehicle 101, determines whether the two match, and sends the determination result to the vehicle 101. When the determination result is "match", the vehicle 101 confirms the user 301 as the target passenger, continuously tracks the user 301 by a visual tracking technique based on an optical sensor (a camera, a millimeter wave radar, or the like), and travels toward the user 301 by means of the automatic driving function, so that it can, for example, stop at the position of the user 301 and provide a considerate service. In this case, the meaning of the predetermined or arbitrary gesture motion of the user 301 may be understood as "please authenticate me as a valid user", and accordingly the vehicle 101 performs, in response to the gesture motion, the control of authenticating the user 301 as a valid user, which is an example of the first control in the present application.

This embodiment will be described in more detail with reference to fig. 10 to 18 and the like.

First, a structure related to the vehicle 101 will be described with reference to fig. 10.

The structure of the vehicle 101 shown in fig. 10 is different from the structure of the vehicle 100 shown in fig. 2 mainly in that the gesture matching module 12, the terminal ID authentication module 14, the terminal trajectory acquisition module 15, and the instruction recognition module 16 in the vehicle 100 are not provided, and the matching process result acquisition module 18 for acquiring the matching process result from the cloud server 400 is provided. The other structures are the same as those of vehicle 100, and the same reference numerals are given to the same structures, and detailed description thereof is omitted.

In this embodiment, the vehicle 101 does not perform matching processing for determining whether the gesture motion matches the terminal trajectory, the matching processing is performed by the cloud server 400, and the cloud server 400 performs the matching processing and then sends information indicating a matching processing result to the vehicle 101.

The related structure of the cloud server 400 is briefly described below with reference to fig. 11.

As shown in fig. 11, the cloud server 400 is a computer having a processor and a memory, and the memory stores program instructions, and the program instructions, when executed by the processor, perform the functions of corresponding functional modules, which include at least a gesture action obtaining module 411, a terminal ID authenticating module 414, a terminal track obtaining module 415, a gesture matching module 412, and a matching processing result outputting module 418. In addition, the cloud server 400 typically further includes a wireless communication unit (not shown) that can wirelessly communicate with the vehicle 101 and the smartphone 201.

The gesture motion acquisition module 411 is configured to acquire gesture motion information from the vehicle 101 through the wireless communication unit, where the gesture motion information is acquired by the vehicle 101 through a sensor such as a camera mounted on the vehicle.

The terminal ID authentication module 414 is configured to authenticate the ID information of the mobile terminal; when the ID information of the smartphone 201 is received, since the smartphone 201 is a terminal registered with the taxi-hailing software, the ID of the smartphone 201 is authenticated as valid.

The terminal track acquiring module 415 is configured to acquire, through the wireless communication unit, terminal track information, which is obtained by the smart phone 201 according to an acceleration sensor and/or a gyro sensor of the smart phone 201 and represents a motion track of the smart phone 201, from the mobile terminal, i.e., the smart phone 201, whose ID is authenticated to be valid.

The gesture matching module 412 is configured to compare the gesture motion information acquired by the gesture motion acquisition module 411 with the terminal trajectory information acquired by the terminal trajectory acquisition module 415, and to determine whether the two match. Specifically, the gesture matching module 412 has a form similarity determination unit 412a and a time consistency determination unit 412b.

The form similarity determination unit 412a is configured to determine whether the form of the gesture motion is similar to the form of the motion trajectory of the mobile terminal; for example, when the degree of similarity is greater than a predetermined similarity threshold, the two are determined to be similar. The form similarity determination unit 412a may compare the motion trajectory and the gesture motion with a preset template to determine whether the form of the gesture motion is similar to the form of the motion trajectory of the mobile terminal. Alternatively, the matching determination may be made by a trained trajectory matching model. The trajectory matching model may be obtained by training a CNN (Convolutional Neural Network) model or an MLP (Multi-Layer Perceptron) model, using as samples the motion trajectory of the intelligent terminal collected when a user performs a predetermined gesture motion with the hand holding the intelligent terminal, together with the gesture motion of the user collected by the camera.
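As a non-limiting illustration of the trained trajectory matching model mentioned above, the following sketch shows one possible way such a model could be set up, assuming both the terminal trajectory and the camera-derived gesture trajectory are resampled to a fixed number of 2-D points; the module name, layer sizes, and point count are illustrative assumptions and not part of the original description.

```python
# Minimal sketch of a learned trajectory-matching model (assumption: both the
# terminal trajectory and the camera-derived gesture trajectory are resampled
# to N 2-D points and flattened). Module and layer sizes are illustrative.
import torch
import torch.nn as nn

N_POINTS = 32  # resampled points per trajectory (assumed)

class TrajectoryMatchMLP(nn.Module):
    def __init__(self, n_points: int = N_POINTS):
        super().__init__()
        # Input: flattened terminal trajectory + flattened gesture trajectory.
        self.net = nn.Sequential(
            nn.Linear(2 * 2 * n_points, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # match score (logit)
        )

    def forward(self, terminal_traj: torch.Tensor, gesture_traj: torch.Tensor) -> torch.Tensor:
        x = torch.cat([terminal_traj.flatten(1), gesture_traj.flatten(1)], dim=1)
        return torch.sigmoid(self.net(x))  # probability that the pair matches

# Usage sketch: one trajectory pair, each of shape (batch, N_POINTS, 2).
model = TrajectoryMatchMLP()
p_match = model(torch.randn(1, N_POINTS, 2), torch.randn(1, N_POINTS, 2))
```

In practice such a model would first be trained on the collected trajectory/gesture sample pairs described above before being used for the matching determination.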

The time consistency determination unit 412b is configured to determine whether the time of the gesture motion is consistent with the time of the motion trajectory; for example, when the degree of time consistency (temporal overlap) between the two is equal to or greater than a certain consistency threshold, it is determined that the time of the gesture motion is consistent with the time of the motion trajectory.

In this embodiment, when the determination result of the form similarity determination unit 412a is "similar" and the determination result of the time consistency determination unit 412b is "consistent", the gesture matching module 412 determines that the gesture motion information matches the motion trajectory information.
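The following is a minimal sketch, under assumed thresholds and an assumed similarity measure, of how the two determinations described above (form similarity and time consistency) could be combined into a single match decision; the function names and threshold values are illustrative, not defined by this embodiment.

```python
# Minimal sketch of the two checks performed by the matching module: a shape
# similarity score between the normalized gesture and terminal trajectories,
# and a time-overlap check between their recording intervals.
# Thresholds and the similarity measure are assumptions, not from the original.
import numpy as np

def _normalize(traj: np.ndarray) -> np.ndarray:
    """Center the trajectory and scale it to unit size."""
    traj = traj - traj.mean(axis=0)
    scale = np.linalg.norm(traj, axis=1).max()
    return traj / scale if scale > 0 else traj

def _resample(traj: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a (k, 2) trajectory to n points by linear interpolation."""
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, i]) for i in range(traj.shape[1])], axis=1)

def shape_similarity(gesture: np.ndarray, terminal: np.ndarray) -> float:
    g = _normalize(_resample(gesture))
    t = _normalize(_resample(terminal))
    return 1.0 / (1.0 + np.mean(np.linalg.norm(g - t, axis=1)))  # in (0, 1]

def time_consistency(gesture_span: tuple, terminal_span: tuple) -> float:
    """Ratio of the overlap to the shorter of the two time spans."""
    start = max(gesture_span[0], terminal_span[0])
    end = min(gesture_span[1], terminal_span[1])
    overlap = max(0.0, end - start)
    shorter = min(gesture_span[1] - gesture_span[0], terminal_span[1] - terminal_span[0])
    return overlap / shorter if shorter > 0 else 0.0

def is_match(gesture, g_span, terminal, t_span,
             sim_thresh: float = 0.7, time_thresh: float = 0.8) -> bool:
    return (shape_similarity(gesture, terminal) >= sim_thresh
            and time_consistency(g_span, t_span) >= time_thresh)
```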

The matching processing result output module 418 is configured to output, through the wireless communication unit, matching determination result information indicating the determination result of the gesture matching module 412 to the target vehicle, i.e., the vehicle 101.

In this embodiment, the gesture matching process for determining whether the gesture motion information and the motion trajectory information match is performed by the cloud server 400, so that the processing load on the vehicle side can be reduced; moreover, since the processing capability of the cloud server 400 is stronger than that of the vehicle, the processing speed can be increased.

Fig. 12 is an explanatory diagram of an interactive process in which a user reserves a taxi through a smartphone. A flow of reservation of a taxi by the user 301, that is, so-called "taxi taking" will be described with reference to fig. 12.

As shown in fig. 12, in step S101, the user 301 sends a car use request to the cloud server 400 through the smartphone 201, and sends the location information and ID information of the smartphone 201 to the cloud server 400.

In step S102, the cloud server 400 performs identity and/or authority authentication according to the ID information of the smartphone 201, and performs scheduling processing for selecting an appropriate vehicle from a plurality of vehicles, for example, selecting the vehicle 101, after the authentication is successful.

In step S103, the cloud server 400 transmits scheduling information to the selected vehicle 101.

In step S104, the vehicle 101 performs self-check of its own condition after receiving the scheduling command.

In step S105, when there is no problem in the self-test, the vehicle 101 sends feedback information that the vehicle is normal to the cloud server 400.

In step S106, after receiving the feedback information from the vehicle 101 indicating that the vehicle is normal, the cloud server 400 sends a vehicle-arrangement success message to the smartphone 201, together with the information of the vehicle 101 (such as the license plate number).

In parallel with step S106, in step S107, the cloud server 400 transmits user information represented by the terminal position information and the terminal ID information to the vehicle 101.

In step S108, the vehicle 101 activates the automatic driving function after receiving the user information transmitted from the cloud server 400, and automatically travels to a place near the boarding location within a predetermined range (for example, 100 meters or tens of meters) from the terminal position according to the terminal position information.

When the vehicle 101 travels near the boarding point, for example, as shown in fig. 13, a crowd 330 exists beside the lane 500 where the vehicle 101 travels, and the crowd 330 includes the user 301, at this time, the vehicle 101 requests the user 301 to interact therewith as shown in fig. 15 in order to obtain the accurate position of the user 301 or to identify which one of the crowd 330 is the user 301.

Specifically, as shown in fig. 15, in step S110, when the vehicle 101 determines from the terminal position information of the smartphone 201 that it has traveled to within a predetermined range from the user 301, that is, has reached the vicinity of the boarding point, the "user and specific boarding location identification function" is activated. Specifically, for example, if the camera 20 is not yet turned on at this time, the camera 20 is turned on, and it is determined from the terminal position information of the smartphone 201 whether the orientation of the camera 20 needs to be adjusted, so that the user 301 can be recognized by the camera 20 mounted in the vehicle. The "predetermined range" may be, for example, a range of 100 meters or several tens of meters from the position of the smartphone 201, and may be set according to the detection range of a sensor such as the onboard camera 20. After step S110, the process proceeds to step S111.
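As a small illustration of the "within a predetermined range" condition described above, the following sketch checks whether the vehicle is within, say, 100 meters of the terminal position before activating the identification function; the haversine formula and the threshold value are assumptions for illustration.

```python
# Minimal sketch of the "within a predetermined range" check (e.g. 100 m) that
# triggers the user and specific boarding location identification function.
# The haversine formula and the 100 m threshold are illustrative assumptions.
import math

def distance_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance between two WGS-84 points, in meters."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_activate_recognition(vehicle_pos, terminal_pos, range_m: float = 100.0) -> bool:
    # vehicle_pos and terminal_pos are (latitude, longitude) tuples.
    return distance_m(*vehicle_pos, *terminal_pos) <= range_m
```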

In step S111, the vehicle 101 transmits information indicating that "the user and specific boarding location identification function is activated" to the cloud server 400.

In step S120, when receiving the message, the cloud server 400 sends a message to the smartphone 201 to notify the user 301: vehicle 101 has arrived near the boarding location and has activated the "user and specific boarding location identification function".

In step S130, when the smartphone 201 receives the message sent by the cloud server 400, it informs the user 301, for example by displaying a prompt message on the display screen or playing a voice through the speaker, that: the vehicle 101 has arrived near the boarding location and has activated the "user and specific boarding location identification function".

After seeing the prompt information on the display screen or hearing the voice played by the speaker, the user 301 makes a gesture motion, such as waving his hand, in the direction toward the vehicle 101 with the hand holding the smartphone 201.

At this time, in step S150, the vehicle 101 can obtain an image of the environment around the user 301 by the camera 20 mounted thereon, for example, as shown in fig. 14. Fig. 14 shows an image of the environment around the user 301 including the user 301 captured by the camera 20, and for convenience of explanation, only the user 301 and one other user 302 are shown in fig. 14 with respect to the crowd 330 in fig. 13. At this time, the vehicle 101 can detect the user 301 and the gesture motion made by the user from the environment image captured by the camera 20 (or may be combined with detection information of other sensors such as a millimeter wave sensor). However, as shown in fig. 14, another user 302 who wants to take a taxi is present near the user 301, and when the other user 302 sees the vehicle 101 as a taxi, the other user 302 does not know that the vehicle 101 is reserved by the user 301, and therefore also intends to call the vehicle 101 by waving his hand. At this time, since the vehicle 101 detects both the gesture motion of the user 301 and the gesture motion of the other user 302, the vehicle 101 cannot accurately recognize the user 301 only from the gesture motion information.

For this reason, in the present embodiment, in step S170, the vehicle 101 transmits the detected gesture motion information about the user 301 and the detected gesture motion information about the other user 302 to the cloud server 400.

On the other hand, when the user 301 performs a gesture motion with the hand holding the smartphone 201, the smartphone 201 detects and obtains terminal trajectory information indicating its own motion trajectory using the acceleration sensor and/or the gyro sensor provided therein.

Then, in step S140, the smartphone 201 sends the obtained terminal track information to the cloud server 400.
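For illustration only, the following sketch shows one way the smartphone side could turn its acceleration samples into terminal trajectory information and package it for upload to the cloud server; the double-integration approach, field names, and payload format are assumptions, and a practical implementation would also use the gyroscope and apply drift correction.

```python
# Minimal sketch of turning accelerometer samples into terminal trajectory
# information and packaging it for upload. Double integration and the payload
# fields are illustrative assumptions, not defined by this embodiment.
import json
import numpy as np

def accel_to_trajectory(accel: np.ndarray, dt: float) -> np.ndarray:
    """Integrate (k, 3) acceleration samples twice into a rough position trace."""
    velocity = np.cumsum(accel * dt, axis=0)
    position = np.cumsum(velocity * dt, axis=0)
    return position

def build_payload(terminal_id: str, accel: np.ndarray, t_start: float, dt: float) -> str:
    traj = accel_to_trajectory(accel, dt)
    return json.dumps({
        "terminal_id": terminal_id,   # corresponds to the terminal ID information
        "t_start": t_start,           # start time of the recording
        "dt": dt,                     # sampling interval in seconds
        "trajectory": traj.tolist(),  # sent to the cloud server as terminal trajectory info
    })
```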

In step S180, the cloud server 400 compares the received gesture information with the terminal trajectory information, and determines whether the received gesture information and the terminal trajectory information are matched. The specific determination method may be the same as in the above-described embodiment (refer to fig. 5B), and a detailed description thereof is omitted here.

In the scenario shown in fig. 14, there are two pieces of gesture information, that is, the gesture information about the user 301 and the gesture information about the other user 302, and since the smartphone 201 moves along with the hand (or arm) of the user 301, the gesture information about the user 301 and the terminal track information of the smartphone 201 are actually matched (the similarity of the form and the consistency of the time are good), so the cloud server 400 determines that the gesture information about the user 301 matches the terminal track information, and determines that the gesture information about the other user 302 does not match the terminal track information.

In step S190, after completing the gesture matching process, the cloud server 400 sends information indicating a result of the gesture matching process to the vehicle 101.

In step S196, the vehicle 101 receives the gesture matching processing result transmitted from the cloud server 400 and authenticates the user 301 as a valid user according to the result. Here, "authenticating the user 301 as a valid user" means that the "user 301" appearing in the information obtained by a sensor such as the camera 20 is authenticated as a valid user, or that the information on the user 301 obtained by a sensor such as the camera 20 is authenticated as valid user information. Thereafter, the vehicle 101 continuously recognizes the user 301 by a visual tracking technique based on the detection information of a sensor such as the camera 20 and, based on this, travels toward the user 301 using the automatic driving function; alternatively, the vehicle 101 accurately recognizes the position where the user 301 is located based on the detection information of a sensor such as the camera 20 and, based on this, travels to the user 301 using the automatic driving function.

With the present embodiment, for example, as shown in fig. 14, when two or more users, that is, a user 301 and another user 302, exist at a boarding location and both of the two or more users make gesture motions, by comparing gesture motion information with motion trajectory information of the smartphone 201 and determining whether the two are matched, the user 301 can be accurately identified as an effective user or a specific boarding location of the user 301 can be accurately identified, that is, with the present embodiment, effective human-computer interaction can be performed even without performing face recognition.

In addition, the scenario shown in fig. 14 is merely an example. There may be a case where there is no other user near the user 301; at this time, the vehicle 101 detects only one gesture motion and therefore sends gesture motion information about only that one gesture motion to the cloud server 400. There may also be a case where a plurality of other users near the user 301 make gesture motions; at this time, the vehicle 101 may send all of the detected gesture motion information to the cloud server 400.

The following describes the processing flow of the vehicle 101 side and the processing flow of the cloud server 400 side during interaction with reference to fig. 16 and 17, respectively, in order to describe the present embodiment in more detail.

First, a process flow on the vehicle 101 side will be described with reference to fig. 16.

As shown in fig. 16, when the vehicle 101 arrives near the boarding point, the camera 20 is activated and it is determined whether or not the orientation of the camera 20 needs to be adjusted in step S110, and if necessary, the orientation of the camera 20 is adjusted so that the detection range of the camera 20 covers the position of the user 301 and the user 301 can be detected well.

Thereafter, in step S111, the vehicle 101 transmits, to the cloud server 400, information indicating that "the user and specific boarding location identification function is activated".

Thereafter, in step S150, the vehicle 101 monitors whether or not a gesture motion is detected, and when a gesture motion is detected, the process proceeds to step S170.

In step S170, the vehicle 101 sends the obtained gesture motion information to the cloud server 400.

Then, in step S192, it is monitored whether or not the gesture matching processing result transmitted from the cloud server 400 is received, and when the gesture matching processing result is received, the process proceeds to step S193.

In step S193, it is determined whether the gesture matching processing result indicates that there is gesture motion information matching the terminal trajectory information. If such gesture motion information exists, the process proceeds to step S196. If it does not exist, it is determined that the gesture motion recognition for the user 301 has failed; at this time, since the vehicle 101 may already have traveled to a position close to the user 301, the user 301 is no longer required to continue making gesture motions, and the process is ended. In this case, the vehicle 101 can continue to travel toward the user 301 following the terminal position information.

On the other hand, when the determination result in step S193 indicates that there is matching gesture operation information, in step S196, the vehicle 101 travels to the position where the user 301 is present, based on the recognition of the user 301 by the sensor such as the camera 20.

The following describes a processing flow on the cloud server 400 side when the user 301 interacts with the vehicle 101, with reference to fig. 17.

As shown in fig. 17, in step S178, the cloud server 400 monitors whether gesture motion information is acquired from the vehicle 101 and terminal trajectory information is acquired from the smartphone 201, and when gesture motion information and terminal trajectory information are acquired, the process proceeds to step S180.

In step S180, it is determined whether the gesture motion information matches the terminal trajectory information, and the process of determining whether the gesture motion information matches the terminal trajectory information may be consistent with the above-described embodiment (refer to fig. 5B), and a detailed description thereof is omitted here.

When the gesture matching processing in step S180 is completed, the process proceeds to step S190, and the determination result is output. Here, as described in the above embodiment, when there are a plurality of gesture motions, at most one gesture motion can be matched with the terminal trajectory in the finally output determination result. For example, when a plurality of gesture motions are each determined to have a time consistency with the terminal trajectory that is equal to or greater than the predetermined consistency threshold and a form similarity that is equal to or greater than the predetermined similarity threshold, further processing is performed to determine which gesture motion has the highest time consistency or the highest form similarity with the terminal trajectory, and that gesture motion is determined as the final gesture motion that successfully matches the terminal trajectory.
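The selection of a single best-matching gesture among several candidates that clear both thresholds could look like the following sketch; the candidate fields, threshold values, and tie-breaking rule are illustrative assumptions.

```python
# Minimal sketch of the final selection step when several detected gestures
# clear both thresholds: keep only the single candidate with the best combined
# score. Field names and the scoring rule are assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GestureCandidate:
    user_id: str
    shape_similarity: float   # form similarity vs. the terminal trajectory
    time_consistency: float   # time consistency vs. the terminal trajectory

def select_matching_gesture(candidates: List[GestureCandidate],
                            sim_thresh: float = 0.7,
                            time_thresh: float = 0.8) -> Optional[GestureCandidate]:
    qualified = [c for c in candidates
                 if c.shape_similarity >= sim_thresh and c.time_consistency >= time_thresh]
    if not qualified:
        return None  # no gesture matches the terminal trajectory
    # At most one gesture is reported as matching: pick the best combined score.
    return max(qualified, key=lambda c: (c.time_consistency, c.shape_similarity))
```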

As described above, in the present embodiment, when the vehicle 101 travels near the boarding location, the vehicle 101 sends information to the smartphone 201 of the user 301 requesting the user 301 to perform a gesture motion, and after the user 301 learns the content of the information, the user 301 performs a gesture motion with the hand holding the smartphone 201. At this time, on the one hand, the vehicle 101 obtains the gesture motion information of the user 301 through detection by a sensor such as the camera 20 and sends the gesture motion information to the cloud server 400. On the other hand, as the smartphone 201 moves with the hand of the user 301, the smartphone 201 obtains terminal trajectory information, i.e., the motion trajectory information of the smartphone 201, through detection by its acceleration sensor and/or gyro sensor, and sends the obtained terminal trajectory information to the cloud server 400. When the cloud server 400 receives the gesture motion information and the terminal trajectory information, it determines whether they match and sends the matching processing result to the vehicle 101. The vehicle 101 authenticates the user 301 corresponding to the gesture motion information that matches the terminal trajectory information as a valid user, and then recognizes the position of the user 301 according to the detection information of sensors such as the camera 20, or performs visual tracking of the user 301 to continuously recognize the user 301, so that the vehicle 101 can travel to the precise position of the user 301.

In this way, according to the present embodiment, by comparing the gesture motion information with the terminal trajectory information, it is determined whether the gesture motion information is gesture motion information about a valid user, so that even if another person (e.g., another user 302 in fig. 14) near the user 301 performs a gesture motion when the user 301 performs a gesture motion, the vehicle 101 can accurately recognize the user 301 (and the gesture motion thereof) as a valid user (and a valid gesture motion), and from the viewpoint of human-computer interaction, effective human-computer interaction can be achieved without human face recognition.

In addition, in the present embodiment, the purpose of requiring the user 301 to make a gesture motion is to enable the vehicle 101 to authenticate the user as a valid user; therefore, the gesture motion made by the user 301 at this time is not limited and may be any motion, not necessarily a predetermined gesture motion. As a modification, the user 301 may instead be required to make a predetermined gesture motion, for example drawing a circle with the hand. Compared with this modification, however, leaving the manner of the gesture motion unrestricted avoids making the user feel troubled, and also avoids, for example, the embarrassment of the user 301 having to make a predetermined gesture motion that may look like a "strange motion" to other people in the crowd.

Some modifications of the present embodiment are described below.

Fig. 12 provides a car-booking mode in which the user books an unmanned taxi, and other modes may be adopted to book the taxi.

For example, another car-booking mode is provided in fig. 18. The scenario envisioned by the mode provided in fig. 12 is that the user 301 is far from the vehicle 101, e.g., not within the coverage of the on-board network of the vehicle 101. The scenario assumed in the mode provided in fig. 18 is that the user 301 is relatively close to the vehicle 101 and can communicate with the vehicle 101 directly. Such a scenario is, for example, a shared-vehicle scenario in which the vehicle use request is issued to the vehicle by scanning a code on the vehicle.

Specifically, referring to fig. 18, in step S101A, the smartphone 201 of the user 301 establishes a communication connection with the vehicle 101, and transmits the vehicle use request information to the vehicle 101 together with the terminal ID information of the smartphone 201.

In step S102A, vehicle 101 performs vehicle condition self-check after receiving the vehicle use request information.

In step S103A, when no problem is found in the self-test, the vehicle 101 sends information that the vehicle condition is normal to the cloud server 400, and sends the terminal ID information of the smartphone 201 to the cloud server 400, requesting authentication of the user identity/authority according to the terminal ID information.

In step S104A, the cloud server 400 authenticates the user identity/authority according to the received terminal ID information, and then sends the authentication result to the vehicle 101.

In step S105, when the vehicle 101 receives the authentication result information transmitted from the cloud server 400 and the authentication result indicates that the terminal ID information is authenticated, the vehicle 101 turns on the camera 20, activates the user identification function, and transmits information indicating that "the user identification function is activated" to the smartphone 201. The subsequent processing is the same as steps S150 to S196 in fig. 16.

In addition, although an unmanned taxi has been taken as an example in the above description, the present application may also be applied to a taxi or ride-hailing car with a driver, or to an autonomous taxi with a safety officer sitting in the car. In this case, after the control device of the vehicle authenticates the user 301 as a valid user, an image or video of the surrounding environment of the user 301 that includes the user 301 may be displayed on a display of the vehicle (for example, the display of a navigation device), with the image of the user 301 highlighted to prompt the driver that the user 301 is the valid user. The highlighting may be performed by, for example, surrounding the user 301 with a rectangular frame, or by displaying the entire image of the user 301 or a part of it (for example, the image of the head) in an enlarged manner.
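As an illustration of the highlighting described above, the following sketch draws a rectangular frame around the authenticated user in the camera image before it is shown on the in-vehicle display; the bounding-box source and labels are assumptions.

```python
# Minimal sketch of the highlighting described above: draw a rectangular frame
# around the authenticated user in the camera image shown on the in-vehicle
# display. The bounding-box source and window labels are assumptions.
import cv2

def highlight_valid_user(frame, bbox):
    """bbox = (x, y, w, h) of the authenticated user in the camera frame."""
    x, y, w, h = bbox
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 3)
    cv2.putText(frame, "valid user", (x, max(0, y - 10)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    return frame

# Usage sketch: frame = highlight_valid_user(camera_frame, tracker_bbox)
# before handing the image to the display of the navigation device.
```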

In addition, although the above description has been made with the cloud server 400 determining whether or not the gesture operation information matches the terminal trajectory information, as a modification, the determination may be made by a vehicle control device of the vehicle 101, in which case the vehicle 101 receives the terminal trajectory information from the cloud server 400, compares the gesture operation information detected by itself with the terminal trajectory information received from the cloud server 400, and determines whether or not the gesture operation information and the terminal trajectory information match.

This modification will be briefly described with reference to fig. 19. The contents shown in fig. 19 before step S160 are the same as those in fig. 15, and therefore their description is omitted.

As shown in fig. 19, in step S162, when the cloud server receives the terminal motion trajectory information sent by the smartphone, the cloud server sends the terminal motion trajectory information to the vehicle, and the vehicle compares it with the gesture motion information obtained in step S150 to determine whether the gesture motion information matches the terminal motion trajectory information. Thereafter, in step S196, the user 301 corresponding to the gesture motion information that matches the terminal trajectory information is authenticated as a valid user, and the vehicle is controlled to travel toward the valid user.

In the present embodiment, "control of driving the vehicle 101 to the position of the user 301 using the visual tracking technique in the vicinity of the boarding location" and "control of highlighting the video of the user 301 on the image screen" are all examples of the first control in the present application.

[ example 3 ]

The embodiment relates to a method for interaction between a user and a food delivery robot.

Recently, more and more restaurants use food delivery robots to deliver food. In this case, the food delivery robot needs the position of a specific table to be set in advance in order to deliver food accurately, which means that the customer cannot freely choose a seat, or cannot change seats after choosing one.

In addition, sometimes a plurality of customers at the same dining table order dishes separately, or some dining tables are long tables (often found in fast food restaurants); in such cases, the robot cannot accurately distinguish which customer is the correct food delivery target and cannot provide more refined service (for example, facing the customer at the best angle). If customers are required to wave their hands to indicate their intentions, there may be multiple people waving their hands at peak meal times, which can confuse the robot.

To this end, the present embodiment provides a method for a user to interact with a meal delivery robot. An application scenario of the present embodiment is explained below with reference to fig. 20.

As shown in fig. 20, at the restaurant side, as network elements, a web server 401, an electronic number plate 202, a number plate switchboard 210, and a meal delivery robot 102 are included.

The web server 401 is a server of a restaurant local area network, and in this embodiment, the web server 401 also constitutes a computer device for meal delivery allocation, for example, by automatically scheduling or receiving an operation instruction of an operator, to arrange the corresponding meal delivery robot 102 to deliver a meal.

Each electronic number plate 202 bears a different number identifier that can be observed by customers. In addition, the electronic number plate 202 serving as a mobile terminal is provided with a chip and a communication module such as Bluetooth, through which it can be communicatively connected with the number plate switchboard 210, and it is further provided with an acceleration sensor and/or a gyroscope sensor, so that the motion trajectory of the electronic number plate 202 can be detected to obtain motion trajectory information.

The number plate switchboard 210 corresponds to a plurality of electronic number plates 202 and is communicatively connected with them through Bluetooth or the like; in addition, the number plate switchboard 210 is communicatively connected with the web server 401 through a wired connection, Wi-Fi, or the like.

The meal delivery robot 102 has a built-in control unit and has a traveling system (a drive motor, wheels, and the like), and the meal delivery robot 102 has a head 102a, and a camera (not shown) capable of detecting the surrounding environment is provided in the head 102a. Thus, under the control of the control unit, the food delivery robot 102 can autonomously walk or autonomously move according to the detection information obtained by detecting the surrounding environment by the camera. In addition, a detachable dinner plate 102b is arranged on the meal delivery robot 102, and food can be placed on the dinner plate 102b. In the present embodiment, the food delivery robot 102 further includes a speaker (not shown), and can emit a voice through the speaker.

The meal delivery robot 102 also has a built-in communication means that can establish a communication connection with the web server 401 by means of Wi-Fi or the like, and can receive a scheduling command from the web server 401 to deliver a meal to a customer.

In the scenario shown in fig. 20, a long table 402 is provided in the restaurant, and three customers, namely customer 303, customer 304, and customer 305, are at the long table 402, wherein customer 303 is the valid user in this embodiment. After ordering at the ordering counter (not shown), the customer 303 is given an electronic number plate 202 by the restaurant service staff, after which the customer 303 sits at any free position at the long table 402.

When the restaurant side completes preparation of the meal ordered by the customer 303, the food delivery robot 102 is dispatched to deliver the meal to the customer 303. At this time, the food delivery robot 102 does not know which customer is the customer 303 or where the customer is located. Therefore, the food delivery robot 102 emits a voice through the speaker, the content of which is, for example, "Customer XX, the food delivery robot is looking for you; please wave the hand holding your number plate", to inform the customer 303 to make a gesture motion with the hand holding the electronic number plate 202. After hearing the voice, the customer 303 performs a gesture motion with the hand holding the number plate, facing the direction of the food delivery robot 102. At this time, the food delivery robot 102 recognizes the customer 303 and the gesture motion through the camera. On the other hand, the electronic number plate 202 obtains terminal motion trajectory information indicating its own motion trajectory through the acceleration sensor and/or gyro sensor and transmits it to the web server 401 through the number plate switchboard 210, and the web server 401 transmits the terminal trajectory information to the food delivery robot 102. The food delivery robot 102 determines whether the gesture motion information indicating the gesture motion matches the terminal trajectory information; the specific determination method may be the same as that in embodiment 1 (see fig. 5B), and a detailed description thereof is omitted here. When it is determined that the gesture motion information matches the terminal trajectory information, the food delivery robot 102 authenticates the customer 303 corresponding to the gesture motion information as the food delivery target, i.e., the valid user, autonomously moves to the position of the customer by a visual tracking technique, and can accurately deliver the food to the side of the customer in a posture facing the customer, thereby realizing accurate food delivery and refined service.

With the present embodiment, the food delivery robot 102 determines whether a customer is the valid user by determining whether the gesture motion information matches the terminal trajectory information about the electronic number plate 202, so that the valid user can be accurately identified. In the identification process, the seating position of the customer is not restricted, and the customer can freely choose where to sit; moreover, even if another customer (for example, the customer 304) near the customer being sought by the food delivery robot 102 (for example, the customer 303) also performs a gesture motion, the customer being sought can still be identified as the valid user. From the perspective of human-computer interaction, effective human-computer interaction can be carried out without face recognition.

In the above description, the food delivery robot 102 utters a voice requesting the customer 303 to perform a hand-waving gesture motion; however, the customer 303 may perform another gesture motion instead, and in this case as well, the customer 303 can be authenticated as a valid user based on the gesture motion information and the terminal trajectory information.

In addition, in the above-described embodiment, the judgment of whether the gesture motion information matches the motion trajectory information is performed by the meal delivery robot, however, it may be performed by the web server 401 in the restaurant.

In the present embodiment, the electronic number plate is described as an example of the mobile terminal, however, the mobile terminal may be a smartphone held by a customer, and in this case, the smartphone needs to establish a communication connection with the web server 401 and transmit terminal trajectory information indicating a movement trajectory of the smartphone to the web server 401.

In addition, the present embodiment can be applied not only to restaurants but also to warehouses and the like having mobile robots.

[ example 4 ]

As an example of a human-computer interaction method, the present embodiment relates to a method for a user to interact with a smart television.

With the gradual popularization of smart homes, many smart televisions provide an air-separating (contactless) control function, and a user can control the smart television through gesture motions. However, when multiple people are watching television, if several of them make gesture motions at the same time, the smart television cannot determine which gesture motion should be responded to, and it is therefore difficult to achieve effective human-computer interaction.

Therefore, the embodiment provides a method for enabling effective human-computer interaction between a user and the smart television.

Specifically, as shown in fig. 21, the smart television 103 and a remote controller 203 associated with the smart television 103 are installed in a room, and a camera 103a is externally connected to the smart television 103 through, for example, a USB (Universal Serial Bus) interface. The smart television 103 can detect a user in the room and the gesture motion thereof through the camera 103a. It is to be understood that the smart television 103 may instead have a built-in camera, which can likewise detect a user in the room and the gesture motion thereof; in either case, the external camera 103a or the built-in camera is regarded as part of the smart television 103 and subordinate to it. The remote controller 203 is provided with a communication module such as Bluetooth to enable wireless communication with the smart television 103, and is provided with a chip (processor and memory) and an acceleration sensor/gyro sensor, so that it can detect its own motion trajectory and transmit terminal trajectory information indicating the motion trajectory of the remote controller 203 to the smart television 103.

In addition, a table 403 is placed in the room, and a viewer 306 and a viewer 307 are seated around the table 403, wherein the viewer 306, who holds the remote controller 203 in hand, is the valid user in the present embodiment.

When the air-separating control function of the smart television 103 is to be used, the viewer 306 operates the remote controller 203 to activate it (this may be triggered by pressing a dedicated key on the remote controller, by performing a predetermined operation such as long-pressing a certain key, or by selecting it on an operation interface of the television display screen via the remote controller). At this time, the smart television 103 turns on the camera 103a. Then, the viewer 306 makes an arbitrary gesture motion with the hand holding the remote controller 203, and the smart television 103 recognizes the viewer 306 and the gesture motion through the camera 103a to obtain gesture motion information. Meanwhile, the remote controller 203 detects its own motion trajectory with the acceleration sensor/gyro sensor and sends motion trajectory information indicating the trajectory to the smart television 103. The smart television 103 compares the gesture motion information with the motion trajectory information received from the remote controller and determines whether the two match. When they match, the smart television 103 authenticates the viewer 306 corresponding to the gesture motion information as a valid user, i.e., a user with air-separating control authority, and thereafter, using a visual tracking technique, responds only to the gesture motions and/or other air-separating operations (for example, eye operations) of the viewer 306, treating air-separating operations made by other users, for example the viewer 307, as invalid.
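One possible way for the smart television to respond only to the authenticated viewer afterwards is sketched below: the track ID assigned to the viewer 306 by the visual tracker is remembered, and gestures attributed to any other track are ignored; the class and method names are illustrative assumptions.

```python
# Minimal sketch of gating air-separating operations to the authenticated
# viewer: remember the viewer's track ID from the visual tracker and ignore
# gestures attributed to any other person. Names are assumptions.
from typing import Optional

class AirGestureGate:
    def __init__(self):
        self.valid_track_id: Optional[int] = None

    def authenticate(self, track_id: int) -> None:
        """Called once the gesture/trajectory match has identified the viewer."""
        self.valid_track_id = track_id

    def on_gesture(self, track_id: int, command: str) -> Optional[str]:
        """Return the command only if it comes from the authenticated viewer."""
        if self.valid_track_id is not None and track_id == self.valid_track_id:
            return command
        return None  # gestures from other viewers are treated as invalid
```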

The smart television of the embodiment can be used in a home, and can also be used in an office meeting scenario, in which case slide presentations and the like can be shown on the smart television.

With the present embodiment, the smart television 103 determines whether a viewer is the valid user by determining whether the gesture motion information matches the terminal trajectory information about the remote controller 203, so that the valid user can be accurately identified. In the identification process, even if there are a plurality of viewers, the intended viewer (for example, the viewer 306 described above) can be correctly identified as the valid user. From the perspective of human-computer interaction, effective human-computer interaction can be carried out without face recognition.

In the above description, the remote controller 203 has been described as an example of the mobile terminal, however, other mobile terminals such as a smartphone may be used.

In the above description of the embodiments, the vehicle, the smart television, and the meal delivery robot are taken as examples of the target device, however, the human-computer interaction technology of the present application may also be applied to other various scenarios in which the air-separation control is performed through gesture actions or in which identity authentication is required.

[ further contents ]

In addition to the embodiments of the present application described above, the present specification also discloses the following.

Different from the technical concept of embodiment 2, in an unmanned-taxi scenario, in order to accurately find the position of the user at the boarding place, the vehicle may instead request position sharing with the smartphone of the user.

In addition, a method for the vehicle to interact with the user in the vicinity of the boarding place, which differs from the technical concept of embodiment 2, is provided in fig. 22. Specifically, in some cases a large deviation occurs in the positioning information of the smartphone due to weak satellite signals. In this case, in order to find the user or a specific boarding location, the user may take a picture of the surrounding environment with the smartphone and send the captured image to the vehicle through the server, and the vehicle may identify the user or the specific boarding location based on the received image information in combination with a high-precision map and the image captured by the vehicle-mounted camera.

Further description will be made with reference to fig. 22.

As shown in fig. 22, in step S410, the vehicle reaches the vicinity of the user position. Thereafter, in step S420, the vehicle sends a request to the server to obtain information on the specific boarding location. In step S430, upon receiving the request, the server sends to the smartphone information indicating that the vehicle has arrived near the user's location and requesting the user to photograph the surrounding environment. After obtaining the information, the user takes a picture with the smartphone. Then, in step S440, the smartphone captures and obtains image data. In step S450, the smartphone transmits the image data to the server. In step S460, the server transmits the received image data to the vehicle. In step S470, the vehicle-mounted camera captures an image in the position and direction of the user, and the vehicle processes the captured image together with the image received from the server to determine the user or the specific boarding location. The user is then identified based on high-precision map navigation or by using visual tracking techniques, and the vehicle travels toward the user using the automatic driving function.
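As one possible (not prescribed) way to relate the user's photo to the vehicle's own view in step S470, the following sketch counts ORB feature matches between the two images; a high match count suggests the two views overlap near the user. The feature detector, matcher, and threshold are illustrative choices.

```python
# Minimal sketch of matching the photo taken by the user's smartphone against a
# frame from the on-board camera. ORB feature matching is an illustrative
# choice, not the method fixed by the text.
import cv2

def count_feature_matches(user_photo_path: str, onboard_frame) -> int:
    user_img = cv2.imread(user_photo_path, cv2.IMREAD_GRAYSCALE)
    frame_gray = cv2.cvtColor(onboard_frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(user_img, None)
    k2, d2 = orb.detectAndCompute(frame_gray, None)
    if d1 is None or d2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    good = [m for m in matches if m.distance < 40]  # illustrative threshold
    return len(good)  # a high count suggests the views overlap near the user
```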

A method of remotely controlling a vehicle, which is different from the technical concept of embodiment 1, is provided in fig. 23. Specifically, in step S501, the smartphone detects that it is pointed at the vehicle. Thereafter, in step S502, a prompt message is displayed to inform the user that a connection has been established with the vehicle. After the user learns the content of the prompt message, the user speaks a voice instruction to the smartphone, the voice instruction being used to control the vehicle to make a corresponding response. Thereafter, in step S503, the smartphone receives the voice instruction. Then, in step S504, the smartphone transmits the voice instruction to the vehicle. In step S505, the vehicle receives the voice instruction, recognizes it, and performs voiceprint verification to authenticate the identity and authority of the user and to identify the control instruction corresponding to the voice instruction. In step S506, when the verification is successful, the control instruction corresponding to the voice instruction is executed.
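A minimal sketch of the vehicle-side handling in steps S505 and S506 is given below; the voiceprint-verification and command-recognition helpers are hypothetical placeholders standing in for real back-ends, not APIs defined by this description.

```python
# Minimal sketch of the vehicle-side handling of a relayed voice instruction
# (steps S505-S506): voiceprint verification, then command recognition and
# execution. The two helper functions are hypothetical placeholders.
from typing import Optional

def verify_voiceprint(audio: bytes, registered_voiceprint: bytes) -> bool:
    # Placeholder: compare the speaker's voiceprint with the registered one.
    return bool(audio) and bool(registered_voiceprint)

def recognize_command(audio: bytes) -> Optional[str]:
    # Placeholder: map the recognized utterance to a control instruction.
    return "unlock_doors" if audio else None

def handle_voice_instruction(audio: bytes, registered_voiceprint: bytes, execute) -> bool:
    if not verify_voiceprint(audio, registered_voiceprint):  # identity/authority check
        return False
    command = recognize_command(audio)
    if command is None:
        return False
    execute(command)  # corresponds to executing the control instruction in step S506
    return True
```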
