Robot interaction method

Document No.: 681853    Publication date: 2021-04-30

Note: This technology, "Robot interaction method" (一种机器人交互的方法), was designed and created by 刘笑彤, 李小山, 陈伯行 and 黄海军 on 2020-12-29. Its main content is as follows: The invention provides a robot interaction method, comprising: S1, receiving a service request triggered by a user, wherein the service request comprises an interactive service identifier selected by the user; S2, identifying the user's interaction intention and the user's authority through a recognition module mounted on the robot; S3, activating, according to the results of the interaction intention identification and the user authority identification, the information acquisition mode corresponding to the interactive service identifier selected by the user; S4, acquiring the user's interaction information through the activated information acquisition mode; S5, processing the interaction information to complete the service request. By mounting a face recognition device on the robot, it is judged whether the user has an intention to interact with the robot, so that whether the user wants to interact is judged actively; this can improve the robot's degree of intelligence in interacting with people and thus improve the interaction efficiency.

1. A method of robot interaction, comprising:

S1, receiving a service request triggered by a user, wherein the service request comprises an interactive service identifier selected by the user;

S2, identifying the user's interaction intention and the user's authority through a recognition module mounted on the robot;

S3, activating, according to the results of the interaction intention identification and the user authority identification, the information acquisition mode corresponding to the interactive service identifier selected by the user;

S4, acquiring the user's interaction information through the activated information acquisition mode;

S5, processing the interaction information to complete the service request.

2. The method of robot interaction of claim 1, wherein the information acquisition mode activated in correspondence with the interactive service identifier comprises at least one of a keyboard acquisition mode, a voice acquisition mode, and a shooting acquisition mode.

3. The method of robot interaction of claim 2, wherein the recognition module comprises password recognition corresponding to the keyboard acquisition mode, voice recognition corresponding to the voice acquisition mode, and face recognition corresponding to the shooting acquisition mode.

4. The method of robot interaction of claim 3, wherein the recognition module is a face recognition module corresponding to the shooting acquisition mode;

correspondingly, the method comprises:

determining the orientation of the user and the photographed area of the user's face according to facial feature points;

judging, according to the orientation of the user and the photographed area of the user's face, whether the user has an intention to interact with the robot;

if the interaction intention identification result is yes, carrying out user authority identification;

and if the user authority identification result is a pass, the robot interacts with the user.

5. The method of robot interaction of claim 4, further comprising: if the interaction intention identification result is no, prohibiting the robot from interacting with the user;

or, if the user authority identification result is a failure, prohibiting the robot from interacting with the user.

6. The method of robot interaction of claim 4, wherein the judging, according to the orientation of the user and the photographed area of the user's face, whether the user has an intention to interact with the robot comprises:

if the user faces the robot and the photographed area of the user's face is larger than or equal to an area threshold, judging that the user has an intention to interact with the robot;

if the user does not face the robot, or the photographed area of the user's face is smaller than the area threshold, judging that the user does not have an intention to interact with the robot.

7. The method of robotic interaction of claim 6, further comprising:

monitoring the orientation of the user in real time during the interaction of the robot and the user;

and if the monitored duration for which the user does not face the robot is longer than a specified duration, controlling the robot to stop interacting with the user.

8. The method of robot interaction as claimed in any one of claims 1 to 7, wherein the robot interacts with the user by voice.

9. The method of robot interaction of claim 8, wherein controlling the robot to actively interact with the user comprises:

controlling the robot to output voice guidance information to introduce the functions of the robot to the user; and/or

controlling the robot to output an interactive page to the user so that the user can interact with the robot.

Technical Field

The invention relates to the technical field of robots, in particular to a robot interaction method.

Background

With the development of robot technology, more and more robots enter the lives of people to replace or assist the work of people, such as a sweeping robot, a greeting robot, a companion robot and the like.

In the prior art, when a user starts to interact with a robot, the user is required to actively issue an instruction, and only then can the robot act according to that instruction. For example, a user may initiate interaction with the robot by pressing a physical button provided on the outside of the robot or by touching the robot's own screen display interface. This mode of human-computer interaction, in which the user must actively issue instructions, leaves the robot in a passive role, and both its degree of intelligence and its interaction efficiency are low.

Disclosure of Invention

In order to solve the above technical problems, the present invention provides a robot interaction method for improving the intelligence of the robot when interacting with people, so as to improve the interaction efficiency.

The invention adopts the following specific embodiment: a method of robot interaction, comprising: S1, receiving a service request triggered by a user, wherein the service request comprises an interactive service identifier selected by the user;

S2, identifying the user's interaction intention and the user's authority through a recognition module mounted on the robot;

S3, activating, according to the results of the interaction intention identification and the user authority identification, the information acquisition mode corresponding to the interactive service identifier selected by the user;

S4, acquiring the user's interaction information through the activated information acquisition mode;

S5, processing the interaction information to complete the service request.

Preferably, the recognition module includes password recognition corresponding to the keyboard acquisition mode, voice recognition corresponding to the voice acquisition mode, and face recognition corresponding to the shooting acquisition mode.

Preferably, the recognition module is a face recognition module corresponding to the shooting acquisition mode;

correspondingly, the method comprises:

determining the orientation of the user and the photographed area of the user's face according to facial feature points;

judging, according to the orientation of the user and the photographed area of the user's face, whether the user has an intention to interact with the robot;

if the interaction intention identification result is yes, carrying out user authority identification;

and if the user authority identification result is a pass, the robot interacts with the user.

The method of robot interaction as described above preferably further comprises: if the interaction intention identification result is no, prohibiting the robot from interacting with the user;

or,

if the user authority identification result is a failure, prohibiting the robot from interacting with the user.

In the method of robot interaction as described above, preferably, the judging, according to the orientation of the user and the photographed area of the user's face, whether the user has an intention to interact with the robot comprises:

if the user faces the robot and the photographed area of the user's face is larger than or equal to an area threshold, judging that the user has an intention to interact with the robot;

if the user does not face the robot, or the photographed area of the user's face is smaller than the area threshold, judging that the user does not have an intention to interact with the robot.

The method of robot interaction as described above, preferably, the method further comprises:

monitoring the orientation of the user in real time during the interaction of the robot and the user;

and if the monitored duration for which the user does not face the robot is longer than a specified duration, controlling the robot to stop interacting with the user.

The robot interaction method as described above, preferably, the robot interacts with the user by voice.

The method of robot interaction as described above preferably further comprises controlling the robot to actively interact with the user, which includes:

controlling the robot to output voice guidance information to introduce the functions of the robot to the user; and/or

controlling the robot to output an interactive page to the user so that the user can interact with the robot.

The beneficial technical effects are as follows: a face recognition device is mounted on the robot, and whether a user has an intention to interact with the robot is judged from the photographed face image of the user within a specified range. The robot can therefore actively judge, before the user initiates interaction, whether the user wants to interact, and is controlled to actively interact with the user when the judgment result is yes. The method provided by the embodiments can improve the robot's degree of intelligence in interacting with people and thus improve the interaction efficiency.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.

Wherein:

FIG. 1 is a flow diagram of a method of robot interaction provided in an embodiment of the present application.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In the description of the present invention, the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience in describing the present invention, do not require that the present invention be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. The terms "connected" and "coupled" used herein should be interpreted broadly and may include, for example, a fixed connection or a detachable connection; the connection may be direct or indirect through intermediate members, and the specific meanings of the above terms will be understood by those skilled in the art as appropriate.

A method of robot interaction, comprising: S1, receiving a service request triggered by a user, wherein the service request comprises an interactive service identifier selected by the user;

S2, identifying the user's interaction intention and the user's authority through a recognition module mounted on the robot;

S3, activating, according to the results of the interaction intention identification and the user authority identification, the information acquisition mode corresponding to the interactive service identifier selected by the user;

S4, acquiring the user's interaction information through the activated information acquisition mode;

S5, processing the interaction information to complete the service request.

Based on the interactive functions provided by the robot, the user can trigger a corresponding service request by selecting a particular interactive service. The robot then activates the information acquisition mode corresponding to the selected interactive service identifier, according to a preset correspondence between interactive service identifiers and information acquisition modes, acquires the user's interaction information through the activated acquisition mode, and processes the acquired information to complete the service request. The user's interaction needs are thus met through the robot, and the ways in which the robot can be used are expanded.
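The preset correspondence between interactive service identifiers and information acquisition modes described above could be kept as a simple lookup table. The sketch below is illustrative only; the identifiers and mode names are hypothetical, not taken from the patent:

```python
# Hypothetical correspondence between interactive service identifiers and
# information acquisition modes; both sides of the table are illustrative.
SERVICE_TO_MODES = {
    "password_login": ["keyboard"],
    "voice_query": ["voice"],
    "face_checkin": ["shooting"],
    "guide": ["voice", "shooting"],
}

def activate_modes(service_id):
    """Return the acquisition modes to activate for a service request (step S3)."""
    modes = SERVICE_TO_MODES.get(service_id)
    if modes is None:
        raise ValueError(f"unknown interactive service identifier: {service_id}")
    return modes
```

In practice, activating a mode would additionally switch on the corresponding acquisition device (keyboard, microphone, or camera); the table only resolves which devices those are.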

The invention also has the following embodiment: the information acquisition mode activated in correspondence with the interactive service identifier comprises at least one of a keyboard acquisition mode, a voice acquisition mode, and a shooting acquisition mode.

Various acquisition modes can be preset in the robot, and an interaction intention can be signalled through a preset trigger, for example by calling the robot's name or issuing a preset instruction. The service functions that the robot can provide are shown to the user on a screen mounted on the robot, with each service function displayed as an icon. To prevent accidental operation, interaction intention recognition is set up so that the intention is confirmed before interaction proceeds.

The invention also has the following embodiment: the recognition module comprises password recognition corresponding to the keyboard acquisition mode, voice recognition corresponding to the voice acquisition mode, and face recognition corresponding to the shooting acquisition mode.

The present invention also has an embodiment in which, as in the robot interaction method above, the recognition module is preferably a face recognition module corresponding to the shooting acquisition mode;

correspondingly, the method comprises:

determining the orientation of the user and the photographed area of the user's face according to facial feature points;

judging, according to the orientation of the user and the photographed area of the user's face, whether the user has an intention to interact with the robot;

if the interaction intention identification result is yes, carrying out user authority identification;

and if the user authority identification result is a pass, the robot interacts with the user.

The method of robot interaction as described above preferably further comprises: if the interaction intention identification result is no, prohibiting the robot from interacting with the user;

or,

if the user authority identification result is a failure, prohibiting the robot from interacting with the user.

One or more cameras may be mounted on the head of the robot.

When one camera is installed, the camera can be controlled to rotate 360 degrees for shooting in order to shoot images in the specified range of the robot. When a plurality of cameras are installed, lenses of the plurality of cameras may be oriented in different directions to capture images in different directions.

The specified range of the robot may refer to a spherical region whose radius is a specified distance from the robot's center. Users within the specified range are more likely to have an intention to interact with the robot; having an intention to interact means that the user has not yet interacted with the robot but has the idea of doing so.

The present invention also has an embodiment in which the judging, according to the orientation of the user and the photographed area of the user's face, whether the user has an intention to interact with the robot comprises:

if the user faces the robot and the photographed area of the user's face is larger than or equal to an area threshold, judging that the user has an intention to interact with the robot;

if the user does not face the robot, or the photographed area of the user's face is smaller than the area threshold, judging that the user does not have an intention to interact with the robot.

The user's face image is further analyzed to determine whether the user has an intention to interact with the robot. The photographed face image is the user's face as seen from the robot's perspective. Generally, a user's facial appearance when wanting to interact with the robot differs from that when not wanting to interact. On this basis, whether the user has an intention to interact with the robot can be determined from the face image.
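The decision rule of this embodiment (the user faces the robot and the face area meets a threshold) can be sketched as follows; the default threshold value and the boolean `facing_robot` input are assumptions for illustration, since the patent does not fix concrete values:

```python
def has_interaction_intent(facing_robot, face_area, area_threshold=5000.0):
    """Judge interaction intent from orientation and photographed face area.

    facing_robot: True if the user's face is oriented toward the robot
                  (as determined upstream from facial feature points).
    face_area: area of the detected face region in the image, in pixels;
               a larger area generally means the user is closer.
    area_threshold: hypothetical pixel-area cutoff.
    """
    # Intent requires BOTH conditions; failing either one means no intent.
    return facing_robot and face_area >= area_threshold
```

If the result is True, user authority identification is carried out next; if False, the robot is prohibited from interacting with this user.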

If the judgment result is yes, that is, the user has an intention to interact with the robot, the robot is controlled to actively interact with the user. That is, before the user initiates interaction, the robot is controlled to interact actively, for example by traveling in the direction in which the user is located.

The present invention also has embodiments wherein the method further comprises:

monitoring the orientation of the user in real time during the interaction of the robot and the user;

and if the monitored duration that the user does not face the robot is longer than the specified duration, controlling the robot to stop interacting with the user.
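The real-time orientation monitoring above can be sketched as a small stateful check. Timestamps are supplied by the caller so any clock can drive the logic, and the five-second default for the specified duration is a hypothetical value:

```python
class OrientationMonitor:
    """Track how long the user has continuously not faced the robot (sketch)."""

    def __init__(self, specified_duration=5.0):
        self.specified_duration = specified_duration  # seconds (illustrative)
        self.away_since = None  # time the user first looked away, or None

    def update(self, facing_robot, now):
        """Feed one observation; return True if the robot should stop interacting."""
        if facing_robot:
            # User faces the robot again: reset the away timer.
            self.away_since = None
            return False
        if self.away_since is None:
            self.away_since = now
        return (now - self.away_since) > self.specified_duration
```

Each camera frame would call `update` once; only a continuous away period longer than the specified duration ends the interaction, so a brief glance aside does not.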

The method provided by the embodiment can improve the intelligence degree of the robot in the aspect of interaction with people so as to improve the interaction efficiency.

The invention also has an embodiment in which the robot interacts with the user by voice.

The invention also has the following embodiment: controlling the robot to actively interact with the user comprises: controlling the robot to output voice guidance information to introduce the functions of the robot to the user;

and/or

controlling the robot to output an interactive page to the user so that the user can interact with the robot.

When the user triggers a service request, the robot determines the information acquisition mode corresponding to the interactive service selected by the user according to the selected interactive service identifier and the correspondence, and activates that acquisition mode. Activation means enabling the corresponding information acquisition mode; generally, the activation operation can be realized by switching on and enabling the corresponding information acquisition device.

In some embodiments, the method further includes a step S6 of actively interacting with the user based on the robot's self-monitoring data.

For example, the robot's battery level is monitored by a battery monitoring unit. When the level falls below a preset value, the robot actively searches for a user; if no user can be found within the robot's field of view, the robot charges autonomously.

If a user can be found within the field of view, or the robot is already interacting with a user, the robot sends a charging request and charges autonomously after the user agrees.
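Under the assumption of a normalized battery level (0.0 to 1.0) and illustrative action names, the charging behaviour of step S6 might be sketched as:

```python
def charging_action(battery_level, user_in_view, preset_threshold=0.2):
    """Decide the robot's charging behaviour (step S6 sketch).

    battery_level: normalized charge, 0.0 (empty) to 1.0 (full) - an assumption.
    user_in_view: True if a user is visible or already interacting.
    Returns one of three hypothetical action names.
    """
    if battery_level >= preset_threshold:
        return "continue"            # battery sufficient, no action needed
    if user_in_view:
        return "request_charging"    # ask the user for consent before charging
    return "autonomous_charging"     # no user found: charge autonomously
```

A `request_charging` result would be followed by autonomous charging only once the user agrees, matching the description above.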

In some embodiments, the method further includes a step S7 in which the display module of the robot displays a two-dimensional code. After the user scans the code with a mobile terminal (e.g., a mobile phone or a tablet computer), the user can communicate with the robot through a specific APP, so that the robot can transmit the content or records of its interaction with the user through that APP. For example, if the user asks the robot about a certain restaurant's location, then after establishing communication with the robot via the two-dimensional code, the restaurant location and navigation route fed back by the robot are downloaded to the user's mobile phone.

Furthermore, the robot can judge the user's age from the user's face image and provide different interaction modes according to that age, for example by selecting different voices for playback. When the robot judges from the face image that the user is 0 to 10 years old, its feedback can be played in a child's voice; if the age is judged to be over 60, a more mature feedback voice can be played. Alternatively, voice packages of celebrities popular with different age groups can be prestored in the robot, and the feedback can automatically be played using the voice package of the celebrity popular in the age group judged from the user's face image.
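The age-dependent voice selection could be sketched as below; the age bands follow the examples in the text (0 to 10, and over 60), while the voice names are placeholder assumptions:

```python
def select_voice(estimated_age):
    """Pick a feedback voice from the age estimated from the face image.

    The 0-10 and over-60 bands come from the text's examples; the names
    returned here are illustrative placeholders, not real voice packages.
    """
    if estimated_age <= 10:
        return "child_voice"      # play feedback in a child's voice
    if estimated_age > 60:
        return "mature_voice"     # play a more mature feedback voice
    return "default_voice"        # text gives no rule for other ages
```

The celebrity-voice-package variant would replace the returned names with a lookup of the prestored package popular in the estimated age group.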

In addition, when face images of multiple users are detected in one picture, the robot can automatically give priority to interacting with the user whose face image area is larger (meaning that user is closer) and feed back that user's interaction content. When the face areas of the user images are close, so that the user with the larger face area cannot be determined, the robot can select the user to interact with according to the ages judged from the face images, for example preferentially interacting with the user judged to be younger. Alternatively, the head photos of the two users can be shown in the display module, the user can tap to designate which head portrait to interact with, and the robot then tracks the position of that user's head.
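A sketch of the multi-face selection logic: the area tolerance is a hypothetical parameter, and because the source's age tie-break is ambiguous, this sketch assumes the younger user is preferred when face areas are close:

```python
def select_user(faces, area_tolerance=0.05):
    """Choose which detected user to interact with (sketch).

    faces: list of (face_area, estimated_age) tuples, one per detected face.
    Prefers the largest face (closest user). If the two largest areas differ
    by no more than area_tolerance of the largest, falls back to the younger
    user - an assumption, since the source's tie-break wording is ambiguous.
    Returns the chosen (face_area, estimated_age) tuple, or None.
    """
    if not faces:
        return None
    ranked = sorted(faces, key=lambda f: f[0], reverse=True)
    if len(ranked) == 1:
        return ranked[0]
    largest, second = ranked[0], ranked[1]
    if largest[0] - second[0] > area_tolerance * largest[0]:
        return largest  # one face is clearly larger (closer)
    # Areas are too close to call: tie-break on estimated age.
    return min(ranked[:2], key=lambda f: f[1])
```

The interactive fallback described in the text (showing both head photos and letting a user tap one) would replace the age tie-break with an explicit UI choice.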

It should be noted that, in the present specification, each embodiment is described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.

The above embodiments are only used for illustrating the embodiments of the present application, and not for limiting the embodiments of the present application, and those skilled in the relevant art can make various changes and modifications without departing from the spirit and scope of the embodiments of the present application, so that all equivalent technical solutions also belong to the scope of the embodiments of the present application, and the scope of the embodiments of the present application should be defined by the claims.
