Man-machine interaction method and related equipment thereof


Note: this technology, a man-machine interaction method and related equipment thereof (一种人机交互方法及其相关设备), was designed and created by 朱翠玲, 刘丛刚, 李守毅, 杨训杰 and 张蔚 on 2021-09-10. Its main content is as follows: the application discloses a man-machine interaction method and related equipment thereof, wherein the method comprises: after an emotion releasing request triggered by a user is acquired, first determining an emotion representation object according to the emotion releasing content input by the user, so that the emotion representation object is used for bearing the user emotion carried by the emotion releasing content; and then displaying that the emotion representation object moves to an emotion accommodating position along a preset route, so that after the emotion representation object reaches the emotion accommodating position, the emotion representation object on the emotion accommodating position is cleared according to a preset clearing mode, so as to achieve the purpose of emotion release. In this way, the user can be assisted in releasing emotions and thus in reducing psychological stress, which helps reduce the possibility of the user developing a psychological disease.

1. A human-computer interaction method, characterized in that the method comprises:

after an emotion releasing request triggered by a user is acquired, determining an emotion representation object according to emotion releasing content input by the user;

displaying that the emotion representation object moves to an emotion accommodating position along a preset route;

and after the emotion representation object reaches the emotion accommodating position, removing the emotion representation object on the emotion accommodating position according to a preset removing mode.

2. The method of claim 1, wherein the determining of the emotion representation object comprises:

extracting a release topic of the user from the emotion releasing content;

and determining the emotion representation object according to the emotion releasing content and display object description information corresponding to the release topic.

3. The method according to claim 2, wherein the determining the emotion representation object according to the emotion releasing content and the display object description information corresponding to the release topic comprises:

performing emotion analysis on the emotion releasing content to obtain a user emotion and emotion characterization data of the user emotion;

determining an object display form of the user emotion according to the user emotion and the display object description information corresponding to the release topic;

and generating the emotion representation object according to the object display form and the emotion characterization data.

4. The method of claim 3, wherein the generating the emotion representation object according to the object display form and the emotion characterization data comprises:

combining the emotion characterization data with the object display form according to a first combination mode to obtain the emotion representation object;

alternatively,

the generating the emotion representation object according to the object display form and the emotion characterization data comprises:

performing mimicry processing on the emotion characterization data to obtain an emotion mimicry image; and combining the emotion mimicry image with the object display form according to a second combination mode to obtain the emotion representation object.

5. The method of claim 3, wherein the determining of the user emotion comprises:

and performing emotion analysis on the emotion release content and the user face image corresponding to the emotion release content to obtain the user emotion.

6. The method according to any one of claims 2-5, wherein the emotion releasing content comprises releasing voice data, and the determining process of the release topic comprises:

performing voice recognition processing on the releasing voice data to obtain a voice recognition text; and performing release topic analysis on the voice recognition text by using a preset topic analysis model to obtain the release topic of the user;

alternatively,

the emotion releasing content comprises releasing text data, and the determining process of the release topic comprises:

analyzing the releasing text data by using a preset topic analysis model to obtain the release topic of the user;

alternatively,

the emotion releasing content comprises releasing video data, and the determining process of the release topic comprises:

extracting a video recognition text from the releasing video data; and performing release topic analysis on the video recognition text by using a preset topic analysis model to obtain the release topic of the user.

7. The method of claim 1, further comprising:

determining a moving route of the emotion representation object according to the emotion releasing content;

the displaying the emotion characterization object to move to an emotion accommodating location along a preset route, comprising:

displaying the movement of the emotion-characterizing object along the movement route to an emotion-accommodating location.

8. The method according to claim 7, wherein the emotion releasing content includes releasing voice data, and the determination process of the moving route includes:

performing voice fluctuation analysis on the releasing voice data to obtain a voice fluctuation trend graph;

and determining the moving route of the emotion representation object according to the voice fluctuation trend graph.

9. The method of claim 1, further comprising:

determining the moving speed of the emotion representation object according to the input speed of the emotion release content;

the displaying the emotion characterization object to move to an emotion accommodating location along a preset route, comprising:

and displaying that the emotion representation object moves to an emotion accommodating position along a preset route at the moving speed.

10. The method of claim 1, wherein prior to the emotion-characterizing object reaching the emotion-accommodating location, the method further comprises:

stopping moving the emotional characterization object in response to a first operation triggered by the user for the emotional characterization object.

11. The method of claim 10, wherein after the ceasing to move the emotion-characterizing object, the method further comprises:

updating the emotional characterization object in response to a second operation triggered by the user for the emotional characterization object; when a preset recovery condition is reached, displaying that the updated emotion representation object continues to move to an emotion accommodating position along a preset route;

alternatively,

after the ceasing to move the emotion-characterizing object, the method further comprises:

deleting the emotional characterization object in response to a third operation triggered by the user for the emotional characterization object.

12. The method of claim 1, further comprising:

when the preset feedback condition is determined to be reached, displaying emotional feedback content; wherein the emotional feedback content is determined according to the emotion releasing content.

13. The method of claim 1, further comprising:

when the preset alarm condition is determined to be reached, sending emotion alarm information to a guardian of the user; wherein the emotion warning information is determined according to the emotion release content.

14. The method of claim 13, further comprising:

when the preset alarm condition is determined to be reached, displaying alarm inquiry information; the alarm inquiry information is used for inquiring the user whether to give an alarm to a guardian of the user;

receiving a first feedback result of the user for the alarm inquiry information;

the sending of the emotional alert information to the guardian of the user includes:

and when the first feedback result is determined to meet the preset sending condition, sending the emotion warning information to a guardian of the user.

15. The method of claim 14, further comprising:

when the first feedback result is determined not to meet the preset sending condition, displaying emotion soothing content; wherein the emotion soothing content is determined according to the emotion releasing content.

16. The method of claim 1, further comprising:

displaying clearing inquiry information in response to a release ending request triggered by the user; wherein the clearing inquiry information is used for inquiring the user whether to delete the emotion releasing content;

receiving a second feedback result of the user for the clearing inquiry information;

and deleting the emotion releasing content when the second feedback result meets a preset clearing condition.

17. The method of claim 1, further comprising:

displaying a mimicry container on the emotion-containing location;

the displaying the emotion characterization object to move to an emotion accommodating location along a preset route, comprising:

displaying the emotion representation object to move to the mimicry container along a preset route;

after the emotion representation object reaches the emotion accommodating position, removing the emotion representation object on the emotion accommodating position according to a preset removing mode, wherein the removing process comprises the following steps:

after the emotion representation object enters the mimicry container, removing the emotion representation object in the mimicry container according to a preset removing mode.

18. The method of claim 17, further comprising:

and updating the preset clearing mode in response to a fourth operation triggered by the user for the mimicry container.

19. A human-computer interaction device, comprising:

the first determining unit is used for determining an emotion representation object according to emotion releasing content input by a user after an emotion releasing request triggered by the user is acquired;

the first display unit is used for displaying that the emotion representation object moves to an emotion accommodating position along a preset route;

and the object removing unit is used for removing the emotion representation object on the emotion containing position according to a preset removing mode after the emotion representation object reaches the emotion containing position.

20. An apparatus, characterized in that the apparatus comprises: a processor, a memory, a system bus;

the processor and the memory are connected through the system bus;

the memory is for storing one or more programs, the one or more programs comprising instructions, which when executed by the processor, cause the processor to perform the method of any of claims 1 to 18.

21. A computer-readable storage medium having stored therein instructions which, when run on a terminal device, cause the terminal device to perform the method of any one of claims 1 to 18.

22. A computer program product, characterized in that it, when run on a terminal device, causes the terminal device to perform the method of any one of claims 1 to 18.

Technical Field

The present application relates to the field of computer technologies, and in particular, to a human-computer interaction method and related devices.

Background

With the rapid development of human society, more and more attention is being paid to human mental health. However, some people (e.g., children) cannot control their emotions well and are not good at relieving stress on their own, so it is difficult for them to eliminate their negative emotions through self-regulation; as a result, they tend to accumulate a large amount of negative emotion, which makes them vulnerable to psychological diseases.

Disclosure of Invention

The embodiment of the application mainly aims to provide a human-computer interaction method and related equipment, which can assist a user in reducing psychological stress, and therefore, the possibility of psychological diseases of the user can be reduced.

The embodiment of the application provides a man-machine interaction method, which comprises the following steps:

after an emotion releasing request triggered by a user is acquired, determining an emotion representation object according to emotion releasing content input by the user;

displaying that the emotion representation object moves to an emotion accommodating position along a preset route;

and after the emotion representation object reaches the emotion accommodating position, removing the emotion representation object on the emotion accommodating position according to a preset removing mode.

In one possible embodiment, the determining of the emotion characterizing object comprises:

extracting a release topic of the user from the emotion releasing content;

and determining the emotion representation object according to the emotion releasing content and display object description information corresponding to the release topic.

In a possible implementation manner, the determining the emotion representation object according to the emotion releasing content and the display object description information corresponding to the release topic includes:

performing emotion analysis on the emotion releasing content to obtain a user emotion and emotion characterization data of the user emotion;

determining an object display form of the user emotion according to the user emotion and the display object description information corresponding to the release topic;

and generating the emotion characterization object according to the object display form and the emotion characterization data.

In one possible embodiment, the generating the emotion characterization object according to the object display form and the emotion characterization data includes:

combining the emotion characterization data with the object display form according to a first combination mode to obtain the emotion characterization object;

alternatively,

the generating the emotion characterization object according to the object display form and the emotion characterization data includes:

performing mimicry processing on the emotion representation data to obtain an emotion mimicry image; and combining the emotion mimicry image with the object display form according to a second combination mode to obtain the emotion representation object.

In a possible embodiment, the determining of the user emotion includes:

and performing emotion analysis on the emotion release content and the user face image corresponding to the emotion release content to obtain the user emotion.

In one possible embodiment, the emotion releasing content includes releasing voice data, and the determining process of the release topic includes:

performing voice recognition processing on the releasing voice data to obtain a voice recognition text; and performing release topic analysis on the voice recognition text by using a preset topic analysis model to obtain the release topic of the user;

alternatively,

the emotion releasing content includes releasing text data, and the determining process of the release topic includes:

analyzing the releasing text data by using a preset topic analysis model to obtain the release topic of the user;

alternatively,

the emotion releasing content includes releasing video data, and the determining process of the release topic includes:

extracting a video recognition text from the releasing video data; and performing release topic analysis on the video recognition text by using a preset topic analysis model to obtain the release topic of the user.

In one possible embodiment, the method further comprises:

determining a moving route of the emotion representation object according to the emotion releasing content;

the displaying the emotion characterization object to move to an emotion accommodating location along a preset route, comprising:

displaying the movement of the emotion-characterizing object along the movement route to an emotion-accommodating location.

In one possible embodiment, the emotion releasing content includes releasing voice data, and the determination process of the movement route includes:

performing voice fluctuation analysis on the releasing voice data to obtain a voice fluctuation trend graph;

and determining the moving route of the emotion representation object according to the voice fluctuation trend graph.

In one possible embodiment, the method further comprises:

determining the moving speed of the emotion representation object according to the input speed of the emotion release content;

the displaying the emotion characterization object to move to an emotion accommodating location along a preset route, comprising:

and displaying that the emotion representation object moves to an emotion accommodating position along a preset route at the moving speed.

In one possible embodiment, before the emotion-characterizing object reaches the emotion-accommodating location, the method further comprises:

stopping moving the emotional characterization object in response to a first operation triggered by the user for the emotional characterization object.

In one possible embodiment, after said stopping moving the emotion-characterizing object, the method further comprises:

updating the emotional characterization object in response to a second operation triggered by the user for the emotional characterization object; when a preset recovery condition is reached, displaying that the updated emotion representation object continues to move to an emotion accommodating position along a preset route;

alternatively,

after the ceasing to move the emotion-characterizing object, the method further comprises:

deleting the emotional characterization object in response to a third operation triggered by the user for the emotional characterization object.

In one possible embodiment, the method further comprises:

when the preset feedback condition is determined to be reached, displaying emotional feedback content; wherein the emotional feedback content is determined according to the emotion releasing content.

In one possible embodiment, the method further comprises:

when the preset alarm condition is determined to be reached, sending emotion alarm information to a guardian of the user; wherein the emotion warning information is determined according to the emotion release content.

In one possible embodiment, the method further comprises:

when the preset alarm condition is determined to be reached, displaying alarm inquiry information; the alarm inquiry information is used for inquiring the user whether to give an alarm to a guardian of the user;

receiving a first feedback result of the user for the alarm inquiry information;

the sending of the emotional alert information to the guardian of the user includes:

and when the first feedback result is determined to meet the preset sending condition, sending the emotion warning information to a guardian of the user.

In one possible embodiment, the method further comprises:

when the first feedback result is determined not to meet the preset sending condition, displaying emotion soothing content; wherein the emotion soothing content is determined according to the emotion releasing content.

In one possible embodiment, the method further comprises:

displaying clearing inquiry information in response to a release ending request triggered by the user; wherein the clearing inquiry information is used for inquiring the user whether to delete the emotion releasing content;

receiving a second feedback result of the user for the clearing inquiry information;

and deleting the emotion releasing content when the second feedback result meets a preset clearing condition.

In one possible embodiment, the method further comprises:

displaying a mimicry container on the emotion-containing location;

the displaying the emotion characterization object to move to an emotion accommodating location along a preset route, comprising:

displaying the emotion representation object to move to the mimicry container along a preset route;

after the emotion representation object reaches the emotion accommodating position, removing the emotion representation object on the emotion accommodating position according to a preset removing mode, wherein the removing process comprises the following steps:

after the emotion representation object enters the mimicry container, removing the emotion representation object in the mimicry container according to a preset removing mode.

In one possible embodiment, the method further comprises:

and updating the preset clearing mode in response to a fourth operation triggered by the user for the mimicry container.

An embodiment of the present application further provides a human-computer interaction device, including:

the first determining unit is used for determining an emotion representation object according to emotion releasing content input by a user after an emotion releasing request triggered by the user is acquired;

the first display unit is used for displaying that the emotion representation object moves to an emotion accommodating position along a preset route;

and the object removing unit is used for removing the emotion representation object on the emotion containing position according to a preset removing mode after the emotion representation object reaches the emotion containing position.

An embodiment of the present application further provides an apparatus, including: a processor, a memory, a system bus;

the processor and the memory are connected through the system bus;

the memory is used for storing one or more programs, and the one or more programs comprise instructions which, when executed by the processor, cause the processor to execute any implementation of the human-computer interaction method provided by the embodiment of the application.

The embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is enabled to execute any implementation of the human-computer interaction method provided in the embodiment of the present application.

The embodiment of the present application further provides a computer program product, and when the computer program product runs on a terminal device, the terminal device is enabled to execute any implementation manner of the human-computer interaction method provided by the embodiment of the present application.

Based on the technical scheme, the method has the following beneficial effects:

in the technical solution provided by the application, after a user triggers an emotion releasing request on the human-computer interaction device, the human-computer interaction device may determine an emotion representation object according to the emotion releasing content (for example, releasing voice data or releasing text data entered in real time) input by the user, so that the emotion representation object is used for bearing the user emotion carried by the emotion releasing content; and then display that the emotion representation object moves to the emotion accommodating position along a preset route, so that after the emotion representation object reaches the emotion accommodating position, the emotion representation object on the emotion accommodating position is cleared according to a preset clearing mode, thereby achieving the purpose of emotion release. Therefore, the technical solution provided by the application can assist the user in releasing emotions, thereby assisting the user in reducing psychological stress, and thus reducing the possibility of the user developing psychological diseases.

Drawings

In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.

Fig. 1 is a flowchart of a human-computer interaction method according to an embodiment of the present disclosure;

fig. 2 is a schematic structural diagram of a human-computer interaction device according to an embodiment of the present disclosure.

Detailed Description

In research on human mental health, the inventor found that some people (such as children) cannot control their own emotions well and are not good at relieving stress on their own, so it is difficult for them to eliminate their negative emotions through self-regulation; as a result, they tend to accumulate a large amount of negative emotion and are thus prone to psychological diseases. It can be seen that, in order to reduce the likelihood of psychological illness occurring in these people, a listener who reliably keeps secrets can be provided, so that these people can pour out to that listener the thoughts or secrets they do not want to tell others, thereby releasing their emotions and relieving psychological stress.

Based on the above findings, in order to solve the technical problems in the background art section, an embodiment of the present application provides a human-computer interaction method, including: for the human-computer interaction device, after an emotion releasing request triggered by a user is acquired, an emotion representation object is determined according to the emotion releasing content input by the user, so that the emotion representation object is used for bearing the user emotion carried by the emotion releasing content; and then the emotion representation object is displayed moving to the emotion accommodating position along a preset route, so that after the emotion representation object reaches the emotion accommodating position, the emotion representation object on the emotion accommodating position is cleared according to a preset clearing mode to achieve the purpose of emotion release. In this way, the user can be assisted in releasing emotions, thereby reducing psychological stress and, in turn, the possibility of the user developing psychological diseases.

The embodiment of the present application does not limit the implementation of the "human-computer interaction device", which may be any kind of intelligent robot, or any kind of terminal device capable of human-computer interaction (for example, a smart phone, a computer, a Personal Digital Assistant (PDA), a tablet computer, or the like).

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Method embodiment one

Referring to fig. 1, the figure is a flowchart of a human-computer interaction method provided in an embodiment of the present application.

The man-machine interaction method applied to the man-machine interaction equipment provided by the embodiment of the application comprises the following steps of S1-S3:

S1: After the emotion releasing request triggered by the user is acquired, the emotion representation object is determined according to the emotion releasing content input by the user.

Wherein, the user refers to the user of the man-machine interaction equipment.

The "emotion releasing request" is used for requesting the start of an emotion releasing assistance process for the user; and the "emotion releasing request" is triggered by the user on the human-computer interaction device. In addition, the embodiment of the application does not limit the triggering manner of the emotion releasing request; for example, the emotion releasing request may be triggered by triggering an instruction for opening an emotion releasing system on the human-computer interaction device. For another example, the "emotion releasing request" may be triggered by clicking a release start button on the human-computer interaction device.

"Emotion releasing content" refers to multimedia data input by the user for carrying the user's emotion; moreover, the embodiment of the present application does not limit the "emotion releasing content", which may include, for example, at least one of releasing voice data, releasing text data, and releasing video data. "Releasing voice data" refers to emotion releasing speech recorded by the user with the aid of a sound pickup provided on the human-computer interaction device. "Releasing text data" refers to emotion releasing text entered by the user by means of a text entry device (e.g., a keyboard) installed on the human-computer interaction device. "Releasing video data" refers to an emotion releasing video recorded by the user by means of a video recording device (e.g., a camera plus a microphone) installed on the human-computer interaction device.

In addition, in order to improve the real-time performance of the human-computer interaction, the above-mentioned "emotion releasing content" may include at least one of releasing voice data entered in real time, releasing text data entered in real time, and releasing video data entered in real time.

The "emotion representation object" is used for bearing the emotional state that the user has when inputting the "emotion releasing content". Furthermore, the embodiment of the present application does not limit the "emotion representation object"; for example, it may be a mimicry object carrying emotion description data (for example, a mottled blackboard, a broken book, a broken ferule, a broken podium, or the like). Wherein, the "emotion description data" is used to describe the emotional state that the user has when the above-mentioned "emotion releasing content" is input; and the embodiment of the present application does not limit the "emotion description data". For example, it may be text description data such as "sad" or "wronged"; or an emotion mimicry image such as a crying mimicry figure or a mimicry figure with a wronged expression; or anthropomorphic text description data (for example, the word "sad" after anthropomorphic processing, used to express a sad emotional state). It should be noted that the above-mentioned anthropomorphized "sad" has some features similar to those of a human (for example, a head, hands, feet, a trunk, and the like), and the user can still recognize the word "sad" in the anthropomorphized "sad".

In addition, the number of the "emotion representation objects" is not limited in the embodiments of the present application. For example, if the amount of text information carried by the "emotion releasing content" is relatively small, the "emotion releasing content" is likely to express mainly one user emotion, so the number of the "emotion representation objects" may be 1; if the amount of text information carried by the "emotion releasing content" is large, the "emotion releasing content" is likely to express a plurality of user emotions, so the number of the "emotion representation objects" may be N. Wherein, N is a positive integer, and N represents the number of user emotions expressed by the "emotion releasing content". It should be noted that, for the content related to the "user emotion", reference may be made to the description of the "user emotion" in step 21 below.

In addition, the embodiment of the present application does not limit the determination process of the "emotion representation object"; for example, it may specifically include steps 11 to 12:

Step 11: And extracting the release topic of the user from the emotion releasing content.

Wherein, the "release topic" is used to indicate the matter described by the user by means of the above-mentioned "emotion releasing content" (e.g., telling on someone, blaming someone, complaining about someone, missing someone, liking someone, confessing something, etc.); moreover, the embodiments of the present application do not limit the "release topic", which may include, for example, at least one of a person (e.g., a teacher, a parent, a friend, a classmate, etc.), an event, a work, and an object (e.g., a blackboard, a desk, etc.).

In addition, the embodiment of the present application does not limit the extraction process of the "release topic" (that is, the implementation manner of step 11); for ease of understanding, the following description is made with reference to four examples.

Example 1, when the "emotion releasing content" includes releasing voice data, step 11 may specifically include: firstly, performing voice recognition processing on the releasing voice data to obtain a voice recognition text; and then performing release topic analysis on the voice recognition text by using a preset topic analysis model to obtain the release topic of the user.

The topic analysis model is used for performing release topic analysis on the input data of the topic analysis model; and the "topic analysis model" may be constructed in advance according to sample text data and the actual release topic of the sample text data. It should be noted that the embodiments of the present application do not limit the "topic analysis model", which may be, for example, any machine learning model.

Example 2, when the "emotion releasing content" includes releasing text data, step 11 may specifically include: analyzing the releasing text data by using a preset topic analysis model to obtain the release topic of the user.

Example 3, when the "emotion releasing content" includes releasing video data, step 11 may specifically include: extracting a video recognition text from the releasing video data; and performing release topic analysis on the video recognition text by using a preset topic analysis model to obtain the release topic of the user.

The video recognition text is used for representing the text information carried by the releasing video data; and the embodiment of the present application does not limit the extraction process of the "video recognition text". For example, the extraction process may specifically include: firstly, performing voice recognition processing on the audio data in the releasing video data to obtain an audio text, and performing image text recognition processing on the image data in the releasing video data to obtain an image text; and then aggregating the audio text and the image text to obtain the video recognition text, so that the video recognition text can accurately represent the text information carried by the releasing video data.

Example 4, when the above "emotion releasing content" includes releasing voice data, releasing text data, and releasing video data, step 11 may specifically include: firstly, performing voice recognition processing on the releasing voice data to obtain a voice recognition text, and extracting a video recognition text from the releasing video data; then aggregating the voice recognition text, the video recognition text, and the releasing text data to obtain a text to be processed; and finally, performing release topic analysis on the text to be processed by using a preset topic analysis model to obtain the release topic of the user.

Based on the related content in step 11, after the emotion releasing content is obtained, release topic analysis may be performed on the text information carried by the emotion releasing content to obtain the release topic of the user, so that the release topic can accurately represent the object (e.g., a person, an event, a thing, etc.) around which the emotion releasing content revolves.
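For ease of understanding, the following Python sketch illustrates one possible shape of the step-11 pipeline described above (Examples 1 to 4): the releasing voice, text, or video data is first reduced to text and then handed to a topic analysis model. The function names, the keyword table standing in for the "preset topic analysis model", and the example input are assumptions made purely for illustration, not part of the disclosure; a real implementation would plug in an actual speech-recognition engine and a trained topic analysis model.

```python
# Hypothetical sketch of step 11 (release topic extraction); all names are illustrative.

TOPIC_KEYWORDS = {                      # stand-in for the "preset topic analysis model"
    "teacher": ["teacher", "homework", "class", "blackboard"],
    "parent": ["mom", "dad", "parent"],
    "friend": ["friend", "classmate"],
}

def speech_to_text(voice_data: bytes) -> str:
    """Placeholder for voice recognition processing on releasing voice data."""
    return voice_data.decode("utf-8", errors="ignore")   # assumes pre-transcribed bytes

def video_to_text(video_frames, audio_track: bytes) -> str:
    """Placeholder: ASR on the audio track plus OCR on frames, then merged."""
    return speech_to_text(audio_track)                    # OCR branch omitted in this sketch

def analyze_topic(text: str) -> str:
    """Toy 'topic analysis model': pick the topic whose keywords appear most often."""
    scores = {t: sum(text.lower().count(k) for k in kws) for t, kws in TOPIC_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

def extract_release_topic(voice=None, text=None, video=None) -> str:
    """Step 11: merge whatever modalities were entered, then analyze the topic."""
    parts = []
    if voice is not None:
        parts.append(speech_to_text(voice))
    if text is not None:
        parts.append(text)
    if video is not None:
        parts.append(video_to_text(*video))
    return analyze_topic(" ".join(parts))

print(extract_release_topic(text="My teacher scolded me in class today over homework"))
# -> "teacher"
```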

Step 12: And determining the emotion representation object according to the emotion releasing content and the display object description information corresponding to the release topic.

The display object description information corresponding to the release topic is used for describing the features shared by the emotion representation objects used under that release topic (for example, all belonging to the teaching type); moreover, the embodiment of the present application does not limit the display object description information corresponding to the "release topic". For example, if the "release topic" is a teacher, the display object description information corresponding to the "release topic" may be: mimicry objects belonging to the teaching type (e.g., a mottled blackboard, a broken book, a broken ferule, a broken podium, etc.).

In addition, the embodiment of the present application does not limit the determination process of the "display object description information corresponding to the release topic"; for example, the determination process may specifically include: querying the display object description information corresponding to the release topic from a pre-constructed first mapping relation. The "first mapping relation" is used to record the display object description information corresponding to each release topic.
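A minimal sketch of the "first mapping relation" lookup described above might look as follows; the recorded topics, objects, and the fallback entry are illustrative assumptions only.

```python
# Hypothetical "first mapping relation": release topic -> display object description information.

FIRST_MAPPING = {
    "teacher": {"type": "teaching", "objects": ["mottled blackboard", "broken book", "broken ferule", "broken podium"]},
    "parent":  {"type": "household", "objects": ["cracked bowl", "worn sofa"]},
}

def display_object_description(release_topic: str) -> dict:
    # Fall back to a generic description when the topic is not recorded in the mapping.
    return FIRST_MAPPING.get(release_topic, {"type": "generic", "objects": ["plain box"]})

print(display_object_description("teacher")["objects"][0])   # mottled blackboard
```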

In addition, the embodiment of the present application does not limit the implementation of step 12; for example, it may specifically include steps 21 to 23:

Step 21: And performing emotion analysis on the emotion releasing content to obtain the user emotion and emotion characterization data of the user emotion.

Wherein, the emotion analysis is used for performing emotion extraction processing on multimedia data (e.g. voice data, text data, or video data); furthermore, the embodiment of the present application is not limited to the implementation of "emotion analysis", and for example, any existing or future method that can perform emotion extraction processing on multimedia data (e.g., voice data, text data, or video data) may be used.

"user emotion" means an emotional state that the user has when the above-mentioned "emotion release content" is input; moreover, embodiments of the subject application do not define a "user emotion," which can include, for example, at least one of likes, love, fascination, dislikes, deserts, dislikes, hats, complaints, happiness, joys, happiness, excitement, happiness, satisfaction, anger, annoyance, sadness, fear, timidity, fear, serenity, tension, embarrassment, impatience, boredom, regress, regret, surprise, shame, surprise, shame, photophobia, shame, guilt, justice, and the like.

In addition, the embodiment of the present application does not limit the determination process of the "user emotion"; for example, it may specifically be: performing emotion analysis on the emotion releasing content by using a pre-constructed emotion analysis model to obtain the user emotion. Wherein, the emotion analysis model is used for performing emotion analysis on input data of the emotion analysis model; and the "emotion analysis model" may be constructed in advance based on sample content and the actual emotion of the sample content.

Note that the "sample content" is the same as the "emotion release content" in data type. For example, if the data type of the above-described "emotion declaration content" is audio data, the "sample content" is also audio data. In addition, the above-mentioned "emotion analysis model" is a machine learning model.

In addition, in order to further improve the accuracy of the user emotion, the user emotion may also be determined with reference to the facial expression that the user has when the emotion releasing content is input. Based on this, the embodiment of the present application further provides another possible implementation manner of determining the "user emotion", which may specifically include: performing emotion analysis on the emotion releasing content and the user face image corresponding to the emotion releasing content to obtain the user emotion.

The "user face image corresponding to the emotion release content" is used for representing the facial expression of the user when the emotion release content is input; moreover, the embodiment of the present application does not limit the obtaining manner of the "face image of the user corresponding to the emotion release content", for example, when the user inputs the "emotion release content" on the human-computer interaction device, the camera configured in the human-computer interaction device may collect the face image data of the user in real time, and determine the face image data as the face image of the user corresponding to the "emotion release content".

It should be noted that, in the embodiment of the present application, the implementation manner of the step "performing emotion analysis on the emotion releasing content and the user face image corresponding to the emotion releasing content to obtain the user emotion" is not limited; for example, it may be implemented by using a pre-constructed mapping relationship, or by using a pre-trained machine learning model.

The emotion characterization data of the user emotion is used for characterizing the emotion of the user; in addition, the embodiment of the present application does not limit the "emotion characterization data of the user emotion," and for example, the emotion characterization data may specifically be all character information carried by the "emotion release content" or may also be partial character information (for example, keywords, etc.) carried by the "emotion release content".

Based on the related content in step 21, after the "emotion releasing content" is obtained, emotion analysis may be performed on the "emotion releasing content" to obtain the user emotion and the emotion characterization data of the user emotion. For example, when the "emotion releasing content" carries M sentences and each sentence is used to represent one emotional state of the user, emotion analysis may be performed on the m-th sentence carried by the "emotion releasing content" to obtain the m-th user emotion, and all or part of the m-th sentence (e.g., a keyword) may be determined as the emotion characterization data of the m-th user emotion. Wherein, m is a positive integer, m is less than or equal to M, and M is a positive integer.
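The sentence-by-sentence analysis described above could be sketched as follows. The keyword lexicon standing in for a trained emotion analysis model and the sample sentence are assumptions for illustration; a real implementation would use an actual model and richer characterization data.

```python
# Hypothetical sketch of step 21: one (user emotion, characterization data) pair per sentence.

EMOTION_LEXICON = {
    "sadness": ["sad", "cry", "hurt"],
    "anger":   ["angry", "hate", "unfair"],
    "fear":    ["afraid", "scared", "worried"],
}

def analyze_sentence(sentence: str):
    words = sentence.lower().split()
    for emotion, cues in EMOTION_LEXICON.items():
        hits = [w for w in words if w.strip(".,!?") in cues]
        if hits:
            # characterization data: the keywords (part of the sentence) that carry the emotion
            return emotion, hits
    return "neutral", words[:3]           # fall back to the first few words

def analyze_release_content(content: str):
    sentences = [s for s in content.replace("!", ".").split(".") if s.strip()]
    return [analyze_sentence(s) for s in sentences]

print(analyze_release_content("I am so angry at the unfair grade. It makes me cry."))
# -> [('anger', ['angry', 'unfair']), ('sadness', ['cry'])]
```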

Step 22: And determining the object display form of the user emotion according to the user emotion and the display object description information corresponding to the release topic.

The "object display form of the user emotion" refers to a display image (e.g., shape, color, etc.) of an emotion characterization object suitable for bearing expression of the user emotion.

In addition, the embodiment of step 22 is not limited in the examples of the present application, and for example, the method may specifically include steps 221 to 223:

step 221: and searching for the object display parameters corresponding to the emotion of the user from the second mapping relation to obtain the display parameters to be used.

The display parameters to be used refer to parameter information of an emotion representation object that is suitable for bearing and expressing the user emotion; furthermore, the embodiment of the present application does not limit the "display parameters to be used", which may include, for example: color, degree of damage, shape of damage, volume, decoration of the object (e.g., a cage), and the like.

Step 222: And determining the display object to be used according to the display object description information corresponding to the release topic.

As an example, if the "display object description information corresponding to the release topic" is mimicry objects belonging to the teaching type, step 222 may specifically be: randomly selecting one mimicry object from all the mimicry objects belonging to the teaching type, and determining the selected mimicry object as the display object to be used.

Step 223: and updating the display parameters of the display object to be used by using the display parameters to be used to obtain the object display form of the user emotion.

In the embodiment of the application, after the display parameter to be used and the display object to be used are obtained, the display parameter to be used can be utilized to configure the display object to be used, so that the display object to be used can be displayed according to the display parameter to be used subsequently.

Based on the related content in step 22, after the user emotion is obtained, the object display form of the user emotion may be determined with reference to the user emotion and the display object description information corresponding to the release topic, so that the object display form is more suitable for expressing the user emotion, which is beneficial to improving the expression effect of the user emotion.
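Steps 221 to 223 could be sketched as follows, assuming a hypothetical "second mapping relation" table of display parameters per emotion and a list of teaching-type candidate objects; all parameter values are illustrative only.

```python
# Hypothetical sketch of steps 221-223: parameters lookup, object selection, configuration.
import random

SECOND_MAPPING = {   # user emotion -> object display parameters (display parameters to be used)
    "anger":   {"color": "dark red", "damage_degree": 0.8, "decoration": "cage"},
    "sadness": {"color": "grey blue", "damage_degree": 0.4, "decoration": None},
}

TEACHING_OBJECTS = ["mottled blackboard", "broken book", "broken ferule", "broken podium"]

def object_display_form(user_emotion: str, candidate_objects):
    # step 221: look up the display parameters to be used for this emotion
    params = SECOND_MAPPING.get(user_emotion, {"color": "grey", "damage_degree": 0.1, "decoration": None})
    chosen = random.choice(candidate_objects)        # step 222: pick a display object to be used
    return {"object": chosen, **params}              # step 223: configure it with the parameters

print(object_display_form("anger", TEACHING_OBJECTS))
```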

Step 23: and generating the emotion representation object according to the object display form and the emotion representation data.

In the embodiment of the application, after the object display form and the emotion characterization data are obtained, the emotion characterization data and the object display form may be combined according to a first combination mode to obtain the emotion representation object, so that the emotion representation object can bear the emotional state that the user has when inputting the "emotion releasing content". The "first combination mode" may be set in advance; for example, when the "object display form" is a mottled blackboard, the emotion characterization data may be displayed on the mottled blackboard in the form of chalk characters. For another example, when the "object display form" is a mottled blackboard decorated with a cage, the emotion characterization data may be statically displayed in the cage decorating the "mottled blackboard".

In addition, in order to further improve the expression effect of the user emotion, the above-mentioned "emotion characterization data" may also be expressed by means of a mimicry figure. Based on this, the present application provides another possible implementation manner of step 23, which may specifically include: performing mimicry processing on the emotion characterization data to obtain an emotion mimicry image; and combining the emotion mimicry image with the object display form according to a second combination mode to obtain the emotion representation object.

The mimicry processing is used for rendering the emotion characterization data as a mimicry figure, so that the mimicry figure can express the emotional state carried by the emotion characterization data.

"Emotion mimicry image" refers to the mimicry figure used to express the above-mentioned "emotion characterization data"; moreover, the embodiment of the present application does not limit the "emotion mimicry image", which may be, for example, a crying mimicry doll, a breaking-down mimicry doll, or the like.

The "second combination mode" may be set in advance, for example, when the "object display form" is a mottled blackboard and a cage is decorated on the "mottled blackboard", the "emotion mimicry image" may be dynamically displayed in the cage decorated on the "mottled blackboard".

Based on the related contents in the above steps 21 to 23, after the emotion releasing content and the display object description information corresponding to the release topic are acquired, the emotion representation object may be generated with reference to both of them, so that the emotion representation object can bear the emotional state that the user has when inputting the emotion releasing content.
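The two combination modes of step 23 might be sketched as follows; the data class, the "mimicry" rendering rule, and the example display form are assumptions for illustration, not definitions from the disclosure.

```python
# Hypothetical sketch of step 23: first vs. second combination mode.
from dataclasses import dataclass

@dataclass
class EmotionRepresentationObject:
    display_form: dict                   # output of steps 221-223
    payload: str                         # chalk text or a mimicry-image identifier
    payload_kind: str                    # "text" or "mimicry_image"

def combine_first_mode(display_form: dict, characterization_data: list) -> EmotionRepresentationObject:
    # first combination mode: show the keywords as chalk characters on the display object
    return EmotionRepresentationObject(display_form, " ".join(characterization_data), "text")

def mimicry_image_for(characterization_data: list) -> str:
    # stand-in for mimicry processing: map the data to a named mimicry figure
    return "crying_doll" if "cry" in characterization_data else "wronged_doll"

def combine_second_mode(display_form: dict, characterization_data: list) -> EmotionRepresentationObject:
    # second combination mode: attach the emotion mimicry image instead of raw text
    return EmotionRepresentationObject(display_form, mimicry_image_for(characterization_data), "mimicry_image")

form = {"object": "mottled blackboard", "color": "dark red", "decoration": "cage"}
print(combine_second_mode(form, ["cry"]))
```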

Based on the related content of S1, for a user, when the user wants to perform emotion releasing via the human-computer interaction device, the user may first trigger an emotion releasing request on the human-computer interaction device, so that the human-computer interaction device can collect the emotion releasing content (e.g., releasing voice data, etc.) input by the user in real time, and generate an emotion characterizing object for the emotion releasing content in real time, so that the emotion characterizing object can bear the emotional state that the user has when inputting the emotion releasing content, so that the user emotion can be released later via the processing procedure for the emotion characterizing object.

S2: Displaying that the emotion representation object moves along a preset route to the emotion accommodating position.

The "preset route" refers to a preset moving route leading to the "emotion accommodating location".

"emotion-accommodating location" means a location set in advance for collecting and processing the emotion of the user; furthermore, the embodiment of the present application does not limit the expression of the "emotion accommodating position", and for example, the expression may be performed by means of a mimicry container character (e.g., a tree hole).

Based on the related content of S2, for the human-computer interaction device, after the "emotion representation object" is obtained, the "emotion representation object" may be directly displayed, and the "emotion representation object" is dynamically displayed to move to the emotion accommodating location along the preset route, so that the emotion accommodating location can collect the emotion of the user borne by the "emotion representation object", so that the emotion of the user borne by the "emotion representation object" can be processed at the emotion accommodating location later.

S3: and after the emotion representation object reaches the emotion accommodating position, removing the emotion representation object on the emotion accommodating position according to a preset removing mode.

The "preset clearing mode" may be set in advance, and may specifically be a swallowing mode, a crushing mode, a smashing mode, or the like.

Therefore, for the human-computer interaction device, after the emotion representation object is determined to move to the emotion accommodating position, the emotion representation object on the emotion accommodating position can be cleared according to a preset clearing mode, so that a user can feel the pleasure that negative emotions (namely, the emotion of the user borne by the emotion representation object) are released at the moment that the emotion representation object is cleared, and therefore the human-computer interaction device is beneficial to assisting the user in reducing psychological stress, and therefore the possibility that the user suffers from psychological diseases is reduced.
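Taken together, S2 and S3 could be sketched as the simple loop below, where the preset route, the clearing mode, and the print-based "rendering" are placeholders for what an actual device display would do.

```python
# Hypothetical sketch of S2 + S3: move along the preset route, then clear at the destination.
import time

PRESET_ROUTE = [(0, 0), (1, 1), (2, 2), (3, 3)]        # last point = emotion accommodating position
PRESET_CLEARING_MODE = "swallow"                        # could also be "crush" or "smash"

def display_at(position):
    print(f"emotion representation object at {position}")

def clear_object(mode):
    print(f"clearing the emotion representation object by '{mode}'")

def run_release_animation(route, clearing_mode, frame_delay=0.0):
    for position in route:                              # S2: move along the preset route
        display_at(position)
        time.sleep(frame_delay)
    clear_object(clearing_mode)                         # S3: clear at the accommodating position

run_release_animation(PRESET_ROUTE, PRESET_CLEARING_MODE)
```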

Based on the related contents of S1 to S3, in the human-computer interaction method applied to the human-computer interaction device according to the embodiment of the present application, after the user triggers the emotion releasing request on the human-computer interaction device, the human-computer interaction device may first determine an emotion representation object according to the emotion releasing content input by the user (for example, the releasing voice data or releasing text data entered in real time), so that the emotion representation object bears the user emotion carried by the emotion releasing content; and then display that the emotion representation object moves to the emotion accommodating position along a preset route, so that after the emotion representation object reaches the emotion accommodating position, the emotion representation object on the emotion accommodating position is cleared according to a preset clearing mode, thereby achieving the purpose of emotion release. Therefore, the technical solution provided by the application can assist the user in releasing emotions, thereby assisting the user in reducing psychological stress, and thus reducing the possibility of the user developing psychological diseases.

Method embodiment two

In fact, in order to enhance the user experience, the above-mentioned "emotion accommodating position" may be simulated by using a mimicry container (e.g., a tree hole), so that the user can experience the pleasure of having his or her negative emotions received and disposed of by the mimicry container. Based on this, in another implementation manner of the human-computer interaction method provided by the embodiment of the present application, the human-computer interaction method may further include, in addition to the above-mentioned S1, S4 to S6:

S4: The mimicry container is displayed on the above-mentioned "emotion accommodating position".

Wherein, the mimicry container is used for simulating a collecting device of the emotion of the user; and the "mimicry container" should be displayed at the above-mentioned "emotion accommodating position" to achieve an effect of collecting the emotion of the user at the above-mentioned "emotion accommodating position".

In addition, the embodiment of the present application does not limit the "mimicry container", which may be, for example, a tree hole, a ground hole, a lock box, or the like.

It should be noted that the execution time of S4 is not limited in the embodiment of the present application; for example, S4 may be executed immediately after the human-computer interaction device receives the "emotion releasing request", or immediately after the human-computer interaction device acquires the "emotion representation object".

S5: Displaying that the emotion representation object moves along a preset route to the mimicry container.

In this embodiment of the application, for the human-computer interaction device, after the "emotion representation object" is obtained, the "emotion representation object" may be directly displayed, and the "emotion representation object" is dynamically displayed to move to the mimicry container along a preset route, so that the mimicry container can collect the user emotion borne by the "emotion representation object", so that the user emotion borne by the "emotion representation object" may be processed in the mimicry container in the following process.

It should be noted that the embodiment of the present application does not limit the implementation of S5; for example, S5 may be implemented by any one of the implementations of the moving process that takes the emotion accommodating position as the moving destination (e.g., S2, S9, S11, etc.) provided in the embodiment of the present application, and it is only necessary to replace the "emotion accommodating position" in that implementation with the "mimicry container".

S6: and after the emotion representation object enters the mimicry container, removing the emotion representation object in the mimicry container according to a preset removing mode.

In the embodiment of the application, for the human-computer interaction device, after it is determined that the emotion representation object moves to the mimicry container, the emotion representation object in the mimicry container may be removed according to a preset removal mode, so that a user can feel a sense of pleasure that negative emotions (that is, emotions of the user carried by the emotion representation object) are released at the moment that the emotion representation object is removed, which is beneficial to assisting the user in reducing psychological stress, and thus beneficial to reducing the possibility of psychological diseases of the user.

It should be noted that the implementation of S6 is not limited in the embodiment of the present application; for example, S6 may be implemented by any of the implementations of the clearing process that takes the emotion accommodating position as the clearing place (e.g., S3, etc.) provided in the embodiments of the present application, and it is only necessary to replace the "emotion accommodating position" in that implementation with the "mimicry container".

Based on the related contents of the above-mentioned S4 to S6, it can be known that, for the human-computer interaction device, the above-mentioned "emotion accommodating position" can be marked and displayed by means of the mimicry container, so that the user can experience the pleasure of having his or her negative emotion received and disposed of by the mimicry container, which is beneficial to assisting the user in reducing psychological stress and thus to reducing the possibility of psychological diseases of the user.

In addition, in order to improve the user experience, the user can also autonomously select a clearing mode used by the mimicry container for the emotion characterization object. Based on this, the embodiment of the present application further provides another implementation manner of the human-computer interaction method, in this implementation manner, the human-computer interaction method may further include, in addition to the above S1, S4-S6, S7:

s7: and updating the preset clearing mode in response to a fourth operation triggered by the user for the mimicry container.

The "fourth operation" is used to trigger the update process of the "preset clearing mode"; moreover, the "fourth operation" is not limited in the embodiment of the present application, for example, when a plurality of candidate clear buttons are displayed on the human-computer interaction device, the "fourth operation" may specifically include: clicking operation of a candidate clear button by a user.

Based on the related content of S7, for the human-computer interaction device, the "preset clearing manner" may be updated by means of a clearing process selection operation of the user, so that the updated preset clearing manner better meets the clearing requirement of the user, and the "mimicry container" may perform clearing processing on the collected emotion characterization objects according to the updated preset clearing manner, thereby facilitating improvement of emotion release pleasure of the user, facilitating assistance of the user in reducing psychological stress, and further facilitating reduction of the possibility of the user suffering from psychological diseases.
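
As a minimal sketch of how S7 might be wired up (the candidate clearing-mode names and the handler function below are illustrative assumptions, not part of the present application), the update could look like:

```python
# Illustrative sketch of S7: update the preset clearing mode when the user
# clicks one of the candidate clearing buttons. The mode names are assumed.
CANDIDATE_CLEARING_MODES = ["burn", "shred", "dissolve", "blow_away"]

class MimicryContainer:
    def __init__(self, preset_clearing_mode: str = "burn"):
        self.preset_clearing_mode = preset_clearing_mode

    def on_candidate_button_clicked(self, selected_mode: str) -> None:
        """Fourth operation: the user clicks a candidate clearing button."""
        if selected_mode not in CANDIDATE_CLEARING_MODES:
            raise ValueError(f"unknown clearing mode: {selected_mode}")
        # Later clearing of collected emotion representation objects uses the new mode.
        self.preset_clearing_mode = selected_mode
```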

Method embodiment three

In order to improve the user experience, in another implementation manner of the human-computer interaction method provided in the embodiment of the present application, in this implementation manner, the human-computer interaction method may further include, in addition to all or part of the above steps, S8-S9:

s8: and determining the moving route of the emotion representation object according to the emotion disclosure content.

The "movement route of the emotion expression object" refers to a movement track that is required to be followed when the emotion expression object moves to the "emotion accommodating position".

In addition, the embodiment of S8 is not limited in this application; for example, it may specifically include: firstly, performing emotion analysis on the emotion release content to obtain the user emotion; and then querying a pre-constructed third mapping relation for the moving route corresponding to the user emotion, and determining that moving route as the moving route of the emotion representation object. The "third mapping relation" is used for recording the moving routes corresponding to various emotional states.
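
As a minimal sketch of the mapping-based implementation described above (the emotion labels, the waypoint lists, and the analyze_emotion helper are all illustrative assumptions rather than part of the present application), the third mapping relation could be stored as a simple dictionary:

```python
# Illustrative sketch of S8: emotion analysis followed by a lookup in a
# pre-constructed "third mapping relation" (emotion label -> moving route).
from typing import Dict, List, Tuple

Route = List[Tuple[float, float]]  # waypoints in normalised screen coordinates

THIRD_MAPPING_RELATION: Dict[str, Route] = {
    "anger":   [(0.0, 0.0), (0.3, 0.8), (0.6, 0.2), (1.0, 1.0)],  # jagged path
    "sadness": [(0.0, 0.0), (0.5, 0.3), (1.0, 1.0)],              # gentle arc
    "neutral": [(0.0, 0.0), (1.0, 1.0)],                          # straight line
}

def analyze_emotion(release_content: str) -> str:
    """Placeholder for the emotion analysis step; returns an emotion label."""
    return "anger" if "!" in release_content else "neutral"

def determine_moving_route(release_content: str) -> Route:
    user_emotion = analyze_emotion(release_content)
    # Fall back to the neutral route when the emotion is not recorded in the mapping.
    return THIRD_MAPPING_RELATION.get(user_emotion, THIRD_MAPPING_RELATION["neutral"])
```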

In addition, in order to further improve the user experience, the present application provides another possible implementation manner of S8. In this implementation manner, when the above "emotion release content" includes release voice data, S8 may specifically include S81-S82:

s81: and carrying out sound wave analysis on the released voice data to obtain a sound wave trend graph.

The "sound wave trend graph" is used to describe sound wave information carried by the "leaked voice data" so that the "sound wave trend graph" can represent the emotional wave state presented when the user inputs the "leaked voice data".

The embodiment of the present application does not limit the implementation of the above-described "sound wave fluctuation analysis"; for example, it may be implemented by using a pre-trained machine learning model.

S82: and determining the moving route of the emotion representation object according to the sound ray fluctuation trend graph.

For example, the moving route corresponding to the sound wave fluctuation trend graph may be queried from a pre-constructed fourth mapping relation, and that moving route may be determined as the moving route of the emotion representation object, so that the moving route better fits the moving process of the emotion representation object, which is beneficial to improving the flexibility of route determination.
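
The embodiment does not prescribe how the sound wave fluctuation analysis is performed; as one hedged possibility, a short-time RMS envelope could serve as the "sound wave fluctuation trend graph", and its normalised values could be mapped onto route waypoints. The frame length, normalisation, and waypoint mapping below are assumptions made purely for illustration:

```python
# Illustrative sketch of S81-S82: derive a fluctuation trend from the release
# voice data and map it onto route waypoints. The RMS envelope is one plausible
# analysis; a trained model could be used instead.
import numpy as np

def sound_wave_fluctuation_trend(samples: np.ndarray, frame_len: int = 1024) -> np.ndarray:
    """Return the per-frame RMS envelope of the voice samples."""
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))

def route_from_trend(trend: np.ndarray) -> list:
    """Map the trend to waypoints: x advances evenly, y follows the fluctuation."""
    if len(trend) == 0 or trend.max() == 0:
        return [(0.0, 0.0), (1.0, 0.0)]  # flat route for silent or empty input
    xs = np.linspace(0.0, 1.0, num=len(trend))
    ys = trend / trend.max()  # normalise amplitude to [0, 1]
    return list(zip(xs.tolist(), ys.tolist()))
```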

Based on the related content of S8, after the emotion release content input by the user is acquired, the emotion representation object and the moving route of the emotion representation object may be determined respectively with reference to the emotion release content, so that the emotion representation object can subsequently move to the "emotion accommodating position" along the moving route.

S9: the method further includes displaying the movement of the emotion-characterizing object along the movement path of the emotion-characterizing object to the emotion-containing location.

In the embodiment of the application, for the human-computer interaction device, after the emotion representation object and the moving route corresponding to the emotion representation object are obtained, the emotion representation object can be directly displayed, and the emotion representation object is dynamically displayed to move to the emotion accommodating position along the moving route, so that the emotion accommodating position can collect user emotion borne by the emotion representation object, and the user emotion borne by the emotion representation object can be processed at the emotion accommodating position in the following process.

Based on the related contents of S8 to S9, for the human-computer interaction device, after obtaining the emotion release content input by the user, the emotion representation object and the moving route of the emotion representation object may be determined respectively with reference to the emotion release content; and the emotion representation object is dynamically displayed moving along the moving route to the emotion accommodating position. Because the "moving route" is determined according to the "emotion release content", the "moving route" better fits the moving process of the "emotion representation object", which improves the flexibility of man-machine interaction and thus the user experience.

Method embodiment four

In order to improve the user experience, in another implementation manner of the human-computer interaction method provided in the embodiment of the present application, in this implementation manner, the human-computer interaction method may further include, in addition to all or part of the above steps, S10-S11:

s10: and determining the moving speed of the emotion characterization object according to the input speed of the emotion explanation content.

The "input speed of the emotional advisory content" refers to an average speed of information entry that the user reaches when inputting the emotional advisory content.

In addition, the determination process of the "input speed of the emotion release content" is not limited in the embodiment of the present application; for example, it may specifically be: determining the ratio of the information amount of the emotion release content to the entry duration of the emotion release content as the input speed of the emotion release content. The "information amount of the emotion release content" is used for indicating the amount of information carried by the emotion release content; the information amount is not limited in the embodiment of the present application, and may refer to, for example, the number of characters carried by the emotion release content. The "entry duration of the emotion release content" refers to the length of time consumed by the user when inputting the emotion release content.

The "moving speed of the emotion recognition object" refers to a speed of the emotion recognition object during moving, so that the "moving speed of the emotion recognition object" can indicate how eager the user presents when inputting the emotion excretion.

In addition, the embodiment of S10 is not limited in this application; for example, it may specifically include: determining the input speed of the emotion release content as the moving speed of the emotion representation object.
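
As a small worked sketch of the ratio described above (the use of the character count as the information amount is one of the options the embodiment mentions; the function names are assumptions), the calculation could be:

```python
# Illustrative sketch of S10: input speed = information amount / entry duration,
# and the moving speed of the emotion representation object is set equal to it.
def input_speed(release_content: str, entry_duration_s: float) -> float:
    if entry_duration_s <= 0:
        raise ValueError("entry duration must be positive")
    return len(release_content) / entry_duration_s  # characters per second

def moving_speed(release_content: str, entry_duration_s: float) -> float:
    # One implementation of S10: use the input speed directly as the moving speed.
    return input_speed(release_content, entry_duration_s)

# Example: 120 characters entered in 30 seconds -> moving speed of 4.0 units per second.
assert moving_speed("x" * 120, 30.0) == 4.0
```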

S11: and displaying that the emotion representation object moves to the emotion accommodating position along the preset route at the moving speed.

In this embodiment of the application, for the human-computer interaction device, after the "emotion representation object" and the corresponding movement speed thereof are obtained, the "emotion representation object" may be directly displayed, and the "emotion representation object" is dynamically displayed to move towards the emotion accommodating position at the movement speed, so that the emotion accommodating position can collect the emotion of the user borne by the "emotion representation object", and the emotion of the user borne by the "emotion representation object" can be processed at the emotion accommodating position in the following process.

Based on the related contents in the above S10 to S11, for the human-computer interaction device, after the emotion release content input by the user is obtained, the emotion representation object may be determined with reference to the emotion release content, and the moving speed of the emotion representation object is determined according to the input speed of the emotion release content, so that the moving speed can indicate the degree of urgency presented by the user when inputting the emotion release content; and the emotion representation object is dynamically displayed moving towards the emotion accommodating position at the moving speed, so that the intensity of the user emotion can be simulated by means of the moving speed, which is beneficial to improving the user experience.

Method embodiment five

In fact, the user sometimes wants to modify (or withdraw) the content that he or she has input. Therefore, in order to meet the above requirement, another implementation of the human-computer interaction method provided in the embodiment of the present application may further include, in addition to all or part of the above steps, S12:

s12: the "emotion characterizing object" is stopped from moving in response to a first operation triggered by the user for the "emotion characterizing object" before the "emotion characterizing object" reaches the emotion accommodating location.

Wherein, the "first operation" may be preset; the "first operation" is not limited in the embodiment of the present application, and may specifically be, for example, a click operation, and/or dragging the "emotion representation object" to a preset object editing area. The "object editing area" may be used to perform an editing process (e.g., an update parameter process, a deletion process, etc.) with respect to the above-described "emotion characterization object".

Based on the above-mentioned related content of S12, for the human-computer interaction device, when the human-computer interaction device displays that an emotion representation object is moving to an emotion accommodating location, if the user wants to perform some editing processing operations on the emotion representation object, the user may directly trigger a first operation on the emotion representation object, so that the human-computer interaction device can suspend the moving process of the emotion representation object, thereby enabling the user to perform editing processing on the emotion representation object in a static state (for example, the processing processes shown in S13-S14 below, or the processing process shown in S15).

Based on the foregoing S12, an embodiment of the present application further provides another implementation manner of the human-computer interaction method, in which the human-computer interaction method may further include, in addition to all or some of the steps described above, S13-S14:

s13: after stopping moving the emotional characterization object, updating the emotional characterization object in response to a second operation triggered by the user for the emotional characterization object.

The "second operation" is used to trigger an update process for the above-mentioned "emotion representation object"; moreover, the "second operation" is not limited in the embodiment of the present application, and may include, for example, a color selection operation, a shape selection operation, and the like.

S14: and when the preset recovery condition is reached, displaying the updated emotion representation object to continue to move to the emotion accommodating position along the preset route.

The "preset recovery condition" may be preset, for example, it may specifically be that a user triggers a fifth operation; and the fifth operation may be preset, for example, clicking an update completion button.

Based on the above-mentioned related contents of S13 to S14, for the human-computer interaction device, after controlling an emotion representation object to be in a static state, the user may perform an editing and updating process on the emotion representation object, so that after the editing and updating process is finished, the updated emotion representation object is controlled again to continue to move along the preset route to the emotion accommodating position.

Based on the foregoing S12, an embodiment of the present application further provides another implementation manner of the human-computer interaction method, in which the human-computer interaction method may further include, in addition to all or part of the foregoing steps, S15:

s15: after stopping moving the emotional characterization object, deleting the emotional characterization object in response to a third operation triggered by the user for the emotional characterization object.

Wherein, the "third operation" may be preset; the "third operation" is not limited in this embodiment of the application, and may specifically be, for example, an operation of clicking a delete option, or an operation of dragging the emotion representation object to a trash box control. Wherein the "trash control" is used to collect what is deleted by the user.

Based on the above-mentioned related content of S15, for the human-computer interaction device, after controlling an emotion representation object to be in a static state, the user may perform deletion processing on the emotion representation object, so as to achieve the effect of the user withdrawing the input content.

Method embodiment six

In order to improve the user experience, the human-computer interaction device may provide feedback on the content vented by the user. Therefore, in order to meet the above requirement, in another implementation manner of the human-computer interaction method provided in this embodiment of the present application, the human-computer interaction method may further include, in addition to all or part of the above steps, S16:

s16: and displaying the emotional feedback content when the preset feedback condition is determined to be reached.

Wherein, the "preset feedback condition" can be preset; in addition, the embodiment of the present application does not limit the "preset feedback condition". For example, the preset feedback condition may specifically be: the user emotion carried by the emotion release content belongs to a preset emotional state requiring feedback.

The "emotion feedback content" is used for providing feedback on the "emotion release content", so that the "emotion feedback content" can achieve the effect of soothing the emotion of the user (in particular, the user emotion carried by the "emotion release content").

In addition, the embodiment of the present application does not limit the "emotion feedback content"; for example, it may be content having a comforting, encouraging, criticizing, or suggesting function. For example, when the above-mentioned "emotion release content" describes that the user was wrongly criticized by a teacher for something the user did not do, the "emotion feedback content" may be comforting content similar to "You were right; Mom understands you."

In addition, the "emotion feedback content" may be determined according to the "emotion release content" described above; moreover, the determination process of the "emotion feedback content" is not limited in the embodiments of the present application, and for example, it may specifically include steps 31 to 33:

step 31: and analyzing the emotion of the emotion releasing content to obtain the emotion of the user.

It should be noted that the related content in step 31 refers to the related content of the user emotion in step 21 above.

Step 32: and extracting the abreaction subject of the user from the emotion abreaction content.

It should be noted that the relevant content of step 32 refers to the relevant content of step 11 above.

Step 33: and generating emotion feedback content according to the emotion disclosure content, the user emotion and the disclosure theme.

It should be noted that the embodiment of the present application does not limit the implementation of step 33; for example, the emotion release content, the user emotion, and the release subject may be input into a pre-constructed feedback content generation model, and the emotion feedback content output by the feedback content generation model may be obtained.

The feedback content generation model is used for performing emotion feedback processing on its input data; and the feedback content generation model can be constructed according to sample release content, the actual emotion carried by the sample release content, the release subject of the sample release content, and the actual emotion feedback content of the sample release content.
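
As a hedged sketch of the interface step 33 describes (the template lookup below is only a stand-in for the pre-constructed feedback content generation model, and every label and sentence in it is invented for illustration), the call could look like:

```python
# Illustrative sketch of step 33: pass the release content, the user emotion and
# the release subject to a feedback generator. A simple template table stands in
# for the real pre-constructed feedback content generation model.
FEEDBACK_TEMPLATES = {
    ("sadness", "school"): "That sounds really hard. You did your best, and that matters.",
    ("anger", "school"):   "It is okay to feel angry about being treated unfairly.",
}

def generate_emotion_feedback(release_content: str, user_emotion: str, release_subject: str) -> str:
    default = "Thank you for sharing how you feel. Your feelings are understood."
    return FEEDBACK_TEMPLATES.get((user_emotion, release_subject), default)
```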

Based on the related contents in the above steps 31 to 33, after the emotion release content input by the user is obtained, the emotion feedback content for the emotion release content may be determined with reference to the emotion release content, the user emotion carried by the emotion release content, and the release subject of the emotion release content, so that the emotion feedback content can achieve the effect of soothing the emotion of the user (in particular, the user emotion carried by the "emotion release content").

Based on the related content of S16, for the human-computer interaction device, after obtaining the emotion release content input by the user, the emotion feedback content may first be generated with reference to the emotion release content; and the emotion feedback content is then displayed to the user, so that the emotion feedback content can achieve the effect of soothing the emotion of the user.

Method embodiment seven

In order to improve the user experience, the human-computer interaction device may issue a danger warning based on the content vented by the user. Therefore, in order to meet the above requirement, in another implementation manner of the human-computer interaction method provided in this embodiment of the present application, the human-computer interaction method may further include, in addition to all or part of the above steps, S17:

s17: and when the preset alarm condition is determined to be reached, sending the emotion alarm information to the guardian of the user.

Wherein, the preset alarm condition can be preset; in addition, the embodiment of the present application does not limit the "preset alarm condition". For example, the preset alarm condition may specifically be: the "emotion release content" includes a preset dangerous word (e.g., running away from home, etc.). For another example, the "preset alarm condition" may specifically be: the occurrence frequency of the preset dangerous words in the emotion release content reaches a preset frequency threshold.

The above-mentioned "emotion warning information" is used to inform the guardian of the user of a dangerous behavior that may occur to the user (e.g., running away from home, etc.), so that the guardian of the user can take timely preventive action with respect to the dangerous behavior.

In addition, the emotion warning information can be determined according to the emotion release content; moreover, the embodiment of the present application does not limit the determination process of the "emotion warning information". For example, the determination process may specifically include: extracting the preset dangerous words from the emotion release content; and generating the emotion warning information according to the preset dangerous words.
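
A minimal sketch of this trigger-and-generate flow is given below; the dangerous-word list, the threshold value, and the wording of the warning message are all assumptions for illustration, not part of the present application:

```python
# Illustrative sketch of S17's trigger: count preset dangerous words in the
# emotion release content, compare against a frequency threshold, then build
# the emotion warning information from the words that were found.
PRESET_DANGEROUS_WORDS = ["run away from home", "hurt myself"]
FREQUENCY_THRESHOLD = 1

def alarm_condition_reached(release_content: str) -> bool:
    hits = sum(release_content.lower().count(word) for word in PRESET_DANGEROUS_WORDS)
    return hits >= FREQUENCY_THRESHOLD

def build_emotion_warning(release_content: str) -> str:
    found = [w for w in PRESET_DANGEROUS_WORDS if w in release_content.lower()]
    return ("Emotion warning: the user's venting mentions " + ", ".join(found)
            + ". Please check in with them soon.")
```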

In addition, the sending method of the "emotion warning information" is not limited in the embodiments of the present application, and for example, the sending method may be a short message sending method, a mail sending method, or a telephone dialing method.

Based on the related content of S17, for the human-computer interaction device, after the emotion release content input by the user is obtained, it may be determined whether the emotion release content meets the preset alarm condition; if the emotion release content meets the preset alarm condition, it may be determined that the user may engage in a dangerous behavior, so that the emotion warning information can be sent to the guardian of the user in time.

In addition, in order to further improve the user experience, another implementation of the human-computer interaction method provided in the embodiments of the present application may further include, in addition to all or part of the above steps, S18-S21:

s18: and displaying alarm inquiry information when the preset alarm condition is determined to be reached.

The "alarm query information" is used to query the user whether to alarm the guardian of the user.

S19: and receiving a first feedback result of the user for the alarm inquiry information.

The first feedback result refers to a feedback result of the user for the above-mentioned "alarm inquiry information", so that the "first feedback result" is used to indicate whether the user agrees to the human-computer interaction device to alarm the guardian.

S20: judging whether the first feedback result meets a preset sending condition, if so, sending emotion alarm information to a guardian of the user (namely, the above S17); if not, S21 is executed.

The "preset sending condition" may be preset; in addition, the embodiment of the present application does not limit the "preset sending condition". For example, the preset sending condition may specifically be: the above-mentioned "first feedback result" indicates that the user agrees to allow the human-computer interaction device to alarm his or her guardian.

Therefore, for the human-computer interaction device, after the first feedback result is obtained, it may first be judged whether the first feedback result meets the preset sending condition. If so, it indicates that the user agrees to allow the human-computer interaction device to alarm the guardian, so the human-computer interaction device may directly send the emotion warning information to the guardian of the user, so that the guardian can learn, by means of the emotion warning information, of the dangerous behavior that may occur to the user and can take timely preventive action against it. If not, it indicates that the user does not agree to allow the human-computer interaction device to alarm the guardian; therefore, in order to protect the privacy of the user, the human-computer interaction device does not alarm the guardian, and instead adopts an emotion soothing manner to reduce the possibility of dangerous behavior of the user as much as possible.

S21: and displaying the emotion dredging content.

The "emotion soothing content" is used for emotionally soothing the user; furthermore, the embodiment of the present application does not limit the "emotion soothing content", and for example, it may be video data, audio data, or text data.

In addition, the "emotion soothing content" is determined according to the emotion release content; moreover, the embodiment of the present application does not limit the determination process of the "emotion soothing content". For example, the determination process may specifically include: extracting the preset dangerous words from the emotion release content; then, according to a preset resource searching method, searching a preset resource library (such as a pre-constructed database or the internet) for at least one piece of resource data (such as news, psychological counseling videos, and the like) related to the preset dangerous words; and finally, aggregating the resource data to obtain the emotion soothing content.
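
A hedged sketch of that extract-search-aggregate flow follows; the resource library here is a local dictionary standing in for a database or internet search, and every entry in it is invented for illustration:

```python
# Illustrative sketch of assembling emotion soothing content: find the preset
# dangerous words in the release content, look up related resources in a preset
# resource library, and aggregate the matching resource data.
PRESET_RESOURCE_LIBRARY = {
    "run away from home": [
        "Short counselling video: talking to your family when you feel unheard.",
        "Article: staying safe when emotions run high.",
    ],
}

def build_emotion_soothing_content(release_content: str) -> list:
    soothing = []
    for word, resources in PRESET_RESOURCE_LIBRARY.items():
        if word in release_content.lower():
            soothing.extend(resources)  # aggregate all matching resource data
    return soothing
```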

Based on the above-mentioned related contents of S18 to S21, for the human-computer interaction device, after determining that the preset alarm condition is reached, it may first learn, by means of the "alarm inquiry information", whether the user is willing to share this personal privacy with his or her guardian, so that when it is determined that the user is unwilling to share the personal privacy with the guardian, the human-computer interaction device may perform emotion soothing for the user, so as to reduce the possibility of dangerous behavior of the user as much as possible. Therefore, this is beneficial to protecting the personal privacy of the user and to improving the user experience.

Method embodiment eight

In order to improve the user experience, in another implementation manner of the human-computer interaction method provided in the embodiment of the present application, in this implementation manner, the human-computer interaction method may further include, in addition to all or part of the above steps, S22-S24:

s22: in response to a user-triggered purge complete request, cleaning inquiry information is displayed.

Wherein, the "release end request" is used for requesting to end the emotion release assistance process for the user; moreover, the triggering mode of the "release end request" is not limited in the embodiments of the present application. For example, the "release end request" may be triggered by triggering an instruction for turning off the emotion release system on the human-computer interaction device. For another example, the "release end request" may be triggered by clicking a release end button on the human-computer interaction device.

The above-mentioned "cleaning inquiry information" is used to inquire of the user whether or not to delete the emotion release content.

S23: and receiving a second feedback result of the user for the cleaning inquiry information.

The "second feedback result" is the feedback result of the user with respect to the "cleaning inquiry information", so that the "second feedback result" indicates whether or not the user agrees to the deletion processing of the "emotion release content".

S24: judging whether the second feedback result meets preset clearing conditions or not, and if so, deleting the emotion releasing content; if not, storing the emotion disclosure content.

Wherein, the "preset cleaning condition" can be preset; in addition, the embodiment of the present application does not limit the "preset cleaning condition". For example, the preset cleaning condition may specifically be: the "second feedback result" indicates that the user agrees to the deletion processing of the "emotion release content".

As can be seen, for the human-computer interaction device, after the second feedback result is obtained, it may be determined whether the second feedback result meets the preset cleaning condition. If so, it indicates that the user agrees to delete the "emotion release content"; therefore, in order to protect the privacy of the user, the human-computer interaction device may directly delete the emotion release content cached by the human-computer interaction device, so that the emotion release content no longer exists in the human-computer interaction device. If not, it indicates that the user does not agree to delete the emotion release content and wants to store it; therefore, in order to meet the storage requirement of the user, the human-computer interaction device may directly store the cached emotion release content at a preset storage position, so that the emotion release content can subsequently be read from the preset storage position.
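
As a minimal sketch of S22-S24 (the storage path and function names are illustrative assumptions; the real device may persist the content however it chooses), the branch could look like:

```python
# Illustrative sketch of S24: act on the second feedback result by either
# discarding the cached emotion release content or appending it to a preset
# storage location for later reading.
from pathlib import Path

def handle_release_end(cached_content: str, user_agrees_to_delete: bool,
                       storage_path: Path = Path("emotion_release_archive.txt")) -> None:
    if user_agrees_to_delete:
        return  # preset cleaning condition met: the cached content is simply discarded
    # Otherwise store the cached content at the preset storage position.
    with storage_path.open("a", encoding="utf-8") as f:
        f.write(cached_content + "\n")
```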

Based on the human-computer interaction method provided by the above method embodiments, the embodiment of the application also provides a human-computer interaction device, which is explained below with reference to the accompanying drawings.

Device embodiment

The embodiment of the device introduces the human-computer interaction device, and please refer to the embodiment of the method for relevant contents.

Referring to fig. 2, the figure is a schematic structural diagram of a human-computer interaction device according to an embodiment of the present application.

The man-machine interaction device 200 provided by the embodiment of the application comprises:

the first determining unit 201 is configured to determine an emotion representation object according to emotion releasing content input by a user after an emotion releasing request triggered by the user is acquired;

a first display unit 202, configured to display that the emotion characterization object moves to an emotion accommodating location along a preset route;

and the object removing unit 203 is configured to remove the emotion representation object in the emotion accommodating position according to a preset removing manner after the emotion representation object reaches the emotion accommodating position.
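
Purely as a structural sketch of how the three units of the human-computer interaction device 200 might hang together (the class name, method bodies, and the print placeholder are assumptions, not the actual device), one arrangement could be:

```python
# Structural sketch of the human-computer interaction device 200. The three
# units are modelled as methods on a single class; the bodies are placeholders.
from typing import List, Tuple

class HumanComputerInteractionDevice:
    def first_determining_unit(self, release_content: str) -> dict:
        """Determine the emotion representation object from the emotion release content."""
        return {"source": release_content}  # placeholder for the real object

    def first_display_unit(self, emotion_object: dict,
                           preset_route: List[Tuple[float, float]]) -> None:
        """Display the object moving along the preset route to the accommodating position."""
        for waypoint in preset_route:
            _ = waypoint  # rendering of each waypoint would happen here

    def object_removing_unit(self, emotion_object: dict, preset_clearing_mode: str) -> None:
        """Clear the object at the accommodating position in the preset clearing mode."""
        print(f"Clearing emotion representation object via: {preset_clearing_mode}")
```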

In a possible implementation, the first determining unit 201 includes:

a subject extraction subunit, configured to extract the release subject of the user from the emotion release content;

and a first determining subunit, configured to determine the emotion representation object according to the emotion release content and the display object description information corresponding to the release subject.

In one possible embodiment, the first determining subunit includes:

the emotion analysis subunit is used for carrying out emotion analysis on the emotion releasing content to obtain the emotion of the user and emotion representation data of the emotion of the user;

the second determining subunit is used for determining the object display form of the user emotion according to the user emotion and the display object description information corresponding to the release subject;

and the object generation subunit is used for generating the emotion representation object according to the object display form and the emotion representation data.

In a possible implementation manner, the object generation subunit is specifically configured to: combine the emotion representation data with the object display form according to a first combination mode to obtain the emotion representation object.

In a possible implementation manner, the object generation subunit is specifically configured to: perform mimicry processing on the emotion representation data to obtain an emotion mimicry image; and combine the emotion mimicry image with the object display form according to a second combination mode to obtain the emotion representation object.

In one possible embodiment, the emotion analyzing subunit includes:

and the emotion determining subunit is used for performing emotion analysis on the emotion release content and the user face image corresponding to the emotion release content to obtain the emotion of the user.

In a possible implementation, the emotion release content includes release voice data, and the subject extraction subunit is specifically configured to: perform speech recognition processing on the release voice data to obtain a speech recognition text; and perform release subject analysis on the speech recognition text by using a preset subject analysis model to obtain the release subject of the user.

In a possible implementation manner, the emotion release content includes release text data, and the subject extraction subunit is specifically configured to: perform release subject analysis on the release text data by using a preset subject analysis model to obtain the release subject of the user.

In a possible implementation, the emotion release content includes release video data, and the subject extraction subunit is specifically configured to: extract a video recognition text from the release video data; and perform release subject analysis on the video recognition text by using a preset subject analysis model to obtain the release subject of the user.
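
A hedged sketch of how the subject extraction subunit could dispatch across the three modalities follows; the recognisers and the keyword-based "model" are placeholders for a real ASR system, video text extractor, and preset subject analysis model, and every label is invented:

```python
# Illustrative sketch of the subject extraction subunit: voice data goes through
# speech recognition, video data through text extraction, text data is analysed
# directly; the recognised text is then passed to a subject analysis model.
def speech_to_text(voice_data: bytes) -> str:
    return "recognised speech text"  # stand-in for a real speech recogniser

def video_to_text(video_data: bytes) -> str:
    return "recognised video text"   # stand-in for subtitle/speech extraction

def subject_analysis_model(text: str) -> str:
    return "school" if "teacher" in text.lower() else "general"

def extract_release_subject(release_content, modality: str) -> str:
    if modality == "voice":
        text = speech_to_text(release_content)
    elif modality == "video":
        text = video_to_text(release_content)
    else:  # "text"
        text = release_content
    return subject_analysis_model(text)
```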

In a possible implementation, the human-computer interaction device 200 further includes:

the second determining unit is used for determining a moving route of the emotion representation object according to the emotion release content;

the first display unit 202 is specifically configured to: displaying the movement of the emotion-characterizing object along the movement route to an emotion-accommodating location.

In a possible implementation, the emotion release content includes release voice data, and the second determining unit is specifically configured to: perform sound wave fluctuation analysis on the release voice data to obtain a sound wave fluctuation trend graph; and determine the moving route of the emotion representation object according to the sound wave fluctuation trend graph.

In a possible implementation, the human-computer interaction device 200 further includes:

a third determining unit, configured to determine a moving speed of the emotion representation object according to an input speed of the emotion release content;

the first display unit 202 is specifically configured to: and displaying that the emotion representation object moves to an emotion accommodating position along a preset route at the moving speed.

In a possible implementation, the human-computer interaction device 200 further includes:

an object stopping unit for stopping moving the emotion representation object in response to a first operation triggered by the user on the emotion representation object before the emotion representation object reaches the emotion accommodating position.

In a possible implementation, the human-computer interaction device 200 further includes:

an object updating unit, configured to update the emotion representation object in response to a second operation triggered by the user for the emotion representation object after the emotion representation object stops moving; and, when a preset recovery condition is reached, display that the updated emotion representation object continues to move to the emotion accommodating position along the preset route.

In a possible implementation, the human-computer interaction device 200 further includes:

and the object deleting unit is used for responding to a third operation triggered by the user aiming at the emotion representation object and deleting the emotion representation object.

In a possible implementation, the human-computer interaction device 200 further includes:

the second display unit is used for displaying emotion feedback content when it is determined that the preset feedback condition is reached; wherein the emotion feedback content is determined according to the emotion release content.

In a possible implementation, the human-computer interaction device 200 further includes:

the information sending unit is used for sending emotion warning information to a guardian of the user when it is determined that the preset alarm condition is reached; wherein the emotion warning information is determined according to the emotion release content.

In a possible implementation, the human-computer interaction device 200 further includes:

the fourth display unit is used for displaying alarm inquiry information when the preset alarm condition is determined to be reached; the alarm inquiry information is used for inquiring the user whether to give an alarm to a guardian of the user;

the first receiving unit is used for receiving a first feedback result of the user aiming at the alarm inquiry information;

the information sending unit is specifically configured to: and when the first feedback result is determined to meet the preset sending condition, sending the emotion warning information to a guardian of the user.

In a possible implementation, the human-computer interaction device 200 further includes:

the fifth display unit is used for displaying the emotion soothing content when it is determined that the first feedback result does not meet the preset sending condition; wherein the emotion soothing content is determined according to the emotion release content.

In a possible implementation, the human-computer interaction device 200 further includes:

a sixth display unit, configured to display the cleaning inquiry information in response to the release end request triggered by the user; wherein the cleaning inquiry information is used for inquiring the user whether to delete the emotion release content;

a second receiving unit, configured to receive a second feedback result of the user for the cleaning inquiry information;

and the content deleting unit is used for deleting the emotion releasing content when the second feedback result meets the preset clearing condition.

In a possible implementation, the human-computer interaction device 200 further includes:

a seventh display unit for displaying a mimicry container on the emotion accommodating position;

the first display unit 202 is specifically configured to: displaying the emotion representation object to move to the mimicry container along a preset route;

the object removing unit 203 is specifically configured to: after the emotion representation object enters the mimicry container, removing the emotion representation object in the mimicry container according to a preset removing mode.

In a possible implementation, the human-computer interaction device 200 further includes:

and the mode updating unit is used for responding to a fourth operation triggered by the user aiming at the mimicry container and updating the preset clearing mode.

Further, an embodiment of the present application further provides a human-computer interaction device, including: a processor, a memory, a system bus;

the processor and the memory are connected through the system bus;

the memory is used for storing one or more programs, and the one or more programs comprise instructions which, when executed by the processor, cause the processor to execute any one of the implementation methods of the human-computer interaction method.

Further, an embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is caused to execute any implementation method of the above human-computer interaction method.

Further, an embodiment of the present application further provides a computer program product, which when running on a terminal device, causes the terminal device to execute any implementation method of the above-mentioned human-computer interaction method.

As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that all or part of the steps in the above embodiment methods can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.

It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.

It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
