Method for detecting user emotion, cloud server and terminal equipment

Document No.: 1923673    Publication date: 2021-12-03

Note: This technology, "Method for detecting user emotion, cloud server and terminal device", was designed and created by 鲁霖, 鲁鹏飞 and 曾宗云 on 2021-08-27. Its main content is as follows: The application relates to a method for detecting user emotion, a cloud server and a terminal device, and relates to the field of emotion detection. The method is executed by the cloud server and comprises: receiving user emotion parameters uploaded by the terminal device, the user emotion parameters comprising at least one of electroencephalogram data, heart rate data, body temperature data and voice feature data; determining user attribute information matched with the user emotion parameters, the user attribute information being stored in the cloud server and comprising at least one of user gender information and user age information; and estimating the current emotion information of the user based on the determined user attribute information and the user emotion parameters. The application has the effect of making it convenient to detect the user's current emotion.

1. A method of detecting a user emotion, performed by a cloud server, comprising:

receiving user emotion parameters uploaded by terminal equipment, wherein the user emotion parameters comprise at least one of electroencephalogram data, heart rate data, body temperature data and voice characteristic data;

determining user attribute information matched with the user emotion parameters, wherein the user attribute information is stored in the cloud server and comprises: at least one of user gender information and user age information;

and estimating the current emotion information of the user based on the determined user attribute information and the user emotion parameters.

2. The method of claim 1, wherein estimating the current emotional information of the user based on the determined user attribute information and the user emotional parameters comprises:

based on the electroencephalogram data and the determined user attribute information, estimating the current first sub-emotion information of the user, and/or,

estimating the current second sub-emotion information of the user based on the heart rate data and the determined user attribute information, and/or,

estimating the current third sub-emotion information of the user based on the body temperature data and the determined user attribute information, and/or,

estimating the current fourth sub-emotion information of the user based on the voice characteristic data and the determined user attribute information;

and predicting the current emotion information of the user based on at least one of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information.

3. The method for detecting the emotion of a user according to claim 2, wherein the estimating of the current first sub-emotion information of the user based on the brain wave data and the determined user attribute information, and/or the estimating of the current fourth sub-emotion information of the user based on the speech feature data and the determined user attribute information, comprises at least one of:

performing time-frequency transformation on the brain wave data to obtain brain wave power spectrum data, performing feature extraction on the brain wave power spectrum data to obtain brain wave frequency data, wherein the brain wave frequency data comprises alpha waves, beta waves, delta waves, theta waves and power proportions of the alpha waves, the beta waves, the delta waves and the theta waves, predicting current first sub-emotion information of the user based on the brain wave frequency data and the determined user attribute information, and/or,

and extracting voice loudness information and voice frequency information from the voice feature data, and estimating the current fourth sub-emotion information of the user based on the voice loudness information, the voice frequency information and the determined user attribute information.

4. The method for detecting the emotion of the user, as claimed in claim 2, wherein the estimating the current emotion information of the user based on at least one of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information, includes at least one of:

if the first sub-emotion information is included, estimating the current emotion information of the user to be the first sub-emotion information;

determining weight information respectively corresponding to the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information, and estimating the current emotion information of the user based on the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information and the weight information respectively corresponding to them.

5. The method for detecting the emotion of the user according to claim 2, wherein the predicting the current first sub-emotion information of the user based on the electroencephalogram data and the determined user attribute information includes:

performing emotion detection processing through a first network model based on the electroencephalogram data and the determined user attribute information to obtain current first sub-emotion information of the user;

the estimating current second sub-emotion information of the user based on the heart rate data and the determined user attribute information comprises:

performing emotion detection processing through a second network model based on the heart rate data and the determined user attribute information to obtain current second sub-emotion information of the user;

the estimating current third sub-emotion information of the user based on the body temperature data and the determined user attribute information comprises:

and performing emotion detection processing through a third network model based on the body temperature data and the determined user attribute information to obtain current third sub-emotion information of the user.

6. The method of claim 1, wherein the estimating of the current emotion information of the user based on the determined user attribute information and the user emotion parameters further comprises:

if the current emotion information meets preset conditions, sending the current emotion information of the user to the terminal device so that the terminal device displays the received emotion information, wherein the preset conditions comprise: the emotion information estimated this time is different from the emotion information estimated last time, and the number of times the emotion information has been estimated does not exceed a preset number of times;

receiving a feedback signal sent by the terminal device, wherein the feedback signal is generated when the user triggers an operation based on the current emotion information of the user;

updating at least one of the first network model, the second network model and the third network model based on the current emotion information of the user and a corresponding feedback signal;

if the feedback signal is a confirmation signal triggered based on the current emotion information of the user, then,

acquiring at least one of brain wave data, heart rate data and body temperature data corresponding to the current emotion information of the user;

updating the first network model based on the brain wave data corresponding to the current emotion information of the user and the current emotion information of the user, and/or,

updating the second network model based on heart rate data corresponding to the current mood information of the user and the current mood information of the user, and/or,

updating the third network model based on body temperature data corresponding to the current emotion information of the user and the current emotion information of the user;

if the feedback signal is a negative acknowledgement signal triggered based on the current emotion information of the user, then,

sending at least one alternative emotion information to the terminal device;

receiving, from the terminal device, a feedback signal for the alternative emotion information, wherein the feedback signal for the alternative emotion information is generated after the user triggers a selection operation on the at least one piece of alternative emotion information;

determining, from the feedback signal for the alternative emotion information, the alternative emotion information selected by the user;

acquiring at least one of electroencephalogram data, heart rate data and body temperature data corresponding to the alternative emotion information;

updating the first network model based on the electroencephalogram data corresponding to the alternative emotion information and the alternative emotion information, and/or,

updating the second network model based on the heart rate data corresponding to the alternative emotion information and the alternative emotion information, and/or,

and updating the third network model based on the body temperature data corresponding to the alternative emotion information and the alternative emotion information.

7. A method for detecting emotion of a user, performed by a terminal device, comprising:

receiving user emotion parameters uploaded by wearable equipment, wherein the user emotion parameters comprise at least one of electroencephalogram data, heart rate data, voice characteristic data and body temperature data;

sending the emotion parameters to a cloud server so that the cloud server determines current emotion information of the user based on the emotion parameters of the user and user attribute information, wherein the user attribute information is stored in the cloud server and is matched by the cloud server according to the emotion parameters, and the user attribute information comprises: at least one of user gender information and user age information;

and receiving the current emotion information of the user, which is sent by the cloud server.

8. The method of detecting user emotion as recited in claim 7, wherein the emotion parameters include heart rate data and body temperature data, the method further comprising:

comparing the heart rate data with a preset heart rate range;

comparing the body temperature data with a preset body temperature range;

and if the heart rate data is not within the preset heart rate range and/or the body temperature data is not within the preset body temperature range, outputting alarm information.

9. A cloud server, characterized in that it comprises:

one or more processors;

a memory;

one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method of detecting user emotion according to any one of claims 1-6.

10. A terminal device, characterized in that it comprises:

one or more processors;

a memory;

one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method of detecting user emotion according to any one of claims 7-8.

Technical Field

The application relates to the field of emotion detection, in particular to a method for detecting user emotion, a cloud server and a terminal device.

Background

With social and economic development and the improvement of people's living standards, more and more people have begun to pay attention to their own health management. This has led to the emergence on the market of various smart wearable products that can detect physical health.

Research shows that a user's emotion also affects the user's health. However, current wearable devices only detect certain health data of the user through sensors, which makes it inconvenient to judge the user's emotion.

Disclosure of Invention

In order to facilitate detection of the current emotion of a user, the application provides a method for detecting the emotion of the user, a cloud server and a terminal device.

In a first aspect, the present application provides a method for detecting a user emotion, which adopts the following technical scheme:

a method of detecting a user emotion, performed by a cloud server, comprising:

receiving user emotion parameters uploaded by terminal equipment, wherein the user emotion parameters comprise at least one of electroencephalogram data, heart rate data, body temperature data and voice characteristic data;

determining user attribute information matched with the user emotion parameters, wherein the user attribute information is stored in the cloud server and comprises: at least one of user gender information and user age information;

and estimating the current emotion information of the user based on the determined user attribute information and the user emotion parameters.

By adopting this technical scheme, the cloud server receives the user emotion parameters collected by the wearable device, in which at least one of electroencephalogram data, heart rate data, body temperature data and voice feature data is used to represent the current emotion of the user. The cloud server determines the user attribute information matched with the user emotion parameters and combines the user attribute information with the user emotion parameters to estimate the current emotion of the user, so that the current emotion of the user can be determined conveniently.

In another possible implementation manner, the predicting current emotion information of the user based on the determined user attribute information and the user emotion parameter includes:

based on the electroencephalogram data and the determined user attribute information, estimating the current first sub-emotion information of the user, and/or,

estimating the current second sub-emotion information of the user based on the heart rate data and the determined user attribute information, and/or,

estimating the current third sub-emotion information of the user based on the body temperature data and the determined user attribute information, and/or,

estimating the current fourth sub-emotion information of the user based on the voice characteristic data and the determined user attribute information;

and predicting the current emotion information of the user based on at least one of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information.

By adopting this technical scheme, the electroencephalogram data, heart rate data, body temperature data and voice feature data in the user emotion parameters are combined with the user attribute information to obtain, respectively, first sub-emotion information corresponding to the electroencephalogram data, and/or second sub-emotion information corresponding to the heart rate data, and/or third sub-emotion information corresponding to the body temperature data, and/or fourth sub-emotion information corresponding to the voice feature data, so that the emotion of the user can be determined better according to the user's own condition and the emotion judgment is more accurate.

In another possible implementation manner, the estimating current first sub-emotion information of the user based on the electroencephalogram data and the determined user attribute information, and/or estimating current fourth sub-emotion information of the user based on the voice feature data and the determined user attribute information, includes at least one of:

performing time-frequency transformation processing on the brain wave data to obtain brain wave power spectrum data;

performing feature extraction on the brain wave power spectrum data to obtain brain wave frequency data, wherein the brain wave frequency data comprises alpha waves, beta waves, delta waves, theta waves and power proportions of the alpha waves, the beta waves, the delta waves and the theta waves;

estimating the current first sub-emotion information of the user based on the brain wave frequency data and the determined user attribute information, and/or,

and extracting voice loudness information and voice frequency information from the voice feature data, and estimating the current fourth sub-emotion information of the user based on the voice loudness information, the voice frequency information and the determined user attribute information.

By adopting this technical scheme, the brain wave power spectrum data is obtained after time-frequency transformation of the brain wave data. The alpha wave, beta wave, delta wave and theta wave in the brain wave power spectrum data correspond to different emotions, and the emotion of the user is judged by identifying these four waveforms together with the user attribute information, so that the emotion of the user can be judged more accurately from the brain wave data.

The voice characteristic data can be used for representing the current emotion of the user, and voice loudness information and voice frequency information extracted from the voice characteristic data represent the current emotion of the user more accurately.

In another possible implementation manner, the estimating current emotion information of the user based on at least one of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information, and the fourth sub-emotion information includes at least one of:

if the first sub-emotion information is included, estimating the current emotion information to be the first sub-emotion information;

determining weight information respectively corresponding to the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information, and estimating the current emotion information of the user based on the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information and the weight information respectively corresponding to them.

By adopting this technical scheme, the brain wave data represents the current emotion of the user accurately, so when the first sub-emotion information is included, the first sub-emotion information is taken as the current emotion of the user. Alternatively, the current emotion of the user is determined based on the weights corresponding to the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and/or the fourth sub-emotion information, so that the current emotion of the user is judged more accurately.

In another possible implementation manner, the estimating, based on the electroencephalogram data and the determined user attribute information, current first sub-emotion information of the user includes:

performing emotion detection processing through a first network model based on the electroencephalogram data and the determined user attribute information to obtain current first sub-emotion information of the user;

the estimating current second sub-emotion information of the user based on the heart rate data and the determined user attribute information comprises:

performing emotion detection processing through a second network model based on the heart rate data and the determined user attribute information to obtain current second sub-emotion information of the user;

the estimating current third sub-emotion information of the user based on the body temperature data and the determined user attribute information comprises:

and performing emotion detection processing through a third network model based on the body temperature data and the determined user attribute information to obtain current third sub-emotion information of the user.

By adopting this technical scheme, the brain wave frequency information and the user attribute information are input into a trained first network model for emotion recognition processing, and the first network model outputs first sub-emotion information based on this input; the first sub-emotion information is the current emotion of the user judged from the electroencephalogram data. The heart rate can also be used to represent the emotion of the user: the heart rate data and the user attribute information are input into the trained second network model for emotion recognition processing, and the second network model outputs second sub-emotion information represented by the heart rate data, so that the emotion of the user can be judged in multiple dimensions. The body temperature can likewise be used to represent changes in the user's emotion: the body temperature data and the user attribute information are input into a trained third network model for emotion recognition processing, and the third network model outputs third sub-emotion information represented by the body temperature data, making it convenient to judge the emotion of the user from multiple dimensions.

In a fourth aspect, the present application provides a terminal device for detecting a user emotion, which adopts the following technical scheme:

a terminal device that detects a user's emotion, comprising:

the third receiving module is used for receiving user emotion parameters uploaded by the wearable device, wherein the user emotion parameters comprise at least one of electroencephalogram data, heart rate data, voice characteristic data and body temperature data;

a second sending module, configured to send the emotion parameter to a cloud server, so that the cloud server determines current emotion information of the user based on the emotion parameter of the user and user attribute information, where the user attribute information is stored in the cloud server, and the user attribute information is matched by the cloud server according to the emotion parameter, where the user attribute information includes: at least one of user gender information and user age information;

and the fourth receiving module is used for receiving the current emotion information of the user, which is sent by the cloud server.

By adopting this technical scheme, the third receiving module receives the user emotion parameters collected by the user's wearable device, the second sending module sends the user emotion parameters to the cloud server, and the cloud server estimates the current emotion information of the user based on the user emotion parameters and the user attribute information matched with them. The fourth receiving module receives the current emotion of the user determined by the cloud server, so that the current emotion of the user can be determined conveniently.

In a fifth aspect, the present application provides a cloud server, which adopts the following technical solution:

a cloud server, the cloud server comprising:

one or more processors;

a memory;

one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform a method of detecting user emotion as shown in any one of the possible implementations of the first aspect.

In a sixth aspect, the present application provides a terminal device, which adopts the following technical solution:

a terminal device, the terminal device comprising:

one or more processors;

a memory;

one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform a method of detecting user emotion as shown in any one of the possible implementations of the second aspect.

In a seventh aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:

a computer-readable storage medium, in which is stored a computer program that can be loaded by a processor to execute a method of detecting user emotion as shown in any one of the possible implementations of the first aspect.

In an eighth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:

a computer-readable storage medium, in which is stored a computer program that can be loaded by a processor to execute a method of detecting user emotion as shown in any one of the possible implementations of the second aspect.

In summary, the present application includes at least one of the following beneficial technical effects:

1. the cloud server receives the user emotion parameters collected by the wearable device, in which at least one of electroencephalogram data, heart rate data, body temperature data and voice feature data is used to represent the current emotion of the user; the cloud server determines the user attribute information matched with the user emotion parameters and combines the user attribute information with the user emotion parameters to estimate the current emotion of the user, so that the current emotion of the user can be determined conveniently;

2. whether the heart rate of the user is normal is determined by judging whether the heart rate data lies within a preset heart rate range, and whether the body temperature of the user is normal is determined by judging whether the body temperature data lies within a preset body temperature range; an alarm is given when the body temperature and/or the heart rate of the user is abnormal, so that the user can learn of an abnormal health state in time.
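
As an illustration of the range check described in point 2, the following Python sketch shows one possible terminal-side implementation; the threshold values and the function name are assumptions chosen for illustration and are not specified by the application.

    # Illustrative sketch of the heart rate / body temperature range check.
    # The preset ranges below are assumed example values, not values given in the application.
    HEART_RATE_RANGE = (60, 100)    # beats per minute
    BODY_TEMP_RANGE = (36.0, 37.3)  # degrees Celsius

    def check_vitals(heart_rate, body_temp):
        """Return a list of alarm messages for values outside their preset ranges."""
        alarms = []
        if not (HEART_RATE_RANGE[0] <= heart_rate <= HEART_RATE_RANGE[1]):
            alarms.append("heart rate %s bpm is outside the preset range" % heart_rate)
        if not (BODY_TEMP_RANGE[0] <= body_temp <= BODY_TEMP_RANGE[1]):
            alarms.append("body temperature %s C is outside the preset range" % body_temp)
        return alarms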

Drawings

Fig. 1 is a flowchart illustrating a method for detecting a user emotion according to an embodiment of the present application.

Fig. 2 is another flowchart of a method for detecting a user emotion according to an embodiment of the present application.

Fig. 3 is a schematic structural diagram of a cloud server for detecting a user emotion according to an embodiment of the present application.

Fig. 4 is a schematic structural diagram of a terminal device for detecting a user emotion according to an embodiment of the present application.

Fig. 5 is a schematic structural diagram of a cloud server according to an embodiment of the present application.

Detailed Description

The present application is described in further detail below with reference to the attached drawings.

A person skilled in the art, after reading the present description, may make modifications to the embodiments as required without any inventive contribution, and such modifications shall be protected by patent law within the scope of the claims of the present application.

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

In addition, the term "and/or" herein only describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship, unless otherwise specified.

The embodiments of the present application will be described in further detail with reference to the drawings attached hereto.

The embodiment of the application provides a method for detecting user emotion, which is executed by an electronic device. The electronic device may be a server or a terminal device; the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. The terminal device may be a smartphone, a tablet computer, a notebook computer, a desktop computer, etc., but is not limited thereto, and the terminal device and the server may be directly or indirectly connected in a wired or wireless communication manner. As shown in fig. 1, the method is performed by a cloud server and includes step S101, step S102 and step S103, wherein,

S101, receiving user emotion parameters uploaded by the terminal device, wherein the user emotion parameters comprise at least one of electroencephalogram data, heart rate data, body temperature data and voice characteristic data.

For the embodiment of the application, the user emotion parameters received by the cloud server are sent by the user's terminal device and may be collected by a wearable device worn on the user's body, such as a Bluetooth headset or a bracelet provided with sensors. The user's wearable device collects the user emotion parameters and sends them to the terminal device corresponding to the user, the terminal device sends the received user emotion parameters to the cloud server, and the cloud server may store the user emotion parameters after receiving them.

Taking a Bluetooth headset as an example of the user's wearable device, the electroencephalogram data may be collected by arranging an EEG-detection TGAM chip and dry electrode pads on the Bluetooth headset; the dry electrode pads are electrically connected to the chip and contact the user's scalp so as to collect brain waves. The more dry electrode pads there are, the more accurate the collected brain waves. Heart rate data may be collected by a photoelectric reflective sensor arranged on the Bluetooth headset, body temperature data by an infrared temperature sensor, and voice feature data by a microphone device on the headset.

When the user wears the Bluetooth headset for the first time, the terminal device and the Bluetooth headset need to be connected and configured. The user may enter basic information through the terminal device, for example "gender: male, age: 23". The terminal device uploads the basic information of the user to the cloud server, and the cloud server stores this user attribute information, so that the basic information of the user is taken into account when the user emotion parameters are analyzed and the emotion judgment is more accurate.
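
For illustration only, the configuration and emotion-parameter uploads described in S101 might be organized as in the following sketch; the field names and structure are assumptions, since the application does not define a transport format.

    import json

    # Hypothetical upload payloads from the terminal device to the cloud server.
    user_profile = {"user_id": "u001", "gender": "male", "age": 23}
    emotion_parameters = {
        "user_id": "u001",
        "eeg": [0.12, 0.10, -0.05],      # raw electroencephalogram samples (truncated)
        "heart_rate": 72,                # beats per minute
        "body_temperature": 36.5,        # degrees Celsius
        "voice_features": {"loudness_db": 60, "frequency_hz": 2900},
    }
    payload = json.dumps({"profile": user_profile, "parameters": emotion_parameters})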

S102, determining user attribute information matched with the user emotion parameters, wherein the user attribute information is stored in the cloud server and comprises: at least one of user gender information and user age information.

For the embodiment of the application, the user attribute information can be uploaded to the cloud server by the user through the terminal device, and the user attribute information of a plurality of users is stored in the cloud server, so that after receiving the emotion parameters of the users, the cloud server needs to determine the user attribute information corresponding to the emotion parameters of the users, so that the emotion recognition of the users is more accurate.

S103, estimating the current emotion information of the user based on the determined user attribute information and the user emotion parameters.

For the embodiment of the application, when people of different genders and ages are in the same emotional state, at least one of their electroencephalogram data, heart rate data, body temperature data and voice feature data differs, so the cloud server judges the current emotion of the user more accurately by basing its estimate on both the user attribute information and the user emotion parameters. For example, for a 23-year-old man, the cloud server may estimate that the user is currently under stress.

In a possible implementation manner of the embodiment of the present application, when estimating the current emotion information of the user based on the determined user attribute information and the user emotion parameters, step S103 specifically includes at least one of step S1031 (not shown in the figure), step S1032 (not shown in the figure), step S1033 (not shown in the figure) and step S1034 (not shown in the figure), as well as step S1035 (not shown in the figure), wherein,

and S1031, estimating the current first sub-emotion information of the user based on the brain wave data and the determined user attribute information.

Brain wave data is a non-stationary random signal in the time domain, so the brain wave data is preprocessed after collection to make it more accurate and of higher quality; high-frequency noise is filtered out by low-pass filtering. After the brain wave data is preprocessed, the preprocessing result is combined with the user attribute information to estimate the current emotion of the user and obtain the first sub-emotion information, so that the emotion of the user is recognized more accurately.
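
A minimal preprocessing sketch is given below, assuming an EEG sampling rate of 512 Hz and a 30 Hz cutoff (matching the effective frequency range quoted later); the sampling rate, cutoff and helper name are assumptions for illustration.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def lowpass_eeg(raw_eeg, fs=512.0, cutoff=30.0):
        """Filter out high-frequency noise with a 4th-order Butterworth low-pass filter."""
        b, a = butter(4, cutoff / (fs / 2.0), btype="low")
        return filtfilt(b, a, np.asarray(raw_eeg, dtype=float))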

S1032, estimating the current second sub-emotion information of the user based on the heart rate data and the determined user attribute information.

The heart rate data reflects the number of heartbeats of the user per unit acquisition time. The heart rate may reflect the emotional state of the user to some extent; for example, an excessively fast heart rate corresponds to the user being nervous, while a slow heart rate corresponds to the user being calm. The heart rate data of the user is combined with the user attribute information to estimate the current emotion of the user and obtain the second sub-emotion information, which better reflects the emotion of the user.

And S1033, estimating the current third sub-emotion information of the user based on the body temperature data and the determined user attribute information.

The body temperature data is the body temperature information of the user in unit acquisition time. Body temperature levels may also reflect the emotional state of the user to some extent. For example, a user with a high body temperature corresponds to a user in an excited and stressed emotional state, and a user with a low body temperature corresponds to a user in a calm emotional state. The body temperature data of the user is combined with the user attribute information to predict the current emotion of the user to obtain third sub-emotion information, so that the emotion of the user can be reflected better.

S1034, estimating the current fourth sub-emotion information of the user based on the voice characteristic data and the determined user attribute information.

For the embodiment of the application, since the user is not speaking at all times while using the wearable device, the microphone device cannot acquire the user's voice feature information in real time. When voice feature information of the user is collected, the cloud server combines the user's voice feature data to estimate the current emotion of the user and obtain the fourth sub-emotion information; when no voice feature data is collected, the cloud server performs emotion recognition only through at least one of the electroencephalogram data, heart rate data and body temperature data.

S1035, estimating current emotion information of the user based on at least one of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information, and the fourth sub-emotion information.

For the embodiment of the application, the current emotion of the user is estimated and determined through at least one of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information.

In the embodiment of the present application, when at least two of step S1031, step S1032, step S1033, and step S1034 are involved, the execution sequence of step S1031, step S1032, step S1033, and step S1034 is not limited.

In a possible implementation manner of the embodiment of the present application, step S1031, when estimating the current first sub-emotion information of the user based on the brain wave data and the determined user attribute information, and/or step S1034, when estimating the current fourth sub-emotion information of the user based on the voice feature data and the determined user attribute information, specifically includes at least one of step S10311 (not shown in the figure) and step S10341 (not shown in the figure), wherein,

and S10311, performing time-frequency transformation processing on the brain wave data to obtain brain wave power spectrum data, performing feature extraction on the brain wave power spectrum data to obtain brain wave frequency data, wherein the brain wave frequency data comprises alpha waves, beta waves, delta waves, theta waves and power proportions of the alpha waves, the beta waves, the delta waves and the theta waves, and estimating current first sub-emotion information of the user based on the brain wave frequency data and the determined user attribute information.

For the embodiment of the application, after the brain wave data is preprocessed, the preprocessing result of the brain wave data is subjected to time-frequency transformation processing through Fourier transformation to obtain the brain wave power spectrum data. The proportion of various frequency waveforms in the brain wave data is convenient to observe through the brain wave power spectrum data.

The different proportion of different waveforms in the brain wave power spectrum data reflects different emotions of the user. The effective frequency of the electroencephalogram signals is 0-30 Hz, and according to scientific research, the four characteristic waves of alpha waves, beta waves, delta waves and theta waves correspond to four different states presented by the brain.

The range of the delta wave is 0.5-3 Hz, corresponding to a state in which the user is in deep sleep and unconscious; the range of the theta wave is 4-8 Hz, corresponding to a state in which the user is deeply relaxed and in an unstressed, subconscious state; the range of the alpha wave is 8-13 Hz, wherein an alpha wave in the range of 8-9 Hz corresponds to a drowsy, half-awake state before sleep, an alpha wave in the range of 9-12 Hz corresponds to a state of inspiration, intuition or concentration, and an alpha wave in the range of 12-13 Hz corresponds to a state in which the user is highly alert and focused without distraction. The beta wave is above 14 Hz, corresponding to brain wave states in which the user is under tension or stress or the brain is fatigued.

And performing emotion prediction on the brain wave power spectrum data and the user attribute information matched with the brain wave data to obtain first sub-emotion information, wherein the first sub-emotion information is the predicted current emotion state of the user corresponding to the determined brain wave data.
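
The following sketch shows one possible realization of the time-frequency transformation and feature extraction of S10311, using Welch's method as the transform; the sampling rate is an assumed value and the band boundaries follow the ranges quoted above.

    import numpy as np
    from scipy.signal import welch

    BANDS = {"delta": (0.5, 3.0), "theta": (4.0, 8.0), "alpha": (8.0, 13.0), "beta": (14.0, 30.0)}

    def band_power_ratios(eeg, fs=512.0):
        """Return the proportion of total power contributed by each characteristic wave."""
        freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 1024))
        powers = {name: psd[(freqs >= lo) & (freqs <= hi)].sum()
                  for name, (lo, hi) in BANDS.items()}
        total = sum(powers.values()) or 1.0
        return {name: p / total for name, p in powers.items()}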

S10341, extracting the voice loudness information and the voice frequency information from the voice feature data, and estimating the current fourth sub-emotion information of the user based on the voice loudness information, the voice frequency information and the determined user attribute information.

For the embodiment of the present application, the voice feature data may be collected by a microphone device on the wearable device. After receiving the voice feature data, the cloud server extracts voice loudness information and voice frequency information from it. For example, when the user is agitated, the user speaks loudly and the corresponding voice loudness information is large; when the user is excited, the user's speaking pitch is high and the corresponding voice frequency information is high.

The current emotion of the user is estimated from the speaking dimension of the user, so that the mode of estimating the emotion is more diversified. And performing emotion recognition processing on the voice loudness information and the voice frequency information to obtain fourth sub-emotion information.

For the embodiment of the application, the fourth sub-emotion information can be estimated through a Gaussian mixture model: the voice loudness information and the voice frequency information are input into the Gaussian mixture model for emotion recognition processing, and the Gaussian mixture model outputs the emotion corresponding to the voice loudness information and the voice frequency information.

For example, after the microphone device collects the voice feature data, the voice loudness information in the voice feature data is "60 db", and the voice frequency information is "2900 Hz". The Gaussian mixture model outputs corresponding fourth sub-emotion information as 'excitement'.
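
As a sketch of the processing just described, loudness and dominant frequency can be extracted from a voice frame and scored against one Gaussian mixture per emotion; the training data below is a random placeholder and the emotion labels are assumptions, so this only illustrates the shape of the computation.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def voice_features(signal, fs=16000.0):
        """Return [loudness in dB relative to full scale, dominant frequency in Hz]."""
        signal = np.asarray(signal, dtype=float)
        rms = np.sqrt(np.mean(signal ** 2)) + 1e-12
        loudness_db = 20.0 * np.log10(rms)
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return np.array([loudness_db, freqs[np.argmax(spectrum)]])

    # One mixture per emotion, fitted on labelled feature vectors (random placeholder data).
    training_data = {"excitement": np.random.rand(50, 2), "calm": np.random.rand(50, 2)}
    models = {emotion: GaussianMixture(n_components=2).fit(feats)
              for emotion, feats in training_data.items()}

    def classify_voice(features):
        """Pick the emotion whose mixture assigns the highest log-likelihood."""
        return max(models, key=lambda e: models[e].score(features.reshape(1, -1)))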

In a possible implementation manner of the embodiment of the present application, when step S1035 estimates the current emotion information of the user based on at least one of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information, step S1035 includes at least one of step S1035a and step S1035b:

S1035a, if the first sub-emotion information is included, it is estimated that the current emotion information is the first sub-emotion information.

For the embodiment of the application, the wearable device can acquire electroencephalogram data of the user, and after the user puts on the wearable device it can acquire the electroencephalogram data in real time and continuously. Because the first sub-emotion information is predicted by combining brain wave data with user attribute information, it represents the current emotion of the user most accurately.

Therefore, the current emotion of the user can be determined using only the first sub-emotion information and the fourth sub-emotion information. When the user speaks, the current emotion of the user is determined through the voice feature data and the brain wave data. For example, if the first sub-emotion information is "tension" and the fourth sub-emotion information is also "tension", the current emotion of the user is determined to be "tension". If the first sub-emotion information and the fourth sub-emotion information are different emotions, the electroencephalogram data represents the current emotion of the user more accurately than the voice feature data; for example, if the first sub-emotion information is "tension" and the fourth sub-emotion information is "excitement", the first sub-emotion information is determined to be the current emotion of the user.

S1035b, determining weight information corresponding to the first sub-emotion information, the second sub-emotion information, the third sub-emotion information, and the fourth sub-emotion information, and estimating current emotion information of the user based on the first sub-emotion information, the second sub-emotion information, the third sub-emotion information, the fourth sub-emotion information, and the weight information corresponding thereto.

For the embodiment of the present application, for example, the first sub-emotion information accounts for 35% by weight, the second sub-emotion information accounts for 15% by weight, the third sub-emotion information accounts for 15% by weight, and the fourth sub-emotion information accounts for 35% by weight. The current emotion of the user is determined mainly by the first sub-emotion information and the fourth sub-emotion information at this time. The second sub-emotion information and the third sub-emotion information are used as auxiliary emotion recognition. For example, the first sub-emotional information is "tension", the second sub-emotional information is "excitement", the third sub-emotional information is "mania", and the fourth sub-emotional information is "tension". The current emotion of the user determined by the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information is 'nervous and slightly excited and violent', so that the determination of the current emotion of the user is more diversified.
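
One way to combine the two strategies of step S1035 is sketched below: the EEG-based result is taken directly when it is available (S1035a), otherwise a weighted vote is used (S1035b). The weights mirror the example percentages above; the composite description in the original example is simplified here to the single highest-weight label.

    # Weights mirroring the example: EEG 35%, heart rate 15%, body temperature 15%, voice 35%.
    WEIGHTS = {"first": 0.35, "second": 0.15, "third": 0.15, "fourth": 0.35}

    def fuse_emotions(sub_emotions):
        """sub_emotions maps 'first'..'fourth' to an emotion label or None."""
        # S1035a: the EEG-derived sub-emotion, when present, is taken as the result.
        if sub_emotions.get("first"):
            return sub_emotions["first"]
        # S1035b: weighted vote over whichever sub-emotions are available.
        scores = {}
        for key, emotion in sub_emotions.items():
            if emotion:
                scores[emotion] = scores.get(emotion, 0.0) + WEIGHTS[key]
        return max(scores, key=scores.get) if scores else "unknown"

    # With no EEG result, "tension" (0.35) outweighs "excitement" (0.15) and "mania" (0.15).
    print(fuse_emotions({"first": None, "second": "excitement",
                         "third": "mania", "fourth": "tension"}))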

Step S1031, when estimating the current first sub-emotion information of the user based on the brain wave data and the determined user attribute information, includes step S1031a:

and S1031a, performing emotion detection processing through the first network model based on the brain wave data and the determined user attribute information to obtain current first sub-emotion information of the user.

For the embodiment of the present application, the first network model is a neural network model; the first network model may be a convolutional neural network or a recurrent neural network, and the type of the first network model is not limited herein. Before training the initial first network model, a training sample set is determined, which comprises a plurality of groups of electroencephalogram data, the emotions respectively corresponding to the groups of electroencephalogram data, and the user attribute information corresponding to the groups of electroencephalogram data. The training sample set is input into the first network model for training and learning to obtain the trained first network model.

And inputting the electroencephalogram frequency data and the determined user attribute information into a trained first network model for emotion recognition processing, wherein first sub-emotion information output by the first network model is the current emotion of the user.

For example, the user attribute information is "male, 23 years old", and the electroencephalogram power spectrum data corresponding to the electroencephalogram data collected within 30 s is "β wave accounts for 80%, and α wave in the range of 12-13 Hz accounts for 20%". The trained first network model outputs the corresponding first sub-emotion information of the user as "tension".
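
A minimal sketch of the first network model is given below, using a small feed-forward classifier over band-power ratios plus user attributes; the feature layout, the tiny training set and the use of scikit-learn are assumptions, since the text only requires some neural network (convolutional or recurrent) trained on labelled electroencephalogram data. The second and third network models described next would follow the same pattern, with heart rate or body temperature features in place of the band-power ratios.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Feature vector: [delta, theta, alpha, beta power ratios, gender (1 = male), age / 100].
    X_train = np.array([
        [0.05, 0.10, 0.05, 0.80, 1, 0.23],   # beta-dominant spectrum labelled "tension"
        [0.10, 0.30, 0.50, 0.10, 1, 0.23],   # alpha-dominant spectrum labelled "relaxed"
    ])
    y_train = ["tension", "relaxed"]

    first_model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    first_model.fit(X_train, y_train)

    sample = np.array([[0.05, 0.05, 0.10, 0.80, 1, 0.23]])
    print(first_model.predict(sample))   # a beta-dominant sample should map to "tension"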

Step S1032, when estimating the current second sub-emotion information of the user based on the heart rate data and the determined user attribute information, includes step S1032a:

S1032a, performing emotion detection processing through the second network model based on the heart rate data and the determined user attribute information to obtain current second sub-emotion information of the user.

For the embodiment of the present application, the second network model is a neural network model; the second network model may be a convolutional neural network or a recurrent neural network, and the type of the second network model is not limited herein. Before training the initial second network model, a training sample set corresponding to the heart rate is determined, which comprises multiple groups of heart rate data, the emotions respectively corresponding to the groups of heart rate data, and the user attribute information of the groups of heart rate data. For example, one of the training samples is "male, 23 years old, 70 beats per minute". The training sample set is input into the second network model for training and learning to obtain the trained second network model.

And inputting the heart rate data and the matched user attribute information into a trained second network model, carrying out emotion recognition processing on the heart rate data by the trained second network model, and outputting second sub-emotion information by the trained second network model. The second sub-emotion information is the current emotion of the user corresponding to the heart rate data.

For example, the user attribute information is "male, 23 years old", and the heart rate data collected within 30 s is "65 beats per minute". The trained second network model outputs the corresponding second sub-emotion information of the user as "relaxation".

Step S1033, when estimating the current third sub-emotion information of the user based on the body temperature data and the determined user attribute information, includes step S1033a:

and S1033a, performing emotion detection processing through a third network model based on the body temperature data and the determined user attribute information to obtain current third sub-emotion information of the user.

For the embodiment of the present application, the third network model is a neural network model; the third network model may be a convolutional neural network or a recurrent neural network, and the type of the third network model is not limited herein. Before training the initial third network model, a training sample set corresponding to the body temperature is determined, which comprises a plurality of groups of body temperature data, the emotions respectively corresponding to the groups of body temperature data, and the user attribute information corresponding to the groups of body temperature data. For example, one of the training samples is "woman, 22 years old, 36.7 ℃". The training sample set is input into the third network model for training and learning to obtain the trained third network model.

The body temperature data and the matched user attribute information are input into the trained third network model, the trained third network model performs emotion recognition processing on the body temperature data and outputs third sub-emotion information, and the third sub-emotion information is the current emotion of the user corresponding to the body temperature data.

For example, the user attribute information is "woman, 22 years old", and the body temperature data collected within 30 s is "36.2 ℃". The trained third network model outputs the corresponding third sub-emotion information of the user as "relaxation".

In a possible implementation manner of the embodiment of the present application, after step S103 the method further includes step S104 (not shown), step S105 (not shown) and step S106 (not shown), wherein,

S104, if the current emotion information meets preset conditions, sending the current emotion information of the user to the terminal device so that the terminal device displays the received emotion information, wherein the preset conditions comprise: the emotion information estimated this time is different from the emotion information estimated last time, and the number of times the emotion information has been estimated does not exceed a preset number of times.

For the embodiment of the application, for example, when the current emotion of the user changes from "relaxed" to "tense", the cloud server detects that the user's emotion has changed. The cloud server pushes the current "tense" emotion to the terminal device corresponding to the user, and the terminal device displays it, thereby reminding the user of the change in emotion so that the user can judge whether "tense" is the user's current real emotion.

When the number of estimations of the emotion information reaches the preset number, the changed emotion information is no longer sent to the terminal device. For example, once the currently estimated emotion has differed from the previously estimated emotion information 100 times, the changed emotion is no longer sent to the terminal device.
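
The preset condition of step S104 can be expressed as the small check below; the limit of 100 follows the example above, and the counter handling is an assumption about how the cloud server might track estimations.

    MAX_PUSH_COUNT = 100   # example preset number of estimations

    def should_push(current_emotion, previous_emotion, push_count):
        """Push the changed emotion only while the preset count has not been reached."""
        return current_emotion != previous_emotion and push_count < MAX_PUSH_COUNT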

And S105, receiving a feedback signal sent by the terminal device, wherein the feedback signal is generated when the user triggers an operation based on the current emotion information of the user.

For the embodiment of the application, the feedback signal is sent by the terminal device depending on whether the user indicates that the current emotion pushed by the cloud server is correct, and by receiving the feedback signal the cloud server judges whether the changed emotion of the user has been recognized correctly. For example, the emotion of the user changes from "tension" to "calm", and the cloud server pushes "calm" to the terminal device corresponding to the user. The terminal device asks the user whether the user is currently in a "calm" emotion, and if the user confirms this, a confirmation signal is sent; after receiving the confirmation signal, the cloud server knows that the emotion recognition is correct. If the user determines that he or she is not currently in a "calm" emotion, the terminal device sends a negative acknowledgement signal, and after receiving it the cloud server knows that the emotion recognition is wrong.

S106, at least one of the first network model, the second network model and the third network model is updated based on the current emotion information of the user and the corresponding feedback signal.

For the embodiment of the application, the cloud server can know whether emotion recognition of the user is correct or not according to the feedback signal, and the cloud server can update at least one of the first network model, the second network model and the third network model according to the feedback signal, so that the cloud server can accurately recognize the emotion of the user.

In a possible implementation manner of the embodiment of the present application, the step S106, when updating at least one of the first network model, the second network model and the third network model based on the current emotion information of the user and the corresponding feedback signal, includes a step S1061 (not shown), and further includes at least one of a step S1062 (not shown), a step S1063 (not shown), and a step S1064 (not shown), wherein,

S1061, if the feedback signal is a confirmation signal triggered based on the current emotion information of the user, then,

and acquiring at least one item of brain wave data, heart rate data and body temperature data corresponding to the current emotion information of the user.

For the embodiment of the application, the user confirms through the terminal device that the changed emotion is the user's current real emotion, and the terminal device outputs the confirmation signal. After receiving the confirmation signal, the cloud server determines that the changed emotion of the user that it recognized is correct.

After receiving the confirmation signal, the cloud server acquires at least one of electroencephalogram data, heart rate data and body temperature data corresponding to the current emotion information of the user, so that the cloud server can better identify the emotion of the user.

S1062, updating the first network model based on the electroencephalogram data corresponding to the current emotion information of the user and the current emotion information of the user.

For the embodiment of the application, after the cloud server learns that the changed emotion it judged for the user is correct, the cloud server determines a first brain wave training sample set based on the changed emotion of the user, the collected brain wave data of the user and the attribute information of the user. The first brain wave training sample set is used to update the first network model. For example, the electroencephalogram data within 30 s after the emotion change is acquired, wherein the 30 s of electroencephalogram data includes 100 groups of electroencephalogram data, each group lasting 0.3 s. Each group of brain wave data is combined with the user attribute information and the changed emotion to form 100 first brain wave training samples.

The cloud server inputs the first brain wave training sample set into the first network model to update it, so that the first network model can better recognize the emotion of the user and its quality is gradually improved.
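As a hypothetical sketch of how such a brain wave training sample set could be assembled and applied (the function names, sample format and training hook are assumptions, not part of the application):

```python
# Hypothetical sketch: assemble the first brain wave training sample set from the 30 s of
# EEG collected after the confirmed emotion change (100 windows of 0.3 s each) and use it
# to update the first network model. Names and sample format are illustrative.
def build_eeg_samples(eeg_windows, user_attributes, confirmed_emotion):
    """eeg_windows: list of ~100 arrays, each covering 0.3 s of brain wave data."""
    return [
        {"eeg": window, "attributes": user_attributes, "label": confirmed_emotion}
        for window in eeg_windows
    ]

def update_first_network_model(model, samples, train_step):
    """train_step is whatever training routine the first network model exposes."""
    for sample in samples:
        train_step(model, sample["eeg"], sample["attributes"], sample["label"])
    return model
```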

And S1063, updating the second network model based on the heart rate data corresponding to the current emotion information of the user and the current emotion information of the user.

For the embodiment of the application, the cloud server determines a first heart rate training sample set based on the changed emotion of the user, the collected heart rate data of the user and the user attribute information. The first heart rate training sample set is used to update the second network model. For example, heart rate data is collected for the time period from the emotion change until the next emotion change. Assume that this time period is 30 min and that the 30 min of heart rate data is divided into 1-min units, giving 30 groups of heart rate data, each group being the number of heartbeats within 1 min. Each group of heart rate data is combined with the user attribute information and the changed emotion to form 30 first heart rate training samples.

And inputting the first heart rate training sample set into a second network model so as to update the second network model, so that the second network model can better recognize the emotion of the user and gradually improve the quality of the second network model.

And S1064, updating the third network model based on the body temperature data corresponding to the current emotion information of the user and the current emotion information of the user.

A first body temperature training sample set is determined based on the changed emotion of the user, the collected body temperature data of the user and the attribute information of the user. The first body temperature training sample set is used to update the third network model. For example, body temperature data is collected for the time period from the emotion change until the next emotion change. This time period is also 30 min, and the 30 min of body temperature data is divided into 1-min units, giving 30 groups of body temperature data, each group being the mean body temperature within 1 min. Each group of body temperature data is combined with the attribute information of the user and the changed emotion to form 30 first body temperature training samples.

And inputting the first body temperature training sample set into the third network model so as to update the third network model. Therefore, the third network model can better recognize the emotion of the user, and the quality of the third network model is gradually improved.
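The 1-minute windowing described for the heart rate and body temperature samples can be illustrated with a minimal sketch (the sample format and example values are assumptions made for illustration only):

```python
# Hypothetical sketch of the 1-minute windowing described above for heart rate and body
# temperature; the sample format and example values are illustrative assumptions.
def build_windowed_samples(per_minute_values, user_attributes, confirmed_emotion):
    """per_minute_values: e.g. 30 heartbeat counts or 30 mean body temperatures, one per minute."""
    return [
        {"value": value, "attributes": user_attributes, "label": confirmed_emotion}
        for value in per_minute_values
    ]

# A 30-minute interval between two emotion changes yields 30 samples per signal:
heart_rate_samples = build_windowed_samples([72] * 30, {"gender": "male", "age": 23}, "calm")
body_temp_samples = build_windowed_samples([36.5] * 30, {"gender": "male", "age": 23}, "calm")
```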

By updating at least one of the first network model, the second network model and the third network model, the cloud server can better identify the emotion of the user.

In the embodiment of the present application, when at least two of the steps S1062, S1063, and S1064 are involved, the execution sequence of the steps S1062, S1063, and S1064 is not limited.

In a possible implementation manner of the embodiment of the application, the step S106, when updating at least one of the first network model, the second network model and the third network model based on the current emotion information of the user and the corresponding feedback signal, includes the steps S1065, S1066, S1067 and S1068, and further includes at least one of the steps S1069, S10610 and S10611:

S1065, if the feedback signal is a negative signal triggered based on the current emotion information of the user, then,

and transmitting the at least one alternative emotion information to the terminal equipment.

For the embodiment of the application, for example, when the current emotion of the user changes from "nervous" to "calm", the cloud server detects that the current emotion of the user has changed. The cloud server pushes the currently changed emotion "calm" to the terminal device corresponding to the user, and the terminal device reminds the user of the emotion change and asks whether the user is really in the changed emotion.

The user determines through the terminal device whether the changed emotion recognized by the cloud server is correct; if the user sends a negative signal through the terminal device, the changed emotion recognized by the cloud server is incorrect. The cloud server then pushes alternative emotion information to the terminal device, for example, the cloud server sends "excited", "nervous" and "happy" to the terminal device.

And S1066, receiving a feedback signal for the alternative emotion information sent by the terminal device, wherein the feedback signal for the alternative emotion information is generated after the user triggers a selection operation on the at least one piece of alternative emotion information.

For the embodiment of the application, after the user selects the current real emotion through the terminal equipment, the terminal equipment sends the feedback signal based on the real emotion selected by the user, so that the cloud server can know the current real emotion of the user.

S1067, determining the alternative emotion information selected by the user from the feedback signal for the alternative emotion information.

For the embodiment of the present application, for example, if the emotion selected by the user is "happy," the feedback signal includes information of the "happy" emotion. And the cloud server determines the current happy emotion of the user from the feedback signal.

S1068, acquiring at least one of electroencephalogram data, heart rate data and body temperature data corresponding to the alternative emotion information.

For the embodiment of the application, taking step S1067 as an example, after determining the current real emotion of the user, the cloud server obtains the user emotion parameter corresponding to the real emotion of the user, so that the cloud server can better recognize the emotion of the user.
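The negative-feedback branch (S1065 to S1068) can be summarized with a hypothetical sketch; the emotion list, function names and placeholder callbacks below are illustrative assumptions, not part of the application:

```python
# Hypothetical sketch of the negative-feedback branch (S1065-S1068); names are illustrative.
ALTERNATIVE_EMOTIONS = ["excited", "nervous", "happy"]

def handle_negative_feedback(send_alternatives, wait_for_selection, fetch_parameters):
    send_alternatives(ALTERNATIVE_EMOTIONS)        # S1065: push alternative emotions
    selected = wait_for_selection()                # S1066/S1067: user's selected real emotion
    parameters = fetch_parameters(selected)        # S1068: matching EEG / heart rate / body temperature data
    return selected, parameters

# Example with placeholder callbacks:
emotion, params = handle_negative_feedback(
    send_alternatives=lambda options: print("pushed:", options),
    wait_for_selection=lambda: "happy",
    fetch_parameters=lambda e: {"eeg": [], "heart_rate": [], "body_temp": []},
)
```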

And S1069, updating the first network model based on the brain wave data corresponding to the alternative emotion information and the alternative emotion information.

The cloud server generates a second brain wave training sample set based on the user attribute information, the acquired brain wave data of the user and the current real emotion determined by the user. The second brain wave training sample set is used to update the first network model. For example, the electroencephalogram data within 30 s after the emotion change is acquired, wherein the 30 s of electroencephalogram data includes 100 groups of electroencephalogram data, each group lasting 0.3 s. Each group of brain wave data is combined with the attribute information of the user and the current real emotion of the user to form 100 second brain wave training samples.

The cloud server inputs the second brain wave training sample set into the first network model to train the first network model, so that the effect of updating the first network model is achieved, and the first network model can better recognize the emotion of the user.

And S10610, updating the second network model based on the heart rate data corresponding to the alternative emotion information and the alternative emotion information.

The cloud server generates a second heart rate training sample set based on the user attribute information, the collected heart rate data of the user and the current real emotion determined by the user. The second heart rate training sample set is used to update the second network model. For example, heart rate data is collected for the time period from the emotion change until the next emotion change. Assume that this time period is 30 min and that the 30 min of heart rate data is divided into 1-min units, giving 30 groups of heart rate data, each group being the number of heartbeats within 1 min. Each group of heart rate data is combined with the user attribute information and the current real emotion of the user to form 30 second heart rate training samples.

The cloud server inputs the second heart rate training sample set into the second network model to train the second network model, so that the effect of updating the second network model is achieved, and the second network model can better recognize the emotion of the user.

And S10611, updating the third network model based on the body temperature data corresponding to the alternative emotion information and the alternative emotion information.

The cloud server generates a second body temperature training sample set based on the user attribute information, the collected body temperature data of the user and the current real emotion determined by the user. The second body temperature training sample set is used to update the third network model. For example, body temperature data is collected for the time period from the emotion change until the next emotion change. This time period is also 30 min, and the 30 min of body temperature data is divided into 1-min units, giving 30 groups of body temperature data, each group being the mean body temperature within 1 min. Each group of body temperature data is combined with the attribute information of the user and the current real emotion of the user to form 30 second body temperature training samples.

The cloud server inputs the second body temperature training sample set into the third network model to train the third network model, so as to achieve the effect of updating the third network model and enable it to better recognize the emotion of the user.

In the embodiment of the present application, when at least two of the steps S1069, S10610, and S10611 are involved, the execution order of the steps S1069, S10610, and S10611 is not limited.

In order to facilitate relieving the emotion of the user, in one possible implementation manner of the embodiment of the present application, the method further includes step S107 (not shown in the figure) and step S108 (not shown in the figure), and step S107 may be performed after step S103, wherein,

S107, determining a music library corresponding to the current emotion of the user based on the current emotion information of the user, wherein the music library comprises at least one audio file.

For the embodiment of the application, the cloud server controls the wearable device to play the audio files in the music library corresponding to the current emotion based on the current emotion of the user, so as to achieve the effect of improving the current emotion of the user. For example, the cloud server recognizes that the current emotion of the user is "nervous", the cloud server determines the music library corresponding to the "nervous" emotion, and a plurality of audio files capable of relieving the "nervous" emotion are stored in that music library.

S108, controlling the terminal equipment to play audio files in the music library in a preset mode;

the preset mode comprises at least one of the following modes:

randomly playing audio files in a music library;

sequentially playing a plurality of audio files in the music library according to the arrangement sequence of the plurality of audio files;

and circularly playing one audio file in the music library.

For the embodiment of the application, for example, after the cloud server determines the music library corresponding to the "nervous" emotion, the files in that music library may be played by randomly playing audio files in the music library, by sequentially playing a plurality of audio files according to their arrangement order, or by circularly playing one of the audio files. The cloud server controls the wearable device to play the audio files, so as to achieve the effect of relieving the "nervous" emotion and improve the use experience of the user.
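A minimal, hypothetical sketch of the emotion-to-music-library lookup and the three preset play modes follows; the library contents, file names and mode labels are assumptions made for illustration only:

```python
# Hypothetical sketch of steps S107-S108; library contents and names are illustrative.
import itertools
import random

MUSIC_LIBRARIES = {"nervous": ["calm_piano.mp3", "rain_sounds.mp3", "slow_strings.mp3"]}

def play_for_emotion(emotion, mode, play):
    library = MUSIC_LIBRARIES.get(emotion, [])
    if not library:
        return
    if mode == "random":                       # randomly play audio files in the library
        play(random.choice(library))
    elif mode == "sequential":                 # play files in their arrangement order
        for audio in library:
            play(audio)
    elif mode == "loop_one":                   # circularly play one audio file
        for audio in itertools.repeat(library[0], 3):   # bounded to 3 repeats for illustration
            play(audio)

play_for_emotion("nervous", "sequential", play=print)   # placeholder playback: just prints file names
```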

In order to improve the mood of the user through lighting, a possible implementation manner of the embodiment of the present application further includes step S109 (not shown in the figure) and step S110 (not shown in the figure) after step S103, wherein,

and S109, determining the lighting strategy information of the environment lamp based on the current emotion information of the user.

For the embodiment of the application, for example, the cloud server recognizes that the current emotion of the user is "nervous", the cloud server determines the lighting strategy information of the environment lamp corresponding to the "nervous" emotion, and the lighting strategy scheme corresponding to the "nervous" emotion is "blue light". The blue light helps the user to relieve a "stressful" mood.

And S110, sending the lighting strategy information to the environment lamp so that the environment lamp operates according to the lighting strategy information.

For the embodiment of the application, the cloud server can be connected with the environment lamp through WIFI, can be connected through Mesh networking, or can be connected with the environment lamp through other connection modes, which is not limited here. The cloud server sends the lighting strategy information of "emit blue light" to the environment lamp. After receiving the lighting strategy information, the environment lamp works according to the lighting strategy information of "emit blue light". In an environment with blue light, the user can gradually become calm, so that the "nervous" emotion of the user is relieved and the use experience of the user is improved.
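The emotion-to-lighting-strategy mapping of steps S109 and S110 can be sketched as follows; only the "nervous" to blue-light pairing comes from the text, and the table and function names are illustrative assumptions:

```python
# Hypothetical sketch of steps S109-S110; names are illustrative.
LIGHTING_STRATEGIES = {"nervous": "emit blue light"}

def apply_lighting(emotion, send_to_lamp):
    strategy = LIGHTING_STRATEGIES.get(emotion)
    if strategy is not None:
        send_to_lamp(strategy)          # the environment lamp then operates per the strategy

apply_lighting("nervous", send_to_lamp=print)   # placeholder transport: just prints the strategy
```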

In order to facilitate pushing medical advice that is suitable for the current mood of the user, a possible implementation manner of the embodiment of the present application further includes step S111 (not shown in the figure) and step S112 (not shown in the figure) after step S103, wherein,

and S111, determining corresponding medical advice information based on the current emotion information of the user.

For the embodiment of the application, the cloud server identifies the current emotion of the user and then determines the medical suggestion corresponding to the current emotion of the user. For example, if the current mood of the user is "tension", the medical advice corresponding to the "tension" mood is "relax, breathe deeply and drink warm water".

S112, pushing medical advice information to the terminal device.

For the embodiment of the application, the cloud server pushes the medical advice of "relax, breathe deeply and drink warm water" to the terminal device corresponding to the user. The user views the medical advice through the terminal device, so that the emotion of the user can be improved according to the medical advice.

In order to determine the volume and sound effect more suitable for the user according to the current emotion of the user, a possible implementation manner of the embodiment of the present application further includes step S113 (not shown in the figure) and step S114 (not shown in the figure) after step S103, wherein,

S113, determining playing strategy information of the wearable device based on the current emotion information of the user.

For the embodiment of the application, for example, the cloud server recognizes that the current emotion of the user is "nervous", and the cloud server determines the playing strategy information of the wearable device corresponding to the "nervous" emotion, where the playing strategy information corresponding to the "nervous" emotion is "volume 50%, bass is enhanced, and surround effect is enhanced". The volume is adjusted to moderate 50%, the bass enhancement enables the sound played by the wearable device to be more powerful, so that the user can be helped to relieve the nervous emotion, the surround effect is enhanced, the user can be more immersed in the sound played by the wearable device, and the user can be helped to relieve the nervous emotion.

S114, pushing the playing strategy information to the terminal device, so that the terminal device controls the wearable device to operate according to the playing strategy information.

For the embodiment of the application, the cloud server sends the playing strategy information of "volume 50%, enhanced bass, enhanced surround effect" to the terminal device. After receiving the playing strategy information, the terminal device controls the wearable device to work according to this playing strategy. Playing sound according to "volume 50%, enhanced bass, enhanced surround effect" can gradually bring the user into a "calm" emotion, thereby relieving the "nervous" emotion of the user and improving the use experience of the user.

The embodiment of the present application provides a method for detecting a user emotion, which is executed by a terminal device as shown in fig. 2, and includes step S201, step S202, and step S203, wherein,

s201, receiving user emotion parameters uploaded by the wearable device, wherein the user emotion parameters comprise at least one of electroencephalogram data, heart rate data, voice characteristic data and body temperature data.

For the embodiment of the application, the wearable device worn by the user collects the user emotion parameters of the user, and after the wearable device is connected and paired with the terminal device of the user, the terminal device receives the user emotion parameters collected by the wearable device.

S202, sending the emotion parameters to a cloud server so that the cloud server determines current emotion information of the user based on the emotion parameters of the user and user attribute information, wherein the user attribute information is stored in the cloud server and is matched by the cloud server according to the emotion parameters, and the user attribute information comprises: at least one of user gender information and user age information.

For the embodiment of the application, when a user uses the wearable device for the first time, the wearable device needs to be connected and adapted with the terminal device, and the user selects the user attribute information of the user through the terminal device, for example, the user attribute information determined by the user through the terminal device is "male, 23 years old". The terminal equipment sends the user attribute information and the user emotion parameters to the cloud server, so that the cloud server can identify the current emotion of the user based on the user attribute information and the user emotion parameters.

S203, receiving the current emotion information of the user, which is sent by the cloud server.

For the embodiment of the application, after the cloud server identifies the current emotion of the user, the terminal equipment receives the emotion identified by the cloud server, so that the effect of determining the current emotion of the user is achieved.

In a possible implementation manner of the embodiment of the present application, the emotion parameters include heart rate data and body temperature data, and the method further includes, after step S201, step S204 (not shown in the figure), step S205 (not shown in the figure), and step S206 (not shown in the figure), wherein,

and S204, comparing the heart rate data with a preset heart rate range.

For the embodiment of the application, the heart rate data acquired by the wearable device can also be used to represent the heartbeat state of the user. The preset heart rate range represents a normal heart rate range, for example, 60-100 beats/min. If the heart rate data collected within 30 s is 90 beats/min, the collected heart rate data of 90 beats/min is compared with the preset heart rate range of 60-100 beats/min.

And S205, comparing the body temperature data with a body temperature preset range.

For the embodiment of the application, the body temperature data collected by the wearable device can also be used for representing the body temperature state of the user. The preset body temperature range represents the normal body temperature range, for example, the preset body temperature range is 36.0-37.3 ℃. And if the body temperature data collected within 30s is 36.5 ℃, comparing the collected body temperature data of 36.5 ℃ with a preset body temperature range of 36.0-37.3 ℃.

And S206, if the heart rate data is not in the preset heart rate range and/or the body temperature data is not in the preset body temperature range, outputting alarm information.

For the embodiment of the application, taking step S204 as an example, if the heart rate data of the user acquired within 30 s is 90 beats/min, which is within the preset heart rate range of 60-100 beats/min, the heart rate of the user is normal. If the collected heart rate of the user is more than 100 beats/min or less than 60 beats/min, the heart rate data is not in the preset heart rate range, and the terminal device issues an alarm to remind the user of the heart rate abnormality. The terminal device can play voice prompts such as "heart rate too fast/slow" and "abnormal heart rate" to remind the user.

Taking step S205 as an example, if the body temperature data of the user acquired within 30 s is 36.5 ℃, which is within the preset body temperature range of 36.0-37.3 ℃, the body temperature of the user is normal. If the collected body temperature of the user is higher than 37.3 ℃ or lower than 36.0 ℃, the body temperature data is not in the preset body temperature range, and the terminal device gives an alarm to remind the user of the body temperature abnormality. The terminal device can play voice prompts such as "body temperature too high/low" and "abnormal body temperature" to remind the user.

In the embodiment of the present application, the execution sequence of step S204 and step S205 is not limited.
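The range checks of steps S204 to S206 can be sketched as follows; the preset ranges come from the examples above, while the function and alert names are assumptions made for illustration:

```python
# Hypothetical sketch of steps S204-S206; names are illustrative.
HEART_RATE_RANGE = (60, 100)      # beats/min
BODY_TEMP_RANGE = (36.0, 37.3)    # degrees Celsius

def check_vitals(heart_rate, body_temp, alert):
    if not (HEART_RATE_RANGE[0] <= heart_rate <= HEART_RATE_RANGE[1]):
        alert("heart rate too fast" if heart_rate > HEART_RATE_RANGE[1] else "heart rate too slow")
    if not (BODY_TEMP_RANGE[0] <= body_temp <= BODY_TEMP_RANGE[1]):
        alert("body temperature too high" if body_temp > BODY_TEMP_RANGE[1] else "body temperature too low")

check_vitals(90, 36.5, print)     # both values normal, no alarm output
check_vitals(110, 37.8, print)    # prints two alarm messages
```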

In order to facilitate the user to more conveniently know the heart rate data and the body temperature data at different time points, a possible implementation manner of the embodiment of the present application further includes step S207 (not shown), step S208 (not shown), step S209 (not shown), and step S210 (not shown) after step S201, wherein,

and S207, generating heart rate marking data based on the heart rate data and the time corresponding to the received heart rate data.

For the embodiment of the application, after the terminal device receives the heart rate data of the user, it annotates the heart rate data with the time at which the data was collected. For example, the heart rate data collected within 30 s is 70 beats/min and the collection time is 8:20:00-8:20:30. The terminal device labels "70 beats/min" together with "8:20:00-8:20:30" to form the heart rate annotation data.

And S208, generating body temperature annotation data based on the body temperature data and the time corresponding to the received body temperature data.

For the embodiment of the application, after the terminal device receives the body temperature data of the user, it annotates the body temperature data with the time at which the data was collected. For example, the body temperature data collected within 30 s is 36.5 ℃, and the collection time is 8:20:00-8:20:30. The terminal device labels "36.5 ℃" together with "8:20:00-8:20:30" to form the body temperature annotation data.
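A minimal sketch of the annotation step follows; the record format is an assumption made for illustration, not part of the application:

```python
# Hypothetical sketch of steps S207-S208: attach the collection time window to each reading.
def annotate(value, start_time, end_time, kind):
    return {"type": kind, "value": value, "time": f"{start_time}-{end_time}"}

heart_rate_annotation = annotate(70, "8:20:00", "8:20:30", "heart_rate")    # 70 beats/min
body_temp_annotation = annotate(36.5, "8:20:00", "8:20:30", "body_temp")    # 36.5 degrees Celsius
```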

And S209, controlling the display of the heart rate annotation data and the body temperature annotation data.

For the embodiment of the application, the terminal device controls a display screen or other device to show the heart rate annotation data and the body temperature annotation data, and the user views the heart rate data and body temperature data at different time points through the display screen, so as to better understand his or her own heart rate and body temperature conditions.

And S210, uploading the heart rate annotation data and the body temperature annotation data to a cloud server.

For the embodiment of the application, the terminal device uploads the heart rate annotation data and the body temperature annotation data to the cloud server for storage, so that the user can conveniently retrieve historical heart rate data and body temperature data.

In the embodiment of the present application, the execution sequence of step S207 and step S208 is not limited.

When a user listens to an audio file using a wearable device, if the user takes off the wearable device and puts it on again after a period of time, the user may miss some segments of the audio file, which reduces the user experience. In order to improve the user experience, a possible implementation manner of the embodiment of the present application further includes step S210 (not shown in the figure) and step S211 (not shown in the figure) after step S203, wherein,

and S210, if a picking-off signal sent by the wearable device is received, controlling the currently played audio file to pause.

For the embodiment of the application, the user listens to the audio file through the wearable device. Taking a Bluetooth headset as an example, a pressure sensor is arranged at the ear muff of the Bluetooth headset, and the pressure data collected by the pressure sensor differs depending on whether the user is wearing the Bluetooth headset or has taken it off. When the user takes off the Bluetooth headset, the pressure data collected by the pressure sensor on the Bluetooth headset changes, and the Bluetooth headset sends out a take-off signal. After receiving the take-off signal, the terminal device can determine that the user has taken off the headset, and it controls the currently played audio file to pause, thereby reducing the consumption of electric energy.

And S211, if a wearing signal sent by the wearable device is received, controlling the currently played audio file to continue playing.

For the embodiment of the present application, in step S211, after the user puts the headset back on, the pressure data collected by the pressure sensor changes, and the Bluetooth headset sends a wearing signal. After receiving the wearing signal, the terminal device determines that the Bluetooth headset is being worn by the user, and the terminal device controls the Bluetooth headset to continue playing the audio file, thereby improving the experience of the user in using the Bluetooth headset.

In order to facilitate the user to switch the call from the wearable device to the terminal device when the user uses the wearable device to answer the call, a possible implementation manner of the embodiment of the present application further includes step S212 (not shown in the figure) after step S203, wherein,

and S212, if the picking-up signal sent by the wearable device is received and the wearable device is in a call state at present, switching the call playing end from the wearable device to the terminal device.

For the embodiment of the application, the user answers a call through the Bluetooth headset. During the call, if the user wants to switch from answering on the Bluetooth headset to answering on the terminal device, the user takes the Bluetooth headset off and the Bluetooth headset outputs the take-off signal; since the terminal device detects that it is currently in a call state, the call is switched from the Bluetooth headset to the terminal device. This improves the adaptability of the wearable device and the use experience of the user.
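How the terminal device might react to the take-off and wearing signals can be illustrated with a hypothetical sketch; the signal names and placeholder callbacks are assumptions made for illustration only:

```python
# Hypothetical sketch of the reactions to the take-off and wearing signals described above.
def on_wearable_signal(signal, in_call, pause_audio, resume_audio, switch_call_to_terminal):
    if signal == "take_off":
        if in_call:
            switch_call_to_terminal()   # during a call: move playback to the terminal device
        else:
            pause_audio()               # otherwise: pause the currently played audio file
    elif signal == "wearing":
        resume_audio()                  # headset put back on: continue playing

# Example with placeholder actions:
on_wearable_signal("take_off", in_call=False,
                   pause_audio=lambda: print("audio paused"),
                   resume_audio=lambda: print("audio resumed"),
                   switch_call_to_terminal=lambda: print("call switched to terminal"))
```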

The foregoing embodiments describe a method for detecting a user emotion from the perspective of a method flow, and the following embodiments describe a cloud server and a terminal device from the perspective of a virtual module or a virtual unit, which are described in detail in the following embodiments.

As shown in fig. 3, the cloud server 30 for detecting a user emotion specifically may include:

the first receiving module 301 is configured to receive a user emotion parameter uploaded by a terminal device, where the user emotion parameter includes at least one of electroencephalogram data, heart rate data, body temperature data, and voice feature data;

a first determining module 302, configured to determine user attribute information that matches the user emotion parameter, where the user attribute information is stored in the cloud server, and the user attribute information includes: at least one of user gender information and user age information;

and the estimating module 303 is configured to estimate current emotion information of the user based on the determined user attribute information and the user emotion parameter.

For the embodiment of the application, the first receiving module 301 receives the user emotion parameters, the first determining module 302 determines the user attribute information matched with the user emotion parameters in the cloud server, and the estimating module 303 combines the user attribute information and the user emotion parameters to estimate the current emotion of the user, so that the effect of conveniently identifying the current emotion of the user is achieved.

In a possible implementation manner of the embodiment of the present application, the estimation module 303 is specifically configured to, when estimating the current emotion information of the user based on the determined user attribute information and the user emotion parameter:

based on the electroencephalogram data and the determined user attribute information, estimating the current first sub-emotion information of the user, and/or,

estimating the current second sub-emotion information of the user based on the heart rate data and the determined user attribute information, and/or,

estimating the current third sub-emotion information of the user based on the body temperature data and the determined user attribute information, and/or,

estimating the current fourth sub-emotion information of the user based on the voice characteristic data and the determined user attribute information;

and predicting the current emotion information of the user based on at least one of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information.

In a possible implementation manner of the embodiment of the application, the estimation module 303 is specifically configured to estimate, based on the electroencephalogram data and the determined user attribute information, current first sub-emotion information of the user, and/or estimate, based on the voice feature data and the determined user attribute information, current fourth sub-emotion information of the user, at least one of the following items:

performing time-frequency transformation on the brain wave data to obtain brain wave power spectrum data, performing feature extraction on the brain wave power spectrum data to obtain brain wave frequency data, wherein the brain wave frequency data comprises alpha waves, beta waves, delta waves, theta waves and power proportions of the alpha waves, the beta waves, the delta waves and the theta waves, predicting current first sub-emotion information of the user based on the brain wave frequency data and the determined user attribute information, and/or,

and extracting voice loudness information and voice frequency information from the voice feature data, and estimating the current fourth sub-emotion information of the user based on the voice loudness information, the voice frequency information and the determined user attribute information.
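The feature extraction described above can be illustrated with a sketch, assuming NumPy and SciPy are available; the band limits are the conventional delta/theta/alpha/beta ranges, and the function names and voice features (RMS loudness, dominant frequency) are assumptions made for illustration rather than the application's own definitions:

```python
# Hypothetical sketch of the EEG band-power and voice feature extraction described above.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed limits

def eeg_band_power_ratios(eeg, fs):
    """eeg: 1-D NumPy array; fs: sampling rate in Hz. Returns power proportion per wave band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), fs * 2))   # time-frequency transform
    total = np.trapz(psd, freqs)
    ratios = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        ratios[name] = float(np.trapz(psd[mask], freqs[mask]) / total)
    return ratios

def voice_features(audio, fs):
    """audio: 1-D NumPy array of the voice signal; returns loudness and dominant frequency."""
    loudness = float(np.sqrt(np.mean(audio ** 2)))                  # RMS loudness
    spectrum = np.abs(np.fft.rfft(audio))
    dominant_freq = float(np.fft.rfftfreq(len(audio), 1 / fs)[np.argmax(spectrum)])
    return {"loudness": loudness, "frequency": dominant_freq}
```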

In a possible implementation manner of the embodiment of the application, the estimation module 303 estimates current emotion information of the user based on at least one of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information, and the fourth sub-emotion information, and is specifically configured to:

if the first sub-emotion information is included, estimating that the current emotion information of the user is the first sub-emotion information;

determining weight information corresponding to each of the first sub-emotion information, the second sub-emotion information, the third sub-emotion information and the fourth sub-emotion information, and estimating the current emotion information of the user based on the first sub-emotion information, the second sub-emotion information, the third sub-emotion information, the fourth sub-emotion information and the weight information corresponding to each of them.
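The weighted-combination branch can be illustrated with a minimal sketch in which each available sub-emotion votes with its weight and the emotion with the largest total weight is taken as the estimate; the specific weight values and source names are assumptions made for illustration only:

```python
# Hypothetical sketch of the weighted combination of sub-emotion information; weights are illustrative.
def fuse_sub_emotions(sub_emotions, weights):
    """sub_emotions: e.g. {"eeg": "nervous", "heart_rate": "calm"}; weights: per-source weight."""
    scores = {}
    for source, emotion in sub_emotions.items():
        if emotion is not None:
            scores[emotion] = scores.get(emotion, 0.0) + weights.get(source, 0.0)
    return max(scores, key=scores.get) if scores else None

fused = fuse_sub_emotions(
    {"eeg": "nervous", "heart_rate": "calm", "body_temp": "nervous", "voice": None},
    {"eeg": 0.4, "heart_rate": 0.25, "body_temp": 0.15, "voice": 0.2},
)   # -> "nervous"
```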

In a possible implementation manner of the embodiment of the present application, the estimation module 303 is specifically configured to, when estimating the current first sub-emotion information of the user based on the electroencephalogram data and the determined user attribute information:

performing emotion detection processing through a first network model based on the electroencephalogram data and the determined user attribute information to obtain current first sub-emotion information of the user;

the estimation module 303 is specifically configured to, when estimating the current second sub-emotion information of the user based on the heart rate data and the determined user attribute information:

performing emotion detection processing through a second network model based on the heart rate data and the determined user attribute information to obtain current second sub-emotion information of the user;

the estimation module 303 is specifically configured to, when estimating the current third sub-emotion information of the user based on the body temperature data and the determined user attribute information, estimate:

and performing emotion detection processing through a third network model based on the body temperature data and the determined user attribute information to obtain current third sub-emotion information of the user.

In a possible implementation manner of the embodiment of the present application, the cloud server 30 further includes:

the first sending module is configured to send the current emotion information of the user to the terminal device when a preset condition is met, so that the terminal device displays the received emotion information, where the preset condition includes: the emotion information estimated at the current time is different from the emotion information estimated at the last time, and the estimation times of the emotion information are not more than the preset times;

the second receiving module is used for receiving a feedback signal sent by the terminal device, wherein the feedback signal is triggered based on the current emotion information of the user;

an updating module, configured to update at least one of the first network model, the second network model, and the third network model based on the current emotion information of the user and a corresponding feedback signal;

the update module, when updating at least one of the first network model, the second network model, and the third network model based on the current emotional information of the user and the corresponding feedback signal, is specifically configured to:

if the feedback signal is a confirmation signal triggered based on the current emotional information of the user, then,

acquiring at least one of brain wave data, heart rate data and body temperature data corresponding to the current emotion information of the user;

updating the first network model based on the brain wave data corresponding to the current emotion information of the user and the current emotion information of the user, and/or,

updating the second network model based on heart rate data corresponding to the current mood information of the user and the current mood information of the user, and/or,

updating the third network model based on body temperature data corresponding to the current emotion information of the user and the current emotion information of the user;

the update module, when updating at least one of the first network model, the second network model, and the third network model based on the current emotional information of the user and the corresponding feedback signal, is specifically configured to:

if the feedback signal is a negative acknowledgement signal triggered based on the current emotional information of the user, then,

sending at least one alternative emotion information to the terminal device;

receiving a feedback signal aiming at the alternative emotion information sent by the terminal equipment, wherein the feedback signal aiming at the alternative emotion information is generated after a user triggers a selection operation aiming at the at least one alternative emotion information;

determining alternative emotion information selected by a user from the feedback signal aiming at the alternative emotion information;

acquiring at least one of electroencephalogram data, heart rate data and body temperature data corresponding to the alternative emotion information;

updating the first network model based on the electroencephalogram data corresponding to the alternative emotion information and the alternative emotion information, and/or,

updating the second network model based on the heart rate data corresponding to the alternative emotion information and the alternative emotion information, and/or,

and updating the third network model based on the body temperature data corresponding to the alternative emotion information and the alternative emotion information.

In a fourth aspect, as shown in fig. 4, the present application provides a terminal device for detecting a user emotion, which adopts the following technical solution:

a terminal device that detects a user's emotion, comprising:

the third receiving module 401 is configured to receive a user emotion parameter uploaded by the wearable device, where the user emotion parameter includes at least one of electroencephalogram data, heart rate data, voice feature data, and body temperature data;

a second sending module 402, configured to send the emotion parameter to a cloud server, so that the cloud server determines current emotion information of a user based on the emotion parameter of the user and user attribute information, where the user attribute information is stored in the cloud server, and the user attribute information is matched by the cloud server according to the emotion parameter, where the user attribute information includes: at least one of user gender information and user age information;

a fourth receiving module 403, configured to receive the current emotion information of the user sent by the cloud server.

For the embodiment of the application, the third receiving module 401 receives the emotion parameters of the user collected by the wearable device of the user, the second sending module 402 sends the emotion parameters of the user to the cloud server, and the cloud server estimates the current emotion information of the user based on the emotion parameters of the user and the user attribute information matched with the emotion parameters of the user. The fourth receiving module 403 receives the current emotion of the user determined by the cloud server, so that the current emotion of the user is determined.

In a possible implementation manner of the embodiment of the present application, the terminal device 40 further includes:

the first comparison module is used for comparing the heart rate data with a preset heart rate range;

the second comparison module is used for comparing the body temperature data with a body temperature preset range;

and the alarm module is used for outputting alarm information when the heart rate data is not in the heart rate preset range and/or the body temperature data is not in the body temperature preset range.

For the embodiment of the present application, the first receiving module 301 and the second receiving module may be the same receiving module, or may be different receiving modules. The third receiving module 401 and the fourth receiving module 403 may be the same receiving module or different receiving modules. The first comparison module and the second comparison module may be the same comparison module or different comparison modules.

The embodiment of the present application provides a cloud server 30 for detecting a user emotion and a terminal device 40 for detecting a user emotion, which are applicable to the foregoing method embodiments and are not described herein again.

In an embodiment of the present application, there is provided a cloud server, as shown in fig. 5, a cloud server 50 shown in fig. 5 includes: a processor 501 and a memory 503. Wherein the processor 501 is coupled to the memory 503, such as via the bus 502. Optionally, cloud server 50 may also include transceiver 504. It should be noted that the transceiver 504 is not limited to one in practical applications, and the structure of the cloud server 50 does not constitute a limitation to the embodiment of the present application.

The Processor 501 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 501 may also be a combination of implementing computing functionality, e.g., comprising one or more microprocessors, a combination of DSPs and microprocessors, and the like.

Bus 502 may include a path that transfers information between the above components. The bus 502 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 502 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.

The Memory 503 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.

The memory 503 is used for storing application program codes for executing the scheme of the application, and the processor 501 controls the execution. The processor 501 is configured to execute application program code stored in the memory 503 to implement the content shown in the foregoing method embodiments.

Cloud servers include, but are not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and in-vehicle terminals (e.g., in-vehicle navigation terminals); fixed terminals such as digital TVs and desktop computers; and servers. The cloud server shown in fig. 5 is only an example, and should not bring any limitation to the functions and the use range of the embodiments of the present disclosure.

The present application provides a computer-readable storage medium, on which a computer program is stored, which, when running on a computer, enables the computer to execute the corresponding content in the foregoing method embodiments. Compared with the prior art, in the embodiment of the application, the cloud server receives the user attribute information and the user emotion parameters acquired by the wearable device, at least one of the brain wave data, the heart rate data, the body temperature data and the voice characteristic data is used for representing the current emotion of the user, the cloud server determines the user attribute information matched with the user emotion parameters, and the cloud server combines the user attribute information and the user emotion parameters to pre-estimate the current emotion of the user, so that the current emotion of the user can be determined conveniently.

In the embodiment of the present application, a terminal device is provided, where the structure of the terminal device refers to fig. 5, and when fig. 5 is used to represent the terminal device 50, the terminal device 50 includes: a processor 501 and a memory 503. Wherein the processor 501 is coupled to the memory 503, such as via the bus 502. Optionally, the terminal device 50 may also include a transceiver 504. It should be noted that the transceiver 504 is not limited to one in practical application, and the structure of the terminal device 50 is not limited to the embodiment of the present application.

The Processor 501 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 501 may also be a combination of implementing computing functionality, e.g., comprising one or more microprocessors, a combination of DSPs and microprocessors, and the like.

Bus 502 may include a path that transfers information between the above components. The bus 502 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 502 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.

The Memory 503 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.

The memory 503 is used for storing application program codes for executing the scheme of the application, and the processor 501 controls the execution. The processor 501 is configured to execute application program code stored in the memory 503 to implement the content shown in the foregoing method embodiments.

The terminal device includes, but is not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players) and in-vehicle terminals (e.g., in-vehicle navigation terminals); fixed terminals such as digital TVs and desktop computers; and servers. The terminal device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.

The present application provides a computer-readable storage medium, on which a computer program is stored, which, when running on a computer, enables the computer to execute the corresponding content in the foregoing method embodiments. Compared with the prior art, the terminal equipment in the embodiment of the application receives the user emotion parameters of the user and then sends the user emotion parameters of the user to the cloud server, the cloud server identifies the current emotion of the user based on the user attribute information and the user emotion parameters, and the terminal equipment receives the current emotion of the user identified by the cloud server, so that the current emotion of the user is determined.

It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and may be performed in other orders unless explicitly stated herein. Moreover, at least a portion of the steps in the flow chart of the figure may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, which are not necessarily performed in sequence, but may be performed alternately or alternately with other steps or at least a portion of the sub-steps or stages of other steps.

The foregoing is only a partial embodiment of the present application, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present application, and these modifications and improvements should also be regarded as falling within the protection scope of the present application.
