User emotion recognition and reply method, system, device and storage medium

Document number: 276379    Publication date: 2021-11-19

Reading note: the technology 用户情绪识别及回复方法、系统、设备及存储介质 (User emotion recognition and reply method, system, device and storage medium) was designed and created by 叶帅 and 向凌阳 on 2021-08-20. Its main content is as follows: the invention provides a user emotion recognition and reply method, system, electronic device, and storage medium. The method comprises the following steps: acquiring, in real time, a streaming voice stream of a conversation between a customer service and a user; separating a user voice stream from the streaming voice stream; obtaining user text information and a user emotion type according to the user voice stream; matching at least one reply text according to the user text information and the user emotion type; and pushing the at least one reply text to the customer service. By recognizing the user's emotion in real time during the conversation and providing the customer service with reference reply texts based on that emotion, the method helps the customer service promptly meet the customer's needs and soothe negative emotions, improving the quality of customer service and the user experience.

1. A method for recognizing and replying to the emotion of a user, characterized by comprising the following steps:

acquiring, in real time, a streaming voice stream of a conversation between a customer service and a user;

separating a user voice stream from the streaming voice stream;

obtaining user text information and a user emotion type according to the user voice stream;

matching at least one reply text according to the user text information and the user emotion type;

and pushing at least one reply text to the customer service.

2. The method for recognizing and replying to user emotion according to claim 1, wherein said obtaining the type of user emotion according to said user voice stream comprises the steps of:

converting the user voice stream into user text information;

obtaining a sentence vector representation of the emotion of the user text information;

carrying out emotion classification on the sentence vector representation through a trained text emotion classifier to obtain a first emotion recognition result of the user;

and obtaining the emotion type of the user according to the first emotion recognition result.

3. The method for recognizing and replying to user emotion according to claim 1, wherein said obtaining the type of user emotion according to said user voice stream comprises the steps of:

extracting audio features from the user voice stream based on the trained audio recognition model, and recognizing the audio features to obtain a second emotion recognition result of the user;

and obtaining the emotion type of the user according to the second emotion recognition result.

4. The method for recognizing and replying to user emotion according to claim 1, wherein said obtaining the type of user emotion according to said user voice stream comprises the steps of:

converting the user voice stream into user text information;

obtaining a sentence vector representation of the emotion of the user text information;

carrying out emotion classification on the sentence vector representation through a trained text emotion classifier to obtain a first emotion recognition result of the user;

obtaining the emotion type of the user according to the first emotion recognition result;

extracting audio features from the user voice stream based on the trained audio recognition model, and recognizing the audio features to obtain a second emotion recognition result of the user;

and obtaining the emotion type of the user according to the first emotion recognition result and the second emotion recognition result.

5. The method for recognizing and replying to the emotion of a user as claimed in claim 1, wherein said matching at least one reply text based on the text information of the user and the emotion type of the user comprises the steps of:

obtaining a session context label corresponding to the user text information based on the trained session context model;

and obtaining at least one corresponding reply text by matching against the session context label.

6. The method for user emotion recognition and reply of claim 5, wherein, before the step of pushing at least one of said reply texts to the customer service, the method further comprises sorting the obtained reply texts according to said user emotion type.

7. The method of claim 1, further comprising pushing the user emotion type to a customer service.

8. The method for recognizing and replying to the emotion of a user as recited in claim 1, further comprising the steps of:

separating a customer service voice stream from the streaming voice stream;

acquiring customer service text information and a customer service emotion type according to the customer service voice stream;

and judging whether the type of the customer service emotion is a negative emotion or not, and if so, pushing warning information to the customer service.

9. The method for emotion recognition and reply to a user as claimed in claim 8, wherein, before the step of separating the voice stream of customer service from the streaming voice stream, the method further comprises the steps of:

and judging whether the emotion type of the user is negative emotion or not, and if the emotion type of the user is negative emotion, starting a step of separating the customer service voice stream from the streaming voice stream.

10. A system for recognizing and replying to the emotion of a user, for implementing the method for recognizing and replying to the emotion of a user as claimed in any one of claims 1 to 9, comprising a voice stream acquisition module, a voice separation module, an analysis module, a reply matching module, and an interaction module, wherein:

the voice stream acquisition module is used for acquiring, in real time, a streaming voice stream of the conversation between the customer service and the user;

the voice separation module is used for separating a user voice stream from the streaming voice stream;

the analysis module is used for obtaining user text information and a user emotion type according to the user voice stream;

the reply matching module is used for matching at least one reply text according to the text information of the user and the emotion type of the user;

the interaction module is used for pushing at least one reply text to the customer service.

11. An electronic device, comprising:

a processor;

a memory having stored therein executable instructions of the processor;

wherein the processor is configured to perform the steps of the method of emotion recognition and reply for a user of any of claims 1 to 9 via execution of the executable instructions.

12. A computer-readable storage medium storing a program, wherein the program when executed by a processor implements the steps of the method for emotion recognition and reply for a user according to any one of claims 1 to 9.

Technical Field

The invention relates to the field of data processing, in particular to a method and a system for recognizing and replying emotion of a user, electronic equipment and a storage medium.

Background

In the Online Travel Agency (OTA) industry, the number of customers is large: roughly 500,000 customer service calls take place every day, involving over ten thousand people. Improving the service quality of the customer service telephone line therefore helps resolve users' difficulties more effectively.

Correctly recognizing the user's emotion, and adopting a different speaking pace or different scripts for customers in different emotional states, allows the customer service to serve the customer better and improves the quality of telephone service.

It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present invention and therefore may include information that does not constitute prior art known to a person of ordinary skill in the art.

Disclosure of Invention

The invention aims to provide a user emotion recognition and reply method, system, electronic device, and storage medium that address the problems in the prior art. The method recognizes the user's emotion in real time during a conversation and provides reference reply texts to the customer service according to that emotion, thereby improving the quality of customer service and the user experience.

Some embodiments of the present invention provide a method for recognizing and replying to a user emotion, including the steps of:

acquiring, in real time, a streaming voice stream of a conversation between a customer service and a user;

separating a user voice stream from the streaming voice stream;

obtaining user text information and a user emotion type according to the user voice stream;

matching at least one reply text according to the text information of the user and the emotion type of the user;

and pushing at least one reply text to the customer service.

According to some examples of this invention, said obtaining a user emotion type from said user voice stream comprises the steps of:

converting the user voice stream into user text information;

obtaining a sentence vector representation of the emotion of the user text information;

carrying out emotion classification on the sentence vector representation through a trained text emotion classifier to obtain a first emotion recognition result of the user;

and obtaining the emotion type of the user according to the first emotion recognition result.

According to some examples of this invention, said obtaining a user emotion type from said user voice stream comprises the steps of:

extracting audio features from the user voice stream based on the trained audio recognition model, and recognizing the audio features to obtain a second emotion recognition result of the user;

and obtaining the emotion type of the user according to the second emotion recognition result.

According to some examples of this invention, said obtaining a user emotion type from said user voice stream comprises the steps of:

converting the user voice stream into user text information;

obtaining a sentence vector representation of the emotion of the user text information;

carrying out emotion classification on the sentence vector representation through a trained text emotion classifier to obtain a first emotion recognition result of the user;

obtaining the emotion type of the user according to the first emotion recognition result;

extracting audio features from the user voice stream based on the trained audio recognition model, and recognizing the audio features to obtain a second emotion recognition result of the user;

and obtaining the emotion type of the user according to the first emotion recognition result and the second emotion recognition result.

According to some examples of this invention, matching at least one reply text according to the user text information and the user emotion type comprises:

obtaining a session context label corresponding to the user text information based on the trained session context model;

and obtaining at least one corresponding reply text by matching against the session context label.

According to some examples of the invention, before the step of pushing at least one of the reply texts to the customer service, the method further comprises sorting the obtained reply texts according to the user emotion type.

According to some examples of the invention, the user emotion recognition and reply method further comprises pushing the user emotion type to a customer service.

According to some examples of the invention, the method for recognizing and replying to the emotion of the user further comprises the steps of:

separating a customer service voice stream from the streaming voice stream;

acquiring customer service text information and a customer service emotion type according to the customer service voice stream;

and judging whether the type of the customer service emotion is a negative emotion or not, and if so, pushing warning information to the customer service.

According to some examples of the invention, before the step of separating the customer service voice stream from the streaming voice stream, the method further comprises the following steps:

and judging whether the emotion type of the user is negative emotion or not, and if the emotion type of the user is negative emotion, starting a step of separating the customer service voice stream from the streaming voice stream.

Some embodiments of the present invention further provide a system for recognizing and replying to user emotion, which is used to implement the above user emotion recognition and reply method and includes a voice stream acquisition module, a voice separation module, an analysis module, a reply matching module, and an interaction module, wherein:

the voice stream acquisition module is used for acquiring, in real time, a streaming voice stream of the conversation between the customer service and the user;

the voice separation module is used for separating a user voice stream from the streaming voice stream;

the analysis module is used for obtaining user text information and a user emotion type according to the user voice stream;

the reply matching module is used for matching at least one reply text according to the text information of the user and the emotion type of the user;

the interaction module is used for pushing at least one reply text to the customer service.

Some embodiments of the present invention also provide an electronic device, comprising:

a processor;

a memory having stored therein executable instructions of the processor;

wherein the processor is configured to perform the steps of the user emotion recognition and reply method via execution of the executable instructions.

Some embodiments of the present invention also provide a computer-readable storage medium storing a program, wherein the program, when executed, implements the steps of the user emotion recognition and reply method.

According to the user emotion recognition and reply method, the user's emotion is recognized in real time during the conversation, and reference reply texts are provided to the customer service according to that emotion. When the customer is in a negative emotional state, this helps the customer service promptly meet the customer's needs and soothe the negative emotion, improving the quality of customer service and the user experience.

Drawings

Other features, objects, and advantages of the invention will become apparent from the following detailed description of non-limiting embodiments, which proceeds with reference to the accompanying drawings. The drawings are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application, and together with the description serve to explain its principles. It is obvious that the drawings described below show only some embodiments of the invention, and that a person skilled in the art can derive other drawings from them without inventive effort.

FIG. 1 is a flow chart of a method for recognizing and replying to a user's emotion according to an embodiment of the present invention;

FIG. 2 is a schematic diagram of a user emotion recognition and reply system according to an embodiment of the present invention;

FIG. 3 is a schematic structural diagram of a user emotion recognition and reply device according to an embodiment of the present invention;

fig. 4 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.

Detailed Description

Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.

Fig. 1 is a flowchart of a user emotion recognition and reply method according to an embodiment of the present invention. Specifically, the method includes the following steps:

S100: acquiring, in real time, a streaming voice stream of the conversation between the customer service and the user;

S200: separating a user voice stream from the streaming voice stream. Endpoint detection and sound separation techniques are used to remove silent portions of the voice stream and to separate the user voice stream from the streaming voice stream;

S300: obtaining user text information and a user emotion type according to the user voice stream;

S400: matching at least one reply text according to the user text information and the user emotion type;

S500: pushing at least one reply text to the customer service, that is, pushing to the customer service at least one reply text corresponding to the user emotion type.

According to the user emotion recognition and reply method, the streaming voice stream of the conversation between the customer service and the user is obtained in real time, the user voice stream is separated from it, and the user's real-time emotional state is obtained from that stream. A reply scheme is then provided to the customer service according to the user's emotion, helping the customer service promptly meet the customer's needs and soothe negative emotions, improving service quality and the user experience.
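As a non-limiting illustration, the flow of steps S100 to S500 can be sketched in Python. Every class, function, and reply string below is an assumption introduced for the sketch (the disclosure does not name them), and the separation (S200) and analysis (S300) stages are stubbed out as pre-filled fields:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Turn:
    """One utterance in the customer service / user session."""
    speaker: str   # "user" or "customer_service"
    text: str      # user text information (S300, assumed already transcribed)
    emotion: str   # user emotion type (S300): "positive" / "neutral" / "negative"

def run_pipeline(turns: List[Turn], reply_bank: Dict[str, List[str]]) -> List[str]:
    """S100-S500 sketch: keep only user turns (S200), match candidate reply
    texts by emotion type (S400), and collect them for pushing (S500)."""
    pushed: List[str] = []
    for turn in turns:
        if turn.speaker != "user":
            continue  # S200: drop everything that is not the user voice stream
        pushed.extend(reply_bank.get(turn.emotion, []))  # S400 + S500
    return pushed
```

In a real deployment, the `turns` sequence would be produced incrementally by streaming speech recognition and sound separation rather than supplied up front.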

In some embodiments, the step S300 of obtaining the user emotion type according to the user voice stream includes the following steps:

S310: converting the user voice stream into user text information, i.e., acquiring the user's semantic information from the user voice stream. The speech-to-text conversion may use streaming speech recognition, in which recognition is performed while the conversation speech signal of the customer service or the user is still being collected; in other words, the speech is converted into text as the customer speaks.

S320: obtaining a sentence vector representation of the emotion of the user text information. The sentence vector representation can be obtained by segmenting the user text information into words and computing a sentence vector from them;

S330: carrying out emotion classification on the sentence vector representation through a trained text emotion classifier to obtain a first emotion recognition result of the user. The trained text emotion classifier can be obtained by a deep learning method based on Word2Vec embeddings and a neural network.

S340: obtaining the user emotion type according to the first emotion recognition result. The user emotion types may be classified into positive, neutral, and negative emotions; that is, the user emotion type may include at least one of a positive emotion, a neutral emotion, or a negative emotion. Positive emotions may include happiness, love, or pleasure; neutral emotions may include surprise or doubt; negative emotions may include complaint, anger, disgust, fear, or sadness. In this embodiment, step S300 obtains the user emotion type from the semantic information of the user's conversation in the user voice stream.

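For illustration only, steps S320 to S340 can be sketched with a toy embedding table standing in for trained Word2Vec vectors and a nearest-centroid rule standing in for the trained text emotion classifier. All words, vectors, and centroids below are assumptions made for the sketch, not values from this disclosure:

```python
import numpy as np

# Toy 2-D word vectors standing in for trained Word2Vec embeddings (assumed).
EMBEDDINGS = {
    "angry":  np.array([-1.0, 0.0]),
    "refund": np.array([-0.5, 0.2]),
    "thanks": np.array([1.0, 0.1]),
    "great":  np.array([0.9, 0.3]),
}
# Toy class centroids standing in for a trained text emotion classifier.
CENTROIDS = {
    "negative": np.array([-0.8, 0.1]),
    "positive": np.array([0.95, 0.2]),
}

def sentence_vector(tokens):
    """S320: mean of the word vectors of the segmented sentence."""
    vecs = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def classify_text_emotion(tokens):
    """S330/S340: nearest-centroid classification of the sentence vector."""
    v = sentence_vector(tokens)
    return min(CENTROIDS, key=lambda c: float(np.linalg.norm(v - CENTROIDS[c])))
```

A production classifier would be trained on labeled session transcripts; the nearest-centroid rule is only a stand-in for that learned decision boundary.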
In other embodiments, the step S300 of obtaining the user emotion type according to the user voice stream includes the following steps:

S310': extracting audio features from the user voice stream based on a trained audio recognition model, and recognizing the audio features to obtain a second emotion recognition result of the user. The trained audio recognition model judges the user's emotional state by recognizing the speech signal in the user voice stream, that is, the prosodic attributes of the user's speech during the conversation.

S320': obtaining the user emotion type according to the second emotion recognition result. In this embodiment, step S300 obtains the user emotion type from the user's intonation during the session, that is, from the speech signal in the voice stream.

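As a rough, non-authoritative stand-in for the trained audio recognition model, the prosodic attributes mentioned above can be approximated by simple frame-level features such as short-time energy and zero-crossing rate. The thresholds below are arbitrary assumptions, not values from this disclosure:

```python
import numpy as np

def prosodic_features(signal: np.ndarray, frame_len: int = 160):
    """Frame-level short-time energy and zero-crossing rate, two simple
    prosodic attributes of the kind an audio model might consume."""
    n = len(signal) // frame_len * frame_len
    frames = signal[:n].reshape(-1, frame_len)
    energy = np.mean(frames ** 2, axis=1)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return energy, zcr

def second_recognition(signal: np.ndarray) -> str:
    """Toy stand-in for the trained audio recognition model: loud, rapidly
    varying speech is flagged as negative (thresholds are assumptions)."""
    energy, zcr = prosodic_features(signal)
    return "negative" if energy.mean() > 0.5 and zcr.mean() > 0.3 else "neutral"
```

A trained model would learn its decision from labeled audio rather than from fixed thresholds; the sketch only shows where such features enter the pipeline.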
In some embodiments, the step S300 of obtaining the user emotion type according to the user voice stream includes the following steps:

S310: converting the user voice stream into user text information;

S320: obtaining a sentence vector representation of the emotion of the user text information. The sentence vector representation can be obtained by segmenting the user text information into words and computing a sentence vector from them;

S330: carrying out emotion classification on the sentence vector representation through a trained text emotion classifier to obtain a first emotion recognition result of the user;

S310': extracting audio features from the user voice stream based on a trained audio recognition model, and recognizing the audio features to obtain a second emotion recognition result of the user;

S350: obtaining the user emotion type according to the first emotion recognition result and the second emotion recognition result. In step S350, the user emotion type can be finally determined using preset confidence weights for the text-based recognition (the user's semantic information) and the speech-based recognition (the user's voice signal). That is, step S350 evaluates both the semantics of the user's conversation and the intonation with which it is spoken, which reduces the emotion classification errors that occur when the user emotion type is derived from the semantic information alone or from the voice signal alone.

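The confidence-weighted combination in step S350 might, purely as a sketch, look as follows. The weight values and the score format are assumptions introduced here:

```python
from typing import Dict

def fuse_results(first: Dict[str, float], second: Dict[str, float],
                 w_text: float = 0.6, w_audio: float = 0.4) -> str:
    """S350 sketch: each recognition result maps an emotion type to a score;
    the final user emotion type is the one with the highest weighted sum of
    the text-based (first) and audio-based (second) scores."""
    emotions = set(first) | set(second)
    combined = {e: w_text * first.get(e, 0.0) + w_audio * second.get(e, 0.0)
                for e in emotions}
    return max(combined, key=combined.get)
```

In practice the weights would be tuned on held-out sessions; the sketch only shows how disagreement between the two recognizers is resolved.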
The step S400 of matching at least one reply text according to the user text information and the user emotion type may specifically include the following steps:

S410: obtaining a session context label corresponding to the user text information based on a trained session context model;

S420: obtaining at least one corresponding reply text by matching against the session context label.

After at least one reply text is obtained in step S420, the obtained reply texts can be sorted according to the user emotion type. In step S500, the sorted reply texts are pushed to the customer service, who can then quickly decide how to reply to the user. This helps the customer service promptly meet the customer's needs, soothe the customer's emotions, improve service quality, and improve the user experience.

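Steps S410 and S420, together with the sorting performed before S500, can be sketched as follows. The `soothing` tag and the structure of the reply bank are illustrative assumptions, not part of this disclosure:

```python
from typing import Dict, List

def match_and_sort(context_label: str, user_emotion: str,
                   reply_bank: Dict[str, List[dict]]) -> List[str]:
    """S410/S420 plus the pre-push sorting step: fetch candidate replies for
    the session context label, then move soothing replies to the front when
    the user emotion type is negative."""
    candidates = reply_bank.get(context_label, [])
    if user_emotion == "negative":
        # Stable sort: replies tagged as soothing float to the front.
        candidates = sorted(candidates, key=lambda r: not r.get("soothing", False))
    return [r["text"] for r in candidates]
```

The session context label itself would come from the trained session context model; here it is simply taken as an input string.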
Steps S100 to S500 of the user emotion recognition and reply method monitor the user's emotion type in real time during the conversation between the customer service and the user, and automatically provide the customer service with matched reply texts according to that emotion type. In addition, the method of the invention may further include the following steps:

S600: separating a customer service voice stream from the streaming voice stream;

S700: acquiring customer service text information and a customer service emotion type according to the customer service voice stream. The customer service emotion type can likewise include at least one of a positive emotion, a neutral emotion, or a negative emotion: positive emotions can include happiness, love, or pleasure; neutral emotions can include surprise or doubt; negative emotions can include complaint, anger, disgust, fear, or sadness.

S800: judging whether the customer service emotion type is a negative emotion; if so, S900: pushing warning information to the customer service.

Through steps S600 to S900, the customer service's emotion type is monitored alongside the user's, and when the customer service is found to be in a negative emotional state, warning information is sent as a reminder. Steps S600 to S900 may also be started only when the user emotion type is detected to be negative; that is, before step S600 separates the customer service voice stream from the streaming voice stream, the method of the present invention further includes the following step:

and judging whether the emotion type of the user is negative emotion or not, and if the emotion type of the user is negative emotion, starting a step of separating the customer service voice stream from the streaming voice stream.

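The gating described above, i.e., running steps S600 to S900 only once the user emotion type is negative, can be sketched as follows. The function names and the warning text are assumptions made for the sketch:

```python
from typing import Callable, Optional

def monitor_customer_service(user_emotion: str,
                             agent_emotion: Callable[[], str]) -> Optional[str]:
    """Run the S600-S900 branch only when the user emotion is negative.
    `agent_emotion` stands in for separating the customer service voice
    stream (S600) and analyzing it (S700); both names are assumptions."""
    if user_emotion != "negative":
        return None  # monitoring of the customer service is not started
    if agent_emotion() == "negative":  # S800
        return "warning: negative customer service emotion detected"  # S900
    return None
```

Passing the S600/S700 stages as a callable keeps the expensive separation and analysis from running at all when the user is not in a negative state, which matches the conditional start described above.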
Some embodiments of the present invention further provide a system for recognizing and replying user emotion, which is used to implement the method for recognizing and replying user emotion, and includes a voice stream obtaining module M100, a voice separating module M200, an analyzing module M300, a reply matching module M400, and an interaction module M500, where:

the voice flow obtaining module M100 is configured to obtain a streaming voice flow in a customer service and user session process in real time;

the voice separation module M200 is configured to separate a user voice stream from the streaming voice stream;

the analysis module M300 is configured to obtain user text information and a user emotion type according to the user voice stream;

the reply matching module M400 is used for matching at least one reply text according to the text information of the user and the emotion type of the user;

the interaction module M500 is configured to push at least one of the reply texts to the customer service.

The function implementation manners of each function module in the user emotion recognition and reply system of the embodiment can be implemented by adopting the specific implementation manners of each step in the user emotion recognition and reply method. For example, the voice stream obtaining module M100, the voice separating module M200, the analyzing module M300, the reply matching module M400, and the interaction module M500 may respectively adopt the specific implementation manners of the steps S100 to S500 to implement the functions thereof, which is not described herein again. The user emotion recognition and reply system provided by the invention recognizes the emotion type of the user in real time in the process of serving the user by the customer service, provides a customer service reply scheme according to the emotion type of the user, helps the customer service meet the requirements of the customer at the first time, soothes the negative emotion of the customer, improves the quality of the customer service and improves the user experience.

An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 3. The electronic device 600 shown in fig. 3 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.

As shown in fig. 3, the electronic device 600 is embodied in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: at least one processing unit 610, at least one memory unit 620, a bus 628 connecting the different platform components (including the memory unit 620 and the processing unit 610), a display unit 640, and the like.

Wherein the storage unit stores program code which can be executed by the processing unit 610 such that the processing unit 610 performs the steps according to various exemplary embodiments of the present invention as described in the above-mentioned method section of the present specification. For example, processing unit 610 may perform the steps as shown in fig. 1.

The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.

The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.

Bus 628 may be one or more of several types of bus structures including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.

The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via an input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. The network adapter 660 may communicate with other modules of the electronic device 600 via the bus 628. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage platforms, to name a few.

An embodiment of the present invention further provides a computer-readable storage medium storing a program which, when executed, implements the steps of the method for recognizing and replying to the emotion of the user. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code which, when the program product is run on a terminal device, causes the terminal device to carry out the steps according to the various exemplary embodiments of the invention described in the method section above of this specification.

Referring to fig. 4, a program product 800 for implementing the above method according to an embodiment of the present invention is described. The program product may employ a portable compact disc read-only memory (CD-ROM) including program code, and may be run on a terminal device, such as a personal computer. However, the program product of the present invention is not limited in this regard. In this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.

The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Program code for carrying out operations of aspects of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the Internet using an Internet service provider).

In summary, the present invention provides a method, a system, an electronic device, and a storage medium for recognizing and replying to a user's emotion, wherein the method includes the following steps: acquiring a streaming voice stream of the conversation between the customer service agent and the user in real time; separating the user voice stream from the streaming voice stream; obtaining user text information and a user emotion type from the user voice stream; matching at least one reply text according to the user text information and the user emotion type; and pushing the at least one reply text to the customer service agent. The user emotion recognition and reply method recognizes the user's emotion in real time during the conversation and provides the customer service agent with reference reply texts according to that emotion. When the customer is in a negative emotional state, this helps the agent promptly address the customer's needs and soothe the negative emotion, improving the quality of customer service and the user experience.
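The five steps summarized above can be sketched as a simple pipeline. This is only an illustrative sketch: the function names, the channel-tagged frame format, the keyword-based emotion classifier, and the reply template table are all assumptions made for demonstration, not the patent's actual models for speech separation, speech recognition, or emotion recognition.

```python
# Illustrative sketch of the claimed pipeline. All names and the toy
# keyword classifier are assumptions; real systems would use streaming
# speaker separation, ASR, and a trained emotion model.

NEGATIVE_KEYWORDS = {"angry", "refund", "terrible", "complaint"}

REPLY_TEMPLATES = {
    "negative": ["I'm very sorry for the inconvenience; let me resolve this right away."],
    "neutral": ["Thank you for the details; here is what I found."],
}

def separate_user_stream(mixed_frames):
    """Step 2: keep only frames tagged as coming from the user channel."""
    return [f for f in mixed_frames if f["channel"] == "user"]

def transcribe(frames):
    """Step 3a: stand-in for streaming ASR; joins the frames' text payloads."""
    return " ".join(f["text"] for f in frames)

def classify_emotion(text):
    """Step 3b: toy keyword classifier standing in for the emotion model."""
    words = set(text.lower().split())
    return "negative" if words & NEGATIVE_KEYWORDS else "neutral"

def match_replies(text, emotion):
    """Step 4: return candidate reply texts for the given emotion type."""
    return REPLY_TEMPLATES[emotion]

def push_to_agent(mixed_frames):
    """Steps 1-5: process one batch of the streaming conversation."""
    user_frames = separate_user_stream(mixed_frames)
    text = transcribe(user_frames)
    emotion = classify_emotion(text)
    return emotion, match_replies(text, emotion)
```

For example, a batch containing the user utterance "I want a refund" would be classified as negative, and the soothing reply template would be pushed to the agent for reference.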

The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
