Voice reply method and device

Abstract

This technology, "A voice reply method and device", was designed and created by 向岩 and 吕曼瑶 on 2020-07-29. The invention discloses a voice reply method and device, wherein the method includes: acquiring voice information input by a user indoors and determining feature information of the user; determining corresponding current position information and current time information according to the voice information, and identifying text information corresponding to the voice information; determining target context information corresponding to the voice information according to the feature information, the current position information and the current time information; determining corresponding target reply information according to the text information and the target context information; and outputting voice reply information corresponding to the voice information according to the target context information and the target reply information. With this technical scheme, the emotion of the reply changes as the context changes, so that when the user speaks in different contexts, both the content and the emotion of the reply differ.

Claims

1. A method for voice reply, comprising:

acquiring voice information input indoors by a user, and determining feature information of the user, wherein the feature information comprises at least one of: gender and age group;

determining corresponding current position information and current time information according to the voice information, and identifying text information corresponding to the voice information;

determining target context information corresponding to the voice information according to the feature information, the current position information and the current time information;

determining corresponding target reply information according to the text information and the target context information;

and outputting voice reply information corresponding to the voice information according to the target context information and the target reply information.

2. The method according to claim 1, wherein before determining the target context information corresponding to the voice information according to the feature information, the current position information and the current time information, the method further comprises:

enumerating all combinations of the feature information, the position information and the time information, and assigning a context number to each distinct combination;

wherein determining the target context information corresponding to the voice information according to the feature information, the current position information and the current time information of the user comprises:

determining a target context number corresponding to the voice information according to the feature information, the current position information and the current time information.

3. The method of claim 2, wherein outputting voice reply information corresponding to the voice information according to the target context information and the target reply information comprises:

determining a target emotion speech synthesis model corresponding to the target context number according to a preset correspondence between emotion speech synthesis models and context numbers;

and generating and outputting voice reply information corresponding to the voice information according to the target emotion speech synthesis model and the target reply information.

4. The method of claim 1, wherein acquiring the voice information input by the user indoors comprises:

when a preset wake-up word is received, acquiring the voice information input by the user indoors through sound-receiving devices arranged in different indoor rooms;

and wherein determining the corresponding current position information according to the voice information comprises:

determining the current position information corresponding to the voice information according to the position of the sound-receiving device that received the voice information.

5. The method of claim 1, wherein determining the feature information of the user comprises:

performing voiceprint recognition on the voice information, and determining the gender and age group of the user according to the voiceprint recognition result.

6. A voice reply device, comprising:

an acquisition module, configured to acquire voice information input indoors by a user and determine feature information of the user, wherein the feature information comprises at least one of: gender and age group;

an information identification module, configured to determine corresponding current position information and current time information according to the voice information, and identify text information corresponding to the voice information;

a context definition module, configured to determine target context information corresponding to the voice information according to the feature information, the current position information and the current time information;

a dialogue module, configured to determine corresponding target reply information according to the text information and the target context information;

and a speech synthesis module, configured to output voice reply information corresponding to the voice information according to the target context information and the target reply information.

7. The device of claim 6, wherein the context definition module comprises:

a preprocessing unit, configured to enumerate all combinations of the feature information, the position information and the time information, and assign a context number to each distinct combination;

and a number determining unit, configured to determine a target context number corresponding to the voice information according to the feature information of the user, the current position information and the current time information.

8. The device of claim 7, wherein the speech synthesis module comprises:

a model determining unit, configured to determine a target emotion speech synthesis model corresponding to the target context number according to a preset correspondence between emotion speech synthesis models and context numbers;

and an output unit, configured to generate and output voice reply information corresponding to the voice information according to the target emotion speech synthesis model and the target reply information.

9. The device of claim 6, wherein the acquisition module is configured to:

when a preset wake-up word is received, acquire the voice information input by the user indoors through sound-receiving devices arranged in different indoor rooms;

and wherein the information identification module is configured to:

determine the current position information corresponding to the voice information according to the position of the sound-receiving device that received the voice information.

10. The device of claim 6, wherein the acquisition module is configured to:

perform voiceprint recognition on the voice information, and determine the gender and age group of the user according to the voiceprint recognition result.

Technical Field

The invention relates to the technical field of intelligent voice interaction, in particular to a voice reply method and a voice reply device.

Background

In the process of human-computer interaction, converting text into speech (speech synthesis technology) to deliver information to users by voice has gradually become part of everyday life. As the technology matures, its usage scenarios keep expanding, and users are no longer satisfied with synthetic speech that is merely one-dimensional, such as only "clear" or "pleasant to listen to". How to match the synthesized voice to the current context and convey appropriate semantic and emotional information, so as to make the user experience more comfortable, has become an urgent need.

Disclosure of Invention

In view of the above problems, the present invention provides a voice reply method and a corresponding device that combine context with speech, so that the emotion of the reply changes with the context: as the time, season, place, the person being addressed and the topic of conversation vary, the wording and emotion of the reply vary accordingly.

According to a first aspect of the embodiments of the present invention, there is provided a voice reply method, including:

acquiring voice information input indoors by a user, and determining feature information of the user, wherein the feature information comprises at least one of: gender and age group;

determining corresponding current position information and current time information according to the voice information, and identifying text information corresponding to the voice information;

determining target context information corresponding to the voice information according to the feature information, the current position information and the current time information;

determining corresponding target reply information according to the text information and the target context information;

and outputting voice reply information corresponding to the voice information according to the target context information and the target reply information.

In one embodiment, preferably, before determining the target context information corresponding to the voice information according to the feature information, the current position information and the current time information, the method further includes:

enumerating all combinations of the feature information, the position information and the time information, and assigning a context number to each distinct combination;

wherein determining the target context information corresponding to the voice information according to the feature information, the current position information and the current time information of the user includes:

determining a target context number corresponding to the voice information according to the feature information, the current position information and the current time information.

In one embodiment, preferably, outputting voice reply information corresponding to the voice information according to the target context information and the target reply information includes:

determining a target emotion speech synthesis model corresponding to the target context number according to a preset correspondence between emotion speech synthesis models and context numbers;

and generating and outputting voice reply information corresponding to the voice information according to the target emotion speech synthesis model and the target reply information.

In one embodiment, preferably, acquiring the voice information input by the user indoors includes:

when a preset wake-up word is received, acquiring the voice information input by the user indoors through sound-receiving devices arranged in different indoor rooms;

and determining the corresponding current position information according to the voice information includes:

determining the current position information corresponding to the voice information according to the position of the sound-receiving device that received the voice information.

In one embodiment, preferably, determining the feature information of the user includes:

performing voiceprint recognition on the voice information, and determining the gender and age group of the user according to the voiceprint recognition result.

According to a second aspect of the embodiments of the present invention, there is provided a voice reply device, including:

an acquisition module, configured to acquire voice information input indoors by a user and determine feature information of the user, where the feature information includes at least one of: gender and age group;

an information identification module, configured to determine corresponding current position information and current time information according to the voice information, and identify text information corresponding to the voice information;

a context definition module, configured to determine target context information corresponding to the voice information according to the feature information, the current position information and the current time information;

a dialogue module, configured to determine corresponding target reply information according to the text information and the target context information;

and a speech synthesis module, configured to output voice reply information corresponding to the voice information according to the target context information and the target reply information.

In one embodiment, preferably, the context definition module comprises:

a preprocessing unit, configured to enumerate all combinations of the feature information, the position information and the time information, and assign a context number to each distinct combination;

and a number determining unit, configured to determine a target context number corresponding to the voice information according to the feature information of the user, the current position information and the current time information.

In one embodiment, preferably, the speech synthesis module includes:

a model determining unit, configured to determine a target emotion speech synthesis model corresponding to the target context number according to a preset correspondence between emotion speech synthesis models and context numbers;

and an output unit, configured to generate and output voice reply information corresponding to the voice information according to the target emotion speech synthesis model and the target reply information.

In one embodiment, preferably, the acquisition module is configured to:

when a preset wake-up word is received, acquire the voice information input by the user indoors through sound-receiving devices arranged in different indoor rooms;

and the information identification module is configured to:

determine the current position information corresponding to the voice information according to the position of the sound-receiving device that received the voice information.

In one embodiment, preferably, the acquisition module is configured to:

perform voiceprint recognition on the voice information, and determine the gender and age group of the user according to the voiceprint recognition result.

According to a third aspect of the embodiments of the present invention, there is provided a voice reply device, including:

one or more processors;

a memory;

one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method as described in the first aspect or any embodiment of the first aspect.

In the embodiments of the invention, context is combined with speech and the emotion of the reply changes as the context changes, so that when the user speaks in different contexts, both the content and the emotion of the reply differ.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of a voice reply method according to an embodiment of the present invention.

Fig. 2 is a flow chart of another voice reply method according to an embodiment of the invention.

Fig. 3 is a flow chart of another voice reply method according to an embodiment of the invention.

Fig. 4 is a block diagram of a voice reply device according to an embodiment of the present invention.

Fig. 5 is a block diagram of the context definition module in a voice reply device according to an embodiment of the present invention.

Fig. 6 is a block diagram of the speech synthesis module 45 in a voice reply device according to an embodiment of the present invention.

Detailed Description

In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention.

Some of the flows described in the present specification, the claims and the above figures include a number of operations that appear in a particular order, but it should be clearly understood that these operations may be performed out of the order in which they appear herein, or in parallel. Operation numbers such as 101 and 102 are used merely to distinguish the operations and do not by themselves represent any order of execution. In addition, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should also be noted that the terms "first", "second" and the like herein are used to distinguish different messages, devices, modules and so on; they do not represent a sequential order, nor do they require the items labelled "first" and "second" to be of different types.

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Fig. 1 is a flowchart of a voice reply method according to an embodiment of the present invention. As shown in fig. 1, the voice reply method includes:

Step S101, acquiring voice information input by a user indoors, and determining feature information of the user, wherein the feature information comprises at least one of: gender and age group;

In one embodiment, preferably, determining the feature information of the user includes:

performing voiceprint recognition on the voice information, and determining the gender and age group of the user according to the voiceprint recognition result.
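As a rough illustration of this voiceprint step (a minimal sketch, not the patented implementation), the snippet below estimates the speaker's gender and age group from simple MFCC statistics. It assumes the librosa library is available for audio loading and feature extraction, and that pre-trained classifiers gender_clf and age_group_clf with a scikit-learn-style predict interface are supplied by the application; the patent does not prescribe a particular feature set or model.

```python
# Minimal sketch: estimate gender and age group from a voice sample using
# MFCC statistics and externally supplied, pre-trained classifiers.
import numpy as np
import librosa  # assumed available for audio loading and MFCC extraction


def extract_voiceprint_features(wav_path: str) -> np.ndarray:
    """Compute a fixed-length voiceprint feature vector from an audio file."""
    signal, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    # Mean and standard deviation over time give a 26-dimensional embedding.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


def recognize_user_features(wav_path, gender_clf, age_group_clf):
    """Return (gender, age_group) predicted by hypothetical pre-trained models."""
    features = extract_voiceprint_features(wav_path).reshape(1, -1)
    gender = gender_clf.predict(features)[0]        # e.g. "male" / "female"
    age_group = age_group_clf.predict(features)[0]  # e.g. "child" / "adult" / "senior"
    return gender, age_group
```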

In one embodiment, acquiring the voice information input by the user indoors includes:

when a preset wake-up word is received, acquiring the voice information input by the user indoors through sound-receiving devices arranged in different indoor rooms;

and determining the corresponding current position information according to the voice information includes:

determining the current position information corresponding to the voice information according to the position of the sound-receiving device that received the voice information.
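The sketch below shows, under similarly hedged assumptions, how the room in which the triggering sound-receiving device is installed can directly serve as the current position information once the wake-up word is detected. The wake word, the device-to-room mapping and the detect_wake_word helper are all illustrative placeholders; the patent only requires that the receiver's installed position determines the location.

```python
# Minimal sketch: derive current position information from whichever
# sound-receiving device captured the utterance after the wake-up word.
WAKE_WORD = "hello assistant"  # hypothetical preset wake-up word

# Hypothetical mapping from receiver id to the room where it is installed.
DEVICE_ROOMS = {
    "mic-01": "living room",
    "mic-02": "bedroom",
    "mic-03": "kitchen",
}


def detect_wake_word(transcript: str) -> bool:
    """Very simple placeholder for a real wake-word detector."""
    return WAKE_WORD in transcript.lower()


def locate_speaker(device_id: str) -> str:
    """The receiver's installed position is taken as the current position."""
    return DEVICE_ROOMS.get(device_id, "unknown room")


# Usage: the device that reported the audio tells us where the user is.
if detect_wake_word("Hello assistant, what's the weather?"):
    current_position = locate_speaker("mic-02")  # -> "bedroom"
```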

Step S102, determining corresponding current position information and current time information according to the voice information, and identifying text information corresponding to the voice information;

Step S103, determining target context information corresponding to the voice information according to the feature information, the current position information and the current time information;

Step S104, determining corresponding target reply information according to the text information and the target context information;

Step S105, outputting voice reply information corresponding to the voice information according to the target context information and the target reply information.

In this embodiment, the target context information corresponding to the voice information is determined according to the user's gender, age group, current position, current time and so on, and the voice reply information is output according to the target context information and the corresponding target reply information. As a result, the emotion of the reply changes with the context: when the user speaks in different contexts, both the content and the emotion of the reply differ.
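To make the flow of steps S101 to S105 concrete, here is a hedged end-to-end sketch. The stage names (recognize_user, locate, transcribe, resolve_context, generate_reply, synthesize) are illustrative placeholders supplied by the application for the acquisition, recognition, context definition, dialogue and synthesis stages; none of them come from the patent itself.

```python
# Minimal orchestration sketch of steps S101-S105 (stage names are illustrative).
from datetime import datetime


def voice_reply_pipeline(wav_path, device_id, stages):
    """Run steps S101-S105 using application-supplied callables in `stages`."""
    # S101: acquire the utterance and determine the user's feature information.
    gender, age_group = stages["recognize_user"](wav_path)

    # S102: determine the current position and time, and transcribe the speech.
    position = stages["locate"](device_id)
    now = datetime.now()
    text = stages["transcribe"](wav_path)

    # S103: map feature, position and time information to a target context.
    context_number = stages["resolve_context"](gender, age_group, position, now)

    # S104: the dialogue stage produces the reply text for this context.
    reply_text = stages["generate_reply"](text, context_number)

    # S105: synthesize the reply with the emotion model tied to the context.
    return stages["synthesize"](reply_text, context_number)
```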

Fig. 2 is a flow chart of another voice reply method according to an embodiment of the invention.

As shown in fig. 2, in an embodiment, preferably, before the step S103, the method further includes:

Step S201, enumerating all combinations of the feature information, the position information and the time information, and assigning a context number to each distinct combination;

and step S103 includes:

Step S202, determining a target context number corresponding to the voice information according to the feature information, the current position information and the current time information.
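As a hedged sketch of this numbering scheme, the snippet below enumerates every combination of some example gender, age-group, room and time-of-day values, assigns each combination a context number, and then looks one up for the current situation. The dimensions and their values are not fixed by the patent and are chosen here purely for illustration.

```python
# Minimal sketch of steps S201-S202: number every combination of the
# (illustrative) feature, position and time values, then look one up.
from datetime import datetime
from itertools import product

GENDERS = ("male", "female")
AGE_GROUPS = ("child", "adult", "senior")
ROOMS = ("living room", "bedroom", "kitchen")
TIMES_OF_DAY = ("morning", "afternoon", "evening", "night")

# Step S201: give every distinct combination its own context number ("001", ...).
CONTEXT_NUMBERS = {
    combo: f"{idx + 1:03d}"
    for idx, combo in enumerate(product(GENDERS, AGE_GROUPS, ROOMS, TIMES_OF_DAY))
}


def time_of_day(hour: int) -> str:
    """Bucket the current hour into a coarse time-of-day label."""
    if 5 <= hour < 12:
        return "morning"
    if 12 <= hour < 18:
        return "afternoon"
    if 18 <= hour < 23:
        return "evening"
    return "night"


def resolve_context(gender, age_group, room, now: datetime) -> str:
    """Step S202: look up the preset context number for the current situation."""
    return CONTEXT_NUMBERS[(gender, age_group, room, time_of_day(now.hour))]


# Example: an adult speaking in the kitchen in the evening.
print(resolve_context("female", "adult", "kitchen", datetime(2020, 7, 29, 19, 30)))
```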

Fig. 3 is a flow chart of another voice reply method according to an embodiment of the invention.

As shown in fig. 3, in one embodiment, preferably, the step S105 includes:

Step S301, determining a target emotion speech synthesis model corresponding to the target context number according to a preset correspondence between emotion speech synthesis models and context numbers.

The correspondence between the emotion speech synthesis models and the context numbers can be preset; for example, the emotion speech synthesis models can themselves be numbered and matched to the context numbers, so that the context with context number 001 corresponds to the emotion speech synthesis model numbered 001.
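The sketch below illustrates one way such a preset correspondence could be held in a lookup table keyed by context number, followed by the synthesis of step S302 described next. The EmotionTTSModel class and its synthesize method are hypothetical stand-ins for whatever emotional speech synthesis engine is actually deployed.

```python
# Minimal sketch of steps S301-S302: pick the emotion speech synthesis model
# preset for the target context number, then synthesize the reply with it.
class EmotionTTSModel:
    """Hypothetical stand-in for an emotional speech synthesis engine."""

    def __init__(self, emotion: str):
        self.emotion = emotion

    def synthesize(self, text: str) -> bytes:
        # A real engine would return audio samples; this stub just tags the text.
        return f"[{self.emotion}] {text}".encode("utf-8")


# Preset correspondence between context numbers and emotion models (illustrative).
EMOTION_MODELS = {
    "001": EmotionTTSModel("gentle"),
    "002": EmotionTTSModel("cheerful"),
    "003": EmotionTTSModel("calm"),
}
DEFAULT_MODEL = EmotionTTSModel("neutral")


def synthesize_reply(reply_text: str, context_number: str) -> bytes:
    """Step S301: pick the model preset for the context; step S302: synthesize."""
    model = EMOTION_MODELS.get(context_number, DEFAULT_MODEL)
    return model.synthesize(reply_text)


# Example: a reply spoken with the emotion preset for context "001".
audio = synthesize_reply("Good evening! Dinner is almost ready.", "001")
```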

Step S302, generating and outputting voice reply information corresponding to the voice information according to the target emotion speech synthesis model and the target reply information.

Fig. 4 is a block diagram of a voice reply device according to an embodiment of the present invention.

As shown in fig. 4, according to a second aspect of the embodiments of the present invention, there is provided a voice reply device, including:

an acquisition module 41, configured to acquire voice information input indoors by a user, and determine feature information of the user, where the feature information includes at least one of: gender and age group;

an information identification module 42, configured to determine corresponding current position information and current time information according to the voice information, and identify text information corresponding to the voice information;

a context definition module 43, configured to determine target context information corresponding to the voice information according to the feature information, the current position information and the current time information;

a dialogue module 44, configured to determine corresponding target reply information according to the text information and the target context information;

and a speech synthesis module 45, configured to output voice reply information corresponding to the voice information according to the target context information and the target reply information.

Fig. 5 is a block diagram of the context definition module in a voice reply device according to an embodiment of the present invention.

As shown in fig. 5, in one embodiment, the context definition module 43 preferably comprises:

a preprocessing unit 51, configured to enumerate all combinations of the feature information, the position information and the time information, and assign a context number to each distinct combination;

and a number determining unit 52, configured to determine a target context number corresponding to the voice information according to the feature information of the user, the current position information and the current time information.

Fig. 6 is a block diagram of the speech synthesis module 45 in a voice reply device according to an embodiment of the present invention.

As shown in fig. 6, in one embodiment, the speech synthesis module 45 preferably includes:

a model determining unit 61, configured to determine a target emotion speech synthesis model corresponding to the target context number according to a preset correspondence between emotion speech synthesis models and context numbers;

and an output unit 62, configured to generate and output voice reply information corresponding to the voice information according to the target emotion speech synthesis model and the target reply information.

In one embodiment, preferably, the acquisition module 41 is configured to:

when a preset wake-up word is received, acquire the voice information input by the user indoors through sound-receiving devices arranged in different indoor rooms;

and the information identification module 42 is configured to:

determine the current position information corresponding to the voice information according to the position of the sound-receiving device that received the voice information.

In one embodiment, preferably, the acquisition module 41 is configured to:

perform voiceprint recognition on the voice information, and determine the gender and age group of the user according to the voiceprint recognition result.

According to a third aspect of the embodiments of the present invention, there is provided a voice reply device, including:

one or more processors;

a memory;

one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method as described in the first aspect or any embodiment of the first aspect.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, or the like.

While the voice reply method and device provided by the present invention have been described in detail, those skilled in the art will appreciate that the specific embodiments and the scope of application may be modified in accordance with the idea of the invention, and the content of this specification should not therefore be construed as limiting the invention.
