Cloud-based voice control method and device and electronic equipment

Document No.: 1355684 · Publication date: 2020-07-24

Note: This technology, "Cloud-based voice control method, apparatus and electronic device", was created by Lu Jing on 2018-12-27. The invention discloses a cloud-based voice control method, apparatus and electronic device. The method includes: receiving voice information sent by a terminal, and inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier; and sending a control instruction to the terminal according to the voice prediction result, so that the terminal performs control processing according to the received control instruction. In this approach, the terminal collects voice information and transmits it to the cloud, where it is predicted by a machine learning method; this improves prediction efficiency and accuracy, so that the speaker's identity can be confirmed quickly and accurately and precise voice control can be realized.

1. A cloud-based voice control method comprises the following steps:

receiving voice information sent by a terminal, and inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier;

and sending a control instruction to the terminal according to the voice prediction result so that the terminal can perform control processing according to the received control instruction.

2. The method of claim 1, wherein inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result further comprises:

performing feature analysis on the voice information, and extracting feature information of multiple dimensions;

respectively inputting the feature information of each dimension into a prediction model corresponding to that dimension for prediction processing, and obtaining a prediction result of each dimension;

and integrating the prediction results of all dimensions to obtain a voice prediction result.

3. The method of claim 2, wherein integrating the prediction results of the dimensions to obtain the voice prediction result further comprises:

and integrating the prediction results of all dimensions according to preset dimension priority levels to obtain a voice prediction result.

4. The method according to claim 2 or 3, wherein the plurality of dimensions specifically comprises one or more of the following dimensions: a pitch dimension, a timbre dimension, an intonation dimension, a frequency dimension, a pace dimension, and a tail-sound dimension.

5. The method according to claim 1, wherein the preset voice library further stores time period information associated with the voice sample, and the prediction model corresponds to the time period information; inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result further comprises:

acquiring time information contained in the voice information, querying time period information matched with the time information, and determining a prediction model corresponding to the matched time period information;

and inputting the voice information into a prediction model corresponding to the matched time period information for prediction processing to obtain a voice prediction result.

6. The method of claim 1, wherein after obtaining the speech prediction result, the method further comprises:

carrying out voice recognition processing on the voice information to obtain a voice recognition result;

then, according to the voice prediction result, sending a control instruction to the terminal further includes:

and sending a control instruction to the terminal according to the voice prediction result and the voice recognition result.

7. The method of any one of claims 1-6, wherein, before the method is performed, the method further comprises:

receiving image information sent by a terminal, and carrying out face recognition processing on the image information to obtain a face recognition result;

then, according to the voice prediction result, sending a control instruction to the terminal further includes:

and sending a control instruction to the terminal according to the voice prediction result and the face recognition result.

8. A cloud-based voice control device, comprising:

a prediction processing module, adapted to receive the voice information sent by the terminal and input the voice information into the prediction model for prediction processing to obtain a voice prediction result, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier;

and a sending module, adapted to send a control instruction to the terminal according to the voice prediction result, so that the terminal can perform control processing according to the received control instruction.

9. An electronic device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;

the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the operation corresponding to the cloud-based voice control method in any one of claims 1-7.

10. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform operations corresponding to the cloud-based voice control method of any of claims 1-7.

Technical Field

The invention relates to the technical field of artificial intelligence, in particular to a cloud-based voice control method and device and electronic equipment.

Background

Voice is the most natural and convenient control mode for human beings. With the development of science and technology, voice control has been widely applied in various fields: it frees people's hands and further improves the simplicity, interactivity, and entertainment value of controlling terminal devices. An important component of voice control technology is voiceprint recognition, an identification technique that recognizes a speaker according to the biological characteristics implied by his or her voice. Because each person's voiceprint characteristics are unique and difficult to forge or counterfeit, voiceprint recognition is safe, reliable, and convenient, and can be widely applied wherever identity recognition is needed. However, in the prior art the voiceprint recognition process is often time-consuming and its results are often inaccurate, which affects the efficiency and accuracy of voice control.

Disclosure of Invention

In view of the above, the present invention is proposed to provide a cloud-based voice control method, apparatus and electronic device that overcome the above problems or at least partially solve the above problems.

According to one aspect of the invention, a cloud-based voice control method is provided, and the method comprises the following steps:

receiving voice information sent by a terminal, and inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier;

and sending a control instruction to the terminal according to the voice prediction result so that the terminal can perform control processing according to the received control instruction.

Optionally, inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result further includes:

performing feature analysis on the voice information, and extracting feature information of multiple dimensions;

respectively inputting the feature information of each dimension into a prediction model corresponding to that dimension for prediction processing, and obtaining a prediction result of each dimension;

and integrating the prediction results of all dimensions to obtain a voice prediction result.

Optionally, integrating the prediction results of the dimensions to obtain the voice prediction result further includes:

and integrating the prediction results of all dimensions according to preset dimension priority levels to obtain a voice prediction result.

Optionally, the plurality of dimensions specifically includes one or more of the following dimensions: a pitch dimension, a timbre dimension, an intonation dimension, a frequency dimension, a pace dimension, and a tail-sound dimension.

Optionally, the preset voice library further stores time period information associated with the voice sample, and the prediction model corresponds to the time period information; inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result further comprises:

acquiring time information contained in the voice information, querying time period information matched with the time information, and determining a prediction model corresponding to the matched time period information;

and inputting the voice information into a prediction model corresponding to the matched time period information for prediction processing to obtain a voice prediction result.

Optionally, after obtaining the speech prediction result, the method further comprises:

carrying out voice recognition processing on the voice information to obtain a voice recognition result;

then, sending a control instruction to the terminal according to the voice prediction result further comprises:

and sending a control instruction to the terminal according to the voice prediction result and the voice recognition result.

Optionally, before the method is executed, the method further includes:

receiving image information sent by a terminal, and carrying out face recognition processing on the image information to obtain a face recognition result;

then, sending a control instruction to the terminal according to the voice prediction result further comprises:

and sending a control instruction to the terminal according to the voice prediction result and the face recognition result.

Optionally, the preset voice library further stores a sample check value of the voice sample, and after receiving the voice information sent by the terminal, the method further includes:

calculating a check value of the voice information, and judging whether a voice sample whose sample check value is consistent with the check value of the voice information exists in a preset voice library; if so, forgoing prediction processing of the voice information;

if not, the step of inputting the voice information into the prediction model for prediction processing is executed.

Optionally, sending a control instruction to the terminal, so that the terminal performs control processing according to the received control instruction further includes:

and sending an unlocking control instruction to the terminal so that the terminal can unlock the door lock according to the received unlocking control instruction.

Optionally, sending a control instruction to the terminal, so that the terminal performs control processing according to the received control instruction further includes:

and sending a payment permission instruction to the terminal so that the terminal can complete payment processing according to the received payment permission instruction.

According to another aspect of the present invention, there is provided a cloud-based voice control apparatus, including:

the prediction processing module, adapted to receive the voice information sent by the terminal and input the voice information into the prediction model for prediction processing to obtain a voice prediction result, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier;

and the sending module, adapted to send a control instruction to the terminal according to the voice prediction result, so that the terminal can perform control processing according to the received control instruction.

Optionally, the prediction processing module is further adapted to:

performing feature analysis on the voice information, and extracting feature information of multiple dimensions;

respectively inputting the feature information of each dimension into a prediction model corresponding to that dimension for prediction processing, and obtaining a prediction result of each dimension;

and integrating the prediction results of all dimensions to obtain a voice prediction result.

Optionally, the prediction processing module is further adapted to:

and integrating the prediction results of all dimensions according to the preset dimension priority levels to obtain a voice prediction result.

Optionally, the plurality of dimensions specifically includes one or more of the following dimensions: a pitch dimension, a timbre dimension, an intonation dimension, a frequency dimension, a pace dimension, and a tail-sound dimension.

Optionally, the preset voice library further stores time period information associated with the voice sample, and the prediction model corresponds to the time period information; the prediction processing module is then further adapted to:

acquiring time information contained in the voice information, querying time period information matched with the time information, and determining a prediction model corresponding to the matched time period information;

and inputting the voice information into a prediction model corresponding to the matched time period information for prediction processing to obtain a voice prediction result.

Optionally, the apparatus further comprises:

the voice recognition module, adapted to perform voice recognition processing on the voice information after the voice prediction result is obtained, so as to obtain a voice recognition result;

the sending module is further adapted to: and sending a control instruction to the terminal according to the voice prediction result and the voice recognition result.

Optionally, the apparatus further comprises:

the face recognition module, adapted to receive the image information sent by the terminal and perform face recognition processing on the image information to obtain a face recognition result;

the sending module is further adapted to: and sending a control instruction to the terminal according to the voice prediction result and the face recognition result.

Optionally, the preset voice library further stores a sample check value of the voice sample, and the apparatus further includes:

the verification module, adapted to calculate a check value of the voice information and judge whether a voice sample whose sample check value is consistent with the check value of the voice information exists in the preset voice library;

the prediction processing module is further adapted to:

if a voice sample whose sample check value is consistent with the check value of the voice information exists in the preset voice library, forgoing prediction processing of the voice information;

and if no voice sample with the sample check value consistent with the check value of the voice information exists in the preset voice library, the step of inputting the voice information into the prediction model for prediction processing is executed.

Optionally, the sending module is further adapted to:

and sending an unlocking control instruction to the terminal so that the terminal can unlock the door lock according to the received unlocking control instruction.

Optionally, the sending module is further adapted to:

and sending a payment permission instruction to the terminal so that the terminal can complete payment processing according to the received payment permission instruction.

According to still another aspect of the present invention, there is provided an electronic device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;

the memory is used for storing at least one executable instruction, and the executable instruction enables the processor to execute the operation corresponding to the cloud-based voice control method.

According to another aspect of the present invention, a computer storage medium is provided, where at least one executable instruction is stored in the storage medium, and the executable instruction causes a processor to perform an operation corresponding to the cloud-based voice control method.

According to the cloud-based voice control method, apparatus and electronic device provided by the present invention, the method includes: receiving voice information sent by a terminal, and inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier; and sending a control instruction to the terminal according to the voice prediction result, so that the terminal can perform control processing according to the received control instruction. In this approach, the terminal collects the voice information and transmits it to the cloud, where it is predicted using a machine learning method; this improves prediction efficiency and accuracy, so that the speaker's identity can be confirmed quickly and accurately and precise voice control can be realized.

The foregoing description is only an overview of the technical solutions of the present invention. Embodiments of the present invention are described below so that the technical means of the invention can be understood more clearly and the above and other objects, features, and advantages of the invention become more apparent.

Drawings

Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:

Fig. 1 is a flow chart illustrating a cloud-based voice control method according to an embodiment of the present invention;

Fig. 2 is a flow chart illustrating a cloud-based voice control method according to another embodiment of the invention;

Fig. 3 is a flow chart illustrating a cloud-based voice control method according to another embodiment of the invention;

Fig. 4 is a functional block diagram of a cloud-based voice control apparatus according to another embodiment of the present invention;

Fig. 5 is a functional block diagram of a cloud-based voice control apparatus according to another embodiment of the present invention;

Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.

Detailed Description

Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.

Fig. 1 is a flowchart illustrating a cloud-based voice control method according to an embodiment of the present invention. As shown in Fig. 1, the method includes:

Step S101, receiving voice information sent by a terminal, and inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier.

The terminal may be integrated with a voice acquisition device such as a microphone. The terminal obtains the voice information collected by that device and sends it to the cloud, where the received voice information undergoes prediction processing using a machine learning method.

In the pre-collection stage, a user records a voice sample through the terminal, and the terminal sends the pre-collected voice sample to the cloud, which trains a prediction model on it; specifically, for each pre-collected user, one prediction model is trained on the voice samples recorded by that user. The voice sample is stored in a preset voice library in association with the pre-collected user identifier; that is, in this embodiment the prediction model and the pre-collected user identifier have a corresponding relationship. The pre-collected user identifier may be entered by the user through the terminal, or generated automatically by the cloud after it receives the voice sample; the present invention does not limit this.
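To make this enrollment flow concrete, the following is a minimal sketch in Python. It assumes, beyond what the text specifies, that voice samples have already been converted to fixed-length feature vectors and that each per-user prediction model is a binary classifier (logistic regression here) trained against impostor features; the patent does not prescribe a particular model or training procedure.

```python
# Minimal sketch of the pre-collection (enrollment) stage: one model
# per pre-collected user, stored alongside that user's samples.
# Assumptions: features are fixed-length vectors; the model choice
# (logistic regression) and the impostor negatives are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

voice_library = {}  # preset voice library: user_id -> samples + model

def enroll_user(user_id, user_feats, impostor_feats):
    """Train one prediction model for one pre-collected user."""
    X = np.vstack([user_feats, impostor_feats])
    y = np.concatenate([np.ones(len(user_feats)),
                        np.zeros(len(impostor_feats))])
    model = LogisticRegression(max_iter=1000).fit(X, y)
    # the voice sample is stored in association with the user identifier
    voice_library[user_id] = {"samples": user_feats, "model": model}
```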

The voice prediction result of a prediction model can indicate whether the speaker of the voice information is the pre-collected user corresponding to that model. For example, in the pre-collection stage the user Zhang San records a voice sample and enters the user identifier "zhangsan"; the cloud trains a prediction model on Zhang San's sample, and the pre-collected user identifier corresponding to that model is "zhangsan". In the application stage, suppose the model's voice prediction result for a piece of voice information is [(0, 0.1), (1, 0.9)]: the first pair (0, 0.1) indicates a probability of 0.1 that the speaker is not Zhang San, and the second pair (1, 0.9) indicates a probability of 0.9 that the speaker is Zhang San; since the latter far exceeds the former, the speaker is determined to be Zhang San.
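The pair format in this example can be read mechanically; a tiny sketch follows, assuming the simple rule that the larger probability wins (the text does not fix a threshold):

```python
# Interpret a prediction result in the [(0, p_no), (1, p_yes)] format.
def is_pre_collected_user(prediction):
    probs = dict(prediction)  # {0: P(not user), 1: P(user)}
    return probs[1] > probs[0]

print(is_pre_collected_user([(0, 0.1), (1, 0.9)]))  # True -> Zhang San
```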

Step S102, sending a control instruction to the terminal according to the voice prediction result, so that the terminal can perform control processing according to the received control instruction.

The voice prediction result indicates whether the speaker of the voice information is a pre-collected user, and the cloud sends a control instruction to the terminal according to that result. The control instruction sent by the cloud differs across application scenarios. For example, in a voice-controlled door-opening scenario, if prediction on the voice information yields a voice prediction result judging the speaker to be a pre-collected user (Zhang San), the cloud sends a door-opening control instruction to the terminal, and the terminal opens the door according to the received instruction.

In addition, in the voice-controlled door-opening scenario, every member of a household, people and even pets alike, may open the door by voice. A user group can therefore be set up in the cloud, and the prediction models of several associated pre-collected users are placed in one group. Specifically, a correspondence between group identifiers and terminal identifiers may be established in advance, so that the cloud can determine, from the identifier of the terminal that sent a voice sample, which group the prediction model trained on that sample belongs to. For example, each family member records voice samples in advance through the same terminal; the cloud trains a prediction model for each pre-collected family member from his or her samples and places the resulting models in one group.

Then, in the application stage, upon receiving voice information sent by a terminal, the cloud first obtains the terminal identifier, determines the corresponding group from it, and then inputs the voice information into each prediction model in the group, either in parallel or in sequence, for prediction processing. The present invention is not limited in this regard.

Each prediction model in the group outputs a voice prediction result, and an instruction is then sent to the terminal according to those results. For example, in a voice door-opening scenario, when the voice prediction result output by any prediction model in the group indicates that the speaker is a pre-collected user, an unlocking control instruction is sent to the terminal; when the voice prediction results output by every prediction model in the group indicate that the speaker is not a pre-collected user, an alarm control instruction is sent to the terminal.
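A sketch of this group-based dispatch follows. The terminal-to-group mapping, the 0.5 decision threshold and the instruction names are illustrative assumptions; `voice_library` is the per-user store from the enrollment sketch above.

```python
# Run every prediction model in the terminal's group; any match sends
# an unlock instruction, no match sends an alarm instruction.
import numpy as np

def handle_voice(terminal_id, feats, group_by_terminal, voice_library):
    for user_id in group_by_terminal.get(terminal_id, []):
        model = voice_library[user_id]["model"]
        p_user = model.predict_proba(np.asarray(feats).reshape(1, -1))[0, 1]
        if p_user > 0.5:  # some group member matches the speaker
            return "UNLOCK_CONTROL_INSTRUCTION"
    return "ALARM_CONTROL_INSTRUCTION"  # no group member matched
```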

According to the cloud-based voice control method provided by this embodiment, the voice information sent by the terminal is first received and input into a prediction model for prediction processing to obtain a voice prediction result, where the prediction model is obtained by training according to a pre-collected voice sample and the voice sample is stored in a preset voice library in association with a pre-collected user identifier; a control instruction is then sent to the terminal according to the voice prediction result, so that the terminal performs control processing according to the received instruction. In this approach, the terminal collects the voice information and transmits it to the cloud, where it is predicted using a machine learning method; this improves prediction efficiency and accuracy, so that the speaker's identity can be confirmed quickly and accurately and precise voice control can be realized.

Fig. 2 is a flow chart illustrating a cloud-based voice control method according to another embodiment of the present invention. This embodiment is described taking a voice-controlled door-opening scenario as an example. As shown in Fig. 2, the method includes:

step S201, receiving image information sent by a terminal, and performing face recognition processing on the image information to obtain a face recognition result.

The terminal is integrated with an image acquisition device such as a camera; it obtains the image information collected by that device and sends it to the cloud, which performs face recognition processing on the received image information to recognize the face in the image and obtain a face image. The face recognition result is also used to verify the speaker's identity. The result obtained in this step serves as auxiliary verification information for subsequent identity verification; the voice control method of this embodiment may also omit face recognition, i.e., this step is optional in this embodiment.

Step S202, receiving the voice information sent by the terminal, carrying out feature analysis on the voice information, and extracting feature information of multiple dimensions.

The characteristics that represent a person's speech are multifaceted, and the purpose of feature analysis is to extract feature information about the speaker's voice that is strongly separable, highly stable, and so on. In this embodiment, therefore, the terminal sends the voice information to the cloud, and the cloud performs feature analysis on it from multiple dimensions, extracting the features of each dimension so as to characterize the voice information accurately. The plurality of dimensions specifically includes one or more of the following dimensions: a pitch dimension, a timbre dimension, an intonation dimension, a frequency dimension, a pace dimension, and a tail-sound dimension.

The acoustic characteristics of each speaker's voice are both relatively stable and variable. A recorded voice, however, is absolutely stable: feature analysis performed on the same recording at different moments extracts identical feature information. If a voice recorded in the pre-collection stage is replayed in the application stage, it will trigger the cloud to send a control instruction to the terminal. For example, in a voice-controlled door-opening scenario the pre-collected users are family members; if a family member's voice is recorded and a voice sample is generated from that recording, the recording can later trigger the cloud to send a control instruction that opens the door lock. A leaked voice sample, for example one recorded by a non-family member or even a lawbreaker, therefore poses a very serious safety hazard.

For this reason, in the present embodiment, recorded voice is excluded by checking the voice information. Specifically, the preset voice library also stores a sample check value for each voice sample. After the voice information is received, its check value is first calculated, and the voice library is searched for a voice sample whose sample check value is consistent with it. If the check value of the voice information matches that of a voice sample, the speaker is using a recording to open the door; in this case the voice information is not processed further, and no door-opening control instruction is sent to the terminal. If no match is found, the subsequent steps of performing feature analysis on the voice information, extracting feature information of multiple dimensions, and so on continue.
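A minimal sketch of this check follows, assuming the check value is a cryptographic hash of the raw audio bytes; the patent does not name a specific checksum algorithm.

```python
# Replay detection by check value: a byte-identical copy of an enrolled
# sample hashes identically, which live speech essentially never does.
import hashlib

def is_replayed_recording(audio_bytes, sample_check_values):
    check_value = hashlib.sha256(audio_bytes).hexdigest()
    return check_value in sample_check_values  # True -> recorded voice
```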

Step S203, respectively inputting the feature information of each dimension into the prediction model corresponding to that dimension for prediction processing to obtain a prediction result of each dimension, wherein the prediction model corresponding to each dimension is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier.

In the pre-collection stage, a user records a voice sample, and the cloud extracts the feature information of each dimension of the pre-collected sample and trains a prediction model for each dimension from that dimension's feature information. In the application stage, after the feature information of each dimension of the voice information is extracted, it is input into the prediction model corresponding to that dimension, yielding a prediction result for each dimension. That is, in this embodiment one pre-collected user identifier corresponds to several prediction models of different dimensions.

The prediction result of any dimension represents the probability, in that dimension, that the speaker of the voice information is the pre-collected user corresponding to the prediction model. For example, the user identifier "zhangsan" corresponds to a pitch-dimension prediction model and a timbre-dimension prediction model; the pitch-dimension feature information is input into the pitch-dimension model for prediction processing, and the timbre-dimension feature information into the timbre-dimension model. Suppose the pitch-dimension model outputs [(0, 0.2), (1, 0.8)] and the timbre-dimension model outputs [(0, 0.26), (1, 0.74)]. The pitch-dimension result means that, in the pitch dimension, the probability that the speaker is not Zhang San is 0.2 while the probability that the speaker is Zhang San is 0.8; the timbre-dimension result means that, in the timbre dimension, the probability that the speaker is not Zhang San is 0.26 while the probability that the speaker is Zhang San is 0.74.
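The per-dimension step can be sketched as below, assuming each dimension's model exposes `predict_proba` as in the enrollment sketch earlier and returns the [(0, p_no), (1, p_yes)] pair used in this example:

```python
# One model per dimension for one enrolled user; each model yields a
# per-dimension prediction result in the [(0, p_no), (1, p_yes)] form.
def predict_per_dimension(dim_models, dim_feats):
    results = {}
    for dim, model in dim_models.items():
        p_yes = model.predict_proba(dim_feats[dim].reshape(1, -1))[0, 1]
        results[dim] = [(0, 1.0 - p_yes), (1, p_yes)]
    return results  # e.g. {"pitch": [(0, 0.2), (1, 0.8)], ...}
```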

Step S204, integrating the prediction results of all dimensions to obtain a voice prediction result.

Optionally, the prediction results of the dimensions are integrated according to preset dimension priority levels to obtain the voice prediction result. For example, pitch and timbre are salient features that distinguish a speaker's voice, so the dimension priority of the pitch dimension may be set to the first level, that of the timbre dimension to the second level, and those of the other dimensions to the third level.

Continuing the above example, with a weight of 0.6 for the pitch dimension and 0.4 for the timbre dimension, the voice prediction result obtained by weighted summation is [(0, 0.2 × 0.6 + 0.26 × 0.4), (1, 0.8 × 0.6 + 0.74 × 0.4)], that is, [(0, 0.224), (1, 0.776)]. Combining the prediction results of the dimensions, the probability (0.776) that the speaker is Zhang San far exceeds the probability (0.224) that the speaker is not, so the speaker is determined to be Zhang San.
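The weighted summation in this example works out as follows; this is a direct transcription of the arithmetic above, with the weights taken from the dimension priorities.

```python
# Weighted fusion of per-dimension prediction results.
weights = {"pitch": 0.6, "timbre": 0.4}
per_dim = {"pitch":  {0: 0.20, 1: 0.80},
           "timbre": {0: 0.26, 1: 0.74}}

fused = {label: sum(weights[d] * per_dim[d][label] for d in weights)
         for label in (0, 1)}
print(fused)  # ~= {0: 0.224, 1: 0.776} -> speaker judged to be Zhang San
```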

Step S205, sending an unlocking control instruction to the terminal according to the voice prediction result and the face recognition result, so that the terminal can unlock the door lock according to the received unlocking control instruction.

Step S201 is an optional step of this embodiment, and sending a control instruction to the terminal according to both the voice prediction result and the face recognition result is likewise an optional implementation. Whether the speaker of the voice information is a pre-collected user is determined from the voice prediction result, and the face recognition result further verifies the speaker's identity.

Optionally, in the pre-collection stage, the user may also enter a face image while recording a voice sample: the terminal sends the captured image information to the cloud, which performs face recognition on it to obtain an image sample and stores that sample in association with the user identifier.

In the application stage, after the voice prediction result is obtained, the image sample corresponding to the prediction model is retrieved and the face recognition result is matched against it to obtain an image matching result. The voice prediction result and the image matching result are then combined to determine whether the speaker is a pre-collected user. If so, an unlocking control instruction is sent to the terminal, which unlocks the door lock accordingly; if not, an alarm control instruction is sent, and the terminal broadcasts preset voice information according to the received alarm control instruction. The present invention is not limited in this regard.
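A small sketch of the combined decision; the conjunctive rule (voice and face must both match before unlocking) is an assumption consistent with the description above.

```python
# Combine the voice prediction with the face match result.
def door_instruction(voice_is_user: bool, face_matches: bool) -> str:
    if voice_is_user and face_matches:
        return "UNLOCK_CONTROL_INSTRUCTION"
    return "ALARM_CONTROL_INSTRUCTION"
```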

Optionally, to further improve the security of voice control, besides identifying the speaker from the voice information, the method also recognizes the voice content contained in it; that is, it performs speech recognition processing on the voice information and sends the control instruction according to both the voice prediction result and the voice recognition result. The voice information is thus verified on two fronts: the identity of the speaker on one hand, and the spoken content on the other.

Specifically, after the voice information is received, speech recognition processing is performed on it to obtain a voice recognition result, and a control instruction is then sent to the terminal according to the voice prediction result and the voice recognition result. In the pre-collection stage, speech recognition is performed on the voice sample recorded by the user to obtain a voice recognition sample, which is stored in the preset voice library in association with the voice sample; alternatively, the user enters a control password (for example, "open sesame") to generate the voice recognition sample. In the application stage, the voice prediction result is used to judge whether the speaker is a pre-collected user, and the speech recognition result is checked against the pre-collected voice recognition sample; if the speaker is judged to be a pre-collected user and the voice content is "open sesame", the cloud sends an unlocking control instruction to the terminal.
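The two-sided check can be sketched as follows. The recognizer itself is out of scope here: `recognized_text` stands in for its output, "open sesame" is the example control password from the text, and the 0.5 threshold is an assumption.

```python
# Dual verification: speaker identity (prediction result) plus spoken
# content (speech recognition result) must both pass.
def dual_check(p_user, recognized_text,
               password="open sesame", threshold=0.5):
    identity_ok = p_user > threshold
    content_ok = recognized_text.strip().lower() == password
    if identity_ok and content_ok:
        return "UNLOCK_CONTROL_INSTRUCTION"
    return "DENY"
```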

In summary, the invention has broad application scenarios: beyond the voice-controlled door-opening scenario described above, it can be applied to any scenario requiring identity recognition, such as a payment scenario. If the voice prediction result indicates that the speaker of the voice information is a pre-collected user, the cloud sends a payment permission instruction to the terminal so that the terminal completes payment processing according to it; if the voice prediction result indicates that the speaker is not a pre-collected user, the cloud sends a payment-disallowed instruction to the terminal.

Therefore, with the cloud-based voice control method provided by this embodiment, the terminal collects voice and image information and transmits it to the cloud; the cloud extracts multi-dimensional feature information from the voice information, predicts the feature information of each dimension with a machine learning method, and integrates the per-dimension results into a voice prediction result, giving high prediction efficiency and accuracy. Meanwhile, the cloud performs face recognition on the image information and combines the voice prediction result with the face recognition result to confirm the speaker's identity quickly and accurately; recognizing identity at two levels yields high recognition accuracy and enables precise voice control. In addition, this approach can exclude recorded voice, preventing recordings from triggering the cloud to send control instructions and thereby greatly improving the security of voice control.

Fig. 3 is a flowchart illustrating a cloud-based voice control method according to another embodiment of the present invention, and as shown in fig. 3, the method includes:

step S301, receiving voice information sent by a terminal, obtaining time information contained in the voice information, inquiring time period information matched with the time information, and determining a prediction model corresponding to the matched time period information, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample and a pre-collected user identifier are stored in a preset voice library in a related manner.

The acoustic characteristics of each speaker's voice are relatively stable yet variable rather than absolutely fixed; they may even differ across time periods within a single day, so identifying the speaker with a single prediction model yields low accuracy. For this reason, in the present embodiment multiple prediction models are trained according to time information and used to verify the speaker's identity.

Specifically, in the pre-collection stage, the user may divide the day into time periods as needed, or the system may use preset time periods, and voice samples are recorded in the different periods; for each user, the cloud trains a prediction model for a period from the voice samples received in that period. For example, the user divides 5 a.m. to 8 a.m. into a first time period, 9 a.m. to 4 p.m. into a second time period, and 5 p.m. to 11 p.m. into a third time period, and records voice samples in each; the cloud trains a first prediction model from the samples received in the first period, a second prediction model from those of the second period, and a third prediction model from those of the third period.

In the application stage, when voice information sent by a terminal is received, the time information contained in it is first acquired, the matching time period information is determined, and the prediction model corresponding to that period is selected. Continuing the above example, the terminal captures voice information at 7 a.m. and sends the voice information and its time information to the cloud in real time; since 7 a.m. falls within the first time period, the prediction model corresponding to the matched time period information is the first prediction model.
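Model selection by time period, following this example, might look like the sketch below; behaviour for hours outside every period is unspecified in the text, so returning None there is an assumption.

```python
# Select the prediction model whose time period contains the voice
# information's timestamp (periods: 5-8 a.m., 9 a.m.-4 p.m., 5-11 p.m.).
import datetime

PERIODS = [(5, 8, "first_model"), (9, 16, "second_model"),
           (17, 23, "third_model")]  # (start_hour, end_hour, model key)

def model_for(ts: datetime.datetime):
    for start, end, model_key in PERIODS:
        if start <= ts.hour <= end:
            return model_key
    return None  # no matching time period

print(model_for(datetime.datetime(2018, 12, 27, 7)))  # first_model
```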

Step S302, inputting the voice information into a prediction model corresponding to the matched time period information for prediction processing, and obtaining a voice prediction result.

The prediction model corresponding to the matched time period information is the model that matches the voice information in time, so the voice information is input into that model for prediction processing. For the specific implementation of the prediction processing, refer to the description in the above embodiments, which is not repeated here. By training a separate prediction model for each time period, this approach can recognize the speaker's identity accurately in different time periods, giving high recognition accuracy.

Step S303, sending a control instruction to the terminal according to the voice prediction result, so that the terminal can perform control processing according to the received control instruction.

For a specific implementation of this step, reference may be made to the descriptions in step S102 and step S205, which are not described herein again.

Thus, the cloud-based voice control method provided by this embodiment uses machine learning to train prediction models for different time periods from the voice samples received in those periods, and predicts incoming voice information with the model matching its time. Compared with a single-model approach, this avoids the influence of a user's voice varying across time periods, recognizes the speaker's identity accurately in any period, and improves both recognition accuracy and the accuracy of voice control.

Fig. 4 is a functional block diagram of a cloud-based voice control apparatus according to another embodiment of the present invention, and as shown in fig. 4, the apparatus includes:

the prediction processing module 41, adapted to receive the voice information sent by the terminal and input it into the prediction model for prediction processing to obtain a voice prediction result, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier;

and the sending module 42 is adapted to send a control instruction to the terminal according to the voice prediction result, so that the terminal performs control processing according to the received control instruction.

Fig. 5 is a functional block diagram of a cloud-based voice control apparatus according to another embodiment of the present invention, and as shown in fig. 5, the apparatus further includes, on the basis of the apparatus shown in fig. 4: a speech recognition module 51, a face recognition module 52 and a verification module 53.

Optionally, the prediction processing module 41 is further adapted to:

performing feature analysis on the voice information, and extracting feature information of multiple dimensions;

respectively inputting the feature information of each dimension into a prediction model corresponding to that dimension for prediction processing, and obtaining a prediction result of each dimension;

and integrating the prediction results of all dimensions to obtain a voice prediction result.

Optionally, the prediction processing module 41 is further adapted to:

and integrating the prediction results of all dimensions according to the preset dimension priority levels to obtain a voice prediction result.

Optionally, the plurality of dimensions specifically includes one or more of the following dimensions: a pitch dimension, a timbre dimension, an intonation dimension, a frequency dimension, a pace dimension, and a tail-sound dimension.

Optionally, the preset voice library further stores time period information associated with the voice sample, and the prediction model corresponds to the time period information; the prediction processing module 41 is then further adapted to:

acquiring time information contained in the voice information, querying time period information matched with the time information, and determining a prediction model corresponding to the matched time period information;

and inputting the voice information into a prediction model corresponding to the matched time period information for prediction processing to obtain a voice prediction result.

Optionally, the apparatus further comprises:

the voice recognition module 51 is adapted to perform voice recognition processing on the voice information after obtaining the voice prediction result to obtain a voice recognition result;

the sending module 42 is further adapted to: and sending a control instruction to the terminal according to the voice prediction result and the voice recognition result.

Optionally, the apparatus further comprises:

the face recognition module 52 is adapted to receive the image information sent by the terminal, and perform face recognition processing on the image information to obtain a face recognition result;

the sending module 42 is further adapted to: and sending a control instruction to the terminal according to the voice prediction result and the face recognition result.

Optionally, the preset voice library further stores a sample check value of the voice sample, and the apparatus further includes:

the verification module 53 is adapted to calculate a verification value of the voice information, and determine whether a voice sample with a sample verification value consistent with the verification value of the voice information exists in a preset voice library;

the prediction processing module 41 is further adapted to:

if a voice sample whose sample check value is consistent with the check value of the voice information exists in the preset voice library, forgoing prediction processing of the voice information;

and if no voice sample with the sample check value consistent with the check value of the voice information exists in the preset voice library, the step of inputting the voice information into the prediction model for prediction processing is executed.

Optionally, the sending module 42 is further adapted to:

and sending an unlocking control instruction to the terminal so that the terminal can unlock the door lock according to the received unlocking control instruction.

Optionally, the sending module 42 is further adapted to:

and sending a payment permission instruction to the terminal so that the terminal can complete payment processing according to the received payment permission instruction.

The specific structure and the working principle of each module may refer to the description of the corresponding step in the method embodiment, and are not described herein again.

The embodiment of the application provides a nonvolatile computer storage medium, wherein at least one executable instruction is stored in the computer storage medium, and the computer executable instruction can execute the cloud-based voice control method in any method embodiment.

Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the electronic device.

As shown in fig. 6, the electronic device may include: a processor (processor)602, a communication Interface 604, a memory 606, and a communication bus 608.

Wherein:

the processor 602, communication interface 604, and memory 606 communicate with one another via a communication bus 608.

The communication interface 604 is used for communicating with network elements of other devices, such as clients or other servers.

The processor 602 is configured to execute the program 610, and may specifically perform relevant steps in the above embodiment of the cloud-based voice control method.

In particular, program 610 may include program code comprising computer operating instructions.

The processor 602 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement embodiments of the present invention. The electronic device comprises one or more processors, which may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs.

The memory 606 is used for storing a program 610. The memory 606 may comprise high-speed RAM memory, and may also include non-volatile memory, such as at least one disk memory.

The program 610 may specifically be configured to cause the processor 602 to perform the following operations:

receiving voice information sent by a terminal, and inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result, wherein the prediction model is obtained by training according to a pre-collected voice sample, and the voice sample is stored in a preset voice library in association with a pre-collected user identifier;

and sending a control instruction to the terminal according to the voice prediction result so that the terminal can perform control processing according to the received control instruction.

In an alternative manner, the program 610 may specifically be further configured to cause the processor 602 to perform the following operations:

performing feature analysis on the voice information, and extracting feature information of multiple dimensions;

respectively inputting the feature information of each dimension into a prediction model corresponding to that dimension for prediction processing, and obtaining a prediction result of each dimension;

and integrating the prediction results of all dimensions to obtain a voice prediction result.

In an alternative manner, the program 610 may specifically be further configured to cause the processor 602 to perform the following operations:

and integrating the prediction results of all dimensions according to the preset dimension priority levels to obtain a voice prediction result.

In an alternative approach, the plurality of dimensions specifically includes one or more of the following dimensions: a pitch dimension, a timbre dimension, an intonation dimension, a frequency dimension, a pace dimension, and a tail-sound dimension.

In an optional mode, the preset voice library further stores time period information associated with the voice sample, and the prediction model corresponds to the time period information; the program 610 may specifically be further configured to cause the processor 602 to perform the following operations:

acquiring time information contained in the voice information, querying time period information matched with the time information, and determining a prediction model corresponding to the matched time period information;

and inputting the voice information into a prediction model corresponding to the matched time period information for prediction processing to obtain a voice prediction result.

In an alternative manner, the program 610 may specifically be further configured to cause the processor 602 to perform the following operations:

carrying out voice recognition processing on the voice information to obtain a voice recognition result;

and sending a control instruction to the terminal according to the voice prediction result and the voice recognition result.

In an alternative manner, the program 610 may specifically be further configured to cause the processor 602 to perform the following operations:

receiving image information sent by a terminal, and carrying out face recognition processing on the image information to obtain a face recognition result; and sending a control instruction to the terminal according to the voice prediction result and the face recognition result.

In an alternative manner, the program 610 may specifically be further configured to cause the processor 602 to perform the following operations:

calculating a check value of the voice information, and judging whether a voice sample whose sample check value is consistent with the check value of the voice information exists in a preset voice library; if so, forgoing prediction processing of the voice information;

if not, executing the step of inputting the voice information into the prediction model for prediction processing.
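
A minimal sketch of this check-value screen follows, assuming an MD5 digest of the raw audio bytes as the check value; the disclosure does not name a digest algorithm, and reading the exact match as a guard against replayed recordings is one interpretation.

```python
# Hypothetical sketch: an MD5 digest of the raw audio serves as the check
# value. An exact match against a stored sample suggests a replayed
# recording rather than a live utterance, so prediction is skipped.
import hashlib

KNOWN_SAMPLE_CHECK_VALUES: set = set()  # sample check values from the library


def should_predict(voice_info: bytes) -> bool:
    check_value = hashlib.md5(voice_info).hexdigest()
    if check_value in KNOWN_SAMPLE_CHECK_VALUES:
        return False  # identical sample already stored: forgo prediction
    return True       # proceed to input the voice into the prediction model
```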

In an alternative manner, the program 610 may specifically be further configured to cause the processor 602 to perform the following operations:

and sending an unlocking control instruction to the terminal so that the terminal can unlock the door lock according to the received unlocking control instruction.

In an alternative manner, the program 610 may specifically be further configured to cause the processor 602 to perform the following operations:

and sending a payment permission instruction to the terminal so that the terminal can complete payment processing according to the received payment permission instruction.
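
For completeness, a sketch of terminal-side dispatch for the two concrete instructions just described; the function names, instruction strings, and stub bodies are hypothetical stand-ins for real door-lock and payment handling.

```python
# Hypothetical sketch of terminal-side control processing.
def open_door_lock() -> None:
    print("door lock released")      # placeholder for lock hardware control


def complete_payment() -> None:
    print("payment completed")       # placeholder for the payment flow


def handle_instruction(instruction: str) -> None:
    """Perform control processing according to the received instruction."""
    if instruction == "UNLOCK":
        open_door_lock()
    elif instruction == "ALLOW_PAYMENT":
        complete_payment()
```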

The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.

In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.

Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.

Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments and not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.

The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an electronic device according to embodiments of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering. These words may be interpreted as names.

The invention discloses: A1. A cloud-based voice control method, comprising the following steps:

receiving voice information sent by a terminal, inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result; the prediction model is obtained by training according to a pre-collected voice sample; the voice sample and the pre-collected user identification are stored in a preset voice library in a related way;

and sending a control instruction to the terminal according to the voice prediction result so that the terminal can perform control processing according to the received control instruction.

A2. The method according to A1, wherein the inputting the speech information into a prediction model for prediction processing to obtain a speech prediction result further comprises:

performing feature analysis on the voice information, and extracting feature information of multiple dimensions;

respectively inputting the characteristic information of each dimension into a prediction model corresponding to each dimension to perform prediction processing, and obtaining a prediction result of each dimension;

and integrating the prediction results of all dimensions to obtain a voice prediction result.

A3. The method according to A2, wherein the integrating the prediction results of the dimensions to obtain the speech prediction result further comprises:

and integrating the prediction results of all dimensions according to the preset dimension priority level to obtain a voice prediction result.

A4. The method of A2 or A3, wherein the plurality of dimensions specifically includes one or more of the following dimensions: a pitch dimension, a timbre dimension, an intonation dimension, a frequency dimension, a pace dimension, and a tailpiece dimension.

A5. The method according to A1, wherein the preset speech library further stores time period information associated with speech samples, and the prediction model corresponds to the time period information; inputting the voice information into a prediction model for prediction processing to obtain a voice prediction result further comprises:

acquiring time information contained in the voice information, inquiring time period information matched with the time information, and determining a prediction model corresponding to the matched time period information;

and inputting the voice information into a prediction model corresponding to the matched time period information for prediction processing to obtain a voice prediction result.

A6. The method according to A1, wherein after obtaining the speech prediction result, the method further comprises:

carrying out voice recognition processing on the voice information to obtain a voice recognition result;

then, according to the voice prediction result, sending a control instruction to the terminal further includes:

and sending a control instruction to the terminal according to the voice prediction result and the voice recognition result.

A7. The method of any one of A1-A6, wherein before the method is performed, the method further comprises:

receiving image information sent by a terminal, and carrying out face recognition processing on the image information to obtain a face recognition result;

then, according to the voice prediction result, sending a control instruction to the terminal further includes:

and sending a control instruction to the terminal according to the voice prediction result and the face recognition result.

A8. The method according to A1, wherein the preset speech library further stores a sample check value of a speech sample, and after receiving the speech information sent by the terminal, the method further includes:

calculating a check value of the voice information, and determining whether the preset voice library contains a voice sample whose sample check value matches the check value of the voice information; if so, forgoing prediction processing of the voice information;

and if not, executing the step of inputting the voice information into a prediction model for prediction processing.

A9. The method according to A1, wherein the sending a control instruction to the terminal for the terminal to perform control processing according to the received control instruction further comprises:

and sending an unlocking control instruction to the terminal so that the terminal can unlock the door lock according to the received unlocking control instruction.

A10. The method according to A1, wherein the sending a control instruction to the terminal for the terminal to perform control processing according to the received control instruction further comprises:

and sending a payment permission instruction to the terminal so that the terminal can complete payment processing according to the received payment permission instruction.

B11. A cloud-based voice control device, comprising:

the prediction processing module is adapted to receive the voice information sent by the terminal and input the voice information into the prediction model for prediction processing to obtain a voice prediction result; the prediction model is obtained by training according to a pre-collected voice sample; the voice sample and the pre-collected user identification are stored in a preset voice library in a related way;

and the sending module is adapted to send a control instruction to the terminal according to the voice prediction result, so that the terminal can perform control processing according to the received control instruction.

B12. The apparatus of B11, wherein the prediction processing module is further adapted to:

performing feature analysis on the voice information, and extracting feature information of multiple dimensions;

respectively inputting the characteristic information of each dimension into a prediction model corresponding to each dimension to perform prediction processing, and obtaining a prediction result of each dimension;

and integrating the prediction results of all dimensions to obtain a voice prediction result.

B13. The apparatus of B12, wherein the prediction processing module is further adapted to:

and integrating the prediction results of all dimensions according to the preset dimension priority level to obtain a voice prediction result.

B14. The apparatus of B12 or B13, wherein the plurality of dimensions specifically includes one or more of the following dimensions: a pitch dimension, a timbre dimension, an intonation dimension, a frequency dimension, a pace dimension, and a tailpiece dimension.

B15. The apparatus according to B11, wherein the preset speech library further stores time period information associated with speech samples, and the prediction model corresponds to the time period information; the prediction processing module is further adapted to:

acquiring time information contained in the voice information, inquiring time period information matched with the time information, and determining a prediction model corresponding to the matched time period information;

and inputting the voice information into a prediction model corresponding to the matched time period information for prediction processing to obtain a voice prediction result.

B16. The apparatus of B11, wherein the apparatus further comprises:

the voice recognition module is adapted to perform voice recognition processing on the voice information after the voice prediction result is obtained, so as to obtain a voice recognition result;

the sending module is further adapted to: and sending a control instruction to the terminal according to the voice prediction result and the voice recognition result.

B17. The apparatus of any one of B11-B16, wherein the apparatus further comprises:

the face recognition module is adapted to receive image information sent by the terminal and perform face recognition processing on the image information to obtain a face recognition result;

the sending module is further adapted to: and sending a control instruction to the terminal according to the voice prediction result and the face recognition result.

B18. The apparatus according to B11, wherein the preset speech library also stores sample check values of speech samples, the apparatus further comprising:

the verification module is adapted to calculate a check value of the voice information and determine whether the preset voice library contains a voice sample whose sample check value matches the check value of the voice information;

the prediction processing module is further adapted to:

if the preset voice library contains a voice sample whose sample check value matches the check value of the voice information, forgoing prediction processing of the voice information;

and if the preset voice library contains no such voice sample, executing the step of inputting the voice information into the prediction model for prediction processing.

B19. The apparatus of B11, wherein the sending module is further adapted to:

and sending an unlocking control instruction to the terminal so that the terminal can unlock the door lock according to the received unlocking control instruction.

B20. The apparatus of B11, wherein the sending module is further adapted to:

and sending a payment permission instruction to the terminal so that the terminal can complete payment processing according to the received payment permission instruction.

C21. An electronic device, comprising: a processor, a memory, a communication interface, and a communication bus, wherein the processor, the memory, and the communication interface communicate with one another through the communication bus;

the memory is configured to store at least one executable instruction, which causes the processor to perform operations corresponding to the cloud-based voice control method as described in any one of A1-A10.

D22. A computer storage medium having stored therein at least one executable instruction that causes a processor to perform operations corresponding to the cloud-based voice control method of any one of A1-A10.
