Automatic gain control method, device, and system for a voice interaction system

Note: This technology, "Automatic gain control method, device, and system for a voice interaction system," was designed and created by Sun Xiangyu on 2021-07-30. The application discloses an automatic gain control method, device, and system for a voice interaction system, as well as a computer-readable storage medium. The method includes: receiving a voice signal; separating the received voice signal into sound signals of a plurality of different sound sources through blind source separation; invoking a pre-trained wake-up word detection model, performing wake-up word detection on the sound signal of each sound source, and determining the sound signal in which the wake-up word is detected; and calculating, based on the sound signal in which the wake-up word is detected, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range. The application can accurately calculate the gain required for each interaction, ensuring the accuracy of speech recognition and improving the user's interaction experience.

1. An automatic gain control method for a voice interaction system, comprising:

receiving a voice signal;

separating the received voice signal into sound signals of a plurality of different sound sources through blind source separation;

invoking a pre-trained wake-up word detection model, performing wake-up word detection on the sound signal of each sound source, and determining the sound signal in which the wake-up word is detected;

and calculating, based on the sound signal in which the wake-up word is detected, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range.

2. The automatic gain control method of claim 1, further comprising, after the separating the received voice signal into sound signals of a plurality of different sound sources through blind source separation:

maintaining a corresponding FIFO queue for the sound signal of each sound source, wherein the length N of the FIFO queue is determined according to the length of the wake-up word and the frame length used for wake-up word detection;

and reading the sound signal of each sound source frame by frame, calculating the maximum absolute amplitude of each frame of the signal, and storing the maximum absolute amplitude into the FIFO queue.

3. The automatic gain control method of claim 1, further comprising, after the separating the received voice signal into sound signals of a plurality of different sound sources through blind source separation:

maintaining a corresponding FIFO queue for the sound signal of each sound source, wherein the length N of the FIFO queue is determined according to the length of the wake-up word and the frame length used for wake-up word detection;

and reading the sound signal of each sound source frame by frame, calculating the absolute amplitude of each frame of the signal, convolving the absolute amplitude with a Gaussian window, and storing the maximum value after convolution into the FIFO queue.

4. The automatic gain control method according to claim 2 or 3, wherein the length N of the FIFO queue multiplied by the frame length used for wake-up word detection is equal to the length of the wake-up word.

5. The automatic gain control method according to claim 2 or 3, wherein the calculating the gain required for automatic gain control in the voice interaction system comprises:

calculating the maximum value A_max in the FIFO queue corresponding to the sound signal in which the wake-up word is detected, and calculating, according to A_max and the reference amplitude A_ref, the gain G required by the sound pickup device in the voice interaction system;

wherein A_ref is the amplitude of the reference audio signal, which is a fixed value.

6. The automatic gain control method according to claim 5, further comprising, after the calculating the gain required for automatic gain control in the voice interaction system:

if the required gain G is within the adjustable analog gain range G_a, adjusting the analog gain to the required gain G; if G_a is less than the required gain G, adjusting the analog gain to its maximum and adjusting the digital gain to G - G_a.

7. The automatic gain control method of claim 6, further comprising: after detecting that the voice interaction is finished, restoring the gain to the initial gain G_init.

8. The automatic gain control method of claim 7, wherein the initial gain G_init is: the gain that, given the determined reference audio signal value, ensures that the audio data picked up by the sound pickup device is not saturated.

9. The automatic gain control method according to any one of claims 1 to 3, wherein the wake-up word detection model is a model trained in advance using delta-LFBE features.

10. An automatic gain control device for a voice interaction system, comprising:

a receiving module, configured to receive a voice signal;

a blind source separation module, configured to separate the received voice signal into sound signals of a plurality of different sound sources through blind source separation;

a wake-up word detection module, configured to invoke a pre-trained wake-up word detection model, perform wake-up word detection on the sound signal of each sound source, and determine the sound signal in which the wake-up word is detected;

and a gain determination module, configured to calculate, based on the sound signal in which the wake-up word is detected, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range.

11. A voice interaction system, comprising: a memory and a processor; wherein the memory is configured to store instructions, and the processor, when invoking the instructions, performs the method of any one of claims 1 to 9.

12. A computer-readable storage medium comprising instructions that, when executed, perform the method of any one of claims 1 to 9.

Technical Field

The present application relates to the field of voice interaction technologies, and in particular, to an automatic gain control method, device, system, and computer-readable storage medium for a voice interaction system.

Background

With the popularization of intelligent technologies, smart devices such as smart speakers and smart televisions are being applied to many aspects of everyday life, providing convenient and fast services. Intelligent voice interaction is one such widely used technology. It is an interaction mode based on voice input, allowing a user to control a device directly by voice. This interaction mode effectively frees both hands, minimizes the difficulty of operation, and greatly facilitates use.

Voice interaction can be divided into far-field and near-field interaction; compared with near-field interaction, the distance between the speaker and the device in far-field interaction is typically between one and ten meters. A microphone array is usually used as the front-end sound pickup device, and the picked-up sound signal is used for subsequent speech recognition processing.

In far-field voice interaction, the dynamic range of the sound picked up by the sound pickup device is large, so the device needs an Automatic Gain Control (AGC) function to keep the volume of the picked-up sound within a reasonable range and ensure the accuracy of speech recognition. In real far-field scenarios there may also be interfering factors such as background noise, the voices of multiple speakers, and the playback of the device itself. How to accurately calculate the required gain is therefore one of the technical problems the inventors of the present application set out to solve.

It should be understood that the technical problems listed above are merely examples and do not limit the present invention; the present invention is not restricted to technical solutions that solve all of the above problems at once. Technical solutions of the present invention may be implemented to solve one or more of the above or other technical problems.

Disclosure of Invention

In order to solve the above problems, the present application provides an automatic gain control method for a voice interaction system, including:

receiving a voice signal;

separating the received voice signal into sound signals of a plurality of different sound sources through blind source separation;

invoking a pre-trained wake-up word detection model, performing wake-up word detection on the sound signal of each sound source, and determining the sound signal in which the wake-up word is detected;

and calculating, based on the sound signal in which the wake-up word is detected, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range.

Optionally, after the separating the received voice signal into sound signals of a plurality of different sound sources through blind source separation, the method further includes:

maintaining a corresponding FIFO queue for the sound signal of each sound source, wherein the length N of the FIFO queue is determined according to the length of the wake-up word and the frame length used for wake-up word detection;

and reading the sound signal of each sound source frame by frame, calculating the maximum absolute amplitude of each frame of the signal, and storing the maximum absolute amplitude into the FIFO queue.

Optionally, after the separating the received voice signal into sound signals of a plurality of different sound sources through blind source separation, the method further includes:

maintaining a corresponding FIFO queue for the sound signal of each sound source, wherein the length N of the FIFO queue is determined according to the length of the wake-up word and the frame length used for wake-up word detection;

and reading the sound signal of each sound source frame by frame, calculating the absolute amplitude of each frame of the signal, convolving the absolute amplitude with a Gaussian window, and storing the maximum value after convolution into the FIFO queue.

Optionally, the length N of the FIFO queue multiplied by the frame length used for wake-up word detection is equal to the length of the wake-up word.

Optionally, the calculating the gain required for automatic gain control in the voice interaction system includes:

calculating the maximum value A_max in the FIFO queue corresponding to the sound signal in which the wake-up word is detected, and calculating, according to A_max and the reference amplitude A_ref, the gain G required by the sound pickup device in the voice interaction system;

wherein A_ref is the amplitude of the reference audio signal, which is a fixed value.

Optionally, after the calculating the gain required for automatic gain control in the voice interaction system, the method further includes:

if the required gain G is within the adjustable analog gain range G_a, adjusting the analog gain to the required gain G; if G_a is less than the required gain G, adjusting the analog gain to its maximum and adjusting the digital gain to G - G_a.

Optionally, the method further includes: after detecting that the voice interaction is finished, restoring the gain to the initial gain G_init.

Optionally, the initial gain G_init is: the gain that, given the determined reference audio signal value, ensures that the audio data picked up by the sound pickup device is not saturated.

Optionally, the wake-up word detection model is a model trained in advance using delta-LFBE features.

The present application further provides an automatic gain control device for a voice interaction system, including:

a receiving module, configured to receive a voice signal;

a blind source separation module, configured to separate the received voice signal into sound signals of a plurality of different sound sources through blind source separation;

a wake-up word detection module, configured to invoke a pre-trained wake-up word detection model, perform wake-up word detection on the sound signal of each sound source, and determine the sound signal in which the wake-up word is detected;

and a gain determination module, configured to calculate, using the sound signal in which the wake-up word is detected as a reference, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range.

The present application further provides a voice interaction system, including: a memory and a processor; wherein the memory is configured to store instructions, and the processor, when invoking the instructions, performs any one of the methods described above.

The present application also provides a computer-readable storage medium comprising instructions that, when executed, perform any one of the methods described above.

The automatic gain control method for a voice interaction system provided by the present application receives a voice signal; separates the received voice signal into sound signals of a plurality of different sound sources through blind source separation; invokes a pre-trained wake-up word detection model, performs wake-up word detection on the sound signal of each sound source, and determines the sound signal in which the wake-up word is detected; and calculates, based on the sound signal in which the wake-up word is detected, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range. By performing wake-up word detection in the real usage scenario, the sound signal containing the wake-up word is distinguished from other noise signals and used as the adjustment reference for automatic gain control, so the gain required for each interaction can be calculated accurately, the accuracy of speech recognition is ensured, and the user's interaction experience is improved. In addition, the present application provides an automatic gain control device, a system, and a computer-readable storage medium of the voice interaction system having the same technical effects.

Drawings

Hereinafter, the present application will be further explained with reference to the drawings based on embodiments.

FIG. 1 is a flowchart schematically illustrating an embodiment of an automatic gain control method for a voice interaction system provided in the present application;

FIG. 2 is a flowchart schematically illustrating another embodiment of an automatic gain control method for a voice interaction system provided in the present application;

FIG. 3 is a flowchart schematically illustrating yet another embodiment of an automatic gain control method for a voice interaction system provided in the present application;

FIG. 4 is a block diagram schematically illustrating an embodiment of an automatic gain control device of a voice interaction system provided in the present application;

fig. 5 schematically shows a block diagram of a voice interaction system provided in the present application.

Detailed Description

The method and device of the present application will be described in detail below with reference to the figures and preferred embodiments. It is to be understood that the embodiments shown in the drawings and described below are merely illustrative and do not limit the application.

Fig. 1 is a flowchart illustrating an embodiment of an automatic gain control method for a voice interaction system provided in the present application. In this embodiment, the method specifically includes:

step S100: a speech signal is received.

In this step, the sound pickup device picks up the voice signal in the environment. In particular, the sound pickup device may be a microphone array. It is understood that the specific number and arrangement of microphones in the array does not affect the implementation of the present application; a single microphone may also be used.

Step S102: the received voice signal is separated into sound signals of a plurality of different sound sources by blind source separation.

After the voice signal is received, it is separated into sound signals of a plurality of different sound sources by blind source separation. It is understood that blind source separation is an existing technique whose implementations are well known in the art, so it is not described in detail here.
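As a toy illustration of what the separation step produces (not the specific algorithm used by the application, which is left unspecified), the sketch below applies FastICA from scikit-learn to a multi-microphone recording treated as an instantaneous mixture; practical far-field systems typically use frequency-domain convolutive methods instead.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_sources(mic_signals: np.ndarray, n_sources: int = 2) -> np.ndarray:
    """Toy blind source separation of an instantaneous multi-mic mixture.

    mic_signals: array of shape (n_samples, n_mics).
    Returns estimated source signals of shape (n_samples, n_sources).
    """
    ica = FastICA(n_components=n_sources, random_state=0)
    return ica.fit_transform(mic_signals)
```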

Step S104: invoking a pre-trained wake-up word detection model, performing wake-up word detection on the sound signal of each sound source, and determining the sound signal in which the wake-up word is detected.

The wake-up word detection model is trained in advance on a corpus of recordings. Given an input sound signal, the model outputs a detection result indicating whether the input contains the preset wake-up word. The preset wake-up word may be any predefined word.

As a specific implementation, the wake-up word detection model may be a model trained in advance using delta-LFBE (delta log filter-bank energy) features. Training the wake-up word model on delta-LFBE features makes the trained model insensitive to volume, i.e., it can handle sound signals at both higher and lower volumes, which widens the amplitude range of voice signals the wake-up word model can accept. As a specific embodiment, this amplitude range may span more than 40 dB of volume at the microphone.
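The volume insensitivity can be seen directly in the feature definition: a constant gain scales the mel spectrogram, adds a constant in the log domain, and cancels in the frame-to-frame difference. Below is a minimal sketch of delta-LFBE extraction, assuming librosa is available; the frame size, hop, and number of mel bands are illustrative choices, not values from the application.

```python
import numpy as np
import librosa

def delta_lfbe(waveform: np.ndarray, sr: int = 16000,
               n_mels: int = 64, n_fft: int = 400, hop: int = 160) -> np.ndarray:
    """Delta log filter-bank energy (delta-LFBE) features.

    A constant gain g adds log(g) to every LFBE frame and therefore cancels
    in the delta, which is why these features are largely volume-insensitive.
    """
    mel = librosa.feature.melspectrogram(
        y=waveform, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels)
    lfbe = np.log(mel + 1e-10)          # log filter-bank energies
    return librosa.feature.delta(lfbe)  # frame-to-frame delta
```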

In addition, volume-based data augmentation can be added to the wake-up word training process: the stored wake-up word recordings are scaled to different amplitudes, and the model is trained on wake-up words at several different levels, such as [-30 dB, -25 dB, -20 dB, -15 dB, -10 dB, -5 dB, -2 dB], which increases the generalization capability of the wake-up word model.
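A minimal sketch of such volume-based augmentation follows; the gain levels are the ones listed above, interpreted here as peak levels relative to digital full scale, which is an assumption the application does not spell out.

```python
import numpy as np

# Levels from the text, read as peak amplitudes in dBFS (assumed reference).
LEVELS_DB = [-30, -25, -20, -15, -10, -5, -2]

def volume_augment(waveform: np.ndarray):
    """Yield copies of a wake-word recording scaled to several peak levels."""
    peak = np.max(np.abs(waveform)) + 1e-12
    for level_db in LEVELS_DB:
        target_peak = 10.0 ** (level_db / 20.0)   # dBFS -> linear amplitude
        yield waveform * (target_peak / peak)
```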

For the sound signals of the different sound sources, the wake-up word detection module may perform wake-up word detection synchronously or asynchronously and determine the sound signal in which the wake-up word can be detected.

Step S106: calculating, based on the sound signal in which the wake-up word is detected, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range.

Taking the sound signal in which the wake-up word is detected as a reference, the gain required for automatic gain control of the voice interaction system at this moment is calculated, so that the volume of the picked-up audio data falls within a preset range. The preset range may be predefined and is not limited here.

The automatic gain control method for a voice interaction system provided by the present application receives a voice signal; separates the received voice signal into sound signals of a plurality of different sound sources through blind source separation; invokes a pre-trained wake-up word detection model, performs wake-up word detection on the sound signal of each sound source, and determines the sound signal in which the wake-up word is detected; and calculates, based on the sound signal in which the wake-up word is detected, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range. By performing wake-up word detection in the real usage scenario, the sound signal containing the wake-up word is distinguished from other noise signals and used as the adjustment reference for automatic gain control, so the gain required for each interaction can be calculated accurately, the accuracy of speech recognition is ensured, and the user's interaction experience is improved.

Fig. 2 shows a flowchart of another specific implementation of an automatic gain control method of a voice interaction system, where the method specifically includes:

step S200: receiving a voice signal;

step S202: dividing the received voice signal into a plurality of sound signals of different sound sources through blind source separation;

step S204: respectively maintaining corresponding FIFO queues aiming at the sound signals of different sound sources; the numerical value of the length N of the FIFO queue is determined according to the length of the awakening word and the length of each frame of awakening word detection;

the length of the FIFO queue, N, multiplied by the length of the wakeup word detection frame _ time, is determined by the length of the wakeup word. For example, when the length of the wake word is 1.5s, the nframe _ time is set around 1.5 s.

Step S206: reading the sound signal of each sound source frame by frame, calculating the maximum absolute amplitude of each frame of the signal, and storing this maximum into the FIFO queue;

and calculating the maximum value of the absolute value of the amplitude of each frame of signal aiming at the sound signals of different sound sources, storing the maximum value into an FIFO queue, and automatically deleting the queue head element when the queue is full, and continuously circulating in sequence.

Step S208: invoking a pre-trained wake-up word detection model, performing wake-up word detection on the sound signal of each sound source, and determining the sound signal in which the wake-up word is detected;

step S210: calculating the maximum value A in the FIFO queue corresponding to the sound signal of the detected awakening wordmaxAccording to

Calculating gain G required by pickup equipment in the voice interaction system; wherein A isrefIs the amplitude of the reference audio signal.

One specific way of determining the reference audio signal is to obtain its amplitude from the maximum playback volume allowed by the device. Another is to use a preset fixed value for the amplitude of the reference audio signal; as a specific implementation, a fixed value of 90 dB may be preset.
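The gain formula itself appears only as an image in the original filing and is not reproduced above. One plausible reading, consistent with the gains being added and subtracted in dB later in the text, is G = 20·log10(A_ref / A_max); the sketch below follows that assumption.

```python
import numpy as np

def required_gain_db(peak_queue, a_ref: float) -> float:
    """Gain needed to bring the wake-word peak up to the reference amplitude.

    Assumes the (unreproduced) formula G = 20 * log10(A_ref / A_max), i.e. the
    gain in dB that maps the observed peak A_max onto the reference A_ref.
    """
    a_max = max(peak_queue)            # maximum value in the FIFO queue
    return 20.0 * np.log10(a_ref / (a_max + 1e-12))
```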

Fig. 3 is a flowchart of another specific implementation of an automatic gain control method of a voice interaction system, where the method specifically includes:

step S300: receiving a voice signal;

step S302: dividing the received voice signal into a plurality of sound signals of different sound sources through blind source separation;

step S304: respectively maintaining corresponding FIFO queues aiming at the sound signals of different sound sources; the numerical value of the length N of the FIFO queue is determined according to the length of the awakening word and the length of each frame of awakening word detection;

the length of the FIFO queue, N, multiplied by the length of the wakeup word detection frame _ time, is determined by the length of the wakeup word. For example, when the length of the wake word is 1.5s, the nframe _ time is set around 1.5 s.

Step S306: reading the sound signal of each sound source frame by frame, calculating the absolute amplitude of each frame of the signal, convolving it with a Gaussian window, and storing the maximum value after convolution into the FIFO queue;

and calculating the absolute value of the amplitude of each frame of signal aiming at the sound signals of different sound sources, performing convolution on the absolute value by using a Gaussian window, storing the maximum value after the convolution into an FIFO queue, and automatically deleting the head element of the queue when the queue is full, and circulating continuously in sequence. Wherein, the gaussian window can be a gaussian window with a window length w of 15 or 19.

Step S308: invoking a pre-trained wake-up word detection model, performing wake-up word detection on the sound signal of each sound source, and determining the sound signal in which the wake-up word is detected;

step S310: calculating the maximum value A in the FIFO queue corresponding to the sound signal of the detected awakening wordmaxAccording to

Calculating gain G required by pickup equipment in the voice interaction system; wherein A isrefIs the amplitude of the reference audio signal.

One specific way of determining the reference audio signal is to obtain its amplitude from the maximum playback volume allowed by the device. Another is to use a preset fixed value for the amplitude of the reference audio signal; as a specific implementation, a fixed value of 90 dB may be preset.

In this embodiment, convolving the absolute amplitude of each frame with a Gaussian window before taking the maximum improves the stability of the maximum-value calculation and reduces the chance that the maximum is miscalculated because of a momentary disturbance, such as a sudden knock. This further improves the accuracy of the gain calculation.

Further, on the basis of any of the above embodiments, after the calculating the gain required for automatic gain control in the voice interaction system, the method may further include: if the required gain G is within the adjustable analog gain range G_a, adjusting the analog gain to the required gain G; if G_a is less than the required gain G, adjusting the analog gain to its maximum and adjusting the digital gain to G - G_a. Specifically, after the gain required for automatic gain control is determined, the analog gain of the audio ADC/DAC is adjusted first, according to the selected ADC/DAC configuration; if the adjustable analog gain range G_a is smaller than the required gain G, the analog gain is set to its maximum and the digital gain is then adjusted to G - G_a. Adjusting the analog gain, which has the higher signal-to-noise ratio, first and supplementing it with digital gain adjustment ensures that the gain adjustment is applied accurately.
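A minimal sketch of this analog/digital split, with all quantities in dB, is shown below; the actual ADC/DAC register interface is hardware-specific and is left abstract here.

```python
def split_gain(required_db: float, analog_max_db: float) -> tuple[float, float]:
    """Split the required AGC gain into (analog, digital) parts, in dB.

    Analog gain (better signal-to-noise ratio) is applied first; anything beyond
    the adjustable analog range G_a is applied as digital gain G - G_a.
    """
    if required_db <= analog_max_db:
        return required_db, 0.0                        # analog gain alone suffices
    return analog_max_db, required_db - analog_max_db  # analog maxed, rest digital
```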

On the basis of any of the above embodiments, the automatic gain control method provided by the present application may further include: after detecting that the voice interaction is finished, restoring the gain to the initial gain G_init.

The initial gain G_init is the gain that, given the determined reference audio signal value, ensures that the audio data picked up by the sound pickup device is not saturated. The initial gain ensures that the microphone recording is not clipped even when the device plays music at its maximum loudness.

In this embodiment, after a round of voice interaction is completed, the gain is restored to the initial gain G_init, and the system waits for the next wake-up word trigger. It is understood that the completion of a round of voice interaction can be determined by the sound pickup device not detecting the speaker's voice signal within a preset time period. It is also understood that, after the voice interaction is detected to be completed, the current gain may instead be maintained while waiting for the next wake-up word trigger.

Fig. 4 is a block diagram of an embodiment of an automatic gain control device 40 of a voice interaction system, which includes:

a receiving module 42, configured to receive a voice signal;

a blind source separation module 44, configured to separate the received voice signal into sound signals of a plurality of different sound sources through blind source separation;

a wake-up word detection module 46, configured to invoke a pre-trained wake-up word detection model, perform wake-up word detection on the sound signal of each sound source, and determine the sound signal in which the wake-up word is detected;

and a gain determination module 48, configured to calculate, based on the sound signal in which the wake-up word is detected, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range.

It can be understood that the automatic gain control device of the voice interaction system provided in the present application corresponds to the automatic gain control method described above: its internal modules 42 to 48 respectively implement steps S100 to S106 of the method, and their specific implementation can refer to the corresponding content above, which is not repeated here.

The device receives a voice signal; separates the received voice signal into sound signals of a plurality of different sound sources through blind source separation; invokes a pre-trained wake-up word detection model, performs wake-up word detection on the sound signal of each sound source, and determines the sound signal in which the wake-up word is detected; and calculates, based on the sound signal in which the wake-up word is detected, the gain required for automatic gain control in the voice interaction system, so as to adjust the volume of the picked-up audio data within a preset range. By performing wake-up word detection in the real usage scenario, the sound signal containing the wake-up word is distinguished from other noise signals and used as the adjustment reference for automatic gain control, so the gain required for each interaction can be calculated accurately, the accuracy of speech recognition is ensured, and the user's interaction experience is improved.

In addition, the present application also provides a voice interaction system 50. As shown in fig. 5, which is a block diagram of the voice interaction system 50 provided in the present application, the voice interaction system 50 includes: a memory 52 and a processor 54; the memory 52 is configured to store instructions, and the processor 54, when invoking the instructions, performs any one of the automatic gain control methods described above.

The present application further provides a computer-readable storage medium comprising instructions that, when executed, implement any of the automatic gain control methods described above.

It is to be understood that the automatic gain control device, the voice interaction system, and the computer-readable storage medium provided in the present application correspond to the automatic gain control method described above; their specific embodiments may refer to the content above and are not repeated here.

According to the present application, the sound signal in which the wake-up word is detected serves as the adjustment reference for automatic gain control, so the gain required for each interaction can be calculated accurately, the accuracy of speech recognition is ensured, and the user's interaction experience is improved.

While various embodiments of aspects of the present application have been described for purposes of this disclosure, they are not to be construed as limiting the teachings of the present disclosure to these embodiments. Features disclosed in one particular embodiment are not limited to that embodiment, but may be combined with features disclosed in different embodiments. For example, one or more features and/or operations of a method according to the present application described in one embodiment may also be applied, individually, in combination, or in whole, in another embodiment. It will be understood by those skilled in the art that there are many more alternative embodiments and variations possible and that various changes and modifications may be made to the system described above without departing from the scope defined by the claims of the present application.
