Voice endpoint detection method and system based on deep learning

Document No.: 1546314 | Publication date: 2020-01-17

Note: This technology, "Voice endpoint detection method and system based on deep learning", was designed and created by an undisclosed inventor on 2019-09-26. Abstract: The invention discloses a voice endpoint detection method and system based on deep learning, comprising: generating sample audio data from collected audio data; framing the sample audio data and dividing the resulting voice frames to be trained into non-noise voice frames and noise voice frames to obtain a training set; training a deep neural network model with the training set to obtain a trained model; inputting the voice data whose endpoints are to be detected into the trained model, which outputs all non-noise voice frames and noise voice frames in that data; and, based on the non-noise and noise voice frames, obtaining the non-noise and noise voice segments in the data and extracting the start and end coordinate indices of all non-noise voice segments as the voice endpoints. The invention solves the problems of traditional voice endpoint detection technology: low recognition accuracy at low signal-to-noise ratios, slow recognition speed of some methods, and difficult selection of voice features.

1. A method for detecting a voice endpoint based on deep learning, the method comprising:

step 1: generating sample audio data using the collected audio data;

step 2: framing the sample audio data to obtain voice frames to be trained; dividing each voice frame to be trained, according to whether it contains speech, into a non-noise voice frame to be trained (containing speech) or a noise voice frame to be trained (not containing speech); and forming a training set from a plurality of non-noise voice frames to be trained and a plurality of noise voice frames to be trained;

step 3: training a deep neural network model with the training set to obtain a trained deep neural network model;

step 4: inputting the voice data whose endpoints are to be detected into the trained deep neural network model, which outputs all non-noise voice frames and noise voice frames in that voice data;

step 5: obtaining the non-noise voice segments and noise voice segments in the voice data based on the non-noise voice frames and noise voice frames, and extracting the start coordinate index and end coordinate index of all non-noise voice segments in the voice data to obtain the voice endpoint detection result of the voice data to be detected.

2. The method for detecting a voice endpoint based on deep learning according to claim 1, wherein the step 1 specifically comprises:

step 1.1: collecting voice audio data and noise audio data;

step 1.2: performing up-sampling or down-sampling operation on the collected audio data, and unifying the sampling rates of the voice audio data and the noise audio data;

step 1.3: randomly extracting a plurality of sections of voice audio data and a section of noise audio data from the collected audio data;

step 1.4: fusing the extracted voice audio data and the extracted noise audio data;

step 1.5: and (5) repeatedly executing the steps 1.3-1.4 for a plurality of times to generate sample audio data.

3. The method according to claim 1, wherein the step 2 specifically includes:

step 2.1: framing the sample audio data into frames of a preset unit duration to obtain voice frames to be trained;

step 2.2: judging whether the voice frame to be trained contains voice, marking the voice frame to be trained containing voice as a first class, namely a non-noise voice frame to be trained, and dividing the voice frame to be trained not containing voice into a second class, namely a noise voice frame to be trained;

step 2.3: and extracting a plurality of non-noise voice frames to be trained as positive samples, extracting a plurality of noise voice frames to be trained as negative samples, and forming a training set by the positive samples and the negative samples.

4. The method for detecting a voice endpoint based on deep learning according to any one of claims 1 to 3, wherein the step 3 specifically comprises:

step 3.1: performing a first convolution operation (with n1 convolution kernels) on the speech frames to be trained in the training set, learning their acoustic features in the time domain to obtain a first feature vector;

step 3.2: performing a second convolution operation (with n2 convolution kernels) on the speech frames to be trained in the training set, learning their acoustic features in the time domain to obtain a second feature vector;

step 3.3: performing a third convolution operation (with n3 convolution kernels) on the speech frames to be trained in the training set, learning their acoustic features in the time domain to obtain a third feature vector;

step 3.4: performing feature fusion on the first feature vector to the third feature vector to obtain a time domain feature vector after feature fusion;

step 3.5: performing convolution operation on the time domain feature vector after feature fusion, and learning and extracting a frequency domain feature vector of a voice frame to be trained;

step 3.6: learning the frequency domain feature vector by using a long-time and short-time memory layer to obtain a learned feature vector;

step 3.7: classifying the learned feature vector with a fully connected layer to obtain a probability value A that the voice frame to be trained is a non-noise voice frame and a probability value B that it is a noise voice frame;

step 3.8: for each voice frame to be trained, if A is larger than B, judging the voice frame to be trained as a non-noise voice frame; if A is less than or equal to B, judging the speech frame to be trained as a noise speech frame.

5. The method for detecting a voice endpoint based on deep learning according to any one of claims 1 to 3, wherein the step 5 specifically comprises:

step 5.1: sequentially splicing the speech frames output by the trained deep neural network model according to the time sequence to obtain spliced speech segments;

step 5.2: marking continuous non-noise speech frames in the spliced speech segments as non-noise speech segments, and marking continuous noise speech frames in the spliced speech segments as noise speech segments;

step 5.3: marking a single noise speech frame between two sections of non-noise speech segments as a noise speech segment, and marking a single non-noise speech frame between two noise speech segments as a non-noise speech segment to obtain a marked speech segment;

step 5.4: setting a merging threshold value a;

step 5.5: counting the total number of sampling points between two non-noise speech segments in the marked speech segment, if the total number of the sampling points between the two non-noise speech segments is less than a merging threshold a, marking the noise speech segment in the middle of the two non-noise speech segments in the marked speech segment as a non-noise speech segment, namely merging two adjacent non-noise speech segments of the noise speech segment;

step 5.6: counting the total number of sampling points between two sections of noise voice sections in the marked voice sections, and if the total number of the sampling points between the two sections of noise voice sections is smaller than a merging threshold a, marking a non-noise voice section in the middle of the two sections of noise voice sections in the marked voice sections as a noise voice section, namely merging two adjacent sections of noise voice sections of the non-noise voice section;

step 5.7: and extracting the initial coordinate index and the ending coordinate index of all the non-noise voice sections in the whole voice section to obtain a voice endpoint detection result of the voice data to be detected.

6. The deep-learning-based speech endpoint detection method according to any one of claims 1-3, wherein step 5.4 further comprises setting a misrecognition threshold b for determining whether a voice segment was misrecognized: counting the total number of sampling points in each marked non-noise voice segment, and if the total number of sampling points in a non-noise voice segment is less than the misrecognition threshold b, re-marking that non-noise voice segment as a noise voice segment.

7. A deep learning based speech endpoint detection method according to any one of claims 1-3, wherein the step 2 further comprises: and generating a verification set based on the plurality of non-noise voice frames to be trained and the plurality of noise voice frames to be trained, and verifying the trained deep neural network model by using the verification set.

8. A system for deep learning based speech endpoint detection, the system comprising:

a sample generation unit for generating sample audio data using the collected audio data;

the device comprises a sample processing unit, a training set and a training unit, wherein the sample processing unit is used for framing sample audio data to obtain voice frames to be trained, dividing the voice frames to be trained into non-noise voice frames to be trained containing voice and noise voice frames to be trained not containing voice according to whether each voice frame to be trained contains voice, and forming a training set by a plurality of non-noise voice frames to be trained and a plurality of noise voice frames to be trained;

the model training unit is used for training the deep neural network model by utilizing a training set to obtain the trained deep neural network model;

the model output unit is used for inputting the voice data to be detected at the end point into the trained deep neural network model, and the trained deep neural network model outputs all non-noise voice frames and noise voice frames in the voice data to be detected at the end point;

and the voice endpoint detection result obtaining unit is used for obtaining the non-noise voice section and the noise voice section in the voice data to be detected at the endpoint based on the non-noise voice frame and the noise voice frame, extracting the initial coordinate index and the ending coordinate index of all the non-noise voice sections in the voice data to be detected at the endpoint, and obtaining the voice endpoint detection result of the voice data to be detected.

9. The deep learning based speech endpoint detection system of claim 8, wherein the training process of the model training unit comprises:

step a: performing a first convolution operation (with n1 convolution kernels) on the speech frames to be trained in the training set, learning their acoustic features in the time domain to obtain a first feature vector;

step b: performing a second convolution operation (with n2 convolution kernels) on the speech frames to be trained in the training set, learning their acoustic features in the time domain to obtain a second feature vector;

step c: performing a third convolution operation (with n3 convolution kernels) on the speech frames to be trained in the training set, learning their acoustic features in the time domain to obtain a third feature vector;

step d: performing feature fusion on the first feature vector to the third feature vector to obtain a time domain feature vector after feature fusion;

step e: performing convolution operation on the time domain feature vector after feature fusion, and learning and extracting a frequency domain feature vector of a voice frame to be trained;

step f: learning the frequency domain feature vector using a long short-term memory (LSTM) layer to obtain a learned feature vector;

step g: classifying the learned feature vector with a fully connected layer to obtain a probability value A that the voice frame to be trained is a non-noise voice frame and a probability value B that it is a noise voice frame;

step h: for each voice frame to be trained, if A is greater than B, judging the voice frame to be trained as a non-noise voice frame; if A is less than or equal to B, judging the voice frame to be trained as a noise voice frame.

10. The system according to claim 8, wherein the process of obtaining the detection result by the voice endpoint detection result obtaining unit is as follows:

step I: sequentially splicing the speech frames output by the trained deep neural network model according to the time sequence to obtain spliced speech segments;

step II: marking continuous non-noise speech frames in the spliced speech segments as non-noise speech segments, and marking continuous noise speech frames in the spliced speech segments as noise speech segments;

step III: marking a single noise speech frame between two sections of non-noise speech segments as a noise speech segment, and marking a single non-noise speech frame between two noise speech segments as a non-noise speech segment to obtain a marked speech segment;

step IV: setting a merging threshold value a;

step V: counting the total number of sampling points between two non-noise speech segments in the marked speech segment, if the total number of the sampling points between the two non-noise speech segments is less than a merging threshold a, marking the noise speech segment in the middle of the two non-noise speech segments in the marked speech segment as a non-noise speech segment, namely merging two adjacent non-noise speech segments of the noise speech segment;

step VI: counting the total number of sampling points between two sections of noise voice sections in the marked voice sections, and if the total number of the sampling points between the two sections of noise voice sections is smaller than a merging threshold a, marking a non-noise voice section in the middle of the two sections of noise voice sections in the marked voice sections as a noise voice section, namely merging two adjacent sections of noise voice sections of the non-noise voice section;

step VII: and extracting the initial coordinate index and the ending coordinate index of all the non-noise voice sections in the whole voice section to obtain a voice endpoint detection result of the voice data to be detected.

Technical Field

The invention relates to the field of voice signal processing, in particular to a voice endpoint detection method and system based on deep learning.

Background

Voice is an important mode of information interaction. Voice endpoint detection finds the starting point and end point of the speech portions in a continuous sound signal, and is a processing technology applied at the front end of speech systems. With the development of artificial intelligence, people hope to realize human-machine interaction through voice, to identify the speaker, and to recognize specific speech content; voice endpoint detection is a key link in all of these. In the field of communications, it is necessary to reduce data transmission during the silent sections of a signal as much as possible while ensuring the quality of the received voice signal, so accurate signal endpoint detection is also essential. In addition, voice endpoint detection plays a crucial role in communication line monitoring for national security work, saving resource costs without affecting information detection.

The conventional main methods for voice endpoint detection include: (1) endpoint detection based on single-threshold or multi-threshold decisions. These methods distinguish noise from non-noise by statistically testing certain characteristic parameters (short-time energy, zero-crossing rate, information entropy, and the like). For example, the endpoint detection method based on short-time energy divides the whole utterance into voice frames, calculates the short-time energy of each frame, and compares it with a set threshold: frames above the threshold are judged as non-noise, and frames below it as noise. (2) Voice endpoint detection based on a statistical model, which mainly comprises: receiving an input voice signal to be detected; extracting first voice characteristic information from the framed signal and performing anti-noise processing on it to generate second voice characteristic information; and obtaining the recognition result of the voice signal according to the second voice characteristic information and an acoustic model.

The traditional voice endpoint detection methods suffer from poor noise immunity and difficult feature selection. Under low signal-to-noise ratio conditions, endpoint detection performs poorly and the exact position of the speech is hard to identify. Speech features are numerous (short-time energy, zero-crossing rate, information entropy, Mel-frequency cepstral coefficients, and so on), and different feature choices yield different results; how to select and analyze voice features in a targeted manner is another major problem in voice endpoint detection.

Disclosure of Invention

The invention provides an intelligent voice localization and detection method and system that combines traditional signal processing with deep learning for speech under real, complex conditions, and aims to solve the problems of current traditional voice endpoint detection techniques: low recognition accuracy at low signal-to-noise ratios, slow recognition speed of some methods, and difficult selection of voice features.

In order to achieve the above object, the present invention provides a method for detecting voice endpoints, so as to solve the technical problems of poor noise immunity and difficult feature extraction in conventional endpoint detection methods. The specific content of the invention is as follows:

step 1 voice data enhancement.

Step 1.1, collecting voice audio data and noise audio data. A sound frame may contain human speech or may not; a frame containing human speech is called speech, and a frame not containing human speech is called noise;

step 1.2, performing up-sampling or down-sampling operation on the collected audio data, and unifying the sampling rates of the voice audio data and the noise audio data;

step 1.3, randomly extracting a plurality of sections of voice audio data and a section of noise audio data;

step 1.4, fusing the voice audio data and the noise audio data using an audio data fusion method; specifically, the voice audio data is superimposed onto the noise audio data at random positions;

step 1.5 the above steps 1.3-1.4 are repeated to generate a large amount of sample audio data.
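Steps 1.3-1.5 above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's exact procedure: the audio is held as NumPy arrays at a common sampling rate (after step 1.2), the function name `mix_speech_into_noise` is my own, and the fusion rule (simple sample-wise addition at a random offset, with per-sample speech labels kept for the labelling in step 2) is one plausible reading of step 1.4.

```python
import numpy as np

def mix_speech_into_noise(speech_segments, noise, rng=None):
    """Overlay several speech-only segments onto a noise-only track at
    random offsets; return the mixed audio plus per-sample speech labels."""
    if rng is None:
        rng = np.random.default_rng()
    mixed = noise.astype(np.float64).copy()
    labels = np.zeros(len(noise), dtype=np.int8)  # 1 where speech is present
    for seg in speech_segments:
        if len(seg) >= len(mixed):
            continue  # skip segments longer than the noise track
        start = int(rng.integers(0, len(mixed) - len(seg)))
        mixed[start:start + len(seg)] += seg  # additive fusion (assumption)
        labels[start:start + len(seg)] = 1
    return mixed, labels
```

Repeating this call over randomly drawn speech and noise excerpts yields an arbitrarily large set of labelled sample audio data.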

Step 2 framing and labeling the sample audio data.

Step 2.1, processing sample audio data in a frame division mode according to preset unit time to obtain a voice frame to be trained;

step 2.2, judging whether each voice frame to be trained contains voice; marking the speech frames to be trained containing speech as a first class, namely non-noise speech frames to be trained, and dividing the speech frames to be trained not containing speech into a second class, namely noise speech frames to be trained;

step 2.3, extracting a plurality of non-noise voice frames to be trained as positive samples and a plurality of noise voice frames to be trained as negative samples, which together form the training set for the deep neural network model.
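The framing and labelling of steps 2.1-2.3 might look like the following sketch, assuming per-sample speech labels are available from the data-generation step. The rule that a frame counts as non-noise if any of its samples is speech is an assumption, as is the default frame length of 560 samples (35 ms at a 16000 Hz sampling rate, per the detailed description).

```python
import numpy as np

def frame_and_label(audio, labels, frame_len=560):
    """Split audio into fixed-length frames; label a frame 1 (non-noise)
    if any sample in it is speech, else 0 (noise). Drops the trailing
    partial frame (zero-padding is handled separately at detection time)."""
    n_frames = len(audio) // frame_len
    frames, frame_labels = [], []
    for i in range(n_frames):
        s = i * frame_len
        frames.append(audio[s:s + frame_len])
        frame_labels.append(1 if labels[s:s + frame_len].any() else 0)
    return np.array(frames), np.array(frame_labels)
```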

Step 3: training a deep neural network model.

Step 3.1, performing a first convolution operation on the voice frame to be trained in the training set, wherein the number of convolution kernels is n1, and learning the acoustic characteristics of the voice frame to be trained in a time domain to obtain a first characteristic vector;

step 3.2, performing a second convolution operation on the voice frame to be trained in the training set, wherein the number of convolution kernels is n2, and learning the acoustic characteristics of the voice frame to be trained in the time domain to obtain a second characteristic vector;

step 3.3, performing a third convolution operation on the voice frame to be trained in the training set, wherein the number of convolution kernels is n3, and learning the acoustic characteristics of the voice frame to be trained in the time domain to obtain a third characteristic vector;

the convolution kernel number of the three convolution operations can be adjusted according to actual conditions;

step 3.4, performing feature fusion on the three different feature vectors, namely splicing the three feature vectors;

step 3.5, performing convolution operation on the feature vector after feature fusion, and learning and extracting the frequency domain features of the voice frame;

step 3.6, learning the feature vector extracted in the step 3.5 by using an LSTM layer to obtain a learned feature vector;

step 3.7, classifying the learned feature vectors by using a full connection layer, and outputting the probability value that the speech frame to be trained belongs to a non-noise speech frame and a noise speech frame;

step 3.8, comparing the probability values that the voice frame to be trained is a non-noise voice frame and a noise voice frame: if the probability value of the non-noise voice frame output by the deep neural network model is greater than that of the noise voice frame, the frame is considered a non-noise voice frame; conversely, if it is less, the frame is considered a noise voice frame.
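The decision rule of steps 3.7-3.8 can be sketched as follows, assuming the fully connected layer emits one two-element logit vector per frame, with column 0 for non-noise and column 1 for noise (an assumed ordering). A softmax converts the logits into the probability values A and B; the convolutional and LSTM layers themselves are not reproduced here.

```python
import numpy as np

def classify_frames(logits):
    """Softmax over the two fully-connected outputs per frame, then the
    rule of step 3.8: a frame is non-noise (1) iff A > B, else noise (0)."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerically stable softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    A, B = probs[:, 0], probs[:, 1]  # assumed column order: [non-noise, noise]
    return (A > B).astype(int)
```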

Step 4: combining the voice frames.

Step 4.1, splicing the voice frames in sequence according to the time sequence;

step 4.2, marking the continuous non-noise speech frames as non-noise speech segments, and marking the continuous noise speech frames as noise speech segments;

step 4.3, marking a single noise speech frame between two non-noise speech segments as a noise speech segment, and marking a single non-noise speech frame between two noise speech segments as a non-noise speech segment;

step 4.4, setting a merging threshold a and a misrecognition threshold b; threshold a addresses the case where speech that is actually continuous is recognized as discontinuous speech, and threshold b addresses the misrecognition of single speech frames.

Step 4.5, counting the number of sampling points between all two non-noise voice sections for the whole section of voice; if the number of sampling points between two non-noise speech segments is less than a merging threshold a, marking a noise speech segment in the middle of the two non-noise speech segments as a non-noise speech segment, namely merging two adjacent non-noise speech segments of the noise speech segment;

step 4.6, counting the number of sampling points between all two sections of noise voice sections for the whole section of voice; if the number of sampling points between two sections of noise speech segments is less than the merging threshold a, the non-noise speech segment in the middle of the two sections of noise speech segments is marked as the noise speech segment, namely, two sections of noise speech segments adjacent to the non-noise speech segment are merged.

step 4.7, extracting the start coordinate index and end coordinate index of all non-noise voice segments in the whole voice segment.
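Steps 4.1-4.2 and 4.5-4.7 can be sketched as follows (the single-frame flipping of step 4.3 and the misrecognition threshold b are omitted here for brevity). Frame labels come from the model (1 = non-noise, 0 = noise); the frame length and merging threshold are parameters, with the defaults (560 samples, a = 32000) taken from the detailed description.

```python
def merge_and_extract(frame_labels, frame_len=560, a=32000):
    """Turn per-frame labels into non-noise segments, merge segments whose
    gap is under `a` samples, and return (start, end) sample indices."""
    # 1. runs of consecutive non-noise frames -> [start_sample, end_sample]
    segs = []
    i, n = 0, len(frame_labels)
    while i < n:
        if frame_labels[i] == 1:
            j = i
            while j < n and frame_labels[j] == 1:
                j += 1
            segs.append([i * frame_len, j * frame_len])
            i = j
        else:
            i += 1
    # 2. merge neighbouring segments whose gap is smaller than threshold a
    merged = []
    for seg in segs:
        if merged and seg[0] - merged[-1][1] < a:
            merged[-1][1] = seg[1]
        else:
            merged.append(seg)
    return [tuple(s) for s in merged]
```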

Corresponding to the method in the invention, the invention also provides a voice endpoint detection system based on deep learning, which comprises:

a sample generation unit for generating sample audio data using the collected audio data;

the device comprises a sample processing unit, a training set and a training unit, wherein the sample processing unit is used for framing sample audio data to obtain voice frames to be trained, dividing the voice frames to be trained into non-noise voice frames to be trained containing voice and noise voice frames to be trained not containing voice according to whether each voice frame to be trained contains voice, and forming a training set by a plurality of non-noise voice frames to be trained and a plurality of noise voice frames to be trained;

the model training unit is used for training the deep neural network model by utilizing a training set to obtain the trained deep neural network model;

the model output unit is used for inputting the voice data to be detected at the end point into the trained deep neural network model, and the trained deep neural network model outputs all non-noise voice frames and noise voice frames in the voice data to be detected at the end point;

and the voice endpoint detection result obtaining unit is used for obtaining the non-noise voice section and the noise voice section in the voice data to be detected at the endpoint based on the non-noise voice frame and the noise voice frame, extracting the initial coordinate index and the ending coordinate index of all the non-noise voice sections in the voice data to be detected at the endpoint, and obtaining the voice endpoint detection result of the voice data to be detected.

One or more technical schemes provided by the invention at least have the following technical effects or advantages:

the method and the system have certain noise immunity, and have better accuracy rate for the endpoint detection of the voice audio data acquired under various complex conditions. For voice audio data with low noise and clear voice speaking, the recognition accuracy and recall rate can reach more than 95 percent; for more complex environments, namely voice audio data with low signal-to-noise ratio, the accuracy and recall rate can reach more than 90%. The output result is displayed in the form of pictures and characters, wherein the pictures display the specific position of the voice of the whole voice audio data, and the characters display the specific time of the voice.

The method and the system use the deep neural network model to automatically learn the characteristics of the voice data and detect the voice endpoint, thereby obtaining remarkable effect.

Drawings

The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention;

FIG. 1 is a flow chart of a deep learning-based speech endpoint detection method according to the present invention;

FIG. 2 is a diagram illustrating an end-point detection result of a certain segment of audio data according to the present invention;

FIG. 3 is a schematic diagram of the true signature of a segment of audio data in the present invention;

FIG. 4 is a diagram illustrating a picture result of a certain piece of audio data according to the present invention;

FIG. 5 is a schematic diagram of a deep learning-based speech endpoint detection system according to the present invention.

Detailed Description

In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflicting with each other.

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described and thus the scope of the present invention is not limited by the specific embodiments disclosed below.

Please refer to fig. 1, which is a flowchart illustrating a method for detecting a voice endpoint based on deep learning according to an embodiment of the present invention. As shown in fig. 1, the voice endpoint detection method mainly includes a model training phase and a voice endpoint detection phase. The method comprises the following specific steps:

1. model training phase

Voice audio data and noise audio data are collected. The voice audio data only contains human voice without other noise interference. The noise audio data contains only background noise, and the types of noise include, but are not limited to, white noise, pink noise, factory noise, and station noise.

And performing data enhancement on the collected voice audio data and the noise audio data. The method comprises the following specific operations: firstly, down-sampling or up-sampling operation is carried out on voice audio data and noise audio data, and the sampling rates of the two audio data are unified. And secondly, randomly selecting voice audio data and noise audio data to perform audio fusion to generate a section of new voice audio data. The audio data includes a speech audio segment and a noise audio segment.

The generated new voice audio data is then framed. A section of voice comprises a plurality of voice frames; the length of each frame can be defined as required, with a default frame length of 35 ms. At a sampling rate of 16000 Hz, each voice frame therefore contains 560 sampling points. Finally, voice frames containing speech are marked as the first class (non-noise voice frames), and non-speech audio segments are marked as the second class (noise voice frames).

The deep neural network model is then trained. First, positive and negative samples are randomly drawn and divided into a training set and a validation set, keeping the ratio of training set to validation set at 8:2. The model uses cross-entropy loss as its target loss function and is trained with stochastic gradient descent. The number of training epochs e is set before training, the validation loss is monitored in real time during training, and model updates are monitored with an early-stopping method: training stops when the number of completed epochs equals the preset number e, or when the validation loss has stopped decreasing for several consecutive epochs. The model is then at its best and is saved.
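The early-stopping criterion described above might look like the following sketch; the patience value and the exact comparison are assumptions, since the text only says training stops once the validation loss has stopped decreasing for several consecutive epochs.

```python
def should_stop(val_losses, patience=5):
    """Early-stopping check: stop when the best validation loss has not
    improved during the last `patience` epochs (patience is an assumption)."""
    if len(val_losses) <= patience:
        return False  # not enough history yet
    best_before = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_before
```

This check would be evaluated once per epoch alongside the fixed epoch budget e.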

2. Detection phase

And collecting voice audio data to be tested.

An up-sampling or down-sampling operation is performed on the collected voice audio data to be tested, so that its sampling rate is the same as the sampling rate of the voice audio data in the training set.

The voice audio data to be tested is processed in frames. If, after framing, the number of sampling points in the last frame is less than one full speech frame, a zero-padding operation is applied to that frame. Zero padding appends zeros after the sampling points that are insufficient to form a complete speech frame, so that together they form one complete frame. If all frames of the original voice audio data to be tested are already complete after framing, this step is skipped.
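The detection-stage framing with zero padding of the last partial frame can be sketched as follows; the function name `pad_and_frame` is an assumption for this example:

```python
import numpy as np

def pad_and_frame(signal, frame_len=560):
    """Frame a 1-D signal, zero-padding the final partial frame so that
    every frame holds exactly frame_len sampling points."""
    remainder = len(signal) % frame_len
    if remainder:
        # Append just enough zeros to complete the last frame.
        signal = np.concatenate([signal, np.zeros(frame_len - remainder)])
    return signal.reshape(-1, frame_len)
```

The padded zeros are removed again during post-processing, after the per-frame predictions are mapped back to sample coordinates.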

The trained deep neural network model is called and loaded. According to different requirements, multiple deep neural network models can be trained in advance and the appropriate one loaded for use. The continuous speech frames are input, and the model predicts them; the output is the class of each of the continuous speech frames, i.e. whether the frame belongs to the first class (speech) or the second class (noise).

Post-processing operations are performed on the predicted speech frames. The specific operations are as follows: the predicted continuous speech frames are mapped back onto the original voice audio data, yielding the specific start and end coordinate points (endpoints) of speech and noise; the zeros padded into the last speech frame are then removed. In this embodiment, the voice endpoints are determined after the class of each frame of voice data has been obtained. In practice, a speech segment cannot be considered to have started on the basis of a single non-noise frame, nor to have ended on the basis of a single noise frame; the start endpoint and end endpoint of a speech segment must be determined from the number of consecutive speech frames. The invention sets two thresholds: a merging threshold a and a misrecognition threshold b. When the number of sampling points between two speech segments is smaller than the merging threshold a, the two segments are merged into one; when the number of sampling points in a recognized speech segment is smaller than the misrecognition threshold b, that segment is rejected. In a preferred embodiment of the present invention, the merging threshold a may take the value 32000 and the misrecognition threshold b the value 8000.
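The merge-then-reject post-processing above can be sketched as follows. The function name `postprocess` and the per-frame label convention (1 = speech, 0 = noise) are assumptions for this example; the thresholds correspond to the merging threshold a and misrecognition threshold b measured in sampling points:

```python
def postprocess(frame_labels, frame_len=560, merge_thr=32000, reject_thr=8000):
    """Convert per-frame labels (1 = speech, 0 = noise) into (start, end)
    sample-index pairs, merging segments whose gap is below merge_thr
    samples and rejecting segments shorter than reject_thr samples."""
    # 1. Collect raw speech segments in sample coordinates.
    segments, start = [], None
    for i, label in enumerate(frame_labels):
        if label == 1 and start is None:
            start = i * frame_len
        elif label == 0 and start is not None:
            segments.append([start, i * frame_len])
            start = None
    if start is not None:
        segments.append([start, len(frame_labels) * frame_len])
    # 2. Merge segments separated by fewer than merge_thr samples.
    merged = []
    for seg in segments:
        if merged and seg[0] - merged[-1][1] < merge_thr:
            merged[-1][1] = seg[1]
        else:
            merged.append(seg)
    # 3. Reject segments shorter than reject_thr samples.
    return [tuple(s) for s in merged if s[1] - s[0] >= reject_thr]
```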

The final result of the method is the specific time positions of speech within the voice audio data, together with a visual image. Fig. 2 shows the textual result for a section of voice audio data detected by the method: the time periods listed are the time endpoints of speech, and the times not listed correspond to noise; the left time of each pair is the start time point of the speech and the right time its termination time point. Fig. 3 shows the actual (ground-truth) times of the same audio data. Fig. 4 is a visual display of the detection result for the same voice audio data, where the ordinate is the speech amplitude and the abscissa is time (seconds). The solid line is the waveform of the voice audio data and the dotted line is the detection result: a dotted-line value of 10000 indicates a speech segment, and a value of 0 indicates noise.

Referring to fig. 5, an embodiment of the present invention provides a system for detecting a voice endpoint based on deep learning, where the system corresponds to the foregoing method, and the system includes:

a sample generation unit for generating sample audio data using the collected audio data;

a sample processing unit, configured to frame the sample audio data to obtain speech frames to be trained, divide the speech frames to be trained into non-noise speech frames to be trained that contain speech and noise speech frames to be trained that do not contain speech according to whether each frame contains speech, and form a training set from a plurality of non-noise speech frames to be trained and a plurality of noise speech frames to be trained;

the model training unit is used for training the deep neural network model by using a training set to obtain the trained deep neural network model;

the model output unit is used for inputting the voice data to be detected at the end point into the trained deep neural network model, and the trained deep neural network model outputs all non-noise voice frames and noise voice frames in the voice data to be detected at the end point;

and a voice endpoint detection result obtaining unit, configured to obtain the non-noise speech segments and noise speech segments in the voice data to be detected based on the non-noise speech frames and noise speech frames, and to extract the starting coordinate index and ending coordinate index of every non-noise speech segment in the voice data to be detected, thereby obtaining the voice endpoint detection result of the voice data to be detected.

While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
