Model training method, speech recognition method, apparatus, device and storage medium

Document number: 925435    Publication date: 2021-03-02

Note: This technique, "Model training method, speech recognition method, apparatus, device and storage medium", was created by Luo Jian, Wang Jianzong and Cheng Ning on 2020-12-11. Its main content is as follows: the present application provides a model training method, a speech recognition method, an apparatus, a device and a storage medium. The method includes: performing iterative training on a first preset speech recognition model according to a plurality of first training samples to obtain a first speech recognition model; fusing the first speech recognition model with a preset language model to obtain a second speech recognition model; inputting the second speech sequences in a plurality of second training samples into the second speech recognition model to obtain a second text and a fusion score corresponding to each second speech sequence; screening out target speech sequences from the plurality of second speech sequences according to the fusion score of each second speech sequence; and performing iterative training on a second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples to obtain a target speech recognition model. The present application relates to artificial intelligence and can improve the training efficiency of speech recognition models.

1. A method of model training, comprising:

obtaining a plurality of first training samples and a plurality of second training samples, wherein each first training sample comprises a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample comprises a second speech sequence;

performing iterative training on a first preset speech recognition model according to the plurality of first training samples to obtain a first speech recognition model;

fusing the first speech recognition model with a preset language model to obtain a second speech recognition model;

inputting the second speech sequences in the plurality of second training samples into the second speech recognition model to obtain a second text and a fusion score corresponding to each second speech sequence;

screening out a target speech sequence from the plurality of second speech sequences according to the fusion score of each second speech sequence;

and performing iterative training on a second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples to obtain a target speech recognition model.

2. The model training method of claim 1, wherein the screening out a target speech sequence from the plurality of second speech sequences according to the fusion score of each second speech sequence comprises:

filtering the plurality of second speech sequences according to a preset score threshold and the fusion score of each second speech sequence to obtain a plurality of candidate speech sequences;

and screening out a target speech sequence from the plurality of candidate speech sequences according to probability distribution information of the plurality of first training samples.

3. The model training method of claim 2, wherein the screening out a target speech sequence from the candidate speech sequences according to the probability distribution information of the plurality of first training samples comprises:

generating a plurality of speech sequence sets from the plurality of candidate speech sequences, wherein each of the speech sequence sets comprises at least one of the candidate speech sequences;

determining probability distribution information for each of the sets of speech sequences;

and selecting a target speech sequence set from the plurality of speech sequence sets according to the probability distribution information of the plurality of first training samples and the probability distribution information of each speech sequence set.

4. The model training method of claim 3, wherein the selecting a target speech sequence set from the plurality of speech sequence sets according to the probability distribution information of the plurality of first training samples and the probability distribution information of each of the speech sequence sets comprises:

calculating K-L divergence of each speech sequence set according to the probability distribution information of the plurality of first training samples and the probability distribution information of each speech sequence set;

and selecting a target speech sequence set from the plurality of speech sequence sets according to the K-L divergence of each speech sequence set.

5. The model training method according to any one of claims 1 to 4, wherein the iteratively training a first preset speech recognition model according to the plurality of first training samples to obtain a first speech recognition model comprises:

performing data enhancement on a plurality of the first training samples;

and performing iterative training on the first preset speech recognition model according to the plurality of data-enhanced first training samples until the first preset speech recognition model converges, to obtain the first speech recognition model.

6. The model training method according to any one of claims 1 to 4, wherein the iteratively training the second preset speech recognition model according to each of the target speech sequences, the second text corresponding to each of the target speech sequences, and a plurality of the first training samples to obtain the target speech recognition model comprises:

generating a plurality of third training samples according to each target speech sequence and the second text corresponding to each target speech sequence;

obtaining a training sample set according to the plurality of third training samples and the plurality of first training samples;

and performing iterative training on the second preset speech recognition model through the training sample set until a preset condition is reached to obtain a target speech recognition model.

7. A speech recognition method, comprising:

acquiring a speech sequence to be recognized;

performing speech recognition on the speech sequence through a target speech recognition model to obtain text information corresponding to the speech sequence;

the target speech recognition model is trained according to the model training method of any one of claims 1 to 6.

8. A model training apparatus, characterized in that the model training apparatus comprises:

an acquisition module, configured to acquire a plurality of first training samples and a plurality of second training samples, wherein each first training sample comprises a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample comprises a second speech sequence;

a first training module, configured to perform iterative training on a first preset speech recognition model according to the plurality of first training samples to obtain a first speech recognition model;

a fusion module, configured to fuse the first speech recognition model with a preset language model to obtain a second speech recognition model;

an input module, configured to input the second speech sequences into the second speech recognition model to obtain a second text and a fusion score corresponding to each second speech sequence;

a screening module, configured to screen out a target speech sequence from the plurality of second speech sequences according to the fusion score of each second speech sequence;

and a second training module, configured to perform iterative training on a second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples to obtain a target speech recognition model.

9. A computer device, comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the model training method of any one of claims 1 to 6 or of the speech recognition method of claim 7.

10. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the model training method of any one of claims 1 to 6 or of the speech recognition method of claim 7.

Technical Field

The present application relates to the technical field of model construction in artificial intelligence, and in particular, to a model training method, a speech recognition method, an apparatus, a device, and a storage medium.

Background

Automatic Speech Recognition (ASR) is a technology that converts speech into text. Speech recognition is an important technology in the field of artificial intelligence and is applied in many industries, such as the internet, communications and smart homes; a speech recognition model is generally used for automatic speech recognition. To train a speech recognition model, a large amount of speech data and the corresponding text data need to be prepared. In the prior art, the text data samples are obtained by organizing a large number of people to listen to the speech data and write down the correct text. However, as algorithms and computing power advance, speech recognition models can be trained on ever more speech data and corresponding text data to improve their accuracy, which makes labor cost a bottleneck in resource investment: labeling speech data manually is time-consuming, expensive and inefficient.

Disclosure of Invention

The present application mainly aims to provide a model training method, a speech recognition method, an apparatus, a device and a storage medium, and aims to improve the training effect and the training efficiency of a speech recognition model.

In a first aspect, the present application provides a model training method, including:

obtaining a plurality of first training samples and a plurality of second training samples, wherein each first training sample comprises a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample comprises a second speech sequence;

performing iterative training on a first preset speech recognition model according to the plurality of first training samples to obtain a first speech recognition model;

fusing the first speech recognition model with a preset language model to obtain a second speech recognition model;

inputting the second speech sequences in the plurality of second training samples into the second speech recognition model to obtain a second text and a fusion score corresponding to each second speech sequence;

screening out a target speech sequence from the plurality of second speech sequences according to the fusion score of each second speech sequence;

and performing iterative training on a second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples to obtain a target speech recognition model.

In a second aspect, the present application further provides a speech recognition method, including:

acquiring a speech sequence to be recognized;

performing speech recognition on the speech sequence through a target speech recognition model to obtain text information corresponding to the speech sequence;

the target speech recognition model is trained according to the model training method described above.

In a third aspect, the present application further provides a model training apparatus, including:

an acquisition module, configured to acquire a plurality of first training samples and a plurality of second training samples, wherein each first training sample comprises a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample comprises a second speech sequence;

a first training module, configured to perform iterative training on a first preset speech recognition model according to the plurality of first training samples to obtain a first speech recognition model;

a fusion module, configured to fuse the first speech recognition model with a preset language model to obtain a second speech recognition model;

an input module, configured to input the second speech sequences into the second speech recognition model to obtain a second text and a fusion score corresponding to each second speech sequence;

a screening module, configured to screen out a target speech sequence from the plurality of second speech sequences according to the fusion score of each second speech sequence;

and a second training module, configured to perform iterative training on a second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples to obtain a target speech recognition model.

In a fourth aspect, the present application also provides a computer device comprising a processor, a memory, and a computer program stored on the memory and executable by the processor, wherein the computer program, when executed by the processor, implements the steps of the model training method or the speech recognition method as described above.

In a fifth aspect, the present application further provides a computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the model training method or the speech recognition method as described above.

The present application provides a model training method, a speech recognition method, an apparatus, a device and a storage medium. The model training method includes: obtaining a plurality of first training samples and a plurality of second training samples, where each first training sample includes a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample includes a second speech sequence; performing iterative training on a first preset speech recognition model according to the plurality of first training samples to obtain a first speech recognition model; fusing the first speech recognition model with a preset language model to obtain a second speech recognition model; inputting the plurality of second speech sequences into the second speech recognition model to obtain a second text and a fusion score corresponding to each second speech sequence; screening out target speech sequences from the plurality of second speech sequences according to the fusion score of each second speech sequence; and performing iterative training on a second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples to obtain a target speech recognition model. By training a teacher/noisy-student self-training model through a plurality of labeled first training samples and a plurality of unlabeled second training samples, the present application can greatly improve the training effect of the speech recognition model, reduce the required number of labeled training samples, and improve the training efficiency of the speech recognition model.

Drawings

In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.

FIG. 1 is a schematic flowchart of steps of a model training method according to an embodiment of the present application;

FIG. 2 is a schematic flowchart of substeps of the model training method in FIG. 1;

FIG. 3 is a schematic diagram of a scenario for implementing the model training method provided in this embodiment;

FIG. 4 is a schematic flowchart of steps of a speech recognition method according to an embodiment of the present application;

FIG. 5 is a schematic block diagram of a model training apparatus according to an embodiment of the present application;

FIG. 6 is a schematic block diagram of a submodule of the model training apparatus in FIG. 5;

FIG. 7 is a schematic block diagram of a speech recognition apparatus according to an embodiment of the present application;

FIG. 8 is a schematic block diagram of a structure of a computer device according to an embodiment of the present application.

The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

The flow diagrams depicted in the figures are merely illustrative: they do not necessarily include all of the elements and operations/steps, nor must the steps be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so the actual execution order may change according to the actual situation. In addition, although functional modules are divided in the device diagrams, in some cases they may be divided differently.

The embodiments of the present application provide a model training method, a speech recognition method, an apparatus, a device and a storage medium. The model training method can be applied to a terminal device or a server; the terminal device can be an electronic device such as a mobile phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant or a wearable device, and the server may be a single server or a server cluster including a plurality of servers. The following explanation takes the application of the model training method to a server as an example.

Some embodiments of the present application will be described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.

Referring to fig. 1, fig. 1 is a schematic flow chart illustrating steps of a model training method according to an embodiment of the present disclosure.

As shown in fig. 1, the model training method includes steps S101 to S106.

Step S101, a plurality of first training samples and a plurality of second training samples are obtained, wherein each first training sample comprises a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample comprises a second speech sequence.

Each first training sample comprises a first speech sequence and a first text corresponding to the first speech sequence, where the first text is the label of the corresponding first speech sequence; each second training sample comprises a second speech sequence. It should be noted that the first speech sequence and the second speech sequence are audio data, and the first text corresponding to the first speech sequence is the text content that speech recognition of the first speech sequence should produce. For example, the first speech sequence is a song and the corresponding first text is its lyrics.

Noisy Student Training (NST) is a semi-supervised learning method composed of a teacher and a student: the teacher model (the first preset speech recognition model) learns from the labeled first training samples and predicts the unlabeled second training samples, producing a second text (pseudo-label) for each second training sample; the student model (the second preset speech recognition model) is then trained on the labeled first training samples together with the pseudo-labeled second training samples and their second texts, and this process is iterated. Through this teacher/noisy-student self-training, the training effect of the speech recognition model can be greatly improved.
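For concreteness, the following Python sketch outlines the teacher/noisy-student loop just described. It is a minimal illustration under assumed interfaces: the `Sample` type and the `train`/`predict` stand-ins are hypothetical, not part of this application.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Sample:
    audio: list          # acoustic feature frames (stand-in for a tensor)
    text: str = ""       # transcript; empty for unlabeled data

def train(model: dict, samples: List[Sample]) -> dict:
    """Stand-in for supervised training of a speech recognition model."""
    return dict(model, steps=model.get("steps", 0) + len(samples))

def predict(model: dict, sample: Sample) -> Tuple[str, float]:
    """Stand-in for decoding: returns (pseudo-label text, fusion score)."""
    return "pseudo transcript", 0.5

def noisy_student_round(labeled: List[Sample], unlabeled: List[Sample],
                        score_threshold: float = 0.3) -> dict:
    # 1. Train the teacher (first preset model) on the labeled samples.
    teacher = train({"role": "teacher"}, labeled)
    # 2. Pseudo-label the unlabeled samples; keep only high-scoring ones.
    pseudo = [Sample(s.audio, text)
              for s in unlabeled
              for text, score in [predict(teacher, s)]
              if score >= score_threshold]
    # 3. Train the noised student on labeled + filtered pseudo-labeled data.
    return train({"role": "student", "noise": "dropout+specaugment"},
                 labeled + pseudo)
```

In practice this round is repeated, with the trained student serving as the next round's teacher.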

In an embodiment, the total audio length of the first training samples is higher than a first preset time threshold, and the total audio length of the second training samples is higher than a second preset time threshold, so that the accuracy of speech recognition of a subsequently trained speech recognition model can be ensured.

Further, the second preset time threshold is higher than the first preset time threshold. In practical application, the first preset time threshold and the second preset time threshold may be set according to a practical application scenario, for example, the first preset time threshold is 100h, and the second preset time threshold is 500h, which is not described herein again.

It should be noted that, in order to further ensure the privacy and security of the related information such as the plurality of first training samples and the plurality of second training samples, the related information such as the first training samples and the second training samples may also be stored in a node of a block chain. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.

Step S102, iterative training is performed on a first preset speech recognition model according to the plurality of first training samples to obtain a first speech recognition model.

The first preset speech recognition model is the teacher model. The plurality of first training samples are input into the first preset speech recognition model to obtain a speech recognition result corresponding to each first training sample, and the parameters of the first preset speech recognition model are adjusted according to the speech recognition result corresponding to each first training sample and the corresponding first text, until a first speech recognition model whose performance meets a preset training condition is obtained.

For example, the performance is recognition accuracy, and the preset training condition may be that the recognition accuracy is higher than a preset accuracy threshold. It should be noted that both the preset training condition and the preset accuracy threshold may be set according to an actual application scenario, for example, the preset accuracy threshold is 0.9, which is not specifically limited herein.

The first preset speech recognition model is, for example, a LAS (Listen, Attend and Spell) model, which includes a Listen layer, an Attend layer, and a Spell layer. The first preset speech recognition model extracts speech signal features from an input first training sample; the features are encoded by the Listen layer, the Attend layer attends to different parts of the input at different times (attention), and finally the Spell layer decodes to obtain the speech recognition result of the first training sample.
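As a rough illustration of this Listen/Attend/Spell structure, the following PyTorch-style sketch performs one decoding step. The layer sizes, vocabulary size, and the dot-product attention variant are assumptions for illustration, not the model configuration recited here.

```python
import torch
import torch.nn as nn

class TinyLAS(nn.Module):
    """Minimal Listen-Attend-Spell skeleton; all sizes are arbitrary."""
    def __init__(self, feat_dim=80, hidden=256, vocab=64):
        super().__init__()
        # Listen: bidirectional LSTM encoder over acoustic feature frames.
        self.listen = nn.LSTM(feat_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.query = nn.Linear(hidden, 2 * hidden)   # Attend projection
        self.embed = nn.Embedding(vocab, hidden)
        # Spell: one-step LSTM decoder over output characters.
        self.spell = nn.LSTMCell(hidden + 2 * hidden, hidden)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, feats, prev_char, state):
        h, _ = self.listen(feats)                    # (B, T, 2H) encoding
        s, c = state
        # Attend: attention weights over encoder frames for this step.
        scores = torch.bmm(h, self.query(s).unsqueeze(2)).squeeze(2)
        attn = torch.softmax(scores, dim=1)          # (B, T)
        context = torch.bmm(attn.unsqueeze(1), h).squeeze(1)   # (B, 2H)
        # Spell: decoder step conditioned on previous char + context.
        s, c = self.spell(torch.cat([self.embed(prev_char), context], 1),
                          (s, c))
        return torch.log_softmax(self.out(s), dim=1), (s, c)

model = TinyLAS()
feats = torch.randn(1, 50, 80)                       # 50 frames of features
state = (torch.zeros(1, 256), torch.zeros(1, 256))
log_probs, state = model(feats, torch.zeros(1, dtype=torch.long), state)
```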

In an embodiment, data enhancement is performed on the plurality of first training samples, and iterative training is performed on the first preset speech recognition model according to the plurality of data-enhanced first training samples until the first preset speech recognition model converges, to obtain the first speech recognition model. It should be noted that data enhancement (Data Augmentation) can increase the number of first training samples, for example by normalizing the vocal tract length, synthesizing noisy audio by superimposing a clean audio signal and a noisy audio signal, or by speed perturbation of the original audio. For the specific implementation of the iterative training of the first preset speech recognition model, refer to the foregoing embodiment; convergence of the first preset speech recognition model may mean that its performance meets the preset training condition, that the number of iterations exceeds a preset number, and/or that the iteration duration exceeds a preset duration, which is not limited in this embodiment. Adding some noise to the first preset speech recognition model through data enhancement forces the subsequent second preset speech recognition model (the student model) to work harder to learn the speech recognition results output by the first preset speech recognition model (the teacher model), improving the training effect of the target speech recognition model.

Further, SpecAugment is used to perform data enhancement on the plurality of first training samples, adding some noise to the input of the first preset speech recognition model and thereby increasing the robustness of the first speech recognition model. Specifically, each first speech sequence is converted into a spectrogram, and time warping, frequency masking and/or time masking are applied to the spectrograms through SpecAugment. Enhancing the spectrograms of the first speech sequences through SpecAugment before the iterative training of the first preset speech recognition model can speed up training on the first training samples and thus improve the training efficiency of the target speech recognition model.

It should be noted that time warping of a spectrogram through SpecAugment means that, for a mel spectrogram with τ time steps (time as the x axis, frequency as the y axis), a random point on the horizontal line through the center of the spectrogram is warped to the left or right within the time range (W, τ − W); frequency masking means masking f consecutive frequency channels [f0, f0 + f] on the frequency axis of a continuous mel spectrogram, where f is drawn from a uniform distribution from 0 to the frequency mask parameter F; time masking means masking t consecutive time steps [t0, t0 + t] on the time axis of a continuous mel spectrogram, where t is drawn from a uniform distribution from 0 to the time mask parameter T. As SpecAugment-based data enhancement of the plurality of first training samples proceeds through the iterative training, the robustness and performance of the first preset speech recognition model improve; the SpecAugment strength can then be increased to bring more noise to the input of the first preset speech recognition model, thereby improving the training effect of the first speech recognition model.
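A minimal sketch of the frequency- and time-masking operations just described (time warping is omitted for brevity); the mask parameters F and T below are illustrative values, not ones recited in this application.

```python
import numpy as np

def spec_augment(mel: np.ndarray, F: int = 15, T: int = 40,
                 rng: np.random.Generator = None) -> np.ndarray:
    """Mask a band [f0, f0+f] on the frequency axis (f uniform in [0, F])
    and a span [t0, t0+t] on the time axis (t uniform in [0, T]) of a
    (num_mels, num_frames) mel spectrogram."""
    rng = rng or np.random.default_rng()
    mel = mel.copy()
    n_mels, n_frames = mel.shape
    f = int(rng.integers(0, F + 1))                  # frequency mask width
    f0 = int(rng.integers(0, max(1, n_mels - f)))
    mel[f0:f0 + f, :] = 0.0
    t = int(rng.integers(0, min(T, n_frames) + 1))   # time mask width
    t0 = int(rng.integers(0, max(1, n_frames - t)))
    mel[:, t0:t0 + t] = 0.0
    return mel

augmented = spec_augment(np.random.randn(80, 300))
```

Raising F and T over the course of training corresponds to the "increasing the SpecAugment strength" described above.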

In one embodiment, noise is added to the first preset speech recognition model, and iterative training is performed on the noise-added first preset speech recognition model according to the plurality of first training samples until it converges, to obtain the first speech recognition model.

Illustratively, Dropout is used to add noise to the first preset speech recognition model: some hidden neurons of the neural network are randomly disabled during each training pass, their outputs set to 0 and their weights left un-updated. If the Dropout ratio is set to p, each hidden neuron is disabled with probability p. In noisy student training, adding noise to the first preset speech recognition model through Dropout therefore forces the second preset speech recognition model (the student model) to work harder to learn the speech recognition results output by the first preset speech recognition model (the teacher model), improving the training effect of the target speech recognition model.
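In PyTorch terms, adding this Dropout noise is a single layer in the model definition; the ratio p = 0.1 below is an assumed value, since the application leaves p unspecified.

```python
import torch
import torch.nn as nn

# Each hidden neuron is zeroed with probability p during training, so the
# teacher's outputs come from a noised network; zeroed units get no update.
noisy_block = nn.Sequential(
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Dropout(p=0.1),                      # assumed dropout ratio
)
noisy_block.train()                         # dropout active while training
y = noisy_block(torch.randn(4, 256))        # ~10% of activations zeroed
noisy_block.eval()                          # dropout disabled at inference
```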

In one embodiment, data enhancement is performed on the plurality of first training samples, and noise is added to the first preset speech recognition model; iterative training is then performed on the noise-added first preset speech recognition model according to the plurality of data-enhanced first training samples until it converges, to obtain the first speech recognition model. By enhancing the first training samples and adding noise to the first preset speech recognition model, the parameters of the iteratively trained first speech recognition model can be more accurate, improving the training effect of the subsequent target speech recognition model.

Step S103, the first speech recognition model is fused with a preset language model to obtain a second speech recognition model.

The preset language model is a pre-trained Language Model, for example a statistical language model, a feedforward neural network language model, or a recurrent neural network language model. By fusing the first speech recognition model with the preset language model, the obtained second speech recognition model has better performance, which helps improve the training effect of the target speech recognition model and thus the speech recognition accuracy of the target speech recognition model.

In an embodiment, the data amount of the training samples of the language model is much larger than that of the first training samples of the first speech recognition model, and fusing the first speech recognition model with the preset language model can help the second speech recognition model model semantic information. Fusion modes include Voting, Averaging, Bagging (bootstrap aggregating), Boosting, and the like, which are not limited in this embodiment.
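As one concrete instance of the fusion modes listed above, an Averaging fusion simply takes the mean of the models' output distributions; the distributions below are invented for illustration.

```python
import numpy as np

def average_fusion(dists: list) -> np.ndarray:
    """Averaging fusion: elementwise mean of character distributions."""
    return np.mean(np.stack(dists), axis=0)

p_las = np.array([0.7, 0.2, 0.1])   # hypothetical LAS character distribution
p_lm = np.array([0.5, 0.3, 0.2])    # hypothetical LM character distribution
fused = average_fusion([p_las, p_lm])        # -> [0.6, 0.25, 0.15]
```

The weighted log-probability combination actually used for the fusion score is shown in step S104 below.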

Step S104, the plurality of second speech sequences are input into the second speech recognition model to obtain a second text and a fusion score corresponding to each second speech sequence.

The plurality of second training samples are input into the second speech recognition model to obtain a speech recognition result corresponding to each second speech sequence, where the speech recognition result includes the second text and the fusion score corresponding to the second speech sequence. The plurality of second speech sequences are predicted through the second speech recognition model, and the second text and fusion score corresponding to each second speech sequence are output, so that second speech sequences meeting a preset condition can be screened out from the plurality of second speech sequences.

Illustratively, the second speech recognition model is a LAS (Listen, Attend and Spell) model, which includes a Listen layer, an Attend layer, and a Spell layer. The second speech sequence is a sound-signal feature vector x of length T. After x is input into the second speech recognition model, the Listen layer retains the content relevant to the sound signal and removes irrelevant noise; the Listen layer is, for example, a bidirectional LSTM network, and outputs a feature vector h = BiLSTM(x) of length T. In the Attend layer, a scaled attention mechanism may be adopted: given the hidden state S_t of the RNN in the Attend layer at the current time, a context vector is computed from the feature vector h output by the Listen layer and the hidden state S_t, i.e., C_t = Attention(S_t, h). In the Spell layer, an RNN is used as the decoder: from the hidden state, output vector and context vector of the previous time step, the hidden state at the current time is computed as S_t = RNN(S_{t-1}, Y_{t-1}, C_{t-1}), and the output vector at the current time is passed through a softmax network to output the character distribution probability (the distribution probability of the second text), Y_t = CharacterDistribution(S_t). Because the second speech recognition model is obtained by fusing the trained first speech recognition model (a LAS model) with the language model (LM), the character distribution probabilities of the LAS model and the LM model are weighted and summed to obtain the fusion score corresponding to the second speech sequence, for example S = log p(Y_t = k) = log p_LAS(Y_t = k) + β log p_LM(Y_t = k), where β is a hyper-parameter to be tuned, k is the second text with the highest character distribution probability at time t, log p_LAS(Y_t = k) is the character log-probability of the second speech sequence at the LAS model output, and log p_LM(Y_t = k) is the character log-probability of the second speech sequence at the LM model output.
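That fusion score is a per-step weighted sum of LAS and LM log-probabilities, accumulated over the decoding steps of a hypothesis; a minimal sketch, assuming the per-step log-probabilities of the chosen characters are already available (β = 0.5 is an arbitrary example weight):

```python
import numpy as np

def fusion_score(logp_las: np.ndarray, logp_lm: np.ndarray,
                 beta: float = 0.5) -> float:
    """S = sum over steps t of [log p_LAS(y_t) + beta * log p_LM(y_t)]."""
    return float(np.sum(logp_las + beta * logp_lm))

# Hypothetical per-step log-probabilities for a 4-character hypothesis:
las = np.log(np.array([0.9, 0.8, 0.7, 0.95]))
lm = np.log(np.array([0.6, 0.5, 0.8, 0.9]))
score = fusion_score(las, lm)
```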

Step S105, a target speech sequence is screened out from the plurality of second speech sequences according to the fusion score of each second speech sequence.

For the second texts corresponding to the second speech sequences output by the second speech recognition model, target speech sequences meeting a preset condition need to be screened out. A target speech sequence can be screened out from the plurality of second speech sequences according to the fusion score of each second speech sequence and used as high-quality training data for the student model (the second preset speech recognition model), improving the training effect of the second preset speech recognition model.

In one embodiment, as shown in fig. 2, step S105 includes substeps S1051 to S1052.

Substep S1051, the plurality of second speech sequences are filtered according to a preset score threshold and the fusion score of each second speech sequence to obtain a plurality of candidate speech sequences.

In an embodiment, the preset score threshold may be flexibly set by the user; the second speech sequences whose fusion scores are greater than or equal to the preset score threshold are retained, and those whose fusion scores are smaller than the preset score threshold are filtered out, to obtain a plurality of candidate speech sequences. It should be noted that the second text corresponding to a second speech sequence with a high fusion score is more accurate, so retaining second speech sequences whose second texts have higher accuracy helps screen out high-quality second speech sequences.

In an embodiment, the sentence length of a second speech sequence affects the speech recognition result of the second speech recognition model, so the second texts and fusion scores of second speech sequences of different lengths are not directly comparable in accuracy. Therefore, the fusion score of each second speech sequence is normalized, and the normalized fusion score is compared with the preset score threshold to filter out the second speech sequences whose fusion scores are smaller than the preset score threshold, obtaining a plurality of high-quality candidate speech sequences.

The normalization formula is: Ŝ = (S − μl − β) / σ, where l is the character length of the second speech sequence, μ and β are parameters obtained by linear regression over the pairs (l_i, S_i) of the plurality of second speech sequences, and σ is the standard deviation computed from the regression residuals. In some embodiments, the preset score threshold may be adjusted as the iterations proceed; making the preset score threshold smaller and smaller during iterative training allows more and more candidate speech sequences to be used as training samples of the target speech recognition model.
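Under the assumption, consistent with the definitions above, that the raw score S is regressed against character length l (giving μ and β) and the residual is standardized by σ, the normalization and filtering can be sketched as:

```python
import numpy as np

def normalized_scores(scores: np.ndarray, lengths: np.ndarray) -> np.ndarray:
    """Fit S ≈ mu*l + beta by least squares over (l_i, S_i), then return
    the standardized residuals (S - mu*l - beta) / sigma."""
    mu, beta = np.polyfit(lengths, scores, deg=1)   # linear regression
    residuals = scores - (mu * lengths + beta)
    return residuals / residuals.std()

scores = np.array([-12.0, -30.5, -7.2, -55.0])      # raw fusion scores
lengths = np.array([10.0, 28.0, 6.0, 60.0])         # character lengths
keep = normalized_scores(scores, lengths) >= 0.0    # assumed threshold 0
```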

Substep S1052, a target speech sequence is screened out from the plurality of candidate speech sequences according to the probability distribution information of the plurality of first training samples.

In one embodiment, screening a target speech sequence from the plurality of candidate speech sequences according to the probability distribution information of the plurality of first training samples includes: generating a plurality of speech sequence sets from the plurality of candidate speech sequences, where each speech sequence set includes at least one candidate speech sequence; determining the probability distribution information of each speech sequence set; and selecting a target speech sequence set from the plurality of speech sequence sets according to the probability distribution information of the plurality of first training samples and the probability distribution information of each speech sequence set, where the target speech sequence set includes at least one target speech sequence. It should be noted that the distribution of the filtered candidate speech sequences varies widely; directly using them as training samples of the second preset speech recognition model would affect its performance. Therefore, a target speech sequence set whose probability distribution information is close to that of the first training samples is searched for among the plurality of speech sequence sets, and the target speech sequences in it are used as training samples of the second preset speech recognition model, which can improve the performance of the second preset speech recognition model and the training effect of the target speech recognition model.

A plurality of batches (Batch) are randomly chosen from the plurality of candidate speech sequences to generate a plurality of speech sequence sets, each batch comprising at least one candidate speech sequence. Each candidate speech sequence carries attribute information, and the attribute information of the candidate speech sequences in a set forms the probability distribution information of that speech sequence set. The probability distribution information is determined by the specific business setting, such as audio length, speaker gender ratio, speaker age, surrounding environment, and the like. The probability distribution information of each speech sequence set is compared with that of the plurality of first training samples to find the target speech sequence set closest to the probability distribution information of the plurality of first training samples, as sketched below.
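A sketch of forming such batches and summarizing each by one attribute (audio length here, as one of the attributes named above; the bin edges and batch sizes are illustrative, not recited values):

```python
import numpy as np

def batch_distributions(lengths: np.ndarray, batch_size: int,
                        n_batches: int, bins: np.ndarray,
                        rng: np.random.Generator = None) -> list:
    """Randomly draw batches of candidate sequences and describe each
    batch by the normalized histogram of its audio lengths."""
    rng = rng or np.random.default_rng()
    dists = []
    for _ in range(n_batches):
        idx = rng.choice(len(lengths), size=batch_size, replace=False)
        hist, _ = np.histogram(lengths[idx], bins=bins)
        dists.append(hist / hist.sum())
    return dists

lengths = np.random.default_rng(0).uniform(1.0, 15.0, size=200)  # seconds
dists = batch_distributions(lengths, batch_size=32, n_batches=10,
                            bins=np.array([0.0, 5.0, 10.0, 15.0]))
```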

In one embodiment, the K-L divergence of each speech sequence set is calculated according to the probability distribution information of the plurality of first training samples and the probability distribution information of each speech sequence set, and the target speech sequence set is selected from the plurality of speech sequence sets according to the K-L divergence of each speech sequence set. The speech sequence set with the lowest K-L divergence is selected as the target speech sequence set: the lower the K-L divergence, the closer the probability distribution information of the speech sequence set is to that of the plurality of first training samples. The target speech sequence set includes at least one target speech sequence. By calculating the K-L Divergence of each speech sequence set, a target speech sequence set similar in probability distribution to the plurality of first training samples can be found accurately.

The K-L divergence calculation formula is as follows:

D(f^(m)) = Σ_i p(i) log( p(i) / q(i) )

where f^(m) is the m-th speech sequence set, p(i) is the probability distribution information of the plurality of first training samples, and q(i) is the probability distribution information of the speech sequence set f^(m).
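A sketch of the selection itself: compute the K-L divergence between the labeled-sample distribution and each candidate batch's distribution, and keep the batch with the lowest divergence (the three-bin distributions are invented for illustration):

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-10) -> float:
    """D(P || Q) = sum_i p(i) * log(p(i) / q(i)), with smoothing."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def pick_target_batch(p_labeled: np.ndarray, batch_dists: list) -> int:
    """Index of the batch whose attribute distribution is closest to
    the labeled training samples' distribution."""
    return int(np.argmin([kl_divergence(p_labeled, q) for q in batch_dists]))

p = np.array([0.5, 0.3, 0.2])                       # labeled distribution
batches = [np.array([0.6, 0.2, 0.2]), np.array([0.4, 0.35, 0.25])]
best = pick_target_batch(p, batches)                # -> 1
```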

Step S106, iterative training is performed on a second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples, to obtain a target speech recognition model.

After the plurality of target speech sequences are obtained, the plurality of first training samples are input into the speech recognition student model, a first speech recognition result is output, and the parameters of the second preset speech recognition model are adjusted according to the similarity between the first speech recognition result and the first text corresponding to each first training sample; when the adjusted second preset speech recognition model is determined to meet a preset performance condition, model training is stopped and a trained initial speech recognition model is obtained; the initial speech recognition model is then trained according to the plurality of target speech sequences to obtain the target speech recognition model. It should be noted that training the teacher/noisy-student self-training model through a plurality of labeled first training samples and a plurality of unlabeled second training samples can greatly improve the training effect of the speech recognition model, reduce the required number of labeled training samples, and improve the training efficiency of the speech recognition model.

The preset performance condition is determined according to the speech recognition accuracy and speech recognition speed of the speech recognition student model; in practical applications, it may also be set according to the actual application scenario. The second preset speech recognition model is initialized through the plurality of first training samples to ensure convergence of the training. The initial speech recognition model is then trained through the plurality of target speech sequences to obtain a well-trained target speech recognition model with high speech recognition accuracy.

In one embodiment, the second preset speech recognition model is, for example, also a LAS (Listen, Attend and Spell) model, which includes a Listen layer, an Attend layer, and a Spell layer.

In an embodiment, a plurality of third training samples are generated according to each target speech sequence and the second text corresponding to each target speech sequence; a training sample set is obtained according to the plurality of third training samples and the plurality of first training samples; and iterative training is performed on the second preset speech recognition model through the training sample set until a preset condition is reached, to obtain the target speech recognition model. The preset condition may be that the performance meets a preset training condition, that the number of iterations exceeds a preset number, and/or that the iteration duration exceeds a preset duration, which is not limited in this embodiment of the present application.

Referring to fig. 3, fig. 3 is a schematic view of a scene for implementing the model training method provided in this embodiment.

As shown in fig. 3, a plurality of first training samples and a plurality of second training samples are obtained, where each first training sample includes a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample includes a second speech sequence. The plurality of first training samples are input to the first preset speech recognition model 10 for iterative training to obtain the first speech recognition model 20; the preset language model 30 is fused with the first speech recognition model 20 to obtain the second speech recognition model 40; the second speech sequences in the plurality of second training samples are input to the second speech recognition model 40 to obtain the second text and fusion score corresponding to each second speech sequence; target speech sequences are selected from the plurality of second speech sequences according to the fusion score of each second speech sequence; and each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples are input into the second preset speech recognition model 50 for iterative training to obtain the target speech recognition model 60.

The model training method provided in the foregoing embodiments includes: obtaining a plurality of first training samples and a plurality of second training samples, where each first training sample includes a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample includes a second speech sequence; performing iterative training on a first preset speech recognition model according to the plurality of first training samples to obtain a first speech recognition model; fusing the first speech recognition model with a preset language model to obtain a second speech recognition model; inputting the plurality of second speech sequences into the second speech recognition model to obtain a second text and a fusion score corresponding to each second speech sequence; screening out target speech sequences from the plurality of second speech sequences according to the fusion score of each second speech sequence; and performing iterative training on the second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples, to obtain the target speech recognition model. By training the teacher/noisy-student self-training model through a plurality of labeled first training samples and a plurality of unlabeled second training samples, the present application can greatly improve the training effect of the speech recognition model, reduce the required number of labeled training samples, and improve the training efficiency of the speech recognition model.

Referring to fig. 4, fig. 4 is a flowchart illustrating steps of a speech recognition method according to an embodiment of the present application.

As shown in fig. 4, the speech recognition method includes steps S201 to S202.

Step S201, a speech sequence to be recognized is acquired.

For example, the speech sequence to be recognized is a piece of voice data sent by a user in a social application.

Step S202, speech recognition is performed on the speech sequence through the target speech recognition model to obtain text information corresponding to the speech sequence.

The target speech recognition model is obtained by training according to the model training method described in the foregoing embodiments. For example, user A receives a speech sequence sent by user B through the social application of a terminal device, and speech recognition is performed on the speech sequence through the target speech recognition model to obtain the text information "hello" (the speech recognition result).

In the speech recognition method provided by this embodiment, a speech sequence to be recognized is acquired, and speech recognition is performed on the speech sequence through the target speech recognition model trained in the foregoing embodiments, to obtain the text information corresponding to the speech sequence.

Referring to fig. 5, fig. 5 is a schematic block diagram of a model training apparatus according to an embodiment of the present disclosure.

As shown in fig. 5, the model training apparatus 300 includes: an acquisition module 301, a first training module 302, a fusion module 303, an input module 304, a screening module 305, and a second training module 306.

An acquisition module 301, configured to acquire a plurality of first training samples and a plurality of second training samples, where each first training sample includes a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample includes a second speech sequence;

the first training module 302 is configured to perform iterative training on a first preset speech recognition model according to a plurality of first training samples to obtain a first speech recognition model;

a fusion module 303, configured to fuse the first speech recognition model with a preset language model to obtain a second speech recognition model;

an input module 304, configured to input the multiple second speech sequences into the second speech recognition model, so as to obtain a second text and a fusion score corresponding to each second speech sequence;

a screening module 305, configured to screen a target speech sequence from the multiple second speech sequences according to the fusion score of each second speech sequence;

and the second training module 306 is configured to perform iterative training on a second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence, and the plurality of first training samples, so as to obtain a target speech recognition model.

In one embodiment, as shown in FIG. 6, the screening module 305 includes:

the filtering submodule 3051, configured to filter the multiple second speech sequences according to a preset score threshold and the fusion score of each second speech sequence, so as to obtain multiple candidate speech sequences;

the screening submodule 3052 is configured to screen a target speech sequence from the multiple candidate speech sequences according to the probability distribution information of the multiple first training samples.

In one embodiment, the screening submodule 3052 is further configured to:

generating a plurality of speech sequence sets from the plurality of candidate speech sequences, wherein each of the speech sequence sets comprises at least one of the candidate speech sequences;

determining probability distribution information for each of the sets of speech sequences;

and selecting a target voice sequence set from the plurality of voice sequence sets according to the probability distribution information of the plurality of first training samples and the probability distribution information of each voice sequence set.

In one embodiment, the screening submodule 3052 is further configured to:

calculating K-L divergence of each speech sequence set according to the probability distribution information of the plurality of first training samples and the probability distribution information of each speech sequence set;

and selecting a target voice sequence set from the plurality of voice sequence sets according to the K-L divergence of each voice sequence set.

In one embodiment, the first training module 302 is further configured to:

performing data enhancement on a plurality of the first training samples;

and performing iterative training on the first preset speech recognition model according to the plurality of data-enhanced first training samples until the first preset speech recognition model converges, to obtain the first speech recognition model.

In one embodiment, the second training module 306 is further configured to:

generating a plurality of third training samples according to each target speech sequence and the second text corresponding to each target speech sequence;

obtaining a training sample set according to the plurality of third training samples and the plurality of first training samples;

and performing iterative training on the second preset speech recognition model through the training sample set until a preset condition is reached to obtain a target speech recognition model.

Referring to fig. 7, fig. 7 is a schematic block diagram of a speech recognition apparatus according to an embodiment of the present application.

As shown in fig. 7, the speech recognition apparatus 400 includes:

an obtaining module 401, configured to obtain a speech sequence to be recognized.

And the recognition module 402 is configured to perform speech recognition on the speech sequence through a target speech recognition model to obtain text information corresponding to the speech sequence.

The target speech recognition model is obtained by training according to the model training method described in the foregoing embodiment.

It should be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of each module and unit of the speech recognition apparatus described above may refer to the corresponding processes in the foregoing speech recognition method embodiments, and are not described herein again.

The apparatus provided by the above embodiments may be implemented in the form of a computer program, which can be run on a computer device as shown in fig. 8.

Referring to fig. 8, fig. 8 is a schematic block diagram illustrating a structure of a computer device according to an embodiment of the present disclosure. The computer device may be a server or a terminal device.

As shown in fig. 8, the computer device includes a processor, a memory, and a network interface connected by a system bus, wherein the memory may include a nonvolatile storage medium and an internal memory.

The non-volatile storage medium may store an operating system and a computer program. The computer program includes program instructions that, when executed, cause the processor to perform the model training method or the speech recognition method.

The processor provides computing and control capabilities and supports the operation of the entire computer device.

The internal memory provides an environment for running the computer program in the non-volatile storage medium; when executed by the processor, the computer program causes the processor to perform the model training method or the speech recognition method.

The network interface is used for network communication, such as sending assigned tasks. Those skilled in the art will appreciate that the structure shown in fig. 8 is merely a block diagram of some of the structures related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.

It should be understood that the Processor may be a Central Processing Unit (CPU), and the Processor may be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, etc. Wherein a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

In one embodiment, the processor is configured to execute a computer program stored in the memory to implement the following steps:

obtaining a plurality of first training samples and a plurality of second training samples, wherein each first training sample comprises a first speech sequence and a labeled first text corresponding to the first speech sequence, and each second training sample comprises a second speech sequence;

performing iterative training on a first preset speech recognition model according to the plurality of first training samples to obtain a first speech recognition model;

fusing the first speech recognition model with a preset language model to obtain a second speech recognition model;

inputting a plurality of second speech sequences into the second speech recognition model to obtain a second text and a fusion score corresponding to each second speech sequence;

screening out a target speech sequence from the plurality of second speech sequences according to the fusion score of each second speech sequence;

and performing iterative training on a second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence and the plurality of first training samples to obtain a target speech recognition model.
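The embodiment leaves the fusion rule open; one common way to realize the fusion of a speech recognition model with a language model is shallow fusion, in which each decoded hypothesis is ranked by a weighted sum of the two models' log-probabilities. The sketch below reflects that reading as an assumption; `asr_log_prob`, `lm_log_prob` and the weight `lam` are hypothetical names.

```python
# Illustrative shallow-fusion scoring; the application does not fix the fusion
# formula, so the weighted-sum rule and all names here are assumptions.

def fusion_score(asr_log_prob, lm_log_prob, lam=0.3):
    """Score one hypothesis as log P_asr(text | audio) + lam * log P_lm(text)."""
    return asr_log_prob + lam * lm_log_prob

def best_hypothesis(hypotheses, lam=0.3):
    """Return the (text, fusion score) pair with the highest fusion score.

    `hypotheses` is a list of (text, asr_log_prob, lm_log_prob) triples,
    e.g. the n-best output of a beam-search decoder rescored by the language model.
    """
    scored = [(text, fusion_score(a, l, lam)) for text, a, l in hypotheses]
    return max(scored, key=lambda pair: pair[1])
```

Under this reading, for each second speech sequence the top-scoring pair supplies both the second text and the fusion score used in the screening step.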

In one embodiment, when screening out the target speech sequence from the plurality of second speech sequences according to the fusion score of each second speech sequence, the processor is configured to implement:

filtering the plurality of second speech sequences according to a preset score threshold and the fusion score of each second speech sequence to obtain a plurality of candidate speech sequences;

and screening out a target speech sequence from the candidate speech sequences according to the probability distribution information of the first training samples.
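The threshold filter admits a very short sketch, assuming each decoded result is a (speech sequence, second text, fusion score) triple; the threshold value is illustrative, since the embodiment only specifies that it is preset.

```python
# A minimal sketch of the score-threshold filter; the triple layout and the
# threshold value are assumptions, not prescribed by the application.

def filter_by_score(decoded_results, score_threshold=-5.0):
    """Keep the candidate speech sequences whose fusion score clears the threshold."""
    return [(seq, text, score)
            for seq, text, score in decoded_results
            if score >= score_threshold]
```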

In one embodiment, the processor, when implementing the screening out the target speech sequence from the candidate speech sequences according to the probability distribution information of the first training samples, is configured to implement:

generating a plurality of speech sequence sets from the plurality of candidate speech sequences, wherein each of the speech sequence sets comprises at least one of the candidate speech sequences;

determining probability distribution information for each of the sets of speech sequences;

and selecting a target speech sequence set from the plurality of speech sequence sets according to the probability distribution information of the plurality of first training samples and the probability distribution information of each speech sequence set.

In one embodiment, when selecting the target speech sequence set from the plurality of speech sequence sets according to the probability distribution information of the plurality of first training samples and the probability distribution information of each speech sequence set, the processor is configured to implement:

calculating K-L divergence of each speech sequence set according to the probability distribution information of the plurality of first training samples and the probability distribution information of each speech sequence set;

and selecting a target speech sequence set from the plurality of speech sequence sets according to the K-L divergence of each speech sequence set.
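The K-L divergence step can be sketched as follows, under the assumption that the "probability distribution information" is a discrete distribution over a shared support (for example, token frequencies); the embodiment leaves the exact form of the distribution open.

```python
# A sketch of the K-L selection step; the discrete-distribution representation
# and the lowest-divergence selection rule are assumptions.
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """D_KL(p || q) = sum_i p_i * log(p_i / q_i) for discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def select_target_set(first_samples_dist, sequence_set_dists):
    """Return the index of the speech sequence set whose distribution is closest
    (lowest K-L divergence) to that of the first training samples -- one natural
    reading of selecting "according to the K-L divergence"."""
    divergences = [kl_divergence(d, first_samples_dist) for d in sequence_set_dists]
    return int(np.argmin(divergences))
```

Choosing the set with the lowest divergence keeps the distribution of the pseudo-labeled data close to that of the labeled first training samples, which is what makes the subsequent joint training stable.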

In an embodiment, when implementing the iterative training of the first preset speech recognition model according to the plurality of first training samples to obtain the first speech recognition model, the processor is configured to implement:

performing data enhancement on the plurality of first training samples;

and performing iterative training on the first preset speech recognition model according to the plurality of first training samples subjected to data enhancement until the first preset speech recognition model converges, to obtain the first speech recognition model.
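The embodiment does not name a specific enhancement; a widely used choice for speech features is SpecAugment-style masking, sketched below on a (time, frequency) spectrogram purely as an illustrative assumption.

```python
# An illustrative data-enhancement sketch (SpecAugment-style masking); the mask
# widths and the spectrogram layout are assumptions, not prescribed by the text.
import numpy as np

def augment_spectrogram(spec, max_time_mask=20, max_freq_mask=8, rng=None):
    """Zero out one random time band and one random frequency band of `spec`,
    a 2-D array shaped (time_frames, frequency_bins)."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    t_len, f_len = spec.shape
    t_width = int(rng.integers(0, max_time_mask + 1))
    t_start = int(rng.integers(0, max(t_len - t_width, 1)))
    spec[t_start:t_start + t_width, :] = 0.0  # time mask
    f_width = int(rng.integers(0, max_freq_mask + 1))
    f_start = int(rng.integers(0, max(f_len - f_width, 1)))
    spec[:, f_start:f_start + f_width] = 0.0  # frequency mask
    return spec
```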

In an embodiment, when the processor performs iterative training on the second preset speech recognition model according to each target speech sequence, the second text corresponding to each target speech sequence, and the plurality of first training samples to obtain a target speech recognition model, the processor is configured to:

generating a plurality of third training samples according to each target speech sequence and the second text corresponding to each target speech sequence;

obtaining a training sample set according to the plurality of third training samples and the plurality of first training samples;

and performing iterative training on the second preset speech recognition model through the training sample set until a preset condition is reached to obtain a target speech recognition model.

In one embodiment, the processor is configured to execute a computer program stored in the memory to perform the steps of:

acquiring a speech sequence to be recognized;

performing speech recognition on the speech sequence through a target speech recognition model to obtain text information corresponding to the speech sequence;

the target speech recognition model is obtained by training according to the model training method described in the foregoing embodiment.

It should be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the computer device may refer to the corresponding process in the foregoing model training method or speech recognition method embodiment, and is not described herein again.

Embodiments of the present application further provide a computer-readable storage medium storing a computer program. The computer program includes program instructions, and when the program instructions are executed, they implement the model training method or the speech recognition method described in the various embodiments of the present application.

The computer-readable storage medium may be an internal storage unit of the computer device described in the foregoing embodiment, for example, a hard disk or a memory of the computer device. The computer-readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash memory card (Flash Card) provided on the computer device.

It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.

The above-mentioned serial numbers of the embodiments of the present application are for description only and do not indicate the relative merits of the embodiments. While the application has been described with reference to specific embodiments, the scope of the application is not limited thereto; those skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope disclosed herein. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
