Model training method, system, terminal device and storage medium

Reading note: this technology, "Model training method, system, terminal device and storage medium", was created by Xu Min, Xiao Longyuan and Ye Zhijian on 2021-07-16. Its main content is as follows: The invention provides a model training method, a system, a terminal device and a storage medium, wherein the method comprises: sampling sample speech to obtain sampled speech, and labeling the sampled speech to obtain a transcribed text; segmenting the remaining sample speech to obtain segmented speech, and setting regression task labels of a self-supervised learning model according to the segmented speech; sampling the segmented speech to obtain sample pairs, and inputting the sample pairs into the self-supervised learning model for model training; training a voiceprint recognition model according to the self-supervised learning model; training a language model according to the transcribed text, and training an acoustic model according to the sampled speech and the converged self-supervised learning model; and constructing a speech recognition model according to the trained acoustic model and the language model. The invention adopts self-supervised learning to construct the speech recognition model and to train the voiceprint recognition model, so a large amount of labeled data is not needed, which reduces the data labeling workload and improves the model training efficiency.

1. A method of model training, the method comprising:

sampling sample speech to obtain sampled speech, and labeling the sampled speech to obtain a transcribed text;

segmenting the remaining sample speech to obtain segmented speech, and setting regression task labels of a self-supervised learning model according to the segmented speech;

sampling the segmented speech to obtain sample pairs, and inputting the sample pairs into the self-supervised learning model for model training until the self-supervised learning model converges;

training a voiceprint recognition model according to the sampled speech and the converged self-supervised learning model until the voiceprint recognition model converges;

training a language model according to the transcribed text, and training an acoustic model according to the sampled speech and the converged self-supervised learning model;

and constructing a speech recognition model according to the trained acoustic model and the language model.

2. The model training method of claim 1, wherein said inputting the sample pairs into the self-supervised learning model for model training comprises:

inputting the sample pairs into an encoder in the self-supervised learning model for encoding to obtain encoded data, and inputting the encoded data into a discriminator in the self-supervised learning model for discrimination;

inputting the discrimination result of the discriminator into a classifier in the self-supervised learning model for loss calculation to obtain a model loss parameter;

and updating the parameters of the encoder and the discriminator according to the model loss parameter until the encoder and the discriminator converge, and outputting the converged self-supervised learning model.

3. The model training method of claim 1, wherein the sample pairs comprise positive sample pairs and negative sample pairs, and wherein sampling the segmented speech to obtain sample pairs comprises:

sampling the segmented speech to obtain sampled segments, and setting the sampled segments as a positive sample pair when the segments sampled in the same round come from the same utterance;

and when the segments sampled in the same round come from different utterances, setting the sampled segments as a negative sample pair.

4. The model training method of claim 1, wherein the setting of the regression task labels of the self-supervised learning model according to the segmented speech comprises:

respectively extracting MFCC features, MFCC first-order difference features, MFCC second-order difference features, Fbank features, LPC features, prosodic features, time-warping features and frequency-mask features of the segmented speech;

and respectively setting the segmented speech, the MFCC features, the MFCC first-order difference features, the MFCC second-order difference features, the Fbank features, the LPC features, the prosodic features, the time-warping features and the frequency-mask features as regression task labels of the self-supervised learning model.

5. The model training method of claim 2, wherein the loss function adopted when inputting the discrimination result of the discriminator into the classifier in the self-supervised learning model for loss calculation to obtain the model loss parameter is as follows:

where Θ is the parameter of the encoder, Φ is the parameter of the discriminator, the subscript p denotes positive samples, n denotes negative samples, (x1, x2) denotes the positive sample pair, (x1, xrnd) denotes the negative sample pair, the function g denotes the output of the discriminator, and L(Θ, Φ) is the model loss parameter.

6. The model training method of claim 2, wherein said updating the parameters of said encoder and said discriminator according to said model loss parameter comprises:

calculating the partial derivatives with respect to the parameters of the encoder and the discriminator by a back-propagation algorithm;

and updating the parameters of the encoder and the discriminator with a gradient descent algorithm according to the partial derivatives, so as to maximize the model loss parameter.

7. The model training method of claim 1, wherein said segmenting the remaining sample speech to obtain segmented speech comprises:

deleting any remaining sample speech whose duration is less than a preset duration;

and segmenting the remaining sample speech at a preset time interval to obtain the segmented speech.

8. A model training system, the system comprising:

a regression task label setting module, used for sampling sample speech to obtain sampled speech, and labeling the sampled speech to obtain a transcribed text; and for segmenting the remaining sample speech to obtain segmented speech, and setting regression task labels of a self-supervised learning model according to the segmented speech;

a speech sampling module, used for sampling the segmented speech to obtain sample pairs, and inputting the sample pairs into the self-supervised learning model for model training until the self-supervised learning model converges;

a voiceprint model training module, used for training a voiceprint recognition model according to the sampled speech and the converged self-supervised learning model until the voiceprint recognition model converges;

an acoustic model training module, used for training a language model according to the transcribed text and training an acoustic model according to the sampled speech and the converged self-supervised learning model;

and a speech model training module, used for constructing a speech recognition model according to the trained acoustic model and the language model.

9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when executing the computer program.

10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.

Technical Field

The invention belongs to the field of artificial intelligence, and particularly relates to a model training method, a model training system, a terminal device and a storage medium.

Background

Voiceprint recognition and speech recognition are the two most important applications in the speech field, yet they have always been studied independently: voiceprint recognition emphasizes speaker characteristics over spoken content and is essentially a classification problem, whereas speech recognition emphasizes spoken content over speaker characteristics and must take into account the sequential relationship between the speech and the text.

In existing speech recognition and voiceprint recognition practice, the two models are constructed and trained separately, so training the speech recognition model and the voiceprint recognition model each requires a large amount of labeled data; this makes model training cumbersome and reduces model training efficiency.

Disclosure of Invention

The embodiment of the invention aims to provide a model training method, a model training system, a terminal device and a storage medium, so as to solve the problem that the existing training processes of speech recognition models and voiceprint recognition models require a large amount of labeled data, resulting in low model training efficiency.

The embodiment of the invention is realized in such a way that a model training method comprises the following steps:

sampling sample speech to obtain sampled speech, and labeling the sampled speech to obtain a transcribed text;

segmenting the remaining sample speech to obtain segmented speech, and setting regression task labels of a self-supervised learning model according to the segmented speech;

sampling the segmented speech to obtain sample pairs, and inputting the sample pairs into the self-supervised learning model for model training until the self-supervised learning model converges;

training a voiceprint recognition model according to the sampled speech and the converged self-supervised learning model until the voiceprint recognition model converges;

training a language model according to the transcribed text, and training an acoustic model according to the sampled speech and the converged self-supervised learning model;

and constructing a speech recognition model according to the trained acoustic model and the language model.

Further, the inputting of the sample pairs into the self-supervised learning model for model training comprises:

inputting the sample pairs into an encoder in the self-supervised learning model for encoding to obtain encoded data, and inputting the encoded data into a discriminator in the self-supervised learning model for discrimination;

inputting the discrimination result of the discriminator into a classifier in the self-supervised learning model for loss calculation to obtain a model loss parameter;

and updating the parameters of the encoder and the discriminator according to the model loss parameter until the encoder and the discriminator converge, and outputting the converged self-supervised learning model.

Further, the sampling of the segmented speech to obtain sample pairs comprises:

sampling the segmented speech to obtain sampled segments, and setting the sampled segments as a positive sample pair when the segments sampled in the same round come from the same utterance;

and when the segments sampled in the same round come from different utterances, setting the sampled segments as a negative sample pair.

Further, the setting of the regression task labels of the self-supervised learning model according to the segmented speech comprises:

respectively extracting MFCC features, MFCC first-order difference features, MFCC second-order difference features, Fbank features, LPC features, prosodic features, time-warping features and frequency-mask features of the segmented speech;

and respectively setting the segmented speech, the MFCC features, the MFCC first-order difference features, the MFCC second-order difference features, the Fbank features, the LPC features, the prosodic features, the time-warping features and the frequency-mask features as regression task labels of the self-supervised learning model.

Further, the loss function adopted when inputting the discrimination result of the discriminator into the classifier in the self-supervised learning model for loss calculation is as follows:

where Θ is the parameter of the encoder, Φ is the parameter of the discriminator, the subscript p denotes positive samples, n denotes negative samples, (x1, x2) denotes the positive sample pair, (x1, xrnd) denotes the negative sample pair, the function g denotes the output of the discriminator, and L(Θ, Φ) is the model loss parameter.

Further, the updating of the parameters of the encoder and the discriminator according to the model loss parameter comprises:

calculating the partial derivatives with respect to the parameters of the encoder and the discriminator by a back-propagation algorithm;

and updating the parameters of the encoder and the discriminator with a gradient descent algorithm according to the partial derivatives, so as to maximize the model loss parameter.

Further, the segmenting of the remaining sample speech to obtain segmented speech comprises:

deleting any remaining sample speech whose duration is less than a preset duration;

and segmenting the remaining sample speech at a preset time interval to obtain the segmented speech.

It is another object of an embodiment of the present invention to provide a model training system, including:

a regression task label setting module, used for sampling sample speech to obtain sampled speech, and labeling the sampled speech to obtain a transcribed text; and for segmenting the remaining sample speech to obtain segmented speech, and setting regression task labels of a self-supervised learning model according to the segmented speech;

a speech sampling module, used for sampling the segmented speech to obtain sample pairs, and inputting the sample pairs into the self-supervised learning model for model training until the self-supervised learning model converges;

a voiceprint model training module, used for training a voiceprint recognition model according to the sampled speech and the converged self-supervised learning model until the voiceprint recognition model converges;

an acoustic model training module, used for training a language model according to the transcribed text and training an acoustic model according to the sampled speech and the converged self-supervised learning model;

and a speech model training module, used for constructing a speech recognition model according to the trained acoustic model and the language model, and inputting the speech to be recognized into the speech recognition model for speech recognition to obtain a speech recognition result.

It is another object of the embodiments of the present invention to provide a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the method when executing the computer program.

It is a further object of embodiments of the present invention to provide a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the above-mentioned method steps.

The embodiment of the invention sets the regression task labels of the self-supervised learning model by segmenting the speech, which improves the robustness of the converged self-supervised learning model to noise, reverberation and distortion.

Drawings

FIG. 1 is a flow chart of a model training method provided by a first embodiment of the present invention;

FIG. 2 is a flow chart of a model training method provided by a second embodiment of the present invention;

FIG. 3 is a schematic structural diagram of a model training system according to a third embodiment of the present invention;

fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

In order to explain the technical means of the present invention, the following description will be given by way of specific examples.

Embodiment One

Referring to fig. 1, a flowchart of a model training method according to a first embodiment of the present invention is shown. The model training method is applicable to any terminal device, such as a server, a mobile phone, a tablet or a wearable smart device, and includes the steps of:

Step S10, sampling the sample speech to obtain sampled speech, and labeling the sampled speech to obtain a transcribed text;

In this embodiment, the target language is Mandarin and the language to be recognized is Minnan (Southern Min). Optionally, in this step, the sample speech further includes code-switching speech, i.e., speech in which the target language is interspersed within the language to be recognized;

In this step, the number of utterances sampled from the sample speech can be set as required. A speech recognition pronunciation dictionary is constructed, and the sampled speech is labeled against this dictionary to obtain the transcribed text corresponding to each sampled utterance;

Step S20, segmenting the remaining sample speech to obtain segmented speech, and setting regression task labels of the self-supervised learning model according to the segmented speech;

Segmenting the remaining sample speech increases the amount of data available for each subsequent model training step.

In this step, before segmenting the remaining sample speech, the method further includes: determining the duration of each remaining sample utterance, and deleting any remaining sample utterance whose duration is less than a preset duration;

Deleting samples shorter than the preset duration ensures that every retained sample carries sufficient speech information.

Further, in this step, the remaining sample speech is segmented at a preset time interval to obtain the segmented speech, where the preset time interval may be set as required, for example to 1 second, 2 seconds or 3 seconds.
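As a concrete illustration (not part of the original disclosure), the filtering-and-segmentation step could look like the following minimal Python sketch, assuming mono waveforms stored as numpy arrays; the 1-second minimum duration and 2-second interval are merely example values taken from the text:

```python
import numpy as np

def segment_speech(utterances, sr, min_dur=1.0, seg_dur=2.0):
    """Drop utterances shorter than min_dur seconds, then cut the rest
    into fixed-length segments of seg_dur seconds (example values)."""
    segments, utt_ids = [], []
    for utt_id, wav in enumerate(utterances):   # wav: 1-D numpy array of samples
        if len(wav) / sr < min_dur:
            continue                            # too short to carry enough speech information
        seg_len = int(seg_dur * sr)
        for start in range(0, len(wav) - seg_len + 1, seg_len):
            segments.append(wav[start:start + seg_len])
            utt_ids.append(utt_id)              # remember the source utterance for pairing
    return segments, utt_ids
```

Keeping track of the source utterance of each segment is what later allows positive and negative sample pairs to be distinguished.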

Optionally, in this step, the setting of the regression task labels of the self-supervised learning model according to the segmented speech includes:

respectively extracting Mel-frequency cepstral coefficient (MFCC) features, MFCC first-order difference features, MFCC second-order difference features, Fbank features, linear predictive coding (LPC) features, prosodic features, time-warping features and frequency-mask features of the segmented speech;

respectively setting the segmented speech, the MFCC features, the MFCC first-order difference features, the MFCC second-order difference features, the Fbank features, the LPC features, the prosodic features, the time-warping features and the frequency-mask features as regression task labels of the self-supervised learning model;

Setting the MFCC features, MFCC first-order difference features, MFCC second-order difference features, Fbank features, LPC features, prosodic features, time-warping features and frequency-mask features as regression task labels improves the accuracy of the self-supervised learning model training and lets the model learn the parameters needed to extract these features. Setting the segmented speech itself as a regression task label improves the robustness of the converged self-supervised learning model to noise, reverberation and distortion.
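As an illustration only, most of these regression targets map directly onto standard feature extractors. The sketch below uses librosa and assumes a floating-point waveform at 16 kHz; because the text does not specify how the prosodic, time-warping and frequency-mask targets are computed, pitch estimated with YIN stands in for prosody, a random band mask stands in for the frequency mask, and time warping is omitted:

```python
import numpy as np
import librosa

def regression_labels(seg, sr=16000):
    """Per-segment regression targets named in the text (illustrative)."""
    mfcc = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=13)
    fbank = librosa.power_to_db(librosa.feature.melspectrogram(y=seg, sr=sr))
    masked = fbank.copy()                       # frequency mask: zero a random band
    lo = np.random.randint(0, masked.shape[0] - 8)
    masked[lo:lo + 8, :] = 0.0
    return {
        "waveform": seg,                        # the segmented speech itself
        "mfcc": mfcc,
        "mfcc_delta": librosa.feature.delta(mfcc),            # first-order difference
        "mfcc_delta2": librosa.feature.delta(mfcc, order=2),  # second-order difference
        "fbank": fbank,
        "lpc": librosa.lpc(seg, order=16),      # linear prediction coefficients
        "f0": librosa.yin(seg, fmin=50, fmax=400, sr=sr),     # pitch as a prosody proxy
        "freq_mask": masked,
    }
```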

Step S30, sampling the segmented speech to obtain sample pairs, and inputting the sample pairs into the self-supervised learning model for model training until the self-supervised learning model converges;

The self-supervised learning model comprises an encoder, a discriminator and a classifier: the encoder performs feature encoding on the sample pairs input into the model, the discriminator judges whether the features encoded by the encoder come from the same speaker, and the classifier performs loss calculation on the discrimination results to obtain a model loss parameter that characterizes the parameter error of the encoder and the discriminator.

Optionally, in this step, the sample pairs comprise positive sample pairs and negative sample pairs, and the sampling of the segmented speech to obtain sample pairs includes:

sampling the segmented speech to obtain sampled segments, and setting the sampled segments as a positive sample pair when the segments sampled in the same round come from the same utterance;

In this step, each sampling round draws two segments at random from different segmented speech; when the two segments drawn in the same round come from the same utterance, they are set as a positive sample pair;

when the segments sampled in the same round come from different utterances, setting the sampled segments as a negative sample pair;

When the two segments drawn in the same round come from different utterances, they are set as a negative sample pair. Judging, within each sampling round, whether the drawn segments come from the same utterance yields well-defined positive and negative sample pairs, and the pairs so constructed improve the accuracy of the subsequent self-supervised learning model training.
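The pairing rule can be made concrete with a short sketch (an illustration, not the original implementation), reusing the per-segment source-utterance ids from the segmentation step above:

```python
import random

def sample_pair(segments, utt_ids):
    """Draw two segments at random; the pair is positive when both come
    from the same source utterance, negative otherwise."""
    i, j = random.sample(range(len(segments)), 2)
    label = 1 if utt_ids[i] == utt_ids[j] else 0   # 1 = positive pair, 0 = negative pair
    return (segments[i], segments[j]), label
```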

Further, in this step, the inputting of the sample pairs into the self-supervised learning model for model training includes:

inputting the sample pairs into an encoder in the self-supervised learning model for encoding to obtain encoded data, and inputting the encoded data into a discriminator in the self-supervised learning model for discrimination;

inputting the discrimination result of the discriminator into a classifier in the self-supervised learning model for loss calculation to obtain a model loss parameter;

and updating the parameters of the encoder and the discriminator according to the model loss parameter until the encoder and the discriminator converge, and outputting the converged self-supervised learning model.

Wherein, the loss function adopted when inputting the discrimination result of the discriminator into the classifier in the self-supervised learning model for loss calculation is as follows:

where Θ is the parameter of the encoder, Φ is the parameter of the discriminator, the subscript p denotes positive samples, n denotes negative samples, (x1, x2) denotes the positive sample pair, (x1, xrnd) denotes the negative sample pair, the function g denotes the output of the discriminator, and L(Θ, Φ) is the model loss parameter.
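The formula itself appears only as an image in the source and is not reproduced here. Purely as an assumption consistent with the symbol definitions above, a standard contrastive binary cross-entropy form would be:

$$L(\Theta, \Phi) = \mathbb{E}_{p}\big[\log g(x_1, x_2)\big] + \mathbb{E}_{n}\big[\log\big(1 - g(x_1, x_{rnd})\big)\big]$$

Under a log-likelihood form of this kind, the later instruction to update the parameters so as to maximize the model loss parameter corresponds to gradient ascent on the probability that the discriminator classifies positive and negative pairs correctly.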

Further, the step of performing model training on the sample pairs input into the self-supervised learning model includes:

inputting the sample pairs into a CNN in the self-supervised learning model, where the convolution kernels of the CNN adopt a sinc function. With the sinc function, the number of parameters in the CNN does not change with the size of the convolution kernel, so the kernels can be made larger to capture context information over a longer span; the sinc function also captures speaker characteristics well, which helps improve the voiceprint recognition effect;

inputting the output of the CNN into a 34-layer residual neural network (ResNet) in the self-supervised learning model, and inputting the output of the ResNet into three fully connected layers in the self-supervised learning model;

and taking the output of the fully connected layers as acoustic embedding features, using these features for the respective self-supervised learning tasks, and then iteratively updating all neural network parameters by back-propagation and gradient descent until the self-supervised learning model converges.
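A schematic PyTorch sketch of this encoder pipeline follows. It is an assumption-laden illustration, not the disclosed implementation: a plain Conv1d stands in for the sinc-parameterized front end, four residual blocks stand in for the full 34-layer ResNet, and all layer widths are illustrative:

```python
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """One 1-D residual block (the patent uses a full 34-layer ResNet)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))        # skip connection

class Encoder(nn.Module):
    def __init__(self, emb_dim=256):
        super().__init__()
        # A plain Conv1d stands in for the sinc-parameterized filterbank
        # described in the text; a wide kernel captures long-span context.
        self.front_end = nn.Conv1d(1, 64, kernel_size=251, stride=10, padding=125)
        self.resnet = nn.Sequential(*[ResBlock1d(64) for _ in range(4)])
        self.fc = nn.Sequential(                 # the three fully connected layers
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, wav):                      # wav: (batch, 1, samples)
        x = torch.relu(self.front_end(wav))
        x = self.resnet(x)
        return self.fc(x.mean(dim=-1))           # pool over time -> acoustic embedding
```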

Step S40, training a voiceprint recognition model according to the sampled speech and the converged self-supervised learning model until the voiceprint recognition model converges;

The converged voiceprint recognition model can then effectively perform voiceprint recognition on input voiceprint data to be recognized.

Step S50, training a language model according to the transcribed text, and training an acoustic model according to the sampled speech and the converged self-supervised learning model;

Training the language model on the transcribed text enables it to decompose the probability of an input sentence into a product of word probabilities; training the acoustic model on the sampled speech and the converged self-supervised learning model enables it to compute the pronunciation probabilities corresponding to the input characters.

Step S60, constructing a speech recognition model according to the trained acoustic model and the language model;

The speech to be recognized is input into the speech recognition model for speech recognition to obtain a speech recognition result. Because the speech recognition model is constructed from the trained acoustic model and the trained language model, it can effectively recognize the input speech and produce the corresponding recognition result.

In the embodiment of the invention, the regression task labels of the self-supervised learning model are set by segmenting the speech, which improves the robustness of the converged self-supervised learning model to noise, reverberation and distortion.

Embodiment Two

Referring to fig. 2, a flowchart of the model training method according to a second embodiment of the present invention is shown; this embodiment further refines step S30 and includes the steps of:

step S31, calculating partial differentials of the encoder and the discriminator according to a back propagation algorithm;

wherein the pairs of samples are iteratively processed by employing a back-propagation algorithm, the network prediction for each pair of samples is compared to the true result for learning, and for each pair of samples, the weights of the encoder and the discriminator are modified such that the error between the prediction and the result of the self-supervised learning model is minimized.

Step S32, updating the parameters of the encoder and the discriminator with a gradient descent algorithm according to the partial derivatives, so as to maximize the model loss parameter;

The parameters of the encoder and the discriminator may be updated with full gradient descent, stochastic gradient descent, stochastic average gradient descent or mini-batch gradient descent. The gradient descent algorithm adjusts the weight vectors in the encoder and the discriminator: a gradient is computed for each weight, and the weight is updated so as to drive the objective function toward its optimum as far as possible.
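As a sketch only, one such update can be written in PyTorch as follows; it assumes the log-likelihood loss form suggested earlier (hence gradient ascent), and the learning rate is an arbitrary illustrative value:

```python
import torch

def update_step(encoder, discriminator, loss, lr=1e-3):
    """One parameter update: back-propagate, then take a gradient step
    on every weight of the encoder and the discriminator."""
    params = [p for m in (encoder, discriminator) for p in m.parameters()]
    for p in params:
        p.grad = None                    # clear stale gradients
    loss.backward()                      # partial derivatives via back-propagation
    with torch.no_grad():
        for p in params:
            if p.grad is not None:
                p += lr * p.grad         # ascend toward larger L(Θ, Φ)
    return loss.item()
```

In practice the same step is obtained with torch.optim.SGD (or a mini-batch variant) applied to the negated objective.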

In this embodiment, by computing the partial derivatives of the encoder and the discriminator and using them together with the model loss parameter, the parameters of the encoder and the discriminator can be updated effectively until the self-supervised learning model converges, which improves the accuracy of the self-supervised learning model.

Embodiment Three

Referring to fig. 3, a schematic structural diagram of a model training system 100 according to a third embodiment of the present invention is shown, comprising: a regression task label setting module 10, a speech sampling module 11, a voiceprint model training module 12, an acoustic model training module 13 and a speech model training module 14, wherein:

The regression task label setting module 10 is used for sampling sample speech to obtain sampled speech, and labeling the sampled speech to obtain a transcribed text; and for segmenting the remaining sample speech to obtain segmented speech, and setting regression task labels of a self-supervised learning model according to the segmented speech.

Wherein, the regression task label setting module 10 is further configured to: respectively extract the MFCC features, MFCC first-order difference features, MFCC second-order difference features, Fbank features, LPC features, prosodic features, time-warping features and frequency-mask features of the segmented speech;

and respectively set the segmented speech, the MFCC features, the MFCC first-order difference features, the MFCC second-order difference features, the Fbank features, the LPC features, the prosodic features, the time-warping features and the frequency-mask features as regression task labels of the self-supervised learning model.

Further, the regression task label setting module 10 is further configured to: delete any remaining sample speech whose duration is less than a preset duration;

and segment the remaining sample speech at a preset time interval to obtain the segmented speech.

And the speech sampling module 11 is configured to sample the segmented speech to obtain sample pairs, and input the sample pairs into the self-supervised learning model for model training until the self-supervised learning model converges.

Wherein, the speech sampling module 11 is further configured to: input the sample pairs into an encoder in the self-supervised learning model for encoding to obtain encoded data, and input the encoded data into a discriminator in the self-supervised learning model for discrimination;

input the discrimination result of the discriminator into a classifier in the self-supervised learning model for loss calculation to obtain a model loss parameter;

and update the parameters of the encoder and the discriminator according to the model loss parameter until the encoder and the discriminator converge, and output the converged self-supervised learning model.

Preferably, the speech sampling module 11 is further configured to: calculate the partial derivatives with respect to the parameters of the encoder and the discriminator by a back-propagation algorithm;

and update the parameters of the encoder and the discriminator with a gradient descent algorithm according to the partial derivatives, so as to maximize the model loss parameter.

Further, the loss function adopted when inputting the discrimination result of the discriminator into the classifier in the self-supervised learning model for loss calculation is as follows:

where Θ is the parameter of the encoder, Φ is the parameter of the discriminator, the subscript p denotes positive samples, n denotes negative samples, (x1, x2) denotes the positive sample pair, (x1, xrnd) denotes the negative sample pair, the function g denotes the output of the discriminator, and L(Θ, Φ) is the model loss parameter.

Optionally, the speech sampling module 11 is further configured to: sample the segmented speech to obtain sampled segments, and set the sampled segments as a positive sample pair when the segments sampled in the same round come from the same utterance;

and when the segments sampled in the same round come from different utterances, set the sampled segments as a negative sample pair.

And the voiceprint model training module 12 is used for training a voiceprint recognition model according to the sampled speech and the converged self-supervised learning model until the voiceprint recognition model converges.

And the acoustic model training module 13 is used for training a language model according to the transcribed text and training an acoustic model according to the sampled speech and the converged self-supervised learning model.

And the speech model training module 14 is configured to construct a speech recognition model according to the trained acoustic model and the language model, and input the speech to be recognized into the speech recognition model for speech recognition to obtain a speech recognition result.

Wherein, the speech model training module 14 is further configured to construct the speech recognition model according to the trained acoustic model and the trained language model.

In the embodiment of the invention, the regression task labels of the self-supervised learning model are set by segmenting the speech, which improves the robustness of the converged self-supervised learning model to noise, reverberation and distortion.

Embodiment Four

Fig. 4 is a block diagram of a terminal device 2 according to a fourth embodiment of the present application. As shown in fig. 4, the terminal device 2 of this embodiment includes: a processor 20, a memory 21 and a computer program 22, such as a program of the model training method, stored in the memory 21 and executable on the processor 20. The processor 20, when executing the computer program 22, implements the steps of the above-mentioned embodiments of the model training method, such as S10 to S60 shown in fig. 1, or S31 to S32 shown in fig. 2. Alternatively, when the processor 20 executes the computer program 22, the functions of the units in the embodiment corresponding to fig. 3, for example the functions of units 10 to 14 shown in fig. 3, are implemented; reference is made to the relevant description of the embodiment corresponding to fig. 3, and details are not repeated here.

Illustratively, the computer program 22 may be divided into one or more units, which are stored in the memory 21 and executed by the processor 20 to accomplish the present application. The one or more units may be a series of computer program instruction segments capable of performing specific functions, which describe the execution of the computer program 22 in the terminal device 2. For example, the computer program 22 may be divided into the regression task label setting module 10, the speech sampling module 11, the voiceprint model training module 12, the acoustic model training module 13 and the speech model training module 14, each of which functions as described above.

The terminal device may include, but is not limited to, the processor 20 and the memory 21. It will be appreciated by those skilled in the art that fig. 4 is merely an example of the terminal device 2 and does not constitute a limitation of the terminal device 2, which may include more or fewer components than those shown, combine some components, or use different components; for example, the terminal device may also include input and output devices, network access devices, buses, etc.

The processor 20 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

The memory 21 may be an internal storage unit of the terminal device 2, such as a hard disk or a memory of the terminal device 2. The memory 21 may also be an external storage device of the terminal device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 2. Further, the memory 21 may also include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used for storing the computer program and other programs and data required by the terminal device. The memory 21 may also be used to temporarily store data that has been output or is to be output.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated module, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. The computer-readable storage medium may be non-volatile or volatile. Based on this understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable storage medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, computer-readable storage media do not include electrical carrier signals and telecommunication signals.

The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
