Fast voice cloning method

Document No.: 36609    Publication date: 2021-09-24

Note: This technique, a fast voice cloning method, was designed and created by 赵莉, 陈非凡, 赵瑞霞, 史嘉琪 and 许鹤馨 on 2021-06-12. Its main content is as follows: the invention relates to a fast voice cloning method comprising the following steps: step 101, acquiring acoustic features with an encoder module; step 102, synthesizing a Mel spectrogram with a synthesizer module; step 103, converting the Mel spectrogram into cloned speech with a vocoder module. The method jointly models three modules, each trained independently on a different data set. It can use current open-source data sets and produce well-cloned speech on low-performance devices, and has the advantages of low distortion, high spectral similarity and high alignment.

1. A fast voice cloning method is characterized by comprising the following steps:

step 101, acquiring acoustic characteristics by using an encoder module;

step 102, synthesizing a Mel spectrogram by using a synthesizer module;

step 103, converting the Mel spectrogram into cloned speech by using a vocoder module.

2. The fast speech cloning method of claim 1, wherein: the specific process of acquiring the acoustic features by using the encoder module in step 101 is as follows:

step 201, preprocessing a target audio file to obtain a 40-dimensional MFCC;

step 202, inputting the 40-dimensional MFCC into a 3-layer LSTM, and extracting hidden acoustic features from it;

step 203, inputting the hidden acoustic features into a fully connected layer, and classifying the acoustic features;

and step 204, scaling the classified acoustic features, and removing redundant data through a ReLU layer to make the target's acoustic features sparse.

3. The fast speech cloning method of claim 2, wherein: the acoustic features are represented by a similarity matrix, as in formula (3):

S_{ij,k} = ω·cos(e_{ij}, c_i^{(−j)}) + b  if k = i;  S_{ij,k} = ω·cos(e_{ij}, c_k) + b  otherwise   (3)

wherein the j-th utterance of the i-th speaker is defined as u_{ij} (1 ≤ i ≤ N, 1 ≤ j ≤ M), x_{ij} denotes the log-mel spectrum of u_{ij}, and e_{ij} denotes the target feature; the mean of the target features is defined as the centroid c_i of the target feature, as shown in formula (1):

c_i = (1/M) Σ_{j=1}^{M} e_{ij}   (1)

and the exclusive centroid c_i^{(−j)} is defined as formula (4):

c_i^{(−j)} = (1/(M−1)) Σ_{m=1, m≠j}^{M} e_{im}   (4)

4. The fast speech cloning method of claim 1, wherein: the specific process of synthesizing the Mel spectrogram by using the synthesizer module in step 102 is as follows:

step 301, processing the acoustic features obtained in step 101 to obtain prosody embedding;

step 302, converting the input text into character embedding (text representation);

step 303, splicing the character embedding (text representation) with the acoustic features, and then passing the result sequentially through a convolutional layer, a long short-term memory neural network layer and a location-sensitive attention (attention based on location) module to obtain a fixed-length context vector;

step 304, feeding the fixed-length context vector into an autoregressive recurrent decoder network to obtain a prediction of the Mel spectrogram;

step 305, feeding the Mel spectrogram prediction into a pre-net layer, and then feeding it together with the output of the location-sensitive attention (attention based on location) module into an LSTM layer to obtain the LSTM layer output;

step 306, combining the LSTM layer output with the fixed-length context vector, and predicting the target spectrogram through a linear projection;

and step 307, feeding the target spectrogram into a post-net layer to predict the residual, and adding the prosody embedding extracted in step 301 to the joint prediction to obtain the Mel spectrogram.

5. The fast speech cloning method of claim 1, wherein: the specific process of converting the Mel spectrogram into cloned speech by using the vocoder module in step 103 is as follows:

step 401, taking the synthesized Mel spectrogram obtained in step 102 as the input speech, and obtaining the band-split sub-band signal H(ω) through a quadrature mirror filter bank (QMF) analyzer, as shown in formula (6);

where x(·) is the input audio sequence and ω is the digital angular frequency.

Step 402, sampling the obtained sub-band signal through an LPC (linear predictive coding) structure;

and step 403, combining the sampling signals processed in the step 402 by using a quadrature mirror filter bank synthesizer, and outputting the cloned voice.

6. The fast speech cloning method of claim 5, wherein: the operation of LPC (linear predictive coding) is as shown in formula (10):

p_t = Σ_{p=1}^{P} a_p·s_{t−p}   (10)

wherein the excitation at time t is e_t, the generated audio is s_t, p_t is the linear prediction at time t, P is the order of the filter, and a_p are the filter coefficients; a_p is solved by minimizing the mean square error between the true signal and the predicted signal, as shown in formula (11):

J = E[(s_t − p_t)²] = E[(s_t − Σ_{p=1}^{P} a_p·s_{t−p})²]   (11)

7. The fast speech cloning method of claim 5, wherein: the LPC (linear predictive coding) structure includes a frame rate network and a sampling rate network.

8. The fast speech cloning method of claim 7, wherein: the GRU of the sampling rate network is calculated as shown in formula (7), in which v_i^{(·,·)} denotes the vector obtained by looking up column i of the corresponding matrix V^{(·,·)}, and GRU_B(·) is an ordinary, non-sparse GRU; U^{(·)} is the non-recurrent weight matrix of the GRU; U^{(u,s)} is defined as the columns of U^{(u)} acting on the sample embedding input s_{t−1}, from which a new embedding matrix V^{(u,s)} = U^{(u,s)}E is derived, E being the embedding matrix.

9. The fast speech cloning method of claim 7, wherein: the dual fully-connected layer of the sample rate network is defined as the following equation (8):

dualfc(x) = a_1·tanh(W_1 x) + a_2·tanh(W_2 x) + … + a_8·tanh(W_8 x)   (8)

where W_k is a weight matrix, a_k is a weight vector, tanh is the hyperbolic tangent function, and x is the input speech signal.

Technical Field

The invention belongs to the technical field of voice cloning, and particularly relates to a rapid voice cloning method.

Background

With ongoing research in phonetics, speech technology has advanced rapidly. Today's speech technology mainly comprises two broad categories: speech synthesis and speech recognition. In general, a technique that changes or adjusts the acoustic features in speech is called voice conversion. Voice cloning is the technique of processing speech by changing a speaker's acoustic characteristics, such as the frequency spectrum and formants, so that it resembles the timbre of another speaker. There are two general approaches to voice cloning. The first converts the original speech into speech with the target speaker's timbre by changing the acoustic features of the original speech to approximate those of the target speaker. The second extracts the speech features of the target speaker and then synthesizes specific speech from text.

Research on voice cloning began in the 18th century, when Kratzenstein simulated the human vocal organs and vocalization process using materials such as air bags, bellows and reeds, modifying them to produce different vowels. At the beginning of the 20th century, Bell Labs created an electronic synthesizer that could produce sound by simulating acoustic resonance. By the late 20th century, formant synthesizers built with integrated-circuit technology appeared one after another; these could construct the vocal tract as a filter and synthesize natural speech by carefully adjusting its parameters. Waveform concatenation synthesis methods followed. At the beginning of the 21st century, Doctor Liu Qing Feng characterized complex speech with auditory quantization units, and this technology occupied 80% of the Chinese speech synthesis market at that time. With the improvement of hardware computing power, voice cloning technologies based on artificial intelligence have emerged in large numbers; various neural network structures such as convolutional neural networks and long short-term memory networks can be used to train a voice cloning system, so that the prosody of speech can be adjusted more accurately, and once a mature model is trained, large amounts of manual adjustment are no longer needed.

Traditional voice cloning methods include vector quantization, hidden Markov models, Gaussian mixture models and the like. These methods suffer from over-smoothing and weak handling of speech characteristics, and the prosody and spectrogram must be adjusted manually, at high labor cost. Existing voice cloning schemes rely on large data sets and manual prosody adjustment, which is demanding, time-consuming and labor-intensive. At the same time, high-quality open-source Chinese speech data is scarce, and much speech data is monopolized by companies such as iFLYTEK (科大讯飞).

Disclosure of Invention

In order to overcome the defects of existing voice cloning technology, the invention provides a fast voice cloning method which can use current open-source data sets and achieve good results on low-performance devices.

The invention relates to a rapid voice cloning method, which comprises the following steps:

step 101, acquiring acoustic characteristics by using an encoder module;

step 102, synthesizing a Mel spectrogram by using a synthesizer module;

step 103, converting the Mel spectrogram into cloned speech by using a vocoder module.

Further, in step 101, the specific process of acquiring the acoustic features by using the encoder module is as follows:

step 201, preprocessing a target audio file to obtain a 40-dimensional MFCC;

step 202, inputting the 40-dimensional MFCC into a 3-layer LSTM, and extracting hidden acoustic features from it;

step 203, inputting the hidden acoustic features into a fully connected layer, and classifying the acoustic features;

and step 204, scaling the classified acoustic features, and removing redundant data through a ReLU layer to make the target's acoustic features sparse.

Further, the acoustic features are represented by a similarity matrix, as in formula (3):

S_{ij,k} = ω·cos(e_{ij}, c_i^{(−j)}) + b  if k = i;  S_{ij,k} = ω·cos(e_{ij}, c_k) + b  otherwise   (3)

wherein the j-th utterance of the i-th speaker is defined as u_{ij} (1 ≤ i ≤ N, 1 ≤ j ≤ M), x_{ij} denotes the log-mel spectrum of u_{ij}, and e_{ij} denotes the target feature; the mean of the target features is defined as the centroid c_i of the target feature, as shown in formula (1):

c_i = (1/M) Σ_{j=1}^{M} e_{ij}   (1)

and the exclusive centroid c_i^{(−j)} is defined as formula (4):

c_i^{(−j)} = (1/(M−1)) Σ_{m=1, m≠j}^{M} e_{im}   (4)

further, the specific process of synthesizing the mel spectrum by using the synthesizer module in the step 102 is as follows:

step 301, processing the acoustic features obtained in step 101 to obtain prosody embedding;

step 302, converting the input text into character embedding (text representation);

step 303, splicing the character embedding (text representation) with the acoustic features, and then passing the result sequentially through a convolutional layer, a long short-term memory neural network layer and a location-sensitive attention (attention based on location) module to obtain a fixed-length context vector;

step 304, feeding the fixed-length context vector into an autoregressive recurrent decoder network to obtain a prediction of the Mel spectrogram;

step 305, feeding the Mel spectrogram prediction into a pre-net layer, and then feeding it together with the output of the location-sensitive attention (attention based on location) module into an LSTM layer to obtain the LSTM layer output;

step 306, combining the LSTM layer output with the fixed-length context vector, and predicting the target spectrogram through a linear projection;

and step 307, feeding the target spectrogram into a post-net layer to predict the residual, and adding the prosody embedding extracted in step 301 to the joint prediction to obtain the Mel spectrogram.

Further, the specific process of converting the Mel spectrogram into cloned speech by using the vocoder module in step 103 is as follows:

step 401, taking the synthesized Mel spectrogram obtained in step 102 as the input speech, and obtaining the band-split sub-band signal H(ω) through a quadrature mirror filter bank (QMF) analyzer, as shown in formula (6);

where x(·) is the input audio sequence and ω is the digital angular frequency.

Step 402, sampling the obtained sub-band signal through an LPC (linear predictive coding) structure;

and step 403, combining the sampling signals processed in the step 402 by using a quadrature mirror filter bank synthesizer, and outputting the cloned voice.

Further, the operation of LPC (linear predictive coding) is as shown in formula (10):

p_t = Σ_{p=1}^{P} a_p·s_{t−p}   (10)

wherein the excitation at time t is e_t, the generated audio is s_t, p_t is the linear prediction at time t, P is the order of the filter, and a_p are the filter coefficients; a_p is solved by minimizing the mean square error between the true signal and the predicted signal, as shown in formula (11):

J = E[(s_t − p_t)²] = E[(s_t − Σ_{p=1}^{P} a_p·s_{t−p})²]   (11)

further, the LPC (linear predictive coding) includes a frame rate network and a sampling rate network.

Further, the GRU of the sampling rate network is calculated as shown in formula (7), in which v_i^{(·,·)} denotes the vector obtained by looking up column i of the corresponding matrix V^{(·,·)}, and GRU_B(·) is an ordinary, non-sparse GRU; U^{(·)} is the non-recurrent weight matrix of the GRU; U^{(u,s)} is defined as the columns of U^{(u)} acting on the sample embedding input s_{t−1}, from which a new embedding matrix V^{(u,s)} = U^{(u,s)}E is derived, E being the embedding matrix.

Further, the dual fully-connected layer of the sampling rate network is defined as the following equation (8):

dualfc(x) = a_1·tanh(W_1 x) + a_2·tanh(W_2 x) + … + a_8·tanh(W_8 x)   (8)

where W_k is a weight matrix, a_k is a weight vector, tanh is the hyperbolic tangent function, and x is the input speech signal.

The invention has the beneficial effects that: the fast voice cloning method provided by the invention jointly models three modules, each trained independently on a different data set. The method can use current open-source data sets and produce well-cloned speech on low-performance devices, and has the advantages of low distortion, high spectral similarity and high alignment.

The present invention will be described in further detail below with reference to the accompanying drawings.

Drawings

Fig. 1 is a schematic diagram of a system architecture.

Fig. 2 is a schematic diagram of an Encoder network structure.

FIG. 3 is a schematic diagram of prosody extraction.

Fig. 4 is a diagram of a synthesizer network architecture.

Fig. 5 is a schematic diagram of the overall architecture of Vocoder.

Fig. 6 is a schematic diagram of the LPCNet network structure.

Fig. 7 is a schematic diagram of noise injection during training.

FIG. 8 is a schematic diagram of the MFCCs of the male original speech and the cloned speech.

Fig. 9 is a comparison and alignment chart of male speech spectrograms.

FIG. 10 is a schematic diagram of female original speech and synthesized speech.

Fig. 11 is a comparison and alignment chart of female voice spectrogram.

FIG. 12 is a schematic diagram comparing the present method with the prior art method.

Detailed Description

To further explain the technical means and effects of the present invention adopted to achieve the intended purpose, the following detailed description of the embodiments, structural features and effects of the present invention will be made with reference to the accompanying drawings and examples.

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In the description of the present invention, it is to be understood that the terms "center", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "aligned", "overlapping", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.

The terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature; in the description of the present invention, "a plurality" means two or more unless otherwise specified.

Example 1

The embodiment provides a fast voice cloning method as shown in fig. 1 to 6, which includes the following steps:

step 101, acquiring acoustic characteristics by using an encoder module;

step 102, synthesizing a Mel spectrogram by using a synthesizer module;

step 103, converting the Mel spectrogram into clone voice by using a vocoder module.

The scheme is divided into 3 modules of encoder, synthesizer and vocoder, as shown in fig. 1.

Wherein, the encoder module converts the speaker's voice into a speaker embedding (which can be understood as the acoustic features). The synthesizer module synthesizes the speaker embedding and the character embedding (text representation) converted from the input text into a Mel spectrogram. The vocoder module converts the Mel spectrogram into a waveform.
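To make this three-module flow concrete, a minimal sketch follows; the module objects and their call signatures are hypothetical placeholders for illustration, not the patent's actual interfaces.

import torch

def clone_voice(reference_wav, text_ids, encoder, synthesizer, vocoder):
    """Sketch of the fig. 1 pipeline. encoder, synthesizer and vocoder are assumed to be
    already-trained torch.nn.Module objects with the (hypothetical) interfaces shown."""
    with torch.no_grad():
        speaker_emb = encoder(reference_wav)      # step 101: acoustic features (speaker embedding)
        mel = synthesizer(text_ids, speaker_emb)  # step 102: Mel spectrogram for the input text
        waveform = vocoder(mel)                   # step 103: cloned speech waveform
    return waveform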

Further, in step 101, the specific process of acquiring the acoustic features by using the encoder module is as follows:

step 201, preprocessing a target audio file to obtain a 40-dimensional MFCC;

step 202, inputting the 40-dimensional MFCC into a 3-layer LSTM, and extracting hidden acoustic features from it;

step 203, inputting the hidden acoustic features into a fully connected layer, classifying the acoustic features, and grouping the acoustic features of the same person into one class;

step 204, scaling the classified acoustic features with L2 normalization, and removing a large amount of redundant data through a ReLU layer to make the target's acoustic features sparse, so that the extracted acoustic features are easier to interpret, as shown in fig. 2.
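A minimal PyTorch sketch of steps 201-204 follows; the hidden size and embedding size are illustrative assumptions rather than values taken from the patent.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """Sketch of steps 201-204: 3-layer LSTM over 40-dimensional MFCC frames, a fully
    connected projection, ReLU sparsification and L2 scaling of the speaker embedding."""
    def __init__(self, mfcc_dim=40, hidden=256, emb_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(mfcc_dim, hidden, num_layers=3, batch_first=True)
        self.proj = nn.Linear(hidden, emb_dim)

    def forward(self, mfcc):                # mfcc: (batch, frames, 40)
        _, (h, _) = self.lstm(mfcc)         # h: (3, batch, hidden); take the last layer
        e = F.relu(self.proj(h[-1]))        # ReLU discards redundant (negative) components
        return F.normalize(e, p=2, dim=1)   # L2 scaling -> unit-norm acoustic feature vector

For example, SpeakerEncoder()(torch.randn(4, 120, 40)) returns a (4, 256) batch of speaker embeddings.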

The encoder derives a speaker's unique acoustic features from the input speech, and the model is trained accordingly. The method needs to learn the acoustic parameters of different speakers, and can accurately output the acoustic characteristics of the target speaker from only a few seconds of that speaker's voice, even when the voice is unclear and contains some background noise.

To improve the encoder's ability to learn embeddings, it is trained on the speaker verification task. Speaker verification determines whether different voices were spoken by the same person; it can also be understood as determining which speaker a voice belongs to. A segment of speech is input to the model, its voice features are extracted and compared with other known features, and if the similarity exceeds a given threshold, the features are placed in the corresponding feature library. If they do not match any of the known features, a new identity is created. Speech uttered by the same person is highly correlated even if the content differs, whereas the same semantic content from different speakers is not correlated. The present scheme simulates this process and uses the GE2E loss function to optimize the model.

Assume there is a set of speech data grouped by speaker. The j-th utterance of the i-th speaker is defined as u_{ij} (1 ≤ i ≤ N, 1 ≤ j ≤ M), and x_{ij} denotes the log-mel spectrum of u_{ij}, from which speech features can be extracted from the waveform. e_{ij} denotes the target feature, and the mean of the target features is defined as the centroid c_i of the target feature, as shown in formula (1):

c_i = (1/M) Σ_{j=1}^{M} e_{ij}   (1)

By comparing every embedding e_{ij} with every speaker centroid c_k (1 ≤ k ≤ N), the similarity matrix S_{ij,k} is constructed, see formula (2):

S_{ij,k} = ω·cos(e_{ij}, c_k) + b = ω·(e_{ij}·c_k)/(‖e_{ij}‖₂‖c_k‖₂) + b   (2)

where ω and b are learnable parameters. When the feature data of the input audio matches the speaker, the model is expected to output a high similarity value, and a lower value when they do not match. The mapping relation between the speech and the acoustic features is analyzed and judged through the similarity matrix, thereby improving the accuracy of the extracted acoustic features.

When the loss is calculated, each speech embedding e_{ij} is compared with every speaker centroid c_k, including the centroid of the speaker to which the utterance itself belongs, which would affect the computation of the loss. To prevent this from interfering with the loss calculation, the utterance being compared is removed from its own speaker's embedding data. The acoustic features are then represented by the similarity matrix of formula (3):

S_{ij,k} = ω·cos(e_{ij}, c_i^{(−j)}) + b  if k = i;  S_{ij,k} = ω·cos(e_{ij}, c_k) + b  otherwise   (3)

When the utterance belongs to the speaker being compared (i = k), the exclusive embedding is used in place of the ordinary centroid, so as to avoid the influence of the utterance itself on training. The exclusive centroid c_i^{(−j)} is defined as formula (4):

c_i^{(−j)} = (1/(M−1)) Σ_{m=1, m≠j}^{M} e_{im}   (4)

the loss function of GE2E includes both softmax and const, the softmax loss function is shown in equation 5-a, and the const loss function is shown in equation 5-b.

wherein 1 ≤ i, k ≤ N, 1 ≤ j ≤ M, and σ(·) is the sigmoid function. During training, the similarity between a verification sample and its own speaker's centroid gradually approaches 1, while its similarity to the centroids of other speakers approaches 0. GE2E thus completes the speaker classification task better and improves the encoder's ability to capture acoustic features.
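As an illustration of formulas (1)-(5-a), a minimal PyTorch sketch of the GE2E similarity matrix and softmax loss follows; the tensor layout and the initial ω, b values are assumptions for illustration.

import torch
import torch.nn.functional as F

def ge2e_softmax_loss(emb, w=10.0, b=-5.0):
    """emb: (N, M, D) embeddings for N speakers with M utterances each.
    Builds the similarity matrix of formulas (1)-(4) and the softmax loss of (5-a)."""
    N, M, D = emb.shape
    e = F.normalize(emb, dim=-1)
    centroids = F.normalize(e.mean(dim=1), dim=-1)                           # c_i, formula (1)
    excl = F.normalize((e.sum(dim=1, keepdim=True) - e) / (M - 1), dim=-1)   # c_i^(-j), formula (4)
    sim_all = w * torch.einsum('nmd,kd->nmk', e, centroids) + b              # w*cos(e_ij, c_k)+b, formula (2)
    sim_own = w * (e * excl).sum(dim=-1) + b                                 # exclusive term of formula (3)
    same = torch.eye(N, dtype=torch.bool).unsqueeze(1).expand(N, M, N)       # positions where k == i
    sim = torch.where(same, sim_own.unsqueeze(-1).expand(N, M, N), sim_all)  # formula (3)
    target = torch.arange(N).repeat_interleave(M)                            # true speaker of each utterance
    return F.cross_entropy(sim.reshape(N * M, N), target)                    # softmax loss, formula (5-a)

For example, ge2e_softmax_loss(torch.randn(8, 10, 256)) returns a scalar loss for 8 speakers with 10 utterances each.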

Further, the specific process of synthesizing the mel spectrum by using the synthesizer module in the step 102 is as follows:

step 301, processing the acoustic features obtained in step 101 to obtain prosody embedding;

While many signal-processing algorithms today can control explicit variables of speech, such as the pitch contour and voicing decisions, thereby avoiding the effect of entangled text and speaker information on the resulting speech, they only process the F0 pitch and V/UV (voiced/unvoiced) information and therefore can only control these two features well. Some properties of speech are hard to represent explicitly, and the audio must instead be controlled through latent variables, which can only be learned with deep-learning methods. One latent variable is the style label, from which a style embedding can be learned. Another latent variable is the alignment between the text and the Mel spectrogram, through which the rhythm of the audio can be controlled. Therefore, in order to learn these latent variables, before training the synthesizer the speaker embedding output by the encoder module of the previous stage is first processed, and a prosody embedding is extracted from it, containing information such as the F0 fundamental frequency and the pitch contour, as shown in fig. 3.

The prosody extraction network consists of two-dimensional convolutional layers and a ReLU layer; each convolutional layer consists of 32 filters with a 3 × 3 kernel and a 1 × 1 stride. The output is passed through a flatten layer to make it one-dimensional, the frame-level feature sequence is converted into a word-level feature sequence using average pooling, and two linear layers project it into a three-dimensional latent space. The vector-quantization codebook consists of 256 codewords and is used to quantize each three-dimensional latent vector to the nearest codeword under the L2 distance. These prosody tags are passed to a linear layer, resulting in the prosody embedding.
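A minimal sketch of the prosody extraction network described above follows; the number of convolutional layers, the pooling window, the collapsing of the frequency axis and the prosody-embedding size are assumptions for illustration.

import torch
import torch.nn as nn

class ProsodyExtractor(nn.Module):
    """Sketch of fig. 3: 2-D conv layers (32 filters, 3x3 kernel, stride 1), flattening,
    average pooling to a coarser sequence, two linear layers into a 3-D latent space,
    and vector quantization against a 256-codeword codebook under the L2 distance."""
    def __init__(self, pool=8, emb_dim=128):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=1, padding=1), nn.ReLU())
        self.pool = nn.AvgPool1d(pool)                       # frame-level -> word-level (assumed window)
        self.to_latent = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 3))
        self.codebook = nn.Parameter(torch.randn(256, 3))    # 256 codewords
        self.out = nn.Linear(3, emb_dim)                     # prosody embedding

    def forward(self, mel):                                  # mel: (batch, frames, n_mels)
        h = self.convs(mel.unsqueeze(1)).mean(dim=3)         # (batch, 32, frames): collapse freq axis
        h = self.pool(h).transpose(1, 2)                     # (batch, frames/pool, 32)
        z = self.to_latent(h)                                # 3-D latent vectors
        d = ((z.unsqueeze(2) - self.codebook.view(1, 1, 256, 3)) ** 2).sum(-1)  # squared L2 distances
        q = self.codebook[d.argmin(dim=-1)]                  # nearest-codeword quantization
        return self.out(q)                                   # prosody embedding sequence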

Step 302, converting the input text into character embedding (text representation);

step 303, splicing the character embedding (text representation) with the acoustic features, and then passing the result sequentially through a convolutional layer, a long short-term memory neural network layer and a location-sensitive attention (attention based on location) module to obtain a fixed-length context vector;

step 304, feeding the fixed-length context vector into an autoregressive recurrent decoder network to obtain a prediction of the Mel spectrogram;

step 305, feeding the Mel spectrogram prediction into a pre-net layer, and then feeding it together with the output of the location-sensitive attention (attention based on location) module into an LSTM layer to obtain the LSTM layer output;

step 306, combining the LSTM layer output with the fixed-length context vector, and predicting the target spectrogram through a linear projection;

and step 307, feeding the target spectrogram into a post-net layer to predict the residual, and adding the prosody embedding extracted in step 301 to the joint prediction to obtain the Mel spectrogram.

The input to the synthesizer is the text and the extracted speaker embedding. The text is first converted into a character embedding by an encoder, then spliced with the speaker embedding, and passed through 3 convolutional layers and a long short-term memory neural network layer. It then enters the location-sensitive attention module, which converts the encoded sequence into a fixed-length context vector using the weights obtained while decoding the text and audio; this prevents the generated audio from being too long or too short, and keeps the model from repeating or skipping parts of the generated audio sequence. The decoder network that follows is an autoregressive recurrent network used to predict the Mel spectrogram: the prediction result of each step enters the pre-net layer and then, together with the output of the attention module, enters the LSTM layer; the LSTM output and the attention context vector are combined and the target spectrogram is predicted through a linear projection; the prediction then enters the post-net layer to predict the residual, and the final Mel spectrogram is obtained by adding the prosody embedding extracted from the speaker embedding to the joint prediction.

While performing the Mel spectrogram prediction, the predicted sequence and the attention context vector also enter a projection layer whose output is passed to a sigmoid activation function; this judges how complete the currently predicted Mel spectrogram sequence is, and if it is complete, subsequent spectrum generation stops. The synthesizer network is shown in fig. 4.
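To make the data flow of steps 302-307 concrete, a heavily reduced PyTorch sketch follows; all layer sizes are illustrative, a plain dot-product attention stands in for the location-sensitive attention, the decoder is collapsed to a single LSTM cell, and the stop-token branch is omitted.

import torch
import torch.nn as nn

class MiniSynthesizer(nn.Module):
    """Reduced sketch of the synthesizer data flow of fig. 4 (steps 302-307)."""
    def __init__(self, n_chars=100, emb=128, spk=256, n_mels=80):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, emb)                   # step 302: character embedding
        self.enc_conv = nn.Conv1d(emb + spk, 256, 5, padding=2)      # step 303: conv layer(s)
        self.enc_lstm = nn.LSTM(256, 128, batch_first=True, bidirectional=True)
        self.prenet = nn.Sequential(nn.Linear(n_mels, 128), nn.ReLU())       # step 305: pre-net
        self.dec_cell = nn.LSTMCell(128 + 256, 256)                          # decoder LSTM
        self.mel_proj = nn.Linear(256 + 256, n_mels)                         # step 306: linear projection
        self.postnet = nn.Conv1d(n_mels, n_mels, 5, padding=2)               # step 307: post-net residual
        self.pros_proj = nn.Linear(128, n_mels)                              # prosody embedding projection

    def forward(self, chars, spk_emb, prosody, n_frames=200):
        # step 303: splice character embedding with the speaker embedding, then encode
        x = torch.cat([self.char_emb(chars),
                       spk_emb.unsqueeze(1).expand(-1, chars.size(1), -1)], dim=-1)
        memory, _ = self.enc_lstm(self.enc_conv(x.transpose(1, 2)).transpose(1, 2))
        B = chars.size(0)
        mel_prev = torch.zeros(B, self.mel_proj.out_features)
        h, c = torch.zeros(B, 256), torch.zeros(B, 256)
        frames = []
        for _ in range(n_frames):                                     # step 304: autoregressive loop
            att = torch.softmax(torch.bmm(h.unsqueeze(1), memory.transpose(1, 2)), dim=-1)
            ctx = torch.bmm(att, memory).squeeze(1)                   # fixed-length context vector
            h, c = self.dec_cell(torch.cat([self.prenet(mel_prev), ctx], dim=-1), (h, c))  # step 305
            mel_prev = self.mel_proj(torch.cat([h, ctx], dim=-1))     # step 306
            frames.append(mel_prev)
        mel = torch.stack(frames, dim=1)                              # (B, T, n_mels)
        residual = self.postnet(mel.transpose(1, 2)).transpose(1, 2)  # step 307: residual prediction
        return mel + residual + self.pros_proj(prosody).unsqueeze(1)  # add prosody embedding

For example, MiniSynthesizer()(torch.randint(0, 100, (2, 30)), torch.randn(2, 256), torch.randn(2, 128)) returns a (2, 200, 80) Mel spectrogram batch.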

The vocoder part of existing speech synthesis systems generally uses WaveNet, which has high naturalness and fidelity. It makes no a priori assumptions about speech, but instead learns distributions from data with a neural network and generates speech through a sampling process. Its speech quality is better than all of the previously common parameter-based vocoders, but it generates speech slowly, because the convolutional layers needed to obtain a sufficiently large receptive field are very complex. It is therefore unsuitable for scenarios that require fast speech generation. The vocoder of the present scheme improves on WaveRNN and adds an LPC (linear predictive coding) structure.

In WaveRNN, the model directly predicts the sample points, and the whole process is autoregressive. The vocoder of the present scheme uses the neural network only to predict the sound source (excitation), while the filter part is computed with digital-signal-processing methods, which simplifies the task and further improves the network's efficiency. Fig. 5 shows the general structure of the model.

Further, the specific process of converting the Mel spectrogram into cloned speech by using the vocoder module in step 103 is as follows:

step 401, taking the synthesized Mel spectrogram obtained in step 102 as the input speech, and obtaining the band-split sub-band signal H(ω) through a quadrature mirror filter bank (QMF) analyzer, as shown in formula (6);

where x(·) is the input audio sequence and ω is the digital angular frequency.

Step 402, sampling the obtained sub-band signal through an LPC (linear predictive coding) structure;

and step 403, combining the sampling signals processed in the step 402 by using a quadrature mirror filter bank synthesizer, and outputting the cloned voice.

The scheme combines a multi-band strategy and a multi-time strategy to further reduce the overall computational complexity, to about 1.0 GFLOPS. The original speech signal is divided into 4 sub-bands by the QMF filter, and each sub-band is then downsampled by a factor of 4; on one hand, no information in the original signal is lost, and on the other hand, the 4 downsampled sub-band signals are predicted simultaneously, so the number of computations is only one quarter of that of direct computation. QMF is a low-cost filter bank, and the cost of reconstructing the original signal from the sub-band signals is far smaller than the computation it saves. The multi-band strategy improves the efficiency of the LPCNet in the frequency domain, while the multi-time strategy considers two adjacent sample points within a sub-band signal; predicting adjacent points in the 4 sub-bands simultaneously greatly improves the network's speed. The LPCNet network structure is shown in fig. 6; in fig. 6, the left side is the frame rate network and the right side is the sampling rate network. The synthesis input is limited to 16 mel-frequency cepstral coefficients and 2 pitch parameters. For low-bit-rate coding applications, these features need to be quantized.

The input to the model is the processed 16-dimensional acoustic features. The frame rate network consists of two 3 × 1 convolutional layers and two fully connected layers; it converts the input acoustic features into a condition vector f and outputs it to the sampling rate network. The vector f remains constant for the duration of each frame. In forward propagation, the remaining sampling rate network layers are shared, except for the 8 dual fully connected layers. The audio excitation, the audio samples at the adjacent previous times, and the predictions for the previous and current times are used as the input of GRU_A.
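As an illustration of the frame rate network just described (two 3 × 1 convolutions followed by two fully connected layers producing the per-frame condition vector f), a minimal sketch follows; the channel width and the tanh activations are assumptions.

import torch
import torch.nn as nn

class FrameRateNetwork(nn.Module):
    """Sketch: per-frame acoustic features -> two 3x1 convolutions -> two fully
    connected layers -> condition vector f (held constant for the whole frame)."""
    def __init__(self, n_feats=16, width=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_feats, width, 3, padding=1), nn.Tanh(),
            nn.Conv1d(width, width, 3, padding=1), nn.Tanh())
        self.fc = nn.Sequential(nn.Linear(width, width), nn.Tanh(),
                                nn.Linear(width, width), nn.Tanh())

    def forward(self, feats):                            # feats: (batch, frames, n_feats)
        h = self.conv(feats.transpose(1, 2)).transpose(1, 2)
        return self.fc(h)                                # f: (batch, frames, width)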

(1) GRU calculation

When the trained model is actually used, the weights and other parameters are already fixed, and the largest cost lies in the GRU link of the sampling rate network. It is only necessary to first convert the embedding into 128 dimensions and then store the matrix products of the 256 possible embeddings with the non-recurrent GRU weights, so that this part of the computation can be completed through a lookup table at synthesis time. U^{(·)} denotes the non-recurrent weight matrix of the GRU; let U^{(u,s)} be the columns of U^{(u)} that act on the sample embedding input s_{t−1}, and derive a new embedding matrix V^{(u,s)} = U^{(u,s)}E, where E is the embedding matrix; this maps a sample s_{t−1} directly to the non-recurrent term of the update-gate calculation. The same conversion applies to all gates (u, r, h) and all embedding inputs (s, p, e), for a total of 9 pre-calculated V^{(·,·)} matrices. In this way, the embedding contribution reduces to a sum over each gate and each embedding. Like the embeddings, the frame conditioning vector f is constant over a frame and can also be simplified: g^{(·)} = U^{(·)}f can be computed for each GRU gate and the results tabulated for faster operation.
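The pre-computation described above can be illustrated in a few lines of linear algebra; the matrix sizes and the orientation of the embedding matrix E are assumptions.

import numpy as np

rng = np.random.default_rng(0)
emb_dim, hidden, levels = 128, 384, 256           # illustrative sizes
E = rng.standard_normal((levels, emb_dim))        # embedding matrix: one row per mu-law level
U_us = rng.standard_normal((hidden, emb_dim))     # U^(u,s): columns of U^(u) acting on the sample embedding

V_us = U_us @ E.T                                 # V^(u,s) = U^(u,s)E: (hidden, 256), computed once offline

s_prev = 173                                      # previous mu-law sample index s_{t-1}
lookup = V_us[:, s_prev]                          # contribution obtained by a table lookup at synthesis time
direct = U_us @ E[s_prev]                         # the same value computed the slow way
assert np.allclose(lookup, direct)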

The above simplification essentially makes the computational cost of all non-recurrent inputs of the GRU negligible. The GRU of the sampling rate network is then calculated as shown in formula (7).

In formula (7), v_i^{(·,·)} denotes the vector obtained by looking up column i of the corresponding matrix V^{(·,·)}, and GRU_B(·) is an ordinary, non-sparse GRU. Meanwhile, the GRU can be simplified with sparse matrices: only the non-zero elements are stored and processed, and the large number of useless zero elements is discarded, reducing the storage occupied by the data. The reduced data volume also reduces the amount of computation correspondingly. The sparse matrix uses 16 × 1 sparse blocks rather than thinning out individual elements, which would hinder effective vectorization. In addition to the non-zero elements kept by default when sparsifying, the diagonal terms, which are easy to vectorize, are also retained, which reduces complexity while preserving more acoustic features.
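A small sketch of the 16 × 1 block-sparse layout with retained diagonal described above; the block density is an arbitrary illustrative value.

import numpy as np

def block_sparse_mask(shape, block=(16, 1), density=0.1, rng=None):
    """Keep or drop whole 16x1 blocks of a recurrent weight matrix (rather than
    individual elements), and always retain the diagonal terms."""
    rng = rng or np.random.default_rng(0)
    rows, cols = shape
    keep = (rng.random((rows // block[0], cols // block[1])) < density).astype(float)
    mask = np.kron(keep, np.ones(block))                          # expand per-block decisions
    idx = np.arange(min(rows, cols))
    mask[idx, idx] = 1.0                                          # keep the diagonal
    return mask

W = np.random.randn(384, 384)                                     # e.g. a GRU recurrent weight matrix
W_sparse = W * block_sparse_mask(W.shape)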

(2) Output layer

The output of GRU_B is sent to 8 independent dual fully connected layers (DualFC) to predict the sub-band excitations at adjacent times. Since direct computation of the 8 layers causes considerable overhead, the eight fully connected layers are combined using an element-wise weighted sum. The dual fully connected layer of the sampling rate network is defined as formula (8):

dualfc(x) = a_1·tanh(W_1 x) + a_2·tanh(W_2 x) + … + a_8·tanh(W_8 x)   (8)

where W_k is a weight matrix, a_k is a weight vector, tanh is the hyperbolic tangent function, and x is the input speech signal. The output layer determines whether a value falls in a given μ-law quantization interval; its output is passed through a softmax activation to calculate the probability p(e_t) of each possible excitation value e_t.
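A minimal sketch of the dual fully connected output layer of formula (8) followed by the softmax over μ-law excitation values; the input width and the parameter initialization are assumptions.

import torch
import torch.nn as nn

class DualFC(nn.Module):
    """dualfc(x) = sum_k a_k * tanh(W_k x), k = 1..8, as in formula (8); the result is
    passed through a softmax to give P(e_t) over the 256 mu-law excitation values."""
    def __init__(self, in_dim=16, n_levels=256, k=8):
        super().__init__()
        self.W = nn.Parameter(torch.randn(k, n_levels, in_dim) * 0.01)
        self.a = nn.Parameter(torch.ones(k, n_levels))

    def forward(self, x):                                          # x: (batch, in_dim)
        z = torch.tanh(torch.einsum('knd,bd->bkn', self.W, x))     # tanh(W_k x) for each k
        logits = (self.a.unsqueeze(0) * z).sum(dim=1)              # a_k-weighted elementwise sum
        return torch.softmax(logits, dim=-1)                       # P(e_t)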

(3) Linear prediction

With this design, the audio at adjacent times can be generated recursively. The excitation at time t is e_t, the generated audio is s_t, and the generated prediction is p_t; the recursion is shown in formula (9):

s_t = e_t + p_t   (9)

p_{t+1} = lpc(s_{t−15} : s_t)

s_{t+1} = e_{t+1} + p_{t+1}

wherein the operation of the LPC (linear predictive coding) is as shown in formula (10):

p_t = Σ_{p=1}^{P} a_p·s_{t−p}   (10)

wherein the excitation at time t is e_t, the generated audio is s_t, p_t is the linear prediction at time t, P is the order of the filter, and a_p are the filter coefficients; a_p is solved by minimizing the mean square error between the true signal and the predicted signal, as shown in formula (11):

J = E[(s_t − p_t)²] = E[(s_t − Σ_{p=1}^{P} a_p·s_{t−p})²]   (11)

calculating the partial derivative of J for each filter coefficient and making its value equal to 0 can be given by equation (12):

wherein 1 ≤ u ≤ P; substituting u = 1, 2, …, P into formula (12) and combining the resulting equations yields a system of equations. The Levinson-Durbin algorithm is used to solve this system and obtain the predictor. Computing the predictor from the cepstrum ensures that no additional information needs to be transmitted or synthesized.
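A minimal sketch of formulas (10)-(12) solved with the Levinson-Durbin recursion mentioned above; the autocorrelation estimate and the absence of windowing are simplifying assumptions.

import numpy as np

def lpc_coefficients(signal, order=16):
    """Solve for the coefficients a_p that minimize the mean squared error between the
    true samples s_t and the prediction p_t = sum_{p=1..P} a_p * s_{t-p} (formulas
    (10)-(12)), using autocorrelations and the Levinson-Durbin recursion."""
    s = np.asarray(signal, dtype=np.float64)
    r = np.array([np.dot(s[:len(s) - k], s[k:]) for k in range(order + 1)])  # autocorrelation R(0..P)
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err          # reflection coefficient
        a_new = a.copy()
        a_new[1:i] = a[1:i] + k * a[i - 1:0:-1]
        a_new[i] = k
        a, err = a_new, err * (1.0 - k * k)
    return -a[1:]                                                  # a_p with p_t = sum_p a_p * s_{t-p}

def lpc_predict(history, a):
    """Prediction p_t of formula (10) from the most recent P samples (latest last)."""
    P = len(a)
    return float(np.dot(a, history[-1:-P - 1:-1]))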

(4) Noise injection

When actually synthesizing speech, the input speech of the target speaker often contains a certain amount of noise, unlike the high-quality, noise-free speech in the data set. If a model is trained directly on a high-quality, noise-free speech data set, then at inference time noisy input speech makes it harder to extract acoustic features and to generate speech with the same timbre, reducing the effect. Therefore, to enable the neural network to adapt to speech containing noise, noise can be added to its input during vocoder training, as shown in fig. 7.

where Q denotes μ-law quantization and Q⁻¹ denotes the conversion from μ-law back to linear. The prediction filter is defined as formula (13):

P(z) = Σ_{k=1}^{P} a_k·z^{−k}   (13)

wherein a_k is the k-th order linear prediction coefficient of the current frame and z^{−k} denotes a delay of k samples in the Z-domain. By injecting noise in this way, the neural network can effectively reduce the signal error, further improving the quality of the generated speech.
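A minimal sketch of μ-law companding and input-noise injection in the spirit of fig. 7; the exact noise distribution used in training is not specified in the text above, so ±1 quantization level of uniform noise is assumed.

import numpy as np

MU = 255  # 256-level mu-law, the usual choice

def mu_law_quantize(x):
    """Q(.): compand a signal in [-1, 1] with mu-law and quantize to 256 levels."""
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)
    return np.clip(((y + 1.0) / 2.0 * MU + 0.5).astype(np.int64), 0, MU)

def mu_law_expand(q):
    """Q^(-1)(.): convert mu-law levels back to linear amplitudes."""
    y = 2.0 * q.astype(np.float64) / MU - 1.0
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

def add_training_noise(samples, rng=None):
    """Inject quantization-domain noise into the network input during training so the
    vocoder adapts to noisy target speech (assumed: uniform +/-1 level)."""
    rng = rng or np.random.default_rng(0)
    q = mu_law_quantize(samples)
    q_noisy = np.clip(q + rng.integers(-1, 2, size=q.shape), 0, MU)
    return mu_law_expand(q_noisy)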

In summary, the fast voice cloning method provided in this example jointly models three modules, each trained independently on a different data set. The method can use current open-source data sets and produce well-cloned speech on low-performance devices, with the advantages of low distortion, high spectral similarity and high alignment.

Example 2

The experiments were run on an x64 architecture; the CPU consisted of two E5-2680 v3 processors (2.5 GHz, 9.6 GT/s), the GPU of four NVIDIA TITAN V 12 GB cards, and the memory size was 128 GB. Training in this hardware environment takes approximately 2 days. The hardware configuration used for the experiments is detailed in table 1.

TABLE 1 hardware configuration information

The operating system used in this experiment was Ubuntu 16.04; the Python version was 3.7.6, the PyTorch version 1.4.0, the CUDA version 10.0.130, and the cuDNN version 5.6.2. The software versions used for the experiments are detailed in table 2.

TABLE 2 software version information

Evaluating voice cloning performance is an important step in the voice cloning task; a sound evaluation mechanism makes it possible to judge and improve the performance of voice cloning effectively. In this section, the cloned speech is evaluated with a combination of subjective and objective evaluation methods, and the method is also compared with other cloning models to demonstrate the effectiveness and superiority of the algorithm.

Objective evaluation and analysis

Comparing the test-generated cloned speech with the original speech in terms of MFCC and spectrum:

Taking STCMD00044A as an example, the content is "the child asks me what I like", spoken by a male, as shown in figs. 8 and 9.

Taking STCMD00052I as an example, the content is "prime notch", spoken by a female, as shown in figs. 10 and 11.

From figs. 8-11 it can be seen that the original speech and the cloned speech have high similarity in the middle and rear parts, with some distortion at the beginning, and show high spectral similarity and high alignment. This aspect can be further optimized in future improvements. Meanwhile, the cloning effect for female voices is better than for male voices, because less male voice data was used during training and female voices are recognized better: their frequency is higher, so the spectrum is easier to extract.

Subjective evaluation and analysis

The subjective evaluation was performed by listening to the cloned speech with the human ear. Listeners evaluate the intelligibility, quality and similarity of the speech by comparing the cloned speech with the original speech. The method used is mainly the Mean Opinion Score (MOS).

MOS test: in the MOS test, the evaluators listen to the original speech and the synthesized speech separately and score the quality of the tested speech according to their subjective impression. In addition to speech quality, the timbre similarity of the clone is also scored. The average of all scores is the MOS score.

Generally, the MOS score is divided into 5 grades, where grade 1 corresponds to the worst (unintelligible) and grade 5 to the best (close to natural speech), as shown in table 3.

TABLE 3 MOS fraction evaluation mode

The MOS scores for male and female voices are shown in tables 4 and 5:

TABLE 4 female Voice MOS test scores

TABLE 5 Male Voice MOS test scores

From tables 4 and 5, the MOS score for female voices is 4.3 and for male voices 4.2. The effect of male voice cloning differs from that of female voice cloning because, owing to the characteristics of the two, the female voice is sharper and its voice characteristics are extracted better, so the generated speech has higher naturalness and is more similar to the target voice.

Comparison of Experimental results

Several existing methods are selected for comparison with the present method.

As shown in fig. 12, the method is compared with HMM, DNN, Tacotron, WaveNet and Human (real human speech). The MOS score of the method is close to that of WaveNet, clearly superior to the other methods, and second only to Human (real human speech). Meanwhile, the method is faster than WaveNet.

The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.
