Sequence-to-sequence speech synthesis method and system with two-layer autoregressive decoding

Document No.: 972881    Publication date: 2020-11-03

Reading note: This technique, "一种双层自回归解码的序列到序列语音合成方法及系统" (A sequence-to-sequence speech synthesis method and system with two-layer autoregressive decoding), was designed by 周骁 (Zhou Xiao), 凌震华 (Ling Zhenhua), and 戴礼荣 (Dai Lirong) on 2020-07-14. Abstract: The invention provides a sequence-to-sequence speech synthesis method and system with two-layer autoregressive decoding. The system comprises an encoder and a decoder, and the decoder comprises a phoneme-level representation module, a phoneme-level prediction module, and a frame-level prediction module. The encoder represents the phoneme name, tone, and prosodic phrase boundary information with vectors, then encodes and fuses this information with a convolutional neural network and a bidirectional long short-term memory network to obtain the context unit representation of each phoneme in the sentence. The phoneme-level representation module obtains the acoustic unit representation of each phoneme unit through a frame-level long short-term memory network (LSTM) and pooling. The phoneme-level prediction module predicts the acoustic unit representation of the current phoneme with a phoneme-level autoregressive structure and establishes the dependency between consecutive phonemes. The frame-level prediction module predicts the frame-level acoustic features through the decoder LSTM.

1. A sequence-to-sequence speech synthesis system with two-layer autoregressive decoding, comprising an encoder and a decoder, the decoder comprising: a phoneme-level representation module, a phoneme-level prediction module, and a frame-level prediction module;

the input of the encoder is the language representation of the phonemes in a sentence, namely the phoneme name, tone, and prosodic phrase boundary information represented by vectors; a convolutional neural network and a bidirectional long short-term memory network are then used to encode and fuse this information to obtain the context unit representation of each phoneme in the sentence;

the phoneme-level representation module takes as input the frame-level acoustic features within a phoneme and obtains the acoustic unit representation of each phoneme unit through a frame-level long short-term memory network (LSTM) and pooling;

the phoneme-level prediction module takes as input the acoustic unit representations of all historical phonemes and the context unit representation of the current phoneme, predicts the acoustic unit representation of the current phoneme with a phoneme-level autoregressive structure, and establishes the dependency between consecutive phonemes;

the frame-level prediction module receives two inputs: the acoustic unit representation of the current unit predicted by the phoneme-level prediction module, and the hidden state of the frame-level LSTM in the phoneme-level representation module; the frame-level acoustic features are finally predicted by the decoder LSTM.

2. A sequence-to-sequence speech synthesis method with two-layer autoregressive decoding, characterized by comprising the following steps:

step 1: language representation encoding, namely converting the language representation corresponding to the phoneme sequence to be synthesized into context unit representations by using an encoder;

step 2: acoustic feature prediction, namely obtaining, with a decoder, the Mel-spectrum features corresponding to the text from the context unit representations obtained in step 1, specifically comprising the following substeps:

step 2.1: phoneme-level representation generation, namely encoding the frame-level acoustic features within a phoneme into the acoustic unit representation of that phoneme through the phoneme-level representation module;

step 2.2: phoneme-level representation prediction, namely predicting the acoustic unit representation of the current phoneme by using the acoustic unit representations of the historical phonemes obtained in step 2.1 and the context unit representation of the current phoneme;

step 2.3: frame-level feature prediction, namely predicting the acoustic features of the next frame by using the acoustic unit representation of the current phoneme predicted in step 2.2 and the acoustic features of the current frame.

3. The method of claim 2, wherein the step 1 comprises:

inputting the language representation sequence corresponding to a phoneme sequence of length N into the encoder, and obtaining the context unit representation sequence H = {h_1, h_2, …, h_N} through three 1-dimensional convolutional neural networks and a bidirectional long short-term memory network (BiLSTM).

4. The sequence-to-sequence speech synthesis method with two-layer autoregressive decoding according to claim 2, characterized in that said step 2.1 comprises:

the phoneme-level representation module summarizes all the frame-level acoustic features within a phoneme to obtain the phoneme-level acoustic unit representation; in the synthesis stage, the input of the phoneme-level representation module is the predicted Mel spectrum of the previous frame, and in the training stage the input is the natural Mel spectrum of the previous frame; the Mel spectrum of the previous frame first passes through a fully connected pre-processing network, and a frame-level LSTM is then used to model the dependency among the frame-level acoustic feature sequences within a phoneme; the LSTM state is reset at the starting frame of each phoneme according to the known phoneme boundaries; finally, in order to obtain the fixed-length phoneme-level acoustic unit representation corresponding to the unit, the hidden state sequence obtained by the LSTM is converted into the acoustic unit representation vector u_n by a pooling method.

5. The sequence-to-sequence speech synthesis method with two-layer autoregressive decoding according to claim 2, wherein, in step 2.1, in order to link the context unit representations and the acoustic unit representations, an attention mechanism is used to obtain the recognition probability of the acoustic unit representation of each phoneme, from which the phoneme recognition loss is calculated; assuming a sentence contains N phonemes, the query value (Query) is the acoustic unit representation u_n of the nth phoneme, the key values (Keys) are the context unit representation sequence H = {h_1, …, h_N}, and the attention weight corresponding to the nth key value is used as an estimate of the recognition probability of the nth phoneme; in the training stage this estimate is compared with the one-hot coding of the phoneme in the sentence through a cross-entropy function to obtain the phoneme recognition loss.

6. The method of claim 5, wherein the phoneme recognition loss L_rec is calculated as follows:

first, the query value q = u_n is concatenated with each context unit representation h_m, multiplied by the matrix W_a, passed through a tanh function, and the dot product with the transpose of the vector v_a is taken to obtain the energy corresponding to each key value:

e_m = v_a^T · tanh(W_a · concat(q, h_m)),  m = 1, …, N;

second, the energies e = {e_1, e_2, …, e_N} corresponding to all key values are normalized with a softmax function to obtain the probability values α = {α_1, α_2, …, α_N} corresponding to the key values;

third, the multi-class cross entropy between the probability values α and the one-hot encoding of the current nth phoneme converts the probability value α_n into the phoneme recognition loss L_rec = −log(α_n), where the vector v_a ∈ R^h and the matrix W_a ∈ R^(h×2D) are model parameters to be trained, D is the dimension of the unit representations, and h is the dimension of the hidden layer in the attention mechanism.

7. The sequence-to-sequence speech synthesis method with two-layer autoregressive decoding according to claim 2, wherein step 2.2 comprises: the phoneme-level prediction module adopts a phoneme-level autoregressive structure to predict the current acoustic unit representation and describe the dependency between consecutive phonemes; the phoneme-level prediction module comprises a phoneme-level LSTM and a cyclic predictor g_c, where the phoneme-level LSTM converts the acoustic unit representations of the historical phoneme units into an acoustic history vector c_n:

c_n = LSTM(u_{n−1}, c_{n−1}),

where u_0 is set to a zero vector; the cyclic predictor g_c is a fully connected network whose inputs are the acoustic history vector c_n and the context unit representation h_n of the current phoneme, and whose output is the predicted acoustic unit representation of the current phoneme:

û_n = g_c(concat(c_n, h_n));

the predicted acoustic unit representation is then up-sampled to the frame level and sent to the frame-level prediction module;

in order to construct the phoneme-level autoregressive structure, a consistency loss needs to be calculated during the training stage, defined as the mean square error between the predicted acoustic unit representation û_n and the real acoustic unit representation u_n.

8. The method of claim 2, wherein the frame-level prediction module predicts the frame-level acoustic features through a decoder LSTM; the input of the frame-level prediction module consists of two parts: one is the acoustic unit representation û_n of the current phoneme predicted by the phoneme-level prediction module, and the other is the hidden state of the frame-level LSTM in the phoneme-level representation module at the current frame;

the hidden state of the decoder LSTM also passes through another full connection, and an attention mechanism is used to predict the probability that the current frame is the head frame of the next phoneme, i.e., the transition probability of this frame; the transition probability is calculated with an attention-based module: if the current frame belongs to the nth phoneme, the key values (Keys) of the attention mechanism are the context unit representations h_n and h_{n+1} of the current phoneme and the next phoneme, the query value (Query) is a linear transformation of the decoder LSTM hidden state at the current frame, and the attention weight corresponding to h_{n+1} is used as the transition probability.

9. The method of claim 2, wherein, in the training stage, the phoneme boundaries in the corpus are required as input in addition to the Mel spectra and the linguistic representation sequences, and are obtained by HMM-based forced alignment; for implicit modeling of the phoneme duration, a transition loss needs to be calculated during training, defined as the cross entropy between the predicted transition probability and the true transition probability determined by the phoneme boundaries; considering the imbalance in number between skipped frames and non-skipped frames, a weighting strategy is adopted to enhance the influence of the skipped frames on the transition loss.

10. The method of claim 9, wherein the transition loss L_trans is calculated as follows:

first, the query value q is concatenated with the context unit representation h_n of the current phoneme, multiplied by the matrix W_b, passed through a tanh function, and the dot product with the transpose of the vector v_b is taken to obtain the energy corresponding to the non-skip case:

e_s = v_b^T · tanh(W_b · concat(q, h_n));

the energy e_j is calculated in the same way using the context unit representation h_{n+1} of the next phoneme;

second, the energies e = {e_s, e_j} corresponding to the two key values are normalized with a softmax function to obtain the probability values α = {α_s, α_j};

third, the cross entropy between α and the true transition probability y = {y_s, y_j} determined by the phoneme boundaries in the sentence (for a skip frame y_s = 0, y_j = 1; for a non-skip frame y_s = 1, y_j = 0) is calculated to obtain the transition loss L_trans = −y_s·log(α_s) − y_j·log(α_j), where v_b ∈ R^h and W_b ∈ R^(h×2D) are model parameters to be trained, D is the dimension of the unit representations, and h is the dimension of the hidden layer in the attention mechanism.

Technical Field

The invention belongs to the field of speech signal processing, and particularly relates to a sequence-to-sequence speech synthesis method and system with two-layer autoregressive decoding.

Background

Speech synthesis aims at making machines speak as smoothly and naturally as humans, which benefits many speech interaction applications, such as intelligent personal assistants and robots. Currently, statistical parametric speech synthesis (SPSS) is one of the mainstream methods.

Statistical parametric speech synthesis uses acoustic models to model the relationship between text features and acoustic features, and uses vocoders to generate speech waveforms from the predicted acoustic features. Although this approach can produce intelligible speech, the quality of the synthesized speech is always degraded by the limitations of the acoustic model and the vocoder. Recently, Wang, Shen, et al. proposed neural-network-based sequence-to-sequence speech synthesis acoustic models and demonstrated excellent performance in predicting Mel spectra directly from text. This approach overcomes shortcomings of the traditional SPSS method, such as the need for extensive domain expertise and the error accumulation caused by training each SPSS module independently. The sequence-to-sequence speech synthesis method relies little on manual intervention and only needs to be trained on paired text and speech.

However, the sequence-to-sequence speech synthesis method unifies the acoustic model and the duration model into one model, and the additive attention mechanism of the Tacotron model is not robust enough, so errors may occur in the predicted acoustic features, especially when complex out-of-domain text is input. To alleviate this problem, several improved attention mechanisms have been proposed, such as forward attention, stepwise monotonic attention (SMA), and location-relative attention. The forward attention mechanism considers only alignment paths that satisfy a monotonicity condition at each decoding step; stepwise monotonic attention (SMA) further restricts the alignment path and solves the problem of attention collapse. However, these methods remain autoregressive only at the frame level and lack the ability to model long-term dependencies of the acoustic features, so the model cannot naturally gain robustness.

At present, neural-network-based sequence-to-sequence speech synthesis methods are designed around a frame-level autoregressive decoding structure. They suffer from insufficient long-term correlation modeling capability and from the unsatisfactory robustness of the attention mechanism adopted by the model, which leads to speech synthesis errors such as repetition, missed reading, and failure to stop when complex texts are synthesized.

Disclosure of Invention

In order to solve the above problems, the present invention provides a sequence-to-sequence speech synthesis method and system with two-layer autoregressive decoding. The decoder of the system predicts the acoustic feature sequence with a two-level autoregressive structure, at the phoneme level and at the frame level, and uses the explicit phoneme boundary information in the training data together with interpretable phoneme transition probabilities to replace the attention mechanism of traditional models for aligning the acoustic feature sequence with the text feature sequence. The proposed model effectively reduces acoustic feature prediction errors and improves the robustness of speech synthesis while preserving the naturalness of the synthesized speech. The method combines techniques from neural-network speech synthesis and statistical parametric speech synthesis: to address the insufficient robustness when synthesizing complex text, it predicts phoneme transition probabilities instead of using an attention mechanism; to address the difficulty of modeling long-term feature dependencies with frame-level-only autoregression, it introduces phoneme-level autoregression and redesigns the decoder.

The technical scheme of the invention is as follows: a sequence-to-sequence speech synthesis system with two-layer autoregressive decoding, comprising an encoder and a decoder, the decoder comprising: a phoneme-level representation module, a phoneme-level prediction module, and a frame-level prediction module;

the input of the encoder is the language representation of the phonemes in a sentence, namely the phoneme name, tone, and prosodic phrase boundary information represented by vectors; a convolutional neural network and a bidirectional long short-term memory network are then used to encode and fuse this information to obtain the context unit representation of each phoneme in the sentence;

the phoneme-level representation module takes as input the frame-level acoustic features within a phoneme and obtains the acoustic unit representation of each phoneme unit through a frame-level long short-term memory network (LSTM) and pooling;

the phoneme-level prediction module takes as input the acoustic unit representations of all historical phonemes and the context unit representation of the current phoneme, predicts the acoustic unit representation of the current phoneme with a phoneme-level autoregressive structure, and establishes the dependency between consecutive phonemes;

the frame-level prediction module receives two inputs: the acoustic unit representation of the current unit predicted by the phoneme-level prediction module, and the hidden state of the frame-level LSTM in the phoneme-level representation module; the frame-level acoustic features are finally predicted by the decoder LSTM.

According to another aspect of the present invention, a sequence-to-sequence speech synthesis method with two-layer autoregressive decoding is provided, which comprises the following steps:

step 1: language representation encoding, namely converting the language representation corresponding to the phoneme sequence to be synthesized into context unit representations by using an encoder;

step 2: acoustic feature prediction, namely obtaining, with a decoder, the Mel-spectrum features corresponding to the text from the context unit representations obtained in step 1, specifically comprising the following substeps:

step 2.1: phoneme-level representation generation, namely encoding the frame-level acoustic features within a phoneme into the acoustic unit representation of that phoneme through the phoneme-level representation module;

step 2.2: phoneme-level representation prediction, namely predicting the acoustic unit representation of the current phoneme by using the acoustic unit representations of the historical phonemes obtained in step 2.1 and the context unit representation of the current phoneme;

step 2.3: frame-level feature prediction, namely predicting the acoustic features of the next frame by using the acoustic unit representation of the current phoneme predicted in step 2.2 and the acoustic features of the current frame.

Further, the step 1 is as follows:

inputting the language representation sequence corresponding to a phoneme sequence of length N into the encoder, and obtaining the context unit representation sequence H = {h_1, h_2, …, h_N} through three 1-dimensional convolutional neural networks and a bidirectional long short-term memory network (BiLSTM); the BiLSTM is formed by combining a forward LSTM and a backward LSTM, and its hidden state vectors along the two directions are concatenated to obtain the context unit representation sequence H, where the context unit representation of the nth unit is h_n = concat(h_n^→, h_n^←), the function concat denotes vector concatenation, and h_n^→ and h_n^← are the hidden states of the forward and backward LSTM for the nth unit, respectively.

Further, the step 2.1 comprises:

the phoneme-level representation module summarizes all the frame-level acoustic features within a phoneme to obtain the phoneme-level acoustic unit representation; in the synthesis stage, the input of the phoneme-level representation module is the predicted Mel spectrum of the previous frame, and in the training stage the input is the natural Mel spectrum of the previous frame; the Mel spectrum of the previous frame first passes through a fully connected pre-processing network, and a frame-level LSTM is then used to model the dependency among the frame-level acoustic feature sequences within a phoneme; the LSTM state is reset at the starting frame of each phoneme according to the known phoneme boundaries; finally, in order to obtain the fixed-length phoneme-level acoustic unit representation corresponding to the unit, the hidden state sequence obtained by the frame-level LSTM is converted into the acoustic unit representation vector u_n by a pooling method.

Further, in step 2.1, during the training stage, in order to link the context unit representations and the acoustic unit representations, an attention mechanism is adopted to obtain the recognition probability of the acoustic unit representation of each phoneme, from which the phoneme recognition loss is calculated; assuming a sentence contains N phonemes, the query value (Query) is the acoustic unit representation u_n of the nth phoneme, and the key values (Keys) are the context unit representation sequence H = {h_1, …, h_N}; the attention weight corresponding to the nth key value is used as an estimate of the recognition probability of the nth phoneme, and in the training stage this estimate is compared with the one-hot coding of the phoneme in the sentence through a cross-entropy function to obtain the phoneme recognition loss.

Further, the phoneme recognition loss L_rec is calculated as follows:

First, the query value q = u_n is concatenated with each context unit representation h_m, multiplied by the matrix W_a, passed through a tanh function, and the dot product with the transpose of the vector v_a is taken to obtain the energy corresponding to each key value:

e_m = v_a^T · tanh(W_a · concat(q, h_m)),  m = 1, …, N;

Second, the energies e = {e_1, e_2, …, e_N} corresponding to all key values are normalized with a softmax function to obtain the probability values α = {α_1, α_2, …, α_N} corresponding to the key values;

Third, the multi-class cross entropy between the probability values α and the one-hot encoding of the current nth phoneme converts the probability value α_n into the phoneme recognition loss L_rec = −log(α_n), where the vector v_a ∈ R^h and the matrix W_a ∈ R^(h×2D) are model parameters to be trained, D is the dimension of the unit representations, h is the dimension of the hidden layer in the attention mechanism, R denotes the real domain, and concat denotes vector concatenation.

Further, step 2.2 comprises: the phoneme-level prediction module adopts a phoneme-level autoregressive structure to predict the current acoustic unit representation and describe the dependency between consecutive phonemes; the phoneme-level prediction module comprises a phoneme-level LSTM and a cyclic predictor g_c, where the phoneme-level LSTM converts the acoustic unit representations of the historical phoneme units into an acoustic history vector c_n with the following formula:

c_n = LSTM(u_{n−1}, c_{n−1}),

where u_0 is set to a zero vector; the cyclic predictor g_c is a fully connected network whose inputs are the acoustic history vector c_n and the context unit representation h_n of the current phoneme, and whose output is the predicted acoustic unit representation of the current phoneme:

û_n = g_c(concat(c_n, h_n));

the predicted acoustic unit representation is then up-sampled to the frame level and sent to the frame-level prediction module;

in order to construct the phoneme-level autoregressive structure, a consistency loss needs to be calculated during the training stage, defined as the mean square error between the predicted acoustic unit representation û_n and the real acoustic unit representation u_n.

Further, the frame-level prediction module predicts the frame-level acoustic features through a decoder LSTM; the input of the frame-level prediction module consists of two parts: one is the acoustic unit representation û_n of the current phoneme predicted by the phoneme-level prediction module, and the other is the hidden state of the frame-level LSTM in the phoneme-level representation module at the current frame; the two parts are concatenated and fed to the decoder LSTM, whose hidden state predicts the Mel spectrum of the current frame through a full connection, after which a post-processing network generates a residual to refine the predicted Mel spectrum; when training the network, the reconstruction error loss of the Mel spectrum needs to be calculated, defined as the sum of the mean square errors between the predicted Mel spectra before and after the post-processing network and the natural Mel spectrum;

the hidden state of the decoder LSTM also passes through another full connection, and an attention mechanism is used to predict the probability that the current frame is the head frame of the next phoneme, i.e., the transition probability of this frame; the transition probability is calculated with an attention-based module: if the current frame belongs to the nth phoneme, the key values (Keys) of the attention mechanism are the context unit representations h_n and h_{n+1} of the current phoneme and the next phoneme, the query value (Query) is a linear transformation of the decoder LSTM hidden state at the current frame, and the attention weight corresponding to h_{n+1} serves as the transition probability.

Further, in the training stage, the phoneme boundaries in the corpus are required as input in addition to the Mel spectra and the linguistic representation sequences, and are obtained by HMM-based forced alignment; for implicit modeling of the phoneme duration, a transition loss needs to be calculated during training, defined as the cross entropy between the predicted transition probability and the true transition probability determined by the phoneme boundaries; considering the imbalance in number between skipped frames and non-skipped frames, a weighting strategy is adopted to enhance the influence of the skipped frames on the transition loss.

Further, the transition loss L_trans is calculated as follows:

First, the query value q is concatenated with the context unit representation h_n of the current phoneme, multiplied by the matrix W_b, passed through a tanh function, and the dot product with the transpose of the vector v_b is taken to obtain the energy corresponding to the non-skip case:

e_s = v_b^T · tanh(W_b · concat(q, h_n));

the energy e_j is calculated in the same way using the context unit representation h_{n+1} of the next phoneme;

Second, the energies e = {e_s, e_j} corresponding to the two key values are normalized with a softmax function to obtain the probability values α = {α_s, α_j};

Third, the cross entropy between α and the true transition probability y = {y_s, y_j} determined by the phoneme boundaries in the sentence (for a skip frame y_s = 0, y_j = 1; for a non-skip frame y_s = 1, y_j = 0) is calculated to obtain the transition loss

L_trans = −y_s·log(α_s) − y_j·log(α_j),

where v_b ∈ R^h and W_b ∈ R^(h×2D) are model parameters to be trained, D is the dimension of the unit representations, h is the dimension of the hidden layer in the attention mechanism, R denotes the real domain, and concat denotes vector concatenation.

Advantageous effects

The invention has the advantages that:

firstly, a phoneme and frame two-stage autoregressive structure is used for predicting an acoustic feature sequence in a decoder so as to better model a long-term dependency relationship between acoustic and text features;

second, alignment between acoustic feature sequences and text feature sequences is achieved using explicit phone boundary information in the training data and predicting interpretable phone transition probabilities instead of the attention mechanism in the conventional model. The experimental result shows that compared with the traditional sequence-to-sequence speech synthesis method, the model effectively reduces the acoustic feature prediction error and improves the robustness of speech synthesis on the premise of ensuring the naturalness of the synthesized speech.

In conclusion, the traditional attention-based sequence-to-sequence neural network lacks robustness and is prone to synthesis errors on complex texts; in addition, it predicts acoustic features with a frame-level autoregressive model and has insufficient capability to model the long-term dependencies of the features. The sequence-to-sequence speech synthesis method with two-layer autoregressive decoding proposed here establishes autoregressive models at both the frame level and the phoneme level, mines the mapping relationship between text and speech more fully, and improves the robustness of speech synthesis.

Drawings

FIG. 1: Schematic diagram of the sequence-to-sequence speech synthesis method with two-layer autoregressive decoding according to the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, rather than all embodiments, and all other embodiments obtained by a person skilled in the art based on the embodiments of the present invention belong to the protection scope of the present invention without creative efforts.

According to one embodiment of the present invention, a sequence-to-sequence speech synthesis system with two-layer autoregressive decoding is provided, comprising an encoder and a decoder. The encoder structure is the same as that of the Tacotron2 model, and the decoder comprises three modules: phoneme-level representation, phoneme-level prediction, and frame-level prediction. In addition, four loss functions in total are proposed to guide model training.

1. An encoder module. The input of this module is the language representation of the phonemes in a sentence, namely the phoneme name, tone, and prosodic phrase boundary information represented by vectors; convolutional neural networks (CNNs) and a bidirectional long short-term memory network (BiLSTM) are then used to encode and fuse this information to obtain the context unit representation of each phoneme in the sentence.

2. A phoneme-level representation module. The input of this module is the frame-level acoustic features within a phoneme, and the acoustic unit representation of each phoneme unit is obtained through a frame-level long short-term memory (LSTM) network and pooling.

3. A phoneme level prediction module. The input to this module is the acoustic unit characterization of all the phonemes in the history and the context unit characterization of the current phoneme, and this module adopts phoneme-level autoregressive structure to predict the acoustic unit characterization of the current phoneme and establish the dependency relationship between the continuous phonemes.

4. A frame-level prediction module. The input to this module is two parts, one is the acoustic unit characterization of the current unit predicted by the phone-level prediction module, and the other is the hidden state of the LSTM acting at frame level in the phone-level characterization module. The frame-level acoustic features are finally predicted by the decoder LSTM.

5. The model uses a total of four loss functions during the training phase. 1) A reconstruction error for calculating a difference between the predicted mel-spectrum and the natural mel-spectrum; 2) transition loss, namely performing implicit modeling on the phoneme duration through the transition probability of the phoneme corresponding to the modeling frame; 3) a consistency loss for constructing a phoneme-level autoregressive structure; 4) phoneme recognition loss, which is used to constrain the differences between the acoustic unit representation and the context unit representation.

So far, a multi-module cooperative speech synthesis neural network structure is built. The training of the neural network parameters is performed by minimizing the weighted sum of the loss functions of the neural network model in the training set by a stochastic gradient algorithm or its modified algorithms, such as SGD, Adam, AdaDelta, etc.

Finally, in the synthesis stage, the context feature sequence of the test text is input into the trained model prediction Mel spectrum, and then the voice waveform is reconstructed through the vocoder.

According to one embodiment of the present invention, as shown in FIG. 1, the input of the encoder is the linguistic representation corresponding to the phoneme sequence of a sentence; for the nth unit, its text content is encoded by the encoder and output as the context unit representation h_n.

The input of the phoneme-level representation module of the decoder is all the frame-level acoustic features within the phoneme of the nth unit, and the output is the acoustic unit representation u_n.

The input of the phoneme-level prediction module of the decoder consists of two parts: the context unit representation h_n of the current phoneme and the acoustic unit representations corresponding to the historical phonemes; the output is the predicted acoustic unit representation û_n of the current phoneme.

The input of the frame-level prediction module of the decoder consists of two parts: the predicted acoustic unit representation û_n of the current phoneme and the frame-level features from the phoneme-level representation module in the decoder; the output is the Mel spectrum corresponding to the text.

According to an embodiment of the present invention, the encoder module is specifically:

In order to make better use of context information, the invention inputs the language representation sequence corresponding to a phoneme sequence of length N into the encoder, and obtains the context unit representation sequence H = {h_1, h_2, …, h_N} through three 1-dimensional convolutional neural networks and a bidirectional long short-term memory network (BiLSTM). Since the BiLSTM is formed by combining a forward LSTM and a backward LSTM, its hidden state vectors along the two directions are concatenated to obtain the context unit representation sequence H, where the context unit representation of the nth unit is

h_n = concat(h_n^→, h_n^←),

the function concat denotes vector concatenation, and h_n^→ and h_n^← are the hidden states of the forward and backward LSTM for the nth unit, respectively.
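As an illustration of the encoder just described, the following PyTorch sketch stacks three 1-dimensional convolutional layers and a BiLSTM on top of an embedding of the linguistic symbols. It is a minimal sketch: the embedding size, channel counts, and kernel size are illustrative assumptions, not values given in this specification.

```python
# Minimal sketch of the encoder (hyper-parameters are illustrative assumptions).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, num_symbols, emb_dim=512, conv_channels=512, lstm_dim=256):
        super().__init__()
        # vector representation of phoneme name / tone / prosodic boundary labels
        self.embedding = nn.Embedding(num_symbols, emb_dim)
        self.convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(emb_dim if i == 0 else conv_channels,
                          conv_channels, kernel_size=5, padding=2),
                nn.BatchNorm1d(conv_channels),
                nn.ReLU(),
            )
            for i in range(3)                       # three 1-D convolutional layers
        ])
        # bidirectional LSTM; forward/backward hidden states are concatenated
        self.bilstm = nn.LSTM(conv_channels, lstm_dim,
                              batch_first=True, bidirectional=True)

    def forward(self, symbol_ids):                  # (batch, N)
        x = self.embedding(symbol_ids).transpose(1, 2)    # (batch, emb, N)
        for conv in self.convs:
            x = conv(x)
        x = x.transpose(1, 2)                       # (batch, N, conv_channels)
        h, _ = self.bilstm(x)                       # context unit representations h_n
        return h                                    # (batch, N, 2 * lstm_dim)
```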

Further, the phoneme-level representation module obtains the phoneme-level acoustic unit representation by summarizing all the frame-level acoustic features within a phoneme. In the synthesis stage, the input of the phoneme-level representation module is the predicted Mel spectrum of the previous frame; in the training stage, the input is the natural Mel spectrum of the previous frame. The Mel spectrum of the previous frame first passes through a fully connected pre-processing network, and a frame-level LSTM is then used to model the dependency among the frame-level acoustic feature sequences within a phoneme. In order to consider only the frame sequence within a phoneme and ignore the influence of neighboring phonemes, the invention resets the LSTM state at the starting frame of each phoneme according to the known phoneme boundaries. Finally, in order to obtain the fixed-length phoneme-level acoustic unit representation corresponding to the unit, the hidden state sequence obtained by the frame-level LSTM is converted into the acoustic unit representation vector u_n by a general pooling method.
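The phoneme-level representation module can be sketched as follows. This is a minimal illustration in which mean pooling stands in for the "general pooling method", the pre-processing network is assumed to be a two-layer fully connected network, and all dimensions are illustrative.

```python
# Minimal sketch of the phoneme-level representation module
# (mean pooling and layer sizes are illustrative assumptions).
import torch
import torch.nn as nn

class PhonemeLevelRepresentation(nn.Module):
    def __init__(self, mel_dim=80, prenet_dim=256, lstm_dim=256):
        super().__init__()
        self.prenet = nn.Sequential(               # fully connected pre-processing network
            nn.Linear(mel_dim, prenet_dim), nn.ReLU(),
            nn.Linear(prenet_dim, prenet_dim), nn.ReLU(),
        )
        self.frame_lstm = nn.LSTMCell(prenet_dim, lstm_dim)
        self.lstm_dim = lstm_dim

    def forward(self, mels, is_phoneme_start):
        """mels: (T, mel_dim) previous-frame Mel spectra of one utterance;
        is_phoneme_start: (T,) bool, True at the first frame of each phoneme."""
        h = mels.new_zeros(self.lstm_dim)
        c = mels.new_zeros(self.lstm_dim)
        states, units, current = [], [], []
        for t in range(mels.size(0)):
            if is_phoneme_start[t]:                 # reset LSTM state at phoneme boundaries
                if current:                         # pool the finished phoneme into u_n
                    units.append(torch.stack(current).mean(dim=0))
                    current = []
                h = mels.new_zeros(self.lstm_dim)
                c = mels.new_zeros(self.lstm_dim)
            h, c = self.frame_lstm(self.prenet(mels[t]).unsqueeze(0),
                                   (h.unsqueeze(0), c.unsqueeze(0)))
            h, c = h.squeeze(0), c.squeeze(0)
            states.append(h)
            current.append(h)
        units.append(torch.stack(current).mean(dim=0))
        return torch.stack(states), torch.stack(units)   # frame states, u_1 .. u_N
```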

In the training stage, in order to link the context unit representations and the acoustic unit representations, an attention mechanism is adopted to obtain the recognition probability of the acoustic unit representation of each phoneme, from which the phoneme recognition loss L_rec is calculated. Assuming a sentence contains N phonemes, the query value (Query) is the acoustic unit representation u_n of the nth phoneme, and the key values (Keys) are the context unit representation sequence H = {h_1, …, h_N}. The attention weight corresponding to the nth key value is used as an estimate of the recognition probability of the nth phoneme. In the training stage this estimate is compared with the one-hot coding of the phoneme in the sentence through a cross-entropy function to obtain the phoneme recognition loss L_rec. The phoneme recognition loss helps to constrain the spaces of the two unit representations, so that the acoustic unit representations integrate more information about the corresponding text and pronunciation errors are reduced. The steps for calculating the phoneme recognition loss L_rec are as follows:

first step challenge valueWith context unit characterizationSplicing, and then connecting with the matrix

Figure BDA0002583011910000087

Multiplying, calculating by tanh function, and adding the vectorThe transpose of (2) performs dot product operation to obtain the energy corresponding to each key value

In the second step, the energy e corresponding to all key values is set as { e } by using a softmax function1,e2,…,eNNormalizing to obtain a probability value alpha corresponding to the key value { alpha ═ alpha }12,…,αN};

Thirdly, calculating the cross entropy of multiple categories to obtain the probability value alpha corresponding to the current nth phonemenConversion to phoneme recognition loss

Figure BDA00025830119100000810

Wherein the vector vaAnd matrix WaIs the model parameter to be trained, D is the dimension of the unit representation, h is the dimension of the hidden layer in the attention mechanism,representing the real domain space, concat represents a function of vector concatenation.

Figure BDA00025830119100000812
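A minimal sketch of this phoneme recognition loss follows, implementing the energy, softmax, and cross-entropy steps reconstructed above; the module and parameter names (W_a, v_a, attn_dim) mirror the notation here and are otherwise illustrative.

```python
# Minimal sketch of the attention-based phoneme recognition loss
# (parameterisation follows the formulas above; sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhonemeRecognitionLoss(nn.Module):
    def __init__(self, unit_dim, attn_dim):
        super().__init__()
        self.W_a = nn.Linear(2 * unit_dim, attn_dim, bias=False)   # W_a
        self.v_a = nn.Linear(attn_dim, 1, bias=False)              # v_a^T

    def forward(self, u, h):
        """u: (N, D) acoustic unit representations; h: (N, D) context unit representations."""
        N = u.size(0)
        losses = []
        for n in range(N):
            q = u[n].expand(N, -1)                                  # query = u_n, repeated over keys
            e = self.v_a(torch.tanh(self.W_a(torch.cat([q, h], dim=-1)))).squeeze(-1)
            # alpha = softmax(e); cross entropy against the one-hot target n
            losses.append(F.cross_entropy(e.unsqueeze(0),
                                          torch.tensor([n], device=u.device)))
        return torch.stack(losses).mean()
```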

Since the state of frame-level LSTM in the phone-level characterization module is truncated at the phone boundary, the previous phone information cannot be used when decoding the current phone. Furthermore, the phone-level characterization module cannot output an acoustic unit characterization of its phone level until all its frames have been decoded.

To address these issues, the phoneme-level prediction module adopts a phoneme-level autoregressive structure to predict the current acoustic unit representation and describe the dependency between consecutive phonemes. The phoneme-level prediction module consists of a phoneme-level LSTM and a cyclic predictor g_c. The phoneme-level LSTM converts the acoustic unit representations of the historical phoneme units into an acoustic history vector c_n with the following formula:

c_n = LSTM(u_{n−1}, c_{n−1}),

where u_0 is set to a zero vector. The cyclic predictor g_c is a fully connected network whose inputs are the acoustic history vector c_n and the context unit representation h_n of the current phoneme; its output is the predicted acoustic unit representation of the current phoneme, computed with the following formula:

û_n = g_c(concat(c_n, h_n)).

the predicted acoustic unit representation is then upsampled to the frame level and sent to the next frame level prediction module.

In order to construct the phoneme-level autoregressive structure, a consistency loss L_con needs to be calculated during the training stage, defined as the mean square error between the predicted acoustic unit representation û_n and the real acoustic unit representation u_n. Through this loss function, the invention drives the predicted acoustic unit representations as close as possible to the real ones. The consistency loss can be calculated with the following formula, where i indexes the dimensions of the representations, û_n^(i) and u_n^(i) denote the values of the ith dimension of û_n and u_n, D is the dimension of the unit representations, and MSE denotes the mean-square-error function:

L_con = MSE(û_n, u_n) = (1/D) · Σ_{i=1}^{D} (û_n^(i) − u_n^(i))².
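The phoneme-level prediction module and the consistency loss can be sketched as follows. Here g_c is assumed to be a single fully connected layer (the specification only states that it is a fully connected network), and the dimensions are illustrative.

```python
# Minimal sketch of the phoneme-level prediction module and consistency loss
# (g_c as a single linear layer and all sizes are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PhonemeLevelPrediction(nn.Module):
    def __init__(self, unit_dim=256, context_dim=512, hist_dim=256):
        super().__init__()
        self.phoneme_lstm = nn.LSTMCell(unit_dim, hist_dim)     # phoneme-level LSTM
        self.g_c = nn.Linear(hist_dim + context_dim, unit_dim)  # cyclic predictor g_c

    def forward(self, u_prev, h_n, state=None):
        """u_prev: (1, unit_dim) acoustic unit representation of the previous phoneme
        (a zero vector for the first phoneme); h_n: (1, context_dim) context unit
        representation of the current phoneme; state: previous (c_hx, c_cx) LSTM state."""
        c_hx, c_cx = self.phoneme_lstm(u_prev, state)            # acoustic history vector c_n
        u_hat = self.g_c(torch.cat([c_hx, h_n], dim=-1))         # predicted representation û_n
        return u_hat, (c_hx, c_cx)

def consistency_loss(u_hat, u):
    # mean square error between predicted and real acoustic unit representations
    return F.mse_loss(u_hat, u)
```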

The frame-level prediction module predicts the frame-level acoustic features through a decoder LSTM. The input of the frame-level prediction module consists of two parts: one is the acoustic unit representation û_n of the current phoneme predicted by the phoneme-level prediction module, and the other is the hidden state of the frame-level LSTM in the phoneme-level representation module at the current frame. The two parts are concatenated and fed to the decoder LSTM, whose hidden state predicts the Mel spectrum of the current frame through a full connection, yielding a preliminary Mel spectrum spec_pre; a post-processing network then generates a residual to refine the predicted Mel spectrum, yielding the refined Mel spectrum spec_post. When training the network, the reconstruction error loss L_mel of the Mel spectrum needs to be calculated. It is defined as the sum of the mean square errors between the predicted Mel spectra before and after the post-processing network and the natural Mel spectrum spec_nat, i.e.,

L_mel = MSE(spec_pre, spec_nat) + MSE(spec_post, spec_nat).

Its purpose is to bring the predicted Mel spectrum closer to the real Mel spectrum, which helps to obtain higher-quality speech.
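A minimal sketch of the frame-level prediction module and the reconstruction error loss is given below. The post-processing network is assumed here to be a small stack of 1-D convolutions in the spirit of Tacotron2, and all layer sizes are illustrative.

```python
# Minimal sketch of the frame-level prediction module and reconstruction loss
# (post-processing network structure and sizes are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameLevelPrediction(nn.Module):
    def __init__(self, unit_dim=256, frame_state_dim=256, decoder_dim=512, mel_dim=80):
        super().__init__()
        self.decoder_lstm = nn.LSTMCell(unit_dim + frame_state_dim, decoder_dim)
        self.mel_proj = nn.Linear(decoder_dim, mel_dim)          # full connection to the Mel spectrum
        self.postnet = nn.Sequential(                            # residual post-processing network
            nn.Conv1d(mel_dim, 256, kernel_size=5, padding=2), nn.Tanh(),
            nn.Conv1d(256, mel_dim, kernel_size=5, padding=2),
        )

    def step(self, u_hat_frame, frame_state, lstm_state=None):
        """One decoding step. u_hat_frame: (1, unit_dim) up-sampled predicted acoustic
        unit representation; frame_state: (1, frame_state_dim) hidden state of the
        frame-level LSTM; returns the preliminary Mel frame and the decoder state."""
        hx, cx = self.decoder_lstm(torch.cat([u_hat_frame, frame_state], dim=-1), lstm_state)
        mel_pre = self.mel_proj(hx)                              # (1, mel_dim)
        return mel_pre, hx, (hx, cx)

    def refine(self, mel_pre):
        """Apply the post-processing network to a whole utterance (T, mel_dim)."""
        residual = self.postnet(mel_pre.transpose(0, 1).unsqueeze(0)).squeeze(0).transpose(0, 1)
        return mel_pre + residual

def reconstruction_loss(mel_pre, mel_post, mel_nat):
    # sum of the mean square errors before and after the post-processing network
    return F.mse_loss(mel_pre, mel_nat) + F.mse_loss(mel_post, mel_nat)
```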

The hidden state of the decoder LSTM is then passed through another full connection to predict, by means of an attention mechanism, the probability that the current frame is the head frame of the next phoneme, i.e., the transition probability of this frame. The invention uses an attention-based module to calculate the transition probability: if the current frame belongs to the nth phoneme, the key values (Keys) of the attention mechanism are the context unit representations h_n and h_{n+1} of the current phoneme and the next phoneme, and the query value q (Query) is a linear transformation of the decoder LSTM hidden state at the current frame. The purpose of using the attention mechanism here is not to obtain a weighted sum of the key values, but to use the attention weight corresponding to h_{n+1} as the transition probability. Besides the Mel spectra and the linguistic representation sequences, the phoneme boundaries in the corpus are also required as input in the training stage; they can be obtained by Hidden Markov Model (HMM) based forced alignment. For implicit modeling of the phoneme duration, a transition loss L_trans needs to be calculated during training, defined as the cross entropy between the predicted transition probability and the true transition probability determined by the phoneme boundaries. The transition loss helps to obtain more realistic phoneme durations through implicit duration modeling, making the prosody of the synthesized speech more natural. The steps for calculating the transition loss L_trans are as follows:

First, the query value q is concatenated with the context unit representation h_n of the current phoneme, multiplied by the matrix W_b, passed through a tanh function, and the dot product with the transpose of the vector v_b is taken to obtain the energy corresponding to the non-skip case:

e_s = v_b^T · tanh(W_b · concat(q, h_n));

the energy e_j is calculated in the same way using the context unit representation h_{n+1} of the next phoneme;

Second, the energies e = {e_s, e_j} corresponding to the two key values are normalized with a softmax function to obtain the probability values α = {α_s, α_j};

Third, the cross entropy between α and the true transition probability y = {y_s, y_j} determined by the phoneme boundaries in the sentence (for a skip frame y_s = 0, y_j = 1; for a non-skip frame y_s = 1, y_j = 0) is calculated to obtain the transition loss

L_trans = −y_s·log(α_s) − y_j·log(α_j),

where v_b ∈ R^h and W_b ∈ R^(h×2D) are model parameters to be trained, D is the dimension of the unit representations, h is the dimension of the hidden layer in the attention mechanism, R denotes the real domain, and concat denotes vector concatenation.

Considering the imbalance in number between skipped frames and non-skipped frames, a weighting strategy is adopted to enhance the influence of the skipped frames on the transition loss. That is, L_trans is modified to −y_s·log(α_s) − ω·y_j·log(α_j), where ω is a manually set weight used to enhance the influence of the skipped frames.
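A minimal sketch of the attention-based transition-probability module and the weighted transition loss follows; the form of the query projection and the value of ω are illustrative assumptions.

```python
# Minimal sketch of the transition-probability predictor and weighted transition loss
# (query projection form and the weight omega are illustrative assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransitionPredictor(nn.Module):
    def __init__(self, decoder_dim, context_dim, attn_dim):
        super().__init__()
        self.query_proj = nn.Linear(decoder_dim, context_dim)        # linear transform of the decoder state
        self.W_b = nn.Linear(2 * context_dim, attn_dim, bias=False)  # W_b
        self.v_b = nn.Linear(attn_dim, 1, bias=False)                # v_b^T

    def forward(self, decoder_hidden, h_cur, h_next):
        """decoder_hidden: (decoder_dim,); h_cur, h_next: (context_dim,) context unit
        representations of the current and next phoneme. Returns (alpha_s, alpha_j)."""
        q = self.query_proj(decoder_hidden)
        e_s = self.v_b(torch.tanh(self.W_b(torch.cat([q, h_cur]))))   # stay in current phoneme
        e_j = self.v_b(torch.tanh(self.W_b(torch.cat([q, h_next]))))  # jump to next phoneme
        alpha = F.softmax(torch.cat([e_s, e_j]), dim=0)
        return alpha[0], alpha[1]

def transition_loss(alpha_s, alpha_j, is_skip_frame, omega=5.0):
    """Weighted cross entropy; skip frames (y_s = 0, y_j = 1) are up-weighted by omega."""
    if is_skip_frame:
        return -omega * torch.log(alpha_j + 1e-8)
    return -torch.log(alpha_s + 1e-8)
```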

According to one embodiment of the invention, the loss functions are:

1) the reconstruction error L_mel, used to calculate the difference between the predicted Mel spectrum and the natural Mel spectrum;

2) the transition loss L_trans, which implicitly models the phoneme duration by modeling the transition probability of the phoneme corresponding to each frame;

3) the consistency loss L_con, used to construct the phoneme-level autoregressive structure;

4) the phoneme recognition loss L_rec, used to constrain the differences between the acoustic unit representations and the context unit representations.

The whole neural network model is subjected to parameter training in an end-to-end mode, and the training aim is to minimize the weighted sum of the four loss functions introduced above on a training set.
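A minimal sketch of the training objective is shown below; the loss weights and optimizer hyper-parameters are illustrative (the specification states only that a weighted sum of the four losses is minimized with a stochastic gradient method such as Adam).

```python
# Minimal sketch of the end-to-end training objective
# (loss weights and learning rate are illustrative assumptions).
import torch

def total_loss(l_mel, l_trans, l_con, l_rec,
               w_mel=1.0, w_trans=1.0, w_con=1.0, w_rec=1.0):
    # weighted sum of reconstruction, transition, consistency, and recognition losses
    return w_mel * l_mel + w_trans * l_trans + w_con * l_con + w_rec * l_rec

# Typical usage, assuming `model` bundles all modules and the four losses have
# been computed for the current batch:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = total_loss(l_mel, l_trans, l_con, l_rec)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```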

According to one embodiment of the invention, the synthesis process is as follows: after the model is trained, the synthesis process is basically the same as that of other sequence-to-sequence speech synthesis methods. The difference is that the model uses no attention-based alignment during decoding; phoneme duration prediction is instead realized through the transition probability. While generating the Mel spectrum frame by frame, once the transition probability in the frame-level prediction module exceeds the threshold of 0.5, the decoder resets the frame-level LSTM state in the phoneme-level representation module and then starts decoding the next phoneme, as sketched below.
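A minimal sketch of this synthesis-time decoding loop; the methods predict_unit, decode_frame, and reset_frame_lstm are hypothetical wrappers around the modules sketched above, introduced only to keep the example short.

```python
# Minimal sketch of the synthesis-time decoding loop
# (predict_unit / decode_frame / reset_frame_lstm are hypothetical wrapper methods).
import torch

def synthesize(model, symbol_ids, max_frames=2000, threshold=0.5):
    h = model.encoder(symbol_ids.unsqueeze(0)).squeeze(0)        # context unit representations h_1 .. h_N
    n, mels = 0, []
    model.reset_frame_lstm()                                     # start of the first phoneme
    while n < h.size(0) and len(mels) < max_frames:
        u_hat = model.predict_unit(n, h[n])                      # phoneme-level prediction
        mel, trans_prob = model.decode_frame(u_hat)              # frame-level prediction
        mels.append(mel)
        if trans_prob > threshold:                               # current frame is the head of the next phoneme
            model.reset_frame_lstm()                             # reset the frame-level LSTM state
            n += 1                                               # move on to the next phoneme
    return torch.stack(mels)                                     # predicted Mel spectrogram
```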

To verify the effectiveness of the proposed method of the present invention, the following experiment was designed.

(1) Experimental setup

The Chinese news female voice corpus used herein contains 12319 utterances, about 17.51 hours in total. The 12319 utterances are divided into three data sets for training, validation, and in-domain testing, containing 11608, 611, and 100 utterances, respectively. The training set is used to train the proposed model, the validation set is used to tune the hyper-parameters, and the in-domain test set is used to evaluate the naturalness of the model. We also evaluate the robustness of the model on an out-of-domain test set of 337 sentences, including Chinese classical poems, novels, navigation text, and digit strings. Speech naturalness and robustness are used as the final evaluation indexes. An 80-dimensional Mel spectrum is used as the acoustic feature when training the model, with a frame length of 64 ms and a frame shift of 15 ms. We take the phoneme sequence as the model input instead of directly using the Chinese character sequence. The linguistic features input to the model include phonemes, tones, and prosodic phrase boundaries. The model is implemented with PyTorch and optimized with the Adam optimizer, trained for 200 epochs on the training set with a batch size of 80. The initial learning rate is 10^−3, and the learning rate then decays exponentially by a factor of 0.9 every 10 epochs.

(2) Results of the experiment

The robustness results of the different models are shown in Tables 1 and 2. The reference models are sequence-to-sequence speech synthesis methods based on two attention mechanisms: Tacotron2_org, based on the additive attention mechanism, and Tacotron2_SMA, based on the stepwise monotonic attention mechanism. For in-domain sentences, the number of stop-token prediction errors and the numbers of inappropriate pitch, spectrum, and prosody in the synthesized speech are the main considerations. For out-of-domain sentences, the number of stop-token prediction errors and the numbers of repetitions, missed readings, and attention collapses are the main considerations.

Table 1: Number of synthesis errors of different models on in-domain test sentences

Model  Stop-token prediction errors  Incorrect tone  Spectral noise  Inappropriate prosody
Tacotron2_org 3 20 82 52
Tacotron2_SMA 0 29 55 27
UniNet_SPSS 0 15 43 19

Table 2: Number of synthesis errors of different models on out-of-domain test sentences

Model  Stop-token prediction errors  Repetition  Missed reading  Attention collapse
Tacotron2_org 1 2 4 4
Tacotron2_SMA 0 2 1 0
UniNet_SPSS 0 0 0 0

The subjective listening test results of the different models are shown in Table 3, where the reference models are the two attention-based sequence-to-sequence speech synthesis methods Tacotron2_org and Tacotron2_SMA. The results shown in Tables 1, 2, and 3 indicate that, compared with the two Tacotron2 systems of similar naturalness, the proposed model achieves better robustness within the sequence-to-sequence speech synthesis framework.

Table 3: Naturalness preference listening test results of the different models

Tacotron2_org Tacotron2_SMA UniNet N/P p
39.55 - 39.09 21.36 0.95
- 39.09 37.88 23.03 0.80

The above is a detailed description of embodiments of the present invention; the specific examples used herein are merely intended to facilitate understanding of the method and system of the invention. For a person skilled in the art, variations may be made to the specific embodiments and the scope of application according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
