Method and device for aligning synthesized voice and text and computer storage medium

Document No.: 909795    Publication date: 2021-02-26

Reading note: This technology, "一种合成语音与文本对齐的方法、装置及计算机储存介质" (Method and device for aligning synthesized voice and text and computer storage medium), was designed and created by Wang Kun (王昆), Zhu Hai (朱海), Zhou Linmin (周琳岷), and Liu Shujun (刘书君) on 2020-11-20. Abstract: The invention discloses a method, an apparatus, and a computer storage medium for aligning synthesized speech with text, wherein the method comprises: adding a position label after each character of the text to be synthesized; performing text preprocessing and phoneme conversion on the text with the position labels to obtain a phoneme sequence; inputting the phoneme sequence into a speech synthesis model, and predicting duration information and acoustic features of the phonemes; converting the acoustic features into synthesized speech through a vocoder; and accumulating the duration information of the phonemes preceding each position label to obtain the time information of each position label in the synthesized speech. By adding position labels to the text to be synthesized, keeping their relative positions throughout text processing, and using the intermediate outputs of the speech synthesis model, the invention achieves character-level alignment between the synthesized audio and the text to be synthesized at extremely low cost.

1. A method for aligning synthesized speech with text, comprising:

adding a position label after each character of the text to be synthesized;

performing text preprocessing and phoneme conversion on the text with the position labels to obtain a phoneme sequence;

inputting the phoneme sequence into a speech synthesis model, and predicting duration information and acoustic features of the phonemes;

converting the acoustic features into synthesized speech through a vocoder; and

accumulating the duration information of the phonemes preceding each position label to obtain the time information of each position label in the synthesized speech.

2. The method of aligning synthesized speech with text according to claim 1, wherein the text preprocessing of the position-labeled text comprises: removing illegal characters from the text with the position labels, regularizing the text, and predicting prosody, while keeping the relative positions of the position labels in the sequence throughout the text preprocessing.

3. The method of aligning synthesized speech with text according to claim 2, wherein the phoneme conversion of the position-labeled text comprises: converting Chinese characters into pinyin and splitting the pinyin into initial and final phonemes, while keeping the relative positions of the position labels in the sequence throughout the phoneme conversion.

4. The method of aligning synthesized speech with text according to claim 1, wherein inputting the phoneme sequence into a speech synthesis model and predicting duration information and acoustic features of the phonemes comprises: removing the position labels from the phoneme sequence, encoding the phoneme sequence into a numeric sequence, and inputting the numeric sequence into the speech synthesis model; the speech synthesis model performs a forward pass and outputs a duration information sequence and an acoustic feature sequence.

5. The method of aligning synthesized speech with text according to claim 4, wherein before inputting the phoneme sequence into the speech synthesis model and predicting the duration information and acoustic features of the phonemes, the method further comprises: constructing the speech synthesis model.

6. The method of claim 5, wherein constructing the speech synthesis model comprises training data acquisition, input and output feature extraction, model design, and model training;

the training data comprise audio and annotation information, the annotation information comprising phonemes, prosodic marks, and the duration of each phoneme;

the input features are the digitized phoneme sequences, and the output features comprise the duration of each phoneme and Mel-spectrogram features extracted from the audio;

the model adopts an encoder-decoder structure: the input features are embedded, position encoding information is added, the result is fed into the encoder to predict the duration of each phoneme, and the acoustic features are output through the decoder; the loss function of the model is set as the weighted sum of the distance between the predicted and ground-truth phoneme durations and the distance between the predicted and ground-truth Mel spectrograms;

the model is trained by a gradient descent algorithm, minimizing the loss function until it converges.

7. The method of aligning synthesized speech with text according to claim 1, wherein converting the acoustic features into synthesized speech through a vocoder comprises: using a vocoder based on pure digital signal processing, or a vocoder based on an artificial neural network.

8. The method of claim 1, wherein accumulating the duration information of the phonemes preceding each position label to obtain the time information of each position label in the synthesized speech comprises: referring to the phoneme sequence in which the position labels have not been removed, determining the phonemes preceding each position label and accumulating their duration information; the start and end points of each character in the synthesized audio are the time information of the position labels before and after it, respectively.

9. An apparatus for aligning synthesized speech with text, comprising:

a label adding module, used for adding a position label after each character of the text to be synthesized;

a phoneme conversion module, used for performing text preprocessing and phoneme conversion on the text with the position labels to obtain a phoneme sequence;

a prediction module, used for inputting the phoneme sequence into a speech synthesis model and predicting duration information and acoustic features of the phonemes;

a speech synthesis module, used for converting the acoustic features into synthesized speech through a vocoder; and

a label alignment module, used for accumulating the duration information of the phonemes preceding each position label to obtain the time information of each position label in the synthesized speech.

10. A computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.

Technical Field

The present invention relates to the field of speech synthesis technologies, and in particular, to a method and an apparatus for aligning synthesized speech with text, and a computer storage medium.

Background

Voice interaction is a natural human-computer interaction technology involving speech recognition (ASR), natural language understanding (NLU), speech synthesis (TTS), and other technologies. Speech synthesis directly shapes the user's auditory experience and "first impression", and has long been a focus of both academic research and industrial application. Over the course of its development, speech synthesis has been dominated successively by concatenative methods, parametric methods, and end-to-end methods. End-to-end methods achieve higher synthesized speech quality, but the synthesis process is difficult to control precisely.

The alignment of speech and text refers to marking, for each word's pronunciation, its start and end time in the audio. This information is generally obtained by manual labeling, or semi-automatically by pre-labeling with a forced alignment algorithm and then adjusting manually, and is usually used for training speech synthesis models. Speech-text alignment has many applications, such as synchronizing music playback with lyrics and point-and-read systems.

In a speech synthesis system, a long sentence is usually split into short sentences for synthesis, and the final audio is obtained by concatenation. This yields sentence-level alignment information, i.e., which sentence of text corresponds to which segment of speech, but no finer-grained word-level or character-level alignment. To obtain finer-grained alignment, forced alignment techniques can be applied, but their time cost is high and alignment sometimes fails. Moreover, in speech synthesis the text to be synthesized usually needs to be normalized, replacing cases where writing and pronunciation differ, such as special symbols, special characters, and numbers. The original text and the normalized text therefore lack a simple one-to-one correspondence, and neither do the text sequence and the phoneme sequence before and after phoneme conversion, so word-level speech-text alignment information is difficult to obtain.

Disclosure of Invention

The present invention provides a method, an apparatus, and a computer storage medium for aligning synthesized speech with text, so as to solve the above problems in the prior art.

The technical solution adopted by the invention is as follows: a method of aligning synthesized speech with text is provided, comprising:

adding a position label after each character of the text to be synthesized;

performing text preprocessing and phoneme conversion on the text with the position labels to obtain a phoneme sequence;

inputting the phoneme sequence into a speech synthesis model, and predicting duration information and acoustic features of the phonemes;

converting the acoustic features into synthesized speech through a vocoder; and

accumulating the duration information of the phonemes preceding each position label to obtain the time information of each position label in the synthesized speech.

Preferably, the text preprocessing of the position-labeled text comprises: removing illegal characters from the text with the position labels, regularizing the text, and predicting prosody, while keeping the relative positions of the position labels in the sequence throughout the text preprocessing.

Preferably, the phoneme conversion of the position-labeled text comprises: converting Chinese characters into pinyin and splitting the pinyin into initial and final phonemes, while keeping the relative positions of the position labels in the sequence throughout the phoneme conversion.

Preferably, inputting the phoneme sequence into the speech synthesis model and predicting duration information and acoustic features of the phonemes comprises: removing the position labels from the phoneme sequence, encoding the phoneme sequence into a numeric sequence, and inputting the numeric sequence into the speech synthesis model; the speech synthesis model performs a forward pass and outputs a duration information sequence and an acoustic feature sequence.

Preferably, before inputting the phoneme sequence into the speech synthesis model and predicting the duration information and acoustic features of the phonemes, the method further comprises: constructing the speech synthesis model.

Preferably, constructing the speech synthesis model comprises training data acquisition, input and output feature extraction, model design, and model training;

the training data comprise audio and annotation information, the annotation information comprising phonemes, prosodic marks, and the duration of each phoneme;

the input features are the digitized phoneme sequences, and the output features comprise the duration of each phoneme and Mel-spectrogram features extracted from the audio;

the model adopts an encoder-decoder structure: the input features are embedded, position encoding information is added, the result is fed into the encoder to predict the duration of each phoneme, and the acoustic features are output through the decoder; the loss function of the model is set as the weighted sum of the distance between the predicted and ground-truth phoneme durations and the distance between the predicted and ground-truth Mel spectrograms;

the model is trained by a gradient descent algorithm, minimizing the loss function until it converges.

Preferably, converting the acoustic features into synthesized speech through a vocoder comprises: using a vocoder based on pure digital signal processing, or a vocoder based on an artificial neural network.

Preferably, accumulating the duration information of the phonemes preceding each position label to obtain the time information of each position label in the synthesized speech comprises: referring to the phoneme sequence in which the position labels have not been removed, determining the phonemes preceding each position label and accumulating their duration information; the start and end points of each character in the synthesized audio are the time information of the position labels before and after it, respectively.

The invention also provides an apparatus for aligning synthesized speech with text, comprising:

a label adding module, used for adding a position label after each character of the text to be synthesized;

a phoneme conversion module, used for performing text preprocessing and phoneme conversion on the text with the position labels to obtain a phoneme sequence;

a prediction module, used for inputting the phoneme sequence into a speech synthesis model and predicting duration information and acoustic features of the phonemes;

a speech synthesis module, used for converting the acoustic features into synthesized speech through a vocoder; and

a label alignment module, used for accumulating the duration information of the phonemes preceding each position label to obtain the time information of each position label in the synthesized speech.

The present invention also provides a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method for aligning synthesized speech with text described above.

The invention has the following beneficial effects:

(1) The invention adds labels to the text and keeps their relative positions through text preprocessing and phoneme conversion, storing both the phoneme sequence information required for synthesis and the position labels required for alignment in the same sequence; this does not interfere with the speech synthesis pipeline, while providing the position information needed to align speech and text.

(2) By adding position labels to the text to be synthesized, keeping their relative positions throughout text processing, and using the intermediate outputs of the speech synthesis model, the invention achieves character-level alignment between the synthesized audio and the text to be synthesized at extremely low cost.

Drawings

FIG. 1 is a flowchart illustrating a method for aligning synthesized speech with text according to the present invention.

FIG. 2 is a block diagram of an apparatus for aligning synthesized speech and text according to the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail below with reference to the accompanying drawings, but embodiments of the present invention are not limited thereto.

It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.

Example 1:

Referring to FIG. 1, a method of aligning synthesized speech with text comprises:

S1, adding a position tag after each character of the text to be synthesized.

Position tags are added only after characters that are pronounced, including digits and Chinese characters; unvoiced characters such as punctuation marks do not receive tags.

The position tag may be denoted by a rarely used special character, or by a special bracketed marker.

In one embodiment, the tag takes the form [pos:idx], where idx is a sequence number starting from 0. For example, the text "语音合成。" ("speech synthesis.") becomes "语[pos:0]音[pos:1]合[pos:2]成[pos:3]。" after adding position tags.
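The tagging step above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the assumption that "pronounceable" means CJK characters or ASCII digits is ours.

```python
import re

def add_position_tags(text):
    """Insert a [pos:idx] tag after every pronounceable character.

    Pronounceable characters are assumed here to be Chinese characters
    and digits; punctuation and other unvoiced characters get no tag.
    """
    out, idx = [], 0
    for ch in text:
        out.append(ch)
        if re.match(r'[\u4e00-\u9fff0-9]', ch):
            out.append(f'[pos:{idx}]')
            idx += 1
    return ''.join(out)

print(add_position_tags('语音合成。'))
# 语[pos:0]音[pos:1]合[pos:2]成[pos:3]。
```

Note that the fullwidth period 。 falls outside both character classes, so it is copied through untagged, exactly as the example in the text requires.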

S2, performing text preprocessing and phoneme conversion on the text with the position tags to obtain a phoneme sequence.

The text preprocessing of the position-tagged text comprises: removing illegal characters from the text with the position tags, regularizing the text, and predicting prosody.

In a specific embodiment, the text characters are converted to a uniform Unicode form and punctuation marks are normalized to a single style; only Chinese characters, digits, and a whitelist of punctuation marks are retained, and all other characters are removed. Text regularization uses rule matching to replace numbers with Chinese characters; for example, "一[pos:0]共[pos:1]3[pos:2]5[pos:3]元[pos:4]。" is replaced by "一[pos:0]共[pos:1]三十[pos:2]五[pos:3]元[pos:4]。", with the relative positions of the position tags maintained. Prosody prediction adds prosodic pause marks to the text sequence, for example denoting prosodic words by #1 and #2, prosodic phrases by #3, and intonation phrases by #4, and removes punctuation marks. Thus "一[pos:0]共[pos:1]三十[pos:2]五[pos:3]元[pos:4]。" may be transformed into "一[pos:0]共[pos:1]#2三十[pos:2]五[pos:3]元[pos:4]#4". The prosody model can be trained offline using a Seq2Seq model.
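The tag-preserving number rewriting can be sketched as below. This is a hypothetical, deliberately simplified converter (two- to four-digit numbers without zeros only; real text regularization needs full number-reading rules): each digit's Chinese reading is emitted immediately before that digit's own tag, so the relative tag order survives the rewrite.

```python
DIGITS = '零一二三四五六七八九'
PLACES = ['', '十', '百', '千']  # ones, tens, hundreds, thousands

def number_to_cn(tagged_digits):
    """Convert a tagged digit run to its Chinese reading, keeping tags.

    tagged_digits is a list of (digit, tag) pairs.  Each digit maps to
    its digit word plus place word, followed by its original tag, so
    '3[pos:2]5[pos:3]' becomes '三十[pos:2]五[pos:3]'.
    Simplified sketch: no zero handling, numbers below 10000 only.
    """
    n = len(tagged_digits)
    out = []
    for i, (d, tag) in enumerate(tagged_digits):
        out.append(DIGITS[int(d)] + PLACES[n - 1 - i] + tag)
    return ''.join(out)

print(number_to_cn([('3', '[pos:2]'), ('5', '[pos:3]')]))
# 三十[pos:2]五[pos:3]
```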

The phoneme conversion of the position-tagged text comprises: converting Chinese characters into pinyin and splitting the pinyin into initial and final phonemes.

In a specific embodiment, the pypinyin tool can be used to convert Chinese characters into pinyin; the pinyin is split into initial and final phonemes based on a pronunciation dictionary, finals with different tones are treated as different phonemes, and zero-initial pinyin is marked with #5. For example, after phoneme conversion of "一[pos:0]共[pos:1]#2三十[pos:2]五[pos:3]元[pos:4]#4", the sequence "#5 i1 [pos:0] g ong4 [pos:1] #2 s an1 [pos:2] sh iii2 [pos:3] #5 u3 [pos:4] #5 van2 [pos:5] #4" is obtained, where the phonemes, prosodic marks, and position tags are separated by spaces.
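The initial/final split can be sketched without a dictionary as follows. This is an assumption-laden simplification: the zero-initial re-spellings (yi→i, yu→v, wu→u) follow the example above, while dictionary-specific final notations such as 'iii' for shi are not reproduced.

```python
# Standard pinyin initials, excluding the orthographic y/w.
INITIALS = ['zh', 'ch', 'sh', 'b', 'p', 'm', 'f', 'd', 't', 'n', 'l',
            'g', 'k', 'h', 'j', 'q', 'x', 'r', 'z', 'c', 's']

def split_syllable(syl):
    """Split a toned pinyin syllable, e.g. 'gong4', into (initial, final).

    Zero-initial syllables are marked '#5', with a simplified
    normalization of the y/w spellings (yi -> i, yu -> v, wu -> u).
    """
    for ini in sorted(INITIALS, key=len, reverse=True):  # try 'zh' before 'z'
        if syl.startswith(ini):
            return ini, syl[len(ini):]
    # zero-initial: undo the orthographic y/w re-spelling
    for pre, rep in (('yu', 'v'), ('yi', 'i'), ('wu', 'u'), ('y', 'i'), ('w', 'u')):
        if syl.startswith(pre):
            return '#5', rep + syl[len(pre):]
    return '#5', syl

print(split_syllable('gong4'))  # ('g', 'ong4')
print(split_syllable('yi1'))    # ('#5', 'i1')
print(split_syllable('yuan2'))  # ('#5', 'van2')
```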

Throughout both the text preprocessing and phoneme conversion steps, the relative positions of the position tags in the sequence are maintained.

S3, inputting the phoneme sequence into a speech synthesis model, and predicting duration information and acoustic features of the phonemes.

Inputting the phoneme sequence into the speech synthesis model and predicting the duration information and acoustic features of the phonemes comprises: removing the position tags from the phoneme sequence, encoding the phoneme sequence into a numeric sequence, and inputting the numeric sequence into the speech synthesis model; the speech synthesis model performs a forward pass and outputs a duration information sequence and an acoustic feature sequence.

In a specific embodiment, the input sequence is split on spaces and the position tags are removed; for example, "#5 i1 [pos:0] g ong4 [pos:1] #2 s an1 [pos:2] sh iii2 [pos:3] #5 u3 [pos:4] #5 van2 [pos:5] #4" is converted to ['#5', 'i1', 'g', 'ong4', '#2', 's', 'an1', 'sh', 'iii2', '#5', 'u3', '#5', 'van2', '#4']. All phonemes and prosodic symbols are enumerated so that each corresponds to a number, and the phoneme sequence is digitized accordingly. The numeric sequence is input into the speech synthesis model, which embeds the input sequence, adds position encoding, feeds the result into the encoder to predict the duration of each phoneme, and outputs an acoustic feature sequence, generally a Mel spectrogram, through the decoder.
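The tokenize/strip/digitize step can be sketched as follows; the symbol table here is a hypothetical stand-in for the full phoneme and prosody inventory an actual system would enumerate.

```python
import re

TAG = re.compile(r'\[pos:\d+\]')

# Hypothetical symbol table: every phoneme/prosody symbol gets an id.
SYMBOLS = ['#1', '#2', '#3', '#4', '#5',
           'i1', 'g', 'ong4', 's', 'an1', 'sh', 'iii2', 'u3', 'van2']
SYMBOL_TO_ID = {s: i for i, s in enumerate(SYMBOLS)}

def encode(tagged_seq):
    """Strip position tags, then map each remaining symbol to its id."""
    tokens = [t for t in tagged_seq.split() if not TAG.fullmatch(t)]
    return tokens, [SYMBOL_TO_ID[t] for t in tokens]

seq = ('#5 i1 [pos:0] g ong4 [pos:1] #2 s an1 [pos:2] '
       'sh iii2 [pos:3] #5 u3 [pos:4] #5 van2 [pos:5] #4')
tokens, ids = encode(seq)
print(tokens)
```

The `ids` list is what would be fed to the synthesis model; the tagged `seq` is kept aside for the alignment step in S5.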

Before inputting the phoneme sequence into the speech synthesis model and predicting the duration information and acoustic features of the phonemes, the method further comprises constructing the speech synthesis model. Constructing the speech synthesis model comprises training data acquisition, input and output feature extraction, model design, and model training.

The training data comprise audio and annotation information, the annotation information comprising phonemes, prosodic marks, and the duration of each phoneme. The input features are the digitized phoneme sequences, and the output features comprise the duration of each phoneme and Mel-spectrogram features extracted from the audio. The model adopts an encoder-decoder structure: the input features are embedded, position encoding information is added, the result is fed into the encoder to predict the duration of each phoneme, and the acoustic features are output through the decoder. The loss function of the model is set as the weighted sum of the L2 distance between the predicted and ground-truth phoneme durations and the L1 distance between the predicted and ground-truth Mel spectrograms. The model is trained by a gradient descent algorithm, minimizing the loss function until it converges.
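The loss just described can be written out as follows, where $\hat{d}$ and $d$ are the predicted and ground-truth duration sequences, $\hat{M}$ and $M$ the predicted and ground-truth Mel spectrograms, and the weights $\lambda_{1}, \lambda_{2}$ are assumed hyperparameters not specified in the text:

```latex
\mathcal{L} \;=\; \lambda_{1}\,\bigl\lVert \hat{d} - d \bigr\rVert_{2}^{2}
\;+\; \lambda_{2}\,\bigl\lVert \hat{M} - M \bigr\rVert_{1}
```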

In specific embodiments, speech synthesis models that may be used include, but are not limited to, Tacotron and FastSpeech.

S4, converting the acoustic features into synthesized speech through a vocoder.

Converting the acoustic features into synthesized speech through a vocoder may use a vocoder based on pure digital signal processing, including but not limited to Griffin-Lim.

Alternatively, the vocoder may be an artificial neural network model: it upsamples the input acoustic features by a fixed factor and outputs the synthesized speech through a forward pass of the network.

In specific embodiments, vocoders that may be used include, but are not limited to, WaveRNN and MelGAN.

S5, accumulating the duration information of the phonemes preceding each position tag to obtain the time information of each position tag in the synthesized speech.

Accumulating the duration information of the phonemes preceding each position tag to obtain the time information of each position tag in the synthesized speech comprises: referring to the phoneme sequence in which the position tags have not been removed, determining the phonemes preceding each position tag and accumulating their duration information; the start and end points of each character in the synthesized audio are the time information of the position tags before and after it, respectively.

In a specific embodiment, the predicted duration information for the input phoneme sequence ['#5', 'i1', 'g', 'ong4', '#2', 's', 'an1', 'sh', 'iii2', '#5', 'u3', '#5', 'van2', '#4'] may be [0, 15, 6, 17, 0, 9, 11, 8, 6, 0, 19, 0, 28, 30]. Referring to the phoneme sequence with the position tags retained, "#5 i1 [pos:0] g ong4 [pos:1] #2 s an1 [pos:2] sh iii2 [pos:3] #5 u3 [pos:4] #5 van2 [pos:5] #4", the phonemes preceding [pos:0] have durations 0 and 15, so its accumulated sum is 15, in units of frames; the remaining tags are handled similarly. The corresponding time is the number of frames multiplied by the vocoder upsampling factor and divided by the audio sampling rate.
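The accumulation above can be sketched as follows; the hop size (vocoder upsampling factor) and sample rate defaults are assumed values for illustration, not figures from the text.

```python
import re

TAG = re.compile(r'\[pos:(\d+)\]')

def tag_times(tagged_seq, durations, hop=256, sr=22050):
    """Accumulate phoneme durations (in frames) to time each position tag.

    Walks the tagged sequence: phoneme/prosody tokens consume the next
    duration; a tag records the frames accumulated so far, converted to
    seconds as frames * hop / sr.
    """
    times, frames, i = {}, 0, 0
    for token in tagged_seq.split():
        m = TAG.fullmatch(token)
        if m:
            times[int(m.group(1))] = frames * hop / sr
        else:
            frames += durations[i]
            i += 1
    return times

seq = ('#5 i1 [pos:0] g ong4 [pos:1] #2 s an1 [pos:2] '
       'sh iii2 [pos:3] #5 u3 [pos:4] #5 van2 [pos:5] #4')
durs = [0, 15, 6, 17, 0, 9, 11, 8, 6, 0, 19, 0, 28, 30]
print(tag_times(seq, durs))
```

With hop=1 and sr=1 the function returns raw frame counts, matching the worked example: [pos:0] lands at 15 frames, [pos:1] at 38, and so on.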

According to the method provided by the invention, position tags are added to the text to be synthesized, their relative positions are maintained throughout text processing, and the intermediate outputs of the speech synthesis model are used to achieve character-level alignment between the synthesized audio and the text to be synthesized at extremely low cost.

Example 2:

Referring to FIG. 2, an apparatus for aligning synthesized speech with text comprises:

a tag adding module 10, used for adding a position tag after each character of the text to be synthesized;

a phoneme conversion module 20, used for performing text preprocessing and phoneme conversion on the position-tagged text to obtain a phoneme sequence;

a prediction module 30, used for inputting the phoneme sequence into the speech synthesis model and predicting duration information and acoustic features of the phonemes;

a speech synthesis module 40, used for converting the acoustic features into synthesized speech through the vocoder; and

a tag alignment module 50, used for accumulating the duration information of the phonemes preceding each position tag to obtain the time information of each position tag in the synthesized speech.

It should be noted that the units in this embodiment are logical divisions; in a specific implementation, one unit may be split into several units, and several units may be combined into one.

According to the apparatus for aligning synthesized speech with text provided in the second embodiment of the invention, position tags are added to the text to be synthesized, their relative positions are maintained throughout text processing, and the intermediate outputs of the speech synthesis model are used to achieve character-level alignment between the synthesized speech and the text to be synthesized at extremely low cost.

Example 3:

Embodiments of the present application provide a storage medium on which a computer program is stored, which when executed by a processor implements the method for aligning synthesized speech with text in embodiment 1.

The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
