Emotional voice synthesis method, device, equipment and storage medium


Reading note: this technique, "情感语音合成方法、装置、设备及存储介质" (Emotion speech synthesis method, device, equipment and storage medium), was designed and created by 孙奥兰 and 王健宗 on 2021-06-30. Its main content is as follows: the application belongs to the technical field of speech synthesis and provides an emotion speech synthesis method, device, equipment and storage medium, wherein the method includes: acquiring an emotion speech synthesis segment and setting synchronization marks for the segment; selecting a time window of preset duration centered on each synchronization mark and windowing the segment to obtain multiple speech signal sections; adjusting the synchronization marks according to a preset emotion speech synthesis rule to obtain target synchronization marks; and splicing the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech. The application synthesizes speech from emotion speech synthesis segments through pitch-synchronous analysis, pitch-synchronous modification, pitch-synchronous synthesis and the like, improving the synthesis quality; at the same time, no text emotion classification labels need to be acquired, which reduces the synthesis cost.

1. An emotion speech synthesis method, characterized by comprising the following steps:

acquiring an emotion speech synthesis segment and setting synchronization marks for the emotion speech synthesis segment; wherein the synchronization marks are position points that stay synchronous with the pitch of the voiced segments in the emotion speech synthesis segment and reflect the starting position of each pitch period of those voiced segments;

selecting a time window of preset duration centered on each synchronization mark of the emotion speech synthesis segment and windowing the emotion speech synthesis segment to obtain multiple speech signal sections;

adjusting the synchronization marks according to a preset emotion speech synthesis rule to obtain target synchronization marks;

and splicing the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech.

2. The method of claim 1, wherein the step of adjusting the synchronization marks according to the preset emotion speech synthesis rule comprises:

acquiring at least one of a pitch frequency variation rule, an energy variation rule, a vowel variation rule and a silence delay ratio of the emotion speech synthesis segment;

and adjusting the synchronization marks according to the at least one of the pitch frequency variation rule, the energy variation rule, the vowel variation rule and the silence delay ratio of the emotion speech synthesis segment.

3. The method of claim 1, wherein the step of adjusting the synchronization marks according to the preset emotion speech synthesis rule comprises:

determining a tone waveform of the emotion speech synthesis segment;

determining synchronization marks of the tone waveform; wherein the synchronization marks comprise a start position and an end position of each pitch period of the emotion speech synthesis segment;

determining a target position from the tone waveform according to a reference tone curve; wherein the reference tone curve is a tone waveform of the emotion speech synthesis segment determined from prosodic features of human speech;

and adjusting the synchronization marks to the target position.

4. The method of claim 1, wherein the step of adjusting the synchronization marks according to the preset emotion speech synthesis rule to obtain target synchronization marks comprises:

adding or removing synchronization marks in the emotion speech synthesis segment according to a preset synchronization mark interval;

and taking the added or removed synchronization marks as the target synchronization marks.

5. The method of claim 1, wherein before the step of centering on the synchronization marks of the emotion speech synthesis segment, the method further comprises:

obtaining a pitch period of an unvoiced segment in the emotion speech synthesis segment;

setting the pitch period of the unvoiced segment to a constant.

6. The method of claim 1, wherein the step of splicing the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech comprises:

obtaining emotion control parameters corresponding to the target synchronization marks; wherein the emotion control parameters are used to control the tone of the speech signal at the target synchronization marks;

adding the emotion control parameters to the target synchronization marks;

and splicing the multiple speech signal sections according to the target synchronization marks with the added emotion control parameters to obtain the synthesized speech.

7. The method of claim 1, wherein the step of selecting a time window of preset duration and windowing the emotion speech synthesis segment comprises:

acquiring the start position and the end position of a preset frame in the emotion speech synthesis segment;

randomly inserting a time window between the start position and the end position of the preset frame;

and compressing the signal amplitude of the region of the emotion speech synthesis segment into which the time window is inserted to a minimum.

8. An emotion speech synthesis apparatus, characterized by comprising:

an obtaining module, configured to obtain an emotion speech synthesis segment and set synchronization marks for the emotion speech synthesis segment; wherein the synchronization marks are position points that stay synchronous with the pitch of the voiced segments in the emotion speech synthesis segment and reflect the starting position of each pitch period of those voiced segments;

a selection module, configured to select a time window of preset duration centered on each synchronization mark of the emotion speech synthesis segment and window the emotion speech synthesis segment to obtain multiple speech signal sections;

an adjusting module, configured to adjust the synchronization marks according to a preset emotion speech synthesis rule to obtain target synchronization marks;

and a splicing module, configured to splice the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech.

9. A computer device, characterized by comprising:

a processor;

a memory;

and a computer program, wherein the computer program is stored in the memory and configured to be executed by the processor, the computer program being configured to perform the emotion speech synthesis method according to any one of claims 1 to 7.

10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the emotion speech synthesis method as recited in any one of claims 1 to 7.

Technical Field

The present application relates to the field of speech synthesis technologies, and in particular, to an emotion speech synthesis method, apparatus, device, and storage medium.

Background

Speech is one of the most important tools for human interaction, and speech signal processing has been an important research field for decades. Human speech carries not only character and symbol information but also changes in the speaker's emotion. In modern speech signal processing, analyzing and processing the emotional characteristics of speech signals, and judging and simulating the speaker's joy, anger, sadness and the like, are important research subjects.

In the prior art, speech carrying emotion is generally synthesized by analyzing the emotional color of different text types. This synthesis method requires text emotion classification labels and depends on a text emotion classification model, and such labels are costly to acquire.

Disclosure of Invention

The main purpose of the application is to provide an emotion speech synthesis method, apparatus, device and storage medium, so as to solve the problem that current speech synthesis methods require text emotion classification labels, whose acquisition cost is high.

To achieve the above object, the application provides an emotion speech synthesis method, which comprises the following steps:

acquiring an emotion speech synthesis segment and setting synchronization marks for the emotion speech synthesis segment; wherein the synchronization marks are position points that stay synchronous with the pitch of the voiced segments in the emotion speech synthesis segment and reflect the starting position of each pitch period of those voiced segments;

selecting a time window of preset duration centered on each synchronization mark of the emotion speech synthesis segment and windowing the emotion speech synthesis segment to obtain multiple speech signal sections;

adjusting the synchronization marks according to a preset emotion speech synthesis rule to obtain target synchronization marks;

and splicing the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech.

Preferably, the step of adjusting the synchronization marks according to a preset emotion speech synthesis rule includes:

acquiring at least one of a pitch frequency variation rule, an energy variation rule, a vowel variation rule and a silence delay ratio of the emotion speech synthesis segment;

and adjusting the synchronization marks according to the at least one of the pitch frequency variation rule, the energy variation rule, the vowel variation rule and the silence delay ratio of the emotion speech synthesis segment.

Preferably, the step of adjusting the synchronization marks according to a preset emotion speech synthesis rule includes:

determining a tone waveform of the emotion speech synthesis segment;

determining synchronization marks of the tone waveform; wherein the synchronization marks comprise a start position and an end position of each pitch period of the emotion speech synthesis segment;

determining a target position from the tone waveform according to a reference tone curve; wherein the reference tone curve is a tone waveform of the emotion speech synthesis segment determined from prosodic features of human speech;

adjusting the synchronization marks to the target position.

Preferably, the step of adjusting the synchronization marks according to a preset emotion speech synthesis rule to obtain target synchronization marks includes:

adding or removing synchronization marks in the emotion speech synthesis segment according to a preset synchronization mark interval;

and taking the added or removed synchronization marks as the target synchronization marks.

Further, before the step of centering on the synchronization marks of the emotion speech synthesis segment, the method further includes:

obtaining a pitch period of an unvoiced segment in the emotion speech synthesis segment;

setting the pitch period of the unvoiced segment to a constant.

Preferably, the step of splicing the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech includes:

obtaining emotion control parameters corresponding to the target synchronization marks; wherein the emotion control parameters are used to control the tone of the speech signal at the target synchronization marks;

adding the emotion control parameters to the target synchronization marks;

and splicing the multiple speech signal sections according to the target synchronization marks with the added emotion control parameters to obtain synthesized speech.

Preferably, the step of selecting a time window of preset duration and windowing the emotion speech synthesis segment includes:

acquiring the start position and the end position of a preset frame in the emotion speech synthesis segment;

randomly inserting a time window between the start position and the end position of the preset frame;

and compressing the signal amplitude of the region of the emotion speech synthesis segment into which the time window is inserted to a minimum.

The present application also provides an emotion speech synthesis apparatus, which includes:

an obtaining module, configured to obtain an emotion speech synthesis segment and set synchronization marks for the emotion speech synthesis segment; wherein the synchronization marks are position points that stay synchronous with the pitch of the voiced segments in the emotion speech synthesis segment and reflect the starting position of each pitch period of those voiced segments;

a selection module, configured to select a time window of preset duration centered on each synchronization mark of the emotion speech synthesis segment and window the emotion speech synthesis segment to obtain multiple speech signal sections;

an adjusting module, configured to adjust the synchronization marks according to a preset emotion speech synthesis rule to obtain target synchronization marks;

and a splicing module, configured to splice the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech.

The present application further provides a computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the steps of any of the above methods when executing the computer program.

The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the methods described above.

According to the emotion speech synthesis method, apparatus, device and storage medium, an emotion speech synthesis segment is first obtained and synchronization marks are set for it; a time window of preset duration centered on each synchronization mark is then used to window the segment, yielding multiple speech signal sections; the synchronization marks are adjusted according to a preset emotion speech synthesis rule to obtain target synchronization marks; and the speech signal sections are spliced according to the target synchronization marks to obtain synthesized speech. Speech is thus synthesized from the emotion speech synthesis segment through pitch-synchronous mark analysis, pitch-synchronous mark adjustment and pitch-synchronous synthesis, which improves the synthesis quality; at the same time, no text emotion classification labels need to be acquired, which reduces the synthesis cost.

Drawings

FIG. 1 is a flowchart illustrating an emotion speech synthesis method according to an embodiment of the present application;

FIG. 2 is a simulation diagram of an emotion speech synthesis method according to an embodiment of the present application;

FIG. 3 is a block diagram illustrating an emotion speech synthesis apparatus according to an embodiment of the present application;

FIG. 4 is a block diagram illustrating a structure of a computer device according to an embodiment of the present application.

The implementation, functional features and advantages of the present application will be further described with reference to the accompanying drawings.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.

Referring to fig. 1, the present application provides an emotion speech synthesis method, wherein in one embodiment, the emotion speech synthesis method includes the following steps:

s11, obtaining emotion voice synthesis fragments, and setting synchronous marks for the emotion voice synthesis fragments; the synchronous mark is a position point which keeps synchronous with the fundamental tone of the voiced sound segment in the emotional voice synthesis segment and is used for reflecting the initial position of the fundamental tone period of each voiced sound segment;

s12, selecting a time window with preset duration to perform windowing processing on the emotion voice synthesis fragment by using the synchronization mark of the emotion voice synthesis fragment as the center to obtain a plurality of sections of voice signals;

s13, adjusting the synchronous mark according to a preset emotion voice synthesis rule to obtain a target synchronous mark;

and S14, splicing the multiple sections of voice signals according to the target synchronous mark to obtain synthetic voice.

As described in step S11, the emotion speech synthesis segment may be an initial synthesized speech obtained by synthesizing initial speech with a predetermined waveform from a speech synthesis library. For example, after text characters are converted into initial speech, the preset waveform corresponding to the initial speech is retrieved from the speech synthesis library and the initial speech is synthesized with that waveform to obtain the initial synthesized speech; that is, the initial synthesized speech is the output of a conventional speech synthesis method. When the initial speech is synthesized with the determined preset waveform, prosodic feature information may be lost, so the emotion speech synthesis segment is a synthesized speech without prosody optimization and may fall short of natural speech in naturalness and clarity. Alternatively, the emotion speech synthesis segment may be artificial speech acquired from a smart device, such as a mobile phone, computer or tablet terminal, which is not limited here.

A sound is generally composed of a series of vibrations of different frequencies and amplitudes emitted by a sounding body. The vibration with the lowest frequency produces the fundamental tone; the others produce overtones.

The pitch period is the duration of one opening-and-closing cycle of the vocal cords; pitch-period detection records this duration.

Voiced sound is produced with the vocal cords vibrating during speech, while unvoiced sound is produced without vocal cord vibration.

This step sets pitch synchronization marks m_i for the emotion speech synthesis segment. The synchronization marks are a series of position points kept synchronous with the pitch of the voiced segments of the synthesis segment, and they must accurately reflect the starting position of each pitch period.

As described in step S12, a time window of appropriate length (for example a Hanning window, generally twice the pitch period, 2T) is selected to window the synthesis segment, dividing it into multiple speech signal sections and yielding a set of segmented speech signals. When a synchronization mark lies at the start of the emotion speech synthesis segment, the part before the mark is blanked or a default segment is added; when a synchronization mark lies at the end of the segment, the part after the mark is blanked or a default segment is added.

Specifically, the speech signal s[n] is windowed and decomposed into multiple speech signal sections according to the following formula:

s_i[n] = h[n - m_i] · s[n];

where h[n] is the Hanning window and m_i is a synchronization mark.

In this embodiment, since the emotion speech synthesis segment is a time-varying signal, it can be assumed to be stationary over short intervals so that it can be analyzed with conventional methods; the segment therefore needs to be windowed first, which ensures that the adjustment of the speech signal to be synthesized is accurate and effective.
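Purely by way of illustration, the following Python sketch (numpy assumed; all names are hypothetical) implements the windowing formula above: each section is a Hanning window of twice the local pitch period centered on a synchronization mark, with blank padding at the signal boundaries as described.

```python
import numpy as np

def pitch_synchronous_analysis(s, marks, pitch_periods):
    """Cut the signal into windowed sections, one per synchronization mark,
    per s_i[n] = h[n - m_i] * s[n] with a Hanning window of length 2*T_i."""
    segments = []
    for m, T in zip(marks, pitch_periods):
        half = int(T)                        # window spans one period per side
        start, end = m - half, m + half
        chunk = s[max(0, start):min(len(s), end)]
        # Blank-pad when the mark sits at the start or end of the signal.
        chunk = np.pad(chunk, (max(0, -start), max(0, end - len(s))))
        segments.append(chunk * np.hanning(2 * half))
    return segments
```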

As described in step S13, the synchronization marks obtained above are adjusted under the guidance of the emotion speech synthesis rule to generate new pitch synchronization marks, which serve as the target synchronization marks. The emotion speech synthesis rule may change the fundamental frequency of the emotional synthesized speech by increasing or decreasing the interval between the synchronization marks, change the energy of the synthesized speech by changing the amplitude of the segment, change the duration of the synthesized speech by inserting or deleting synchronization marks, and change the silence ratio by inserting silence segments.
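As a sketch of the kinds of mark adjustment just listed (numpy assumed; the factor-based interface is an illustrative simplification, not the application's actual rule set), target marks can be derived from the analysis marks as follows. A rule for "happiness", for instance, would raise the pitch factor toward the sentence tail rather than hold it constant.

```python
import numpy as np

def make_target_marks(marks, pitch_factor=1.0, duration_factor=1.0):
    """Derive target synchronization marks from the analysis marks.

    pitch_factor    > 1 shrinks mark spacing, raising the fundamental frequency;
    duration_factor > 1 inserts extra marks, lengthening the utterance.
    Returns (target_marks, source_index): for each target mark, the index of
    the analysis segment to overlap-add there. Assumes at least two marks.
    """
    marks = np.asarray(marks, dtype=float)
    span = (marks[-1] - marks[0]) * duration_factor   # target time span
    targets, t = [], 0.0
    while t < span:
        targets.append(t)
        src_time = marks[0] + t / duration_factor     # map back to source axis
        i = int(np.argmin(np.abs(marks - src_time)))  # nearest analysis mark
        period = marks[i + 1] - marks[i] if i + 1 < len(marks) else marks[i] - marks[i - 1]
        t += period / pitch_factor                    # compressed/stretched step
    source_index = np.array([int(np.argmin(np.abs(marks - (marks[0] + t / duration_factor))))
                             for t in targets])
    return np.asarray(targets) + marks[0], source_index
```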

As described in step S14, the multiple speech signal sections are generally spliced in one of three ways: time-domain pitch-synchronous overlap-add (TD-PSOLA), linear-prediction pitch-synchronous overlap-add (LPC-PSOLA) and frequency-domain pitch-synchronous overlap-add (FD-PSOLA). The application may use time-domain pitch-synchronous overlap-add to perform emotion speech synthesis on the multiple speech signal sections, yielding synthesized speech that carries the specified emotion. The signal may specifically be resynthesized with the following formula:

s'[n] = Σ_i s_i[n - (m'_i - m_i)];

where m'_i is a target synchronization mark and m_i is the corresponding original synchronization mark.
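A minimal TD-PSOLA resynthesis sketch in the same vein (hypothetical names, numpy assumed): each windowed section is shifted from its analysis mark to its target mark, summed, and normalized by the summed windows. Together with the pitch_synchronous_analysis and make_target_marks sketches above, this completes a minimal analysis-modify-synthesize loop.

```python
import numpy as np

def overlap_add(segments, target_marks, source_index, length):
    """TD-PSOLA resynthesis: center each windowed section on its target
    mark and sum, normalizing by the accumulated window amplitude."""
    out = np.zeros(length)
    norm = np.zeros(length)                          # window-overlap normalization
    for t_mark, i in zip(target_marks, source_index):
        seg = segments[i]
        start = int(round(t_mark)) - len(seg) // 2   # center section on mark
        lo, hi = max(0, start), min(length, start + len(seg))
        out[lo:hi] += seg[lo - start:hi - start]
        norm[lo:hi] += np.hanning(len(seg))[lo - start:hi - start]
    return out / np.maximum(norm, 1e-8)              # avoid divide-by-zero in gaps
```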

Specifically, as shown in fig. 2, taking the emotion speech synthesis segment "it is raining" as an example, the application studies the pitch variation laws in 200 emotional sentences (happy, angry, surprised and sad). From the pitch variation of the 4 emotions shown in fig. 2, the following variation laws can be found:

and (3) happiness: the duration of the pronunciation with the preference is equivalent to that of the narrative sentence, but the influence is mainly caused by the tail of the sentence, and the front part and the middle part of the sentence are faster than the speed of the narrative sentence with the corresponding content. The amplitude intensity of the sentence is also concentrated in one or two characters at the end of the sentence, and the tone range of the tone of the whole sentence is higher than that of the narrative sentence. Because the speed of the front part and the middle part of the sentence is accelerated, the shape and the camber of the non-key characters and words in the sentence become flat due to the restriction of physiological reasons and grammar, even the key is lost, and the transition between the front tone and the back tone is formed. The exclamation word at the tail of the sentence reads a soft sound in the flat narrative sentence, wherein the tone is strongly aggravated, and the shape is changed into a mountain bag shape which ascends first and then descends.

Anger: the duration of an angry utterance is about half that of a plain narrative one, with high amplitude intensity; it combines acceleration with stress. Verbs, and the adverbs modifying them, show higher-than-average amplitude intensity. The pitch range of the sentence is raised, but the tone contours are not necessarily flattened; sometimes their curvature is even extended. The sentence-final exclamation word also departs from the neutral tone and takes on a tone similar to the third (rising) tone.

Surprise: sentences expressing surprise behave much like happy ones, except that the sentence tail tends to rise. Because the amplitude intensity at the tail increases, the average amplitude intensity of the whole sentence is slightly higher than that of a plain sentence.

Sadness: the duration of a sad sentence is about twice that of a plain one, and its amplitude intensity is much lower. Because the pronunciations of the characters are drawn apart, each character's tone keeps its isolated shape and tone-sandhi effects are weakened. Since almost every word in a sad sentence carries some degree of nasality, a nasalization treatment is applied. The pitch range of a sad sentence is reduced, and the whole sentence tends to flatten.

Examination of the characteristic parameters of emotional sentences and of listeners' subjective impressions shows that changing the local tone contour (curvature) of a sentence, or changing the pitch range of the whole sentence, can convey the corresponding emotional meaning. In sentences expressing different emotions, the tone shape of each basic unit is essentially stable, but it undergoes certain tonal variations.

From the analysis of the four emotions of happiness, anger, surprise and sadness, the application finds that emotion appears in speech in several main respects: variation of the pitch frequency, mainly reflected in shifts of the fundamental frequency under different emotions; variation of energy, mainly reflected in raised energy in high-activation emotional states and in the re-stressing of emotion words tied to particular emotions; variation of vowels, mainly reflected in vowel lengthening and blurring; and insertion of silence frames, which controls the pauses of an utterance and thereby emphasizes or highlights the emotion of the sentence.

In this embodiment, an emotion speech synthesis segment is first obtained and synchronization marks are set for it; a time window of preset duration centered on each synchronization mark is used to window the segment, yielding multiple speech signal sections; the synchronization marks are adjusted according to a preset emotion speech synthesis rule to obtain target synchronization marks; and the speech signal sections are spliced according to the target synchronization marks to obtain synthesized speech. Speech is thus synthesized from the emotion speech synthesis segment through pitch-synchronous mark analysis, pitch-synchronous mark adjustment and pitch-synchronous synthesis, which improves the synthesis quality; at the same time, no text emotion classification labels need to be acquired, which reduces the synthesis cost.

In an embodiment, before the step of centering on the synchronization marks of the emotion speech synthesis segment in step S12, the method may further include:

obtaining a pitch period of an unvoiced segment in the emotion speech synthesis segment;

setting the pitch period of the unvoiced segment to a constant.

In the TD-PSOLA technique, the selection of window length and the cutting and splicing of the short-time speech signal are all driven by the synchronization marks. Voiced speech has a pitch period T, while the unvoiced waveform is close to white noise; therefore, while pitch marks are placed on the voiced signal, the pitch period of unvoiced speech can be set to a constant c to keep the synthesized speech accurate.
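For illustration, mark placement under this convention might look as follows (a sketch with hypothetical names; the constant c of 80 samples, i.e. 5 ms at 16 kHz, is an assumed value):

```python
import numpy as np

def place_marks(signal, voiced, pitch_period, unvoiced_period=80):
    """Place synchronization marks: pitch-synchronously in voiced regions,
    at a constant interval c in unvoiced (noise-like) regions.

    voiced          -- boolean array, one flag per sample
    pitch_period    -- per-sample local pitch-period estimate, in samples
    unvoiced_period -- the constant c (assumed: 80 samples = 5 ms at 16 kHz)
    """
    marks, n = [], 0
    while n < len(signal):
        marks.append(n)
        step = pitch_period[n] if voiced[n] else unvoiced_period
        n += max(1, int(step))          # always advance at least one sample
    return np.array(marks)
```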

In an embodiment, in step S12, the step of selecting a time window of preset duration and windowing the emotion speech synthesis segment may specifically include:

S121, acquiring the start position and the end position of a preset frame in the emotion speech synthesis segment;

S122, randomly inserting a time window between the start position and the end position of the preset frame;

S123, compressing the signal amplitude of the region of the emotion speech synthesis segment into which the time window is inserted to a minimum.

In this embodiment, at least one frame of the emotion speech synthesis segment may be selected as the preset frame and its start and end positions determined; a time window of preset duration is then inserted at a random position between them, and the amplitude of the region covered by the window is adjusted to nearly zero, i.e. the signal amplitude in that region is compressed to a minimum. This brings the tone waveform of that region closer to the reference tone waveform, so that the prosody of the synthesized speech better matches the prosodic features of natural speech in naturalness and clarity.
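A small sketch of this silence insertion (hypothetical interface, numpy assumed); the raised-cosine taper used here is an added detail so the compressed region does not introduce clicks:

```python
import numpy as np

def insert_silence_window(signal, frame_start, frame_end, win_len, rng=None):
    """Randomly position a window inside the preset frame and compress the
    amplitude under it toward zero."""
    rng = rng or np.random.default_rng()
    out = signal.copy()
    hi = max(frame_start + 1, frame_end - win_len)
    start = int(rng.integers(frame_start, hi))       # random insertion point
    region = out[start:start + win_len]
    taper = 1.0 - np.hanning(win_len)[:len(region)]  # dips to ~0 mid-window
    out[start:start + win_len] = region * taper
    return out
```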

In an embodiment, in step S13, the step of adjusting the synchronization marks according to the preset emotion speech synthesis rule may specifically include:

S131, acquiring at least one of a pitch frequency variation rule, an energy variation rule, a vowel variation rule and a silence delay ratio of the emotion speech synthesis segment;

S132, adjusting the synchronization marks according to the at least one of the pitch frequency variation rule, the energy variation rule, the vowel variation rule and the silence delay ratio of the emotion speech synthesis segment.

In this embodiment, the emotion speech synthesis rule may include at least one of a pitch frequency variation rule, an energy variation rule, a vowel variation rule and a silence delay ratio, and the synchronization marks are adjusted according to one or more of these; after adjustment, speech with the corresponding emotion can be obtained. The pitch frequency variation rules, energy variation rules, vowel variation rules and silence delay ratios corresponding to different emotions are analyzed and summarized, and the summarized rules are used for further speech synthesis. The silence delay can be obtained by detecting the speech segments in which the emotion speech synthesis segment is silent and measuring their silent durations.

Specifically, when the emotion speech synthesis rule is a pitch frequency variation rule, the interval between the synchronization marks in the emotion speech synthesis segment can be increased or decreased to adjust the marks and thereby change the pitch frequency of the synthesized speech. The fundamental frequency is the frequency of the lowest-frequency component of a sound: when a sounding body vibrates, the sound can generally be decomposed into a set of pure sine waves; almost all natural sounds consist of sine waves of different frequencies, of which the lowest-frequency one is the fundamental and the higher-frequency ones are harmonics. The pitch frequency is a basic feature reflecting the pitch of the human voice; for example, whether a singer's intonation is correct is generally judged by extracting the pitch frequency of the voice. The pitch frequency of the emotion speech synthesis segment can be detected with a time-domain autocorrelation method, a frequency-domain cepstrum method, a frequency-domain discrete wavelet transform method and the like; the detected pitch frequencies are analyzed to determine their variation law, giving the pitch frequency variation rule of the segment.
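As one example of the listed detection methods, a time-domain autocorrelation pitch detector for a single frame might look like this (a sketch; the voicing threshold and pitch range are assumed values):

```python
import numpy as np

def pitch_autocorr(frame, fs, fmin=60.0, fmax=500.0):
    """Estimate one frame's pitch frequency by time-domain autocorrelation.

    Returns the pitch in Hz, or 0.0 when no clear peak exists (unvoiced).
    fmin/fmax bound the search to a plausible vocal range (assumed values).
    """
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # lags >= 0
    lag_min = int(fs / fmax)
    lag_max = min(int(fs / fmin), len(ac) - 1)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    # A voiced frame shows a peak comparable to the zero-lag energy.
    return fs / lag if ac[lag] > 0.3 * ac[0] else 0.0
```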

When the emotion speech synthesis rule is an energy variation rule, the amplitude of the emotion speech synthesis segment is changed, which changes the time window at each synchronization mark and thus the energy of the synthesized speech. To obtain the energy variation rule of the segment, the segment can be divided into audio frames at equal intervals, the short-time energy of each frame computed, an energy curve generated from these values, and the energy variation rule determined from analysis of the curve.
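A minimal short-time-energy sketch (the frame length and hop are assumed values corresponding to 25 ms and 12.5 ms at 16 kHz):

```python
import numpy as np

def energy_curve(signal, frame_len=400, hop=200):
    """Short-time energy per equally spaced frame, yielding the energy
    curve analyzed to derive the energy variation rule."""
    n_frames = max(0, 1 + (len(signal) - frame_len) // hop)
    return np.array([np.sum(signal[i * hop:i * hop + frame_len] ** 2)
                     for i in range(n_frames)])
```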

When the emotion speech synthesis rule is a vowel variation rule, vowel variation mainly appears as vowel lengthening and blurring; the synchronization marks of the emotion speech synthesis segment can then be adjusted by adding or deleting marks, changing the vowel duration of the synthesized speech.

When the emotion speech synthesis rule is a silence delay ratio, silence segments are inserted at the synchronization marks of the emotion speech synthesis segment to adjust the marks, changing the silence ratio of the synthesized speech and thereby emphasizing or highlighting the emotion of the sentence. To obtain the vowel variation rules of the emotion speech synthesis segment, vocabulary with standard pronunciations and the corresponding vocabulary with variant pronunciations in the same language can be collected from various pinyin character data and through interviews and similar means; the standard pronunciation is that of the official language, the variant pronunciation is a varied pronunciation corresponding to the standard one in the same language, and for pinyin characters the pronunciation and spelling of standard-pronunciation vocabulary differ from those of variant-pronunciation vocabulary. Accordingly, the variant-pronunciation vocabulary of the emotion speech synthesis segment can be obtained first, and, with the help of language experts, the pronunciation variation rules between standard and variant pronunciations determined from the two vocabularies, forming the vowel variation rules.

In an embodiment, in step S13, the step of adjusting the synchronization marks according to the preset emotion speech synthesis rule may specifically include:

A131, determining a tone waveform of the emotion speech synthesis segment;

A132, determining synchronization marks of the tone waveform; wherein the synchronization marks comprise a start position and an end position of each pitch period of the emotion speech synthesis segment;

A133, determining a target position from the tone waveform according to a reference tone curve; wherein the reference tone curve is a tone waveform of the emotion speech synthesis segment determined from prosodic features of human speech;

A134, adjusting the synchronization marks to the target position.

In this embodiment, the reference tone curve may be extracted from a reference speech, where the reference speech may be clean speech recorded by a professional speaker, such as a broadcaster, and its prosodic features are those extracted from the reference speech by professional technicians. The prosodic features may include speech information such as tone, intonation, stress and timbre, and may also include other feature information describing the speech; the type and amount of prosodic feature information are not limited.

When adjusting the synchronization marks, the tone waveform of the emotion speech synthesis segment (the tone waveform to be adjusted) is first obtained; original mark points are then placed on it to determine its synchronization marks, which generally must include the start and end positions of each pitch period of the original segment. A target position is then determined from the tone waveform according to the reference tone curve, the target position being determined by the prosodic features of the reference speech. The synchronization marks are then adjusted to the target position, i.e. marks are inserted there, which adjusts the pitch periods of the emotion speech synthesis segment so that the synthesized speech comes closer to human speech.

In an embodiment, in step S13, the step of adjusting the synchronization marks according to the preset emotion speech synthesis rule to obtain target synchronization marks may specifically include:

B131, adding or removing synchronization marks in the emotion speech synthesis segment according to a preset synchronization mark interval;

B132, taking the added or removed synchronization marks as the target synchronization marks.

In this embodiment, the positions at which synchronization marks are inserted may be determined by different methods, after which the marks are inserted and adjusted. For example, synchronization marks may be added to or removed from the emotion speech synthesis segment so that the pitch period of the tone waveform to be adjusted becomes close to or identical to that of the reference tone waveform, ensuring that the tone waveform of the synthesized speech is substantially consistent with the reference tone waveform and thus closer to natural speech in naturalness and clarity.

In an embodiment, in step S14, the step of splicing the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech may specifically include:

S141, obtaining emotion control parameters corresponding to the target synchronization marks; wherein the emotion control parameters are used to control the tone of the speech signal at the target synchronization marks;

S142, adding the emotion control parameters to the target synchronization marks;

S143, splicing the multiple speech signal sections according to the target synchronization marks with the added emotion control parameters to obtain synthesized speech.

Expressing emotion in speech requires that the acoustic parameters of the speech reflect human emotional characteristics, so emotion control parameters are added on top of the tone method to increase the expressiveness of the synthesis. Specifically, the emotion control parameter corresponding to each target synchronization mark is obtained and added to that mark, and the multiple speech signal sections are then spliced according to the target synchronization marks with the added parameters to obtain the synthesized speech; human tone is thus reflected in the synthesized speech, bringing it closer to natural speech.

Referring to fig. 3, an embodiment of the present application further provides an emotion speech synthesis apparatus, comprising:

an obtaining module 11, configured to obtain an emotion speech synthesis segment and set synchronization marks for the emotion speech synthesis segment; wherein the synchronization marks are position points that stay synchronous with the pitch of the voiced segments in the emotion speech synthesis segment and reflect the starting position of each pitch period of those voiced segments;

a selecting module 12, configured to select a time window of preset duration centered on each synchronization mark of the emotion speech synthesis segment and window the emotion speech synthesis segment to obtain multiple speech signal sections;

an adjusting module 13, configured to adjust the synchronization marks according to a preset emotion speech synthesis rule to obtain target synchronization marks;

and a splicing module 14, configured to splice the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech.

The emotion speech synthesis segment may be an initial synthesized speech obtained by synthesizing initial speech with a predetermined waveform from a speech synthesis library. For example, after text characters are converted into initial speech, the preset waveform corresponding to the initial speech is retrieved from the speech synthesis library and the initial speech is synthesized with that waveform to obtain the initial synthesized speech; that is, the initial synthesized speech is the output of a conventional speech synthesis method. When the initial speech is synthesized with the determined preset waveform, prosodic feature information may be lost, so the emotion speech synthesis segment is a synthesized speech without prosody optimization and may fall short of natural speech in naturalness and clarity. Alternatively, the emotion speech synthesis segment may be artificial speech acquired from a smart device, such as a mobile phone, computer or tablet terminal, which is not limited here.

A sound is generally composed of a series of vibrations of different frequencies and amplitudes emitted by a sounding body. The vibration with the lowest frequency produces the fundamental tone; the others produce overtones.

The pitch period is the duration of one opening-and-closing cycle of the vocal cords; pitch-period detection records this duration.

Voiced sound is produced with the vocal cords vibrating during speech, while unvoiced sound is produced without vocal cord vibration.

In addition, the emotion speech synthesis segment may be provided with pitch synchronization marks m_i, which are a series of position points kept synchronous with the pitch of the voiced segments of the synthesis segment and which must accurately reflect the starting position of each pitch period.

Taking the synchronization marks of the emotion speech synthesis segment as centers, a time window of appropriate length (for example a Hanning window, generally twice the pitch period, 2T) is selected to window the synthesis segment, dividing the emotion speech synthesis segment into multiple speech signal sections and yielding a set of segmented speech signals. When a synchronization mark lies at the start of the emotion speech synthesis segment, the part before the mark is blanked or a default segment is added; when a synchronization mark lies at the end of the segment, the part after the mark is blanked or a default segment is added.

Specifically, the speech signal s[n] is windowed and decomposed into multiple speech signal sections according to the following formula:

s_i[n] = h[n - m_i] · s[n];

where h[n] is the Hanning window and m_i is a synchronization mark.

In this embodiment, since the emotion speech synthesis segment is a time-varying signal, it can be assumed to be stationary over short intervals so that it can be analyzed with conventional methods; the segment therefore needs to be windowed first, which ensures that the adjustment of the speech signal to be synthesized is accurate and effective.

Under the guidance of the emotion speech synthesis rule, the obtained synchronization marks are adjusted to generate new pitch synchronization marks, which serve as the target synchronization marks. The emotion speech synthesis rule may change the fundamental frequency of the emotional synthesized speech by increasing or decreasing the interval between the synchronization marks, change the energy of the synthesized speech by changing the amplitude of the segment, change the duration of the synthesized speech by inserting or deleting synchronization marks, and change the silence ratio by inserting silence segments.

The multiple speech signal sections are generally spliced in one of three ways: time-domain pitch-synchronous overlap-add (TD-PSOLA), linear-prediction pitch-synchronous overlap-add (LPC-PSOLA) and frequency-domain pitch-synchronous overlap-add (FD-PSOLA). The application may use time-domain pitch-synchronous overlap-add to perform emotion speech synthesis on the multiple speech signal sections, yielding synthesized speech that carries the specified emotion. The signal may specifically be resynthesized with the following formula:

s'[n] = Σ_i s_i[n - (m'_i - m_i)];

where m'_i is a target synchronization mark and m_i is the corresponding original synchronization mark.

As described above, it can be understood that each component of the emotion speech synthesis apparatus provided by the application can implement the functions of any of the emotion speech synthesis methods described above, and the specific structure is not described again.

Referring to fig. 4, an embodiment of the present application also provides a computer device, which may be a server whose internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface and a database connected by a system bus. The processor of the computer device provides computation and control capabilities. The memory of the computer device comprises a storage medium and an internal memory: the storage medium stores an operating system, a computer program and a database, and the internal memory provides an environment for running the operating system and the computer program in the storage medium. The database of the computer device stores data such as emotion speech synthesis segments and synthesized speech. The network interface of the computer device communicates with external terminals through a network connection. The computer program, when executed by the processor, implements an emotion speech synthesis method.

The processor executes the emotion speech synthesis method, which comprises the following steps:

acquiring an emotion speech synthesis segment and setting synchronization marks for the emotion speech synthesis segment; wherein the synchronization marks are position points that stay synchronous with the pitch of the voiced segments in the emotion speech synthesis segment and reflect the starting position of each pitch period of those voiced segments;

selecting a time window of preset duration centered on each synchronization mark of the emotion speech synthesis segment and windowing the emotion speech synthesis segment to obtain multiple speech signal sections;

adjusting the synchronization marks according to a preset emotion speech synthesis rule to obtain target synchronization marks;

and splicing the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech.

An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements an emotion speech synthesis method comprising the steps of:

acquiring an emotion speech synthesis segment and setting synchronization marks for the emotion speech synthesis segment; wherein the synchronization marks are position points that stay synchronous with the pitch of the voiced segments in the emotion speech synthesis segment and reflect the starting position of each pitch period of those voiced segments;

selecting a time window of preset duration centered on each synchronization mark of the emotion speech synthesis segment and windowing the emotion speech synthesis segment to obtain multiple speech signal sections;

adjusting the synchronization marks according to a preset emotion speech synthesis rule to obtain target synchronization marks;

and splicing the multiple speech signal sections according to the target synchronization marks to obtain synthesized speech.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and which, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other media provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).

In summary, the most beneficial effects of the application are as follows:

according to the emotion voice synthesis method, device, equipment and storage medium, firstly, emotion voice synthesis fragments are obtained, and synchronous marks are set for the emotion voice synthesis fragments; selecting a time window with preset duration to perform windowing processing on the emotion voice synthesis fragment by taking the synchronous mark of the emotion voice synthesis fragment as a center to obtain a plurality of sections of voice signals; adjusting the synchronous mark according to a preset emotion voice synthesis rule to obtain a target synchronous mark; splicing multiple sections of voice signals according to the target synchronous mark to obtain synthesized voice, and synthesizing the voice by using the emotional voice synthesis segment in modes of fundamental tone synchronous mark analysis, fundamental tone synchronous mark adjustment, fundamental tone synchronous synthesis and the like, so that the synthesis effect is improved; meanwhile, text emotion classification labels do not need to be acquired, and synthesis cost is reduced.

It should be noted that, in this document, the terms "comprises", "comprising" and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article or method. Without further limitation, an element introduced by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, apparatus, article or method that includes that element.

The above description is only a preferred embodiment of the present application and is not intended to limit its scope; all equivalent structural or process transformations made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, are likewise included in the scope of protection of the present application.
