Voice conversion method, system, electronic equipment and readable storage medium


Description: This technology, "Voice conversion method, system, electronic equipment and readable storage medium" (语音转换方法、系统、电子设备及可读存储介质), was designed and created by 陈怿翔, 王俊超, and 康永国 on 2021-08-09. Its main content is as follows: the disclosure discloses a voice conversion method, a voice conversion system, electronic equipment and a readable storage medium, and relates to the technical field of artificial intelligence such as voice and deep learning, in particular to the field of voice conversion. The specific implementation scheme is as follows: the voice conversion method comprises the following steps: acquiring a first voice of a target speaker; acquiring the voice of an original speaker; extracting a first characteristic parameter of the first voice of the target speaker; extracting a second characteristic parameter of the original speaker's voice; processing the first characteristic parameter and the second characteristic parameter to obtain Mel spectrum information; and converting the Mel spectrum information to output a second voice of the target speaker, which has the same timbre as the first voice of the target speaker and the same content as the voice of the original speaker. The voice conversion method and system disclosed herein preserve timbre characteristics of the target speaker such as speech emotion and intonation, and reduce the computational cost.

1. A method of speech conversion, comprising:

acquiring a first voice of a target speaker;

acquiring the voice of an original speaker;

extracting a first characteristic parameter of a first voice of a target speaker;

extracting a second characteristic parameter of the original speaker voice;

processing the first characteristic parameter and the second characteristic parameter to obtain Mel spectrum information;

and converting the Mel spectrum information to output a second voice of the target speaker, which has the same timbre as the first voice of the target speaker and the same content as the voice of the original speaker.

2. The method of claim 1, wherein the acquired first voice of the target speaker and the acquired voice of the original speaker are both audio information.

3. The method of claim 1, wherein the first characteristic parameter comprises: voiceprint features with time dimension information.

4. The method of claim 3, wherein said extracting a first characteristic parameter of the first voice of the target speaker comprises:

extracting the voiceprint feature of the first voice of the target speaker;

and adding a time dimension to the voiceprint feature of the first voice of the target speaker to obtain the first characteristic parameter.

5. The method of claim 1, wherein the second characteristic parameter comprises: a time-dependent text encoding, a first fundamental frequency, and a first fundamental frequency representation.

6. The method of claim 5, wherein said extracting the second characteristic parameter of the original speaker voice comprises:

extracting the text-like features of the original speaker voice;

performing dimension reduction processing on the text-like features to obtain a time-dependent text encoding;

and processing the text-like features to obtain a first fundamental frequency and a first fundamental frequency representation.

7. The method of claim 6, wherein said processing the text-like features to obtain a first fundamental frequency and a first fundamental frequency representation comprises:

training on the original speaker voice and the text-like features through a neural network to obtain a mapping model from text-like features to fundamental frequency;

and processing the text-like features by using the mapping model from text-like features to fundamental frequency to obtain a first fundamental frequency and a first fundamental frequency representation.

8. The method of claim 7, wherein the training by the neural network comprises: training using convolutional layers and long short-term memory networks.

9. The method of claim 1, wherein the processing the first characteristic parameter and the second characteristic parameter to obtain Mel spectrum information comprises:

performing integrated coding on the first characteristic parameter and the second characteristic parameter to obtain the coding features of each frame of the speech;

and passing the coding features of each frame through a decoder to obtain Mel spectrum information.

10. A speech conversion system comprising:

a first obtaining module, configured to acquire a first voice of a target speaker;

a second obtaining module, configured to acquire the voice of an original speaker;

a first extraction module, configured to extract a first characteristic parameter of the first voice of the target speaker;

a second extraction module, configured to extract a second characteristic parameter of the voice of the original speaker;

a processing module, configured to process the first characteristic parameter and the second characteristic parameter to obtain Mel spectrum information;

a conversion module, configured to convert the Mel spectrum information and output a second voice of the target speaker, which has the same timbre as the first voice of the target speaker and the same content as the voice of the original speaker.

11. The system of claim 10, wherein the first extraction module comprises:

a voiceprint feature extraction module, configured to extract the voiceprint feature of the first voice of the target speaker;

a voiceprint feature processing module, configured to add a time dimension to the voiceprint feature of the first voice of the target speaker to obtain the first characteristic parameter.

12. The system of claim 10, wherein the second extraction module comprises:

a text-like feature extraction module, configured to extract the text-like features of the voice of the original speaker;

a text encoding module, configured to perform dimension reduction on the text-like features to obtain a time-dependent text encoding;

a fundamental frequency prediction module, configured to process the text-like features to obtain a first fundamental frequency and a first fundamental frequency representation.

13. The system of claim 10, wherein the processing module comprises:

an integration module, configured to perform integrated coding on the first characteristic parameter and the second characteristic parameter to obtain the coding features of each frame of the speech;

a decoder module, configured to pass the coding features of each frame through a decoder to obtain Mel spectrum information.

14. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-9.

15. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-9.

16. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-9.

Technical Field

The present disclosure relates to the field of artificial intelligence technologies such as speech and deep learning, and in particular to voice conversion technology.

Background

Voice conversion means changing the voice personality characteristics of an original speaker into those of a target speaker while keeping the original semantic information unchanged, so that after conversion one person's speech sounds like another person's. Research on voice conversion has important application value and theoretical value. No single acoustic characteristic parameter can represent all of a person's individual characteristics, so voice conversion is performed by selecting the voice personality characteristic parameters that best distinguish different speakers.

Disclosure of Invention

The present disclosure provides a voice conversion method, system, electronic device, and readable storage medium for enhancing the voice conversion effect while preserving the timbre of the original voice.

According to an aspect of the present disclosure, there is provided a voice conversion method whose result is closer to the target speaker in terms of timbre, including:

acquiring a first voice of a target speaker;

acquiring the voice of an original speaker;

extracting a first characteristic parameter of a first voice of a target speaker;

extracting a second characteristic parameter of the original speaker voice;

processing the first characteristic parameter and the second characteristic parameter to obtain Mel spectrum information;

and converting the Mel spectrum information to output a second voice of the target speaker, which has the same timbre as the first voice of the target speaker and the same content as the voice of the original speaker.

According to another aspect of the present disclosure, there is provided a voice conversion system including:

a first obtaining module, configured to acquire a first voice of a target speaker;

a second obtaining module, configured to acquire the voice of an original speaker;

a first extraction module, configured to extract a first characteristic parameter of the first voice of the target speaker;

a second extraction module, configured to extract a second characteristic parameter of the voice of the original speaker;

a processing module, configured to process the first characteristic parameter and the second characteristic parameter to obtain Mel spectrum information;

a conversion module, configured to convert the Mel spectrum information and output a second voice of the target speaker, which has the same timbre as the first voice of the target speaker and the same content as the voice of the original speaker.

According to a third aspect of the present disclosure, there is provided an electronic device comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the first aspects of the disclosure.

According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to any one of the first aspect of the present disclosure.

According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the first aspects of the present disclosure.

The beneficial effects brought by the technical solution provided in the present disclosure include:

on the basis of the existing voice conversion technology, extraction and processing of the fundamental frequency of the original speaker's voice are added, so that the voice conversion method and the voice conversion system preserve characteristics such as speech emotion and intonation.

By adopting the method and the system, voice conversion is processed with a lower computational cost and lower hardware requirements.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:

FIG. 1 is a schematic diagram of a speech conversion method according to the present disclosure;

FIG. 2 is a schematic diagram of extracting a first feature parameter of a first speech of a target speaker according to the present disclosure;

FIG. 3 is a schematic diagram of extracting a second feature parameter of the original speaker's voice according to the present disclosure;

FIG. 4 is a schematic diagram of processing the text-like features to obtain a first fundamental frequency and a first fundamental frequency representation according to the present disclosure;

FIG. 5 is a schematic diagram of a speech conversion system according to the present disclosure;

FIG. 5-1 is a schematic diagram of a first extraction module according to the present disclosure;

FIG. 5-2 is a schematic diagram of a second extraction module according to the present disclosure;

FIG. 5-3 is a schematic diagram of a processing module according to the present disclosure;

FIG. 6 is a block diagram of an electronic device for implementing a speech conversion system of an embodiment of the present disclosure.

Description of reference numerals:

5 voice conversion system

501 first obtaining module 502 second obtaining module

503 first extraction module 504 second extraction module

5031 voiceprint feature extraction module 5032 voiceprint feature processing module

5041 text-like feature extraction module 5042 text encoding module

5043 fundamental frequency prediction module

505 processing module 506 conversion module

5051 integration module 5052 decoder module

600 electronic device 601 computing unit

602 ROM 603 RAM

604 bus 605 I/O interface

606 input unit 607 output unit

608 storage unit 609 communication unit

Interpretation of terms:

Fundamental frequency: the lowest-frequency sine wave component in speech. The fundamental frequency represents the pitch of the voice, i.e., what is perceived as pitch in singing.

Voiceprint feature: a feature vector that captures the timbre of a speaker. Ideally, each speaker has a unique, determinate voiceprint feature vector that completely represents that speaker, analogous to a fingerprint.

Mel spectrum: frequency is measured in hertz, and the audible range of the human ear is 20-20000 Hz. However, the ear's perception is not linear in hertz: it is sensitive to low frequencies and insensitive to high frequencies. Converting the hertz frequency into the mel frequency makes the ear's perception of frequency approximately linear.
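For reference, a commonly used hertz-to-mel conversion in the literature (the disclosure itself does not fix a particular formula) is:

$$ m = 2595 \log_{10}\left(1 + \frac{f}{700}\right) $$

Under this formula, for example, 1000 Hz maps to roughly 1000 mel, while 10000 Hz maps to only about 3073 mel, reflecting the ear's compressed sensitivity at high frequencies.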

Long short-term memory network: a Long Short-Term Memory network (LSTM) is a type of recurrent neural network.

Vocoder: used for synthesizing Mel spectrum (mel-spectrogram) information into a speech waveform signal.

Detailed Description

Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.

A voice conversion system converts a source speaker's speech into speech with the target speaker's timbre. The difference from more primitive voice changers is that the converted speech is more real and vivid, and closer to the target speaker in terms of timbre. Meanwhile, the voice conversion system fully retains text and emotion information, so that the converted voice can substitute for the target speaker to the greatest extent.

As shown in fig. 1, according to a first aspect of the present disclosure, there is provided a voice conversion method including:

S101: acquiring a first voice of a target speaker. The target speaker is the object whose timbre the conversion should produce. Text information may also be acquired here and then converted into audio to serve as the first voice of the target speaker. Because a specific target speaker is designated, generalization does not need to be considered in the overall computation, which enlarges the compressible space of the computation and reduces the computational cost.

S102: acquiring the voice of an original speaker, i.e., the speech of the object being converted. Alternatively, acquired text information can be converted into audio to serve as the voice of the original speaker.

S103: extracting a first characteristic parameter of the first voice of the target speaker. Human speech contains many characteristic parameters, each of which plays a different role in speech expression. The acoustic parameters characterizing timbre generally include: voiceprint features, formant bandwidth, mel-frequency cepstral coefficients, formant position, speech energy, pitch period, and so on (the reciprocal of the pitch period is the fundamental frequency). Any one or more of the above parameters may be extracted from the first voice of the target speaker.

S104: extracting a second characteristic parameter of the voice of the original speaker. Like the first characteristic parameter, the second characteristic parameter can include the above categories. In addition, the information contained in the voice of the original speaker is extracted, and the resulting characteristic parameters comprise: a text encoding, a first fundamental frequency, and a first fundamental frequency representation.

S105: processing the first characteristic parameter and the second characteristic parameter to obtain Mel spectrum information.

S106: converting the Mel spectrum information to output a second voice of the target speaker, which has the same timbre as the first voice of the target speaker and the same content as the voice of the original speaker. Converting the voice of an original speaker into the voice of a target speaker can be applied in many fields, for example: speech synthesis, multimedia, medicine, speech translation, and so on.

The obtained first voice of the target speaker and the obtained voice of the original speaker are both audio information. Using audio information directly makes the voice conversion more direct and the converted speech clearer. Moreover, the audio information contains the speaker's content, emotion, intonation, and the like. A minimal end-to-end sketch of steps S101-S106 is given below.
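To make the flow of S101-S106 concrete, the following is a minimal, hypothetical PyTorch sketch of the pipeline. The module choices, dimensions, and names (for example, using 144-dimensional phonetic-posteriorgram-style vectors `ppg` as the text-like features and a 256-dimensional `spk_embed` as the voiceprint) are illustrative assumptions, not the disclosure's actual implementation.

```python
import torch
import torch.nn as nn

class VoiceConversionSketch(nn.Module):
    """Hypothetical end-to-end skeleton of steps S101-S106."""
    def __init__(self, ppg_dim=144, spk_dim=256, text_dim=128, f0_hidden=64, n_mels=80):
        super().__init__()
        # dimension reduction of text-like features into a text encoding
        self.text_encoder = nn.Conv1d(ppg_dim, text_dim, kernel_size=3, padding=1)
        # F0 predictor: convolutional layer plus LSTM
        self.f0_conv = nn.Conv1d(ppg_dim, f0_hidden, kernel_size=3, padding=1)
        self.f0_lstm = nn.LSTM(f0_hidden, f0_hidden, batch_first=True)
        self.f0_out = nn.Linear(f0_hidden, 1)
        # projection so the time-independent voiceprint can be added per frame
        self.spk_proj = nn.Linear(spk_dim, text_dim + 1 + f0_hidden)
        # decoder from the fused per-frame encoding to Mel spectrum frames
        self.decoder = nn.LSTM(text_dim + 1 + f0_hidden, n_mels, batch_first=True)

    def forward(self, ppg, spk_embed):
        # ppg: (B, T, ppg_dim) text-like features of the original speaker's voice
        # spk_embed: (B, spk_dim) voiceprint of the target speaker's first voice
        text_enc = self.text_encoder(ppg.transpose(1, 2)).transpose(1, 2)  # (B, T, text_dim)
        h = self.f0_conv(ppg.transpose(1, 2)).transpose(1, 2)
        f0_repr, _ = self.f0_lstm(h)                     # first F0 representation
        f0 = self.f0_out(f0_repr)                        # first fundamental frequency
        fused = torch.cat([text_enc, f0, f0_repr], dim=-1)
        # add a time dimension to the voiceprint by broadcasting it over frames
        fused = fused + self.spk_proj(spk_embed).unsqueeze(1)
        mel, _ = self.decoder(fused)                     # (B, T, n_mels) Mel spectrum
        return mel, f0

model = VoiceConversionSketch()
mel, f0 = model(torch.randn(2, 100, 144), torch.randn(2, 256))
print(mel.shape, f0.shape)  # torch.Size([2, 100, 80]) torch.Size([2, 100, 1])
```

The design point to notice is that only the voiceprint input depends on the target speaker; everything that varies per frame comes from the original speaker's voice, which is what lets the output keep the source content and prosody.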

The first characteristic parameter includes: voiceprint features with time dimension information.

As shown in fig. 2, extracting the first characteristic parameter of the first voice of the target speaker includes:

S201: extracting the voiceprint feature of the first voice of the target speaker. The voiceprint feature is unique and determinate for a given speaker, similar to a person's fingerprint.

S202: adding a time dimension to the voiceprint feature of the first voice of the target speaker to obtain the first characteristic parameter. As explained above, the voiceprint feature is a parameter not related to time; it is correlated with time here for the convenience of later processing the first characteristic parameter together with the second characteristic parameter. Voiceprint feature processing uses not only convolutional layers but also long short-term memory networks. A minimal illustration follows.
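As a minimal illustration of S202, with assumed dimensions (a 256-dimensional voiceprint vector and a 100-frame utterance): because the voiceprint is time-independent, the simplest way to give it a time dimension is to repeat it once per frame so that it aligns with the frame-level second characteristic parameter.

```python
import numpy as np

d_vector = np.random.randn(256)      # stand-in voiceprint of the target speaker's first voice
n_frames = 100                       # frames in the original speaker's utterance
# repeat the time-independent voiceprint once per frame
first_param = np.tile(d_vector, (n_frames, 1))
print(first_param.shape)             # (100, 256): one copy of the voiceprint per frame
```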

The second characteristic parameter includes: a time-dependent text encoding, a first fundamental frequency, and a first fundamental frequency representation. A time-dependent "text encoding" is emphasized here because, during voice conversion, speech is ultimately continuous and time-dependent; that is, the words of a sentence have a temporal order. If a sentence or speech segment were divided only by individual words rather than by time, the isolated words would later be combined and converted into the target speaker's voice without the emotion, accent, and intonation information of the original speaker's voice, and the result would sound very stiff. If a sentence or speech segment is instead divided based on time, it is later combined and converted into the target speaker's voice with its accent and intonation information intact. Clearly, encoding according to time-dependent text is more advantageous for the speech effect after conversion.

As shown in fig. 3, extracting the second characteristic parameter of the voice of the original speaker includes:

S301: extracting the text-like features of the voice of the original speaker. So-called text-like features are time-dependent text features. For example, when a sentence spoken by the original speaker is extracted, the text features include both semantics and time information, i.e., the time at which each word in the sentence occurs, in order.

S302: performing dimension reduction on the text-like features to obtain a time-dependent text encoding. The text-like features and the time-dependent text encoding are each one vector per frame of speech. The reason for performing dimension reduction on the text-like features is to reduce the amount of computation; here, dimension reduction is performed with a convolutional layer only (see the sketch after these steps).

S303: processing the text-like features to obtain a first fundamental frequency and a first fundamental frequency representation. The text-like features are time-dependent, so the resulting first fundamental frequency and first fundamental frequency representation are also time-dependent; that is, they also correspond to each frame of speech.
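A minimal sketch of the dimension reduction in S302, assuming 144-dimensional text-like feature vectors reduced to a 128-dimensional text encoding (both numbers are illustrative): a single convolutional layer maps each frame's vector to a lower dimension while keeping the time axis intact.

```python
import torch
import torch.nn as nn

ppg = torch.randn(1, 100, 144)               # (batch, frames, text-like feature dim)
reduce = nn.Conv1d(144, 128, kernel_size=1)  # per-frame dimension reduction, convolution only
text_encoding = reduce(ppg.transpose(1, 2)).transpose(1, 2)
print(text_encoding.shape)                   # torch.Size([1, 100, 128]): one vector per frame
```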

As shown in fig. 4, processing the text-like features to obtain a first fundamental frequency and a first fundamental frequency representation includes:

S401: training on the original speaker's voice and the text-like features through a neural network to obtain a mapping model from text-like features to fundamental frequency.

In the process of training the neural network, the fundamental frequency in the original speaker's voice is extracted, the text-like features corresponding to that fundamental frequency are extracted, and a mapping model from text-like features to fundamental frequency is obtained. During training, the fundamental frequency in the original speaker's voice is used for calibration. Two loss functions are used in the training process: one is a loss function on the fundamental frequency; the other is a self-reconstruction loss function on the original speaker's speech.

S402: processing the text-like features using the mapping model from text-like features to fundamental frequency to obtain a first fundamental frequency and a first fundamental frequency representation. In the actual application stage, the mapping model acquired in the training stage predicts the first fundamental frequency from the text-like information, and the hidden layer at the model's output provides the first fundamental frequency representation. In addition, a long short-term memory network is added to the mapping model because the fundamental frequency is not only time-dependent but also context-dependent; the LSTM thus adds time information to the mapping from text-like features to fundamental frequency. Here too, processing is based on the fundamental frequency of a sentence or speech segment rather than of an isolated word; that is, the subsequent voice conversion is performed according to a time-dependent, context-dependent fundamental frequency. The advantage is that the speech emotion, intonation, and other timbre elements of the original speaker are preserved after conversion. A sketch follows.
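The following is a hypothetical sketch of such a mapping model and its fundamental-frequency loss: a convolutional layer plus an LSTM map the text-like features to F0, calibrated against the F0 extracted from the original speaker's voice. The architecture sizes and the choice of L1 loss are assumptions; the self-reconstruction loss of the source speech would be contributed by the rest of the model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class F0Predictor(nn.Module):
    def __init__(self, in_dim=144, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, hidden, kernel_size=3, padding=1)  # reduces dimension
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)            # adds time/context dependence
        self.out = nn.Linear(hidden, 1)

    def forward(self, ppg):                                # ppg: (B, T, in_dim) text-like features
        h = self.conv(ppg.transpose(1, 2)).transpose(1, 2)
        repr_, _ = self.lstm(h)                            # hidden-layer output: the F0 representation
        return self.out(repr_).squeeze(-1), repr_          # predicted F0 per frame, plus representation

model = F0Predictor()
ppg, f0_true = torch.randn(4, 100, 144), torch.randn(4, 100)  # f0_true: F0 extracted from source audio
f0_pred, f0_repr = model(ppg)
f0_loss = F.l1_loss(f0_pred, f0_true)  # fundamental-frequency loss for calibration
f0_loss.backward()                     # the self-reconstruction loss would be added to this in training
```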

The training by the neural network comprises: training using convolutional layers and long short-term memory networks. The convolutional layers are mainly used for dimension reduction, and the long short-term memory network is mainly used to add time information to the mapping model from text-like features to fundamental frequency.

So far, the voiceprint feature has been processed into a time-dependent voiceprint feature; the text-like features have been reduced by the convolutional layer into a text encoding, which is time-dependent; and the first fundamental frequency is also time-dependent. The first fundamental frequency being time-dependent means each frame has a fundamental frequency; the text-like features are likewise time-dependent, with one vector per frame. However, a fundamental frequency is a single number while a text-like feature is a vector, so a mapping from text-like features to fundamental frequency must be established. That is, on one hand the text-like features are reduced in dimension to a text encoding, and on the other hand a mapping from the text-like features to the frequency domain is established. The convolutional layer serves both purposes: it reduces the dimension, and it transforms the data space to map the text-like features toward the fundamental frequency.

The processing of the first characteristic parameter and the second characteristic parameter to obtain Mel spectrum information includes:

performing integrated coding on the first characteristic parameter and the second characteristic parameter to obtain the coding feature of each frame of the speech. The first characteristic parameter here refers to the time-dependent voiceprint feature code, and the second characteristic parameter refers to the time-dependent text encoding and the first fundamental frequency. The time-dependent text encoding and the first fundamental frequency are integrated by splicing them together directly, while the voiceprint feature code is added by computing a weight matrix and an offset vector, i.e., the voiceprint feature code passes through a fully connected network and is then combined with the text encoding, thereby adding the voiceprint feature information (a sketch follows the decoder step below).

And passing the coding features of each frame through a decoder to obtain Mel spectrum information.
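A minimal sketch of the integration step, with illustrative dimensions: the time-dependent text encoding and the first fundamental frequency are spliced directly, and the voiceprint feature code is injected through a fully connected layer, i.e., exactly a weight matrix plus an offset (bias) vector.

```python
import torch
import torch.nn as nn

T = 100
text_enc = torch.randn(1, T, 128)   # time-dependent text encoding
f0 = torch.randn(1, T, 1)           # first fundamental frequency, one value per frame
spk_code = torch.randn(1, 256)      # voiceprint feature code of the target speaker

frame_feats = torch.cat([text_enc, f0], dim=-1)          # direct splicing: (1, T, 129)
inject = nn.Linear(256, 129)                             # a weight matrix and an offset vector
frame_enc = frame_feats + inject(spk_code).unsqueeze(1)  # add voiceprint information to every frame
print(frame_enc.shape)                                   # torch.Size([1, 100, 129])
```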

Then, the obtained Mel spectrum information is input into a vocoder, and the vocoder converts it into voice audio. The voice audio at this point has the timbre of the target speaker, but its content is that of the original speaker's voice; the purpose of voice conversion is thus achieved. Vocoders are well known in the art and are not described in detail here.
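For illustration only, the final vocoder step can be approximated with librosa's Griffin-Lim-based mel inversion; the disclosure does not name a specific vocoder, and in practice a neural vocoder would typically be used for higher quality.

```python
import numpy as np
import librosa

sr, n_mels = 22050, 80
mel = np.abs(np.random.randn(n_mels, 200))  # stand-in for the decoder's Mel spectrum output
audio = librosa.feature.inverse.mel_to_audio(mel, sr=sr, n_fft=1024, hop_length=256)
print(audio.shape)                          # waveform samples (about 200 * 256)
```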

As shown in fig. 5, according to a second aspect of the present disclosure, there is also provided a speech conversion system 5, including:

the first obtaining module 501, configured to acquire a first voice of a target speaker;

the second obtaining module 502, configured to acquire the voice of an original speaker;

the first extraction module 503, configured to extract a first characteristic parameter of the first voice of the target speaker;

the second extraction module 504, configured to extract a second characteristic parameter of the voice of the original speaker;

the processing module 505, configured to process the first characteristic parameter and the second characteristic parameter to obtain Mel spectrum information;

the conversion module 506, configured to convert the Mel spectrum information and output a second voice of the target speaker, which has the same timbre as the first voice of the target speaker and the same content as the voice of the original speaker.

As shown in fig. 5-1, the first extraction module 503 includes:

the voiceprint feature extraction module 5031, configured to extract the voiceprint feature of the first voice of the target speaker;

the voiceprint feature processing module 5032, configured to add a time dimension to the voiceprint feature of the first voice of the target speaker to obtain the first characteristic parameter.

As shown in fig. 5-2, the second extraction module 504 includes:

the text-like feature extraction module 5041, configured to extract the text-like features of the voice of the original speaker;

the text encoding module 5042, configured to perform dimension reduction on the text-like features to obtain a time-dependent text encoding;

the fundamental frequency prediction module 5043, configured to process the text-like features to obtain a first fundamental frequency and a first fundamental frequency representation. The input of the fundamental frequency prediction module 5043 is the text-like features, and its outputs are the fundamental frequency and a hidden-layer feature inside the module; the aim is to predict the fundamental frequency from the text-like features. In the training stage, the real fundamental frequency is used as the target to compute the loss function; in the application stage, the fundamental frequency is predicted from the text-like features. The fundamental frequency prediction module 5043 is essentially a neural network.

As shown in fig. 5-3, the processing module 505 includes:

the integration module 5051, configured to perform integrated coding on the first characteristic parameter and the second characteristic parameter to obtain the coding features of each frame of the speech;

the decoder module 5052, configured to pass the coding features of each frame through a decoder to obtain Mel spectrum information.

As shown in fig. 6, according to a third aspect of the present disclosure, there is also provided an electronic apparatus including:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of the first aspects.

According to a fourth aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method according to any one of the first aspects of the present disclosure.

According to a fifth aspect of the present disclosure, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of the first aspects of the present disclosure.

The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.

FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.

As shown in fig. 6, the device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 602 or a computer program loaded from a storage unit 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.

A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.

The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the respective methods and processes described above, such as the voice conversion method. For example, in some embodiments, the voice conversion method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the voice conversion method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the voice conversion method by any other suitable means (e.g., by means of firmware).

Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server incorporating a blockchain.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.

The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
