Audio synthesis method and device, electronic equipment and computer readable medium

Document No.: 1244047  Publication date: 2020-08-18

Reading note: this technology, "Audio synthesis method and device, electronic equipment and computer readable medium", was designed and created by Gu Yu on 2020-04-23. Its main content is as follows: Embodiments of the present disclosure disclose an audio synthesis method, an audio synthesis apparatus, an electronic device, and a computer-readable medium. One embodiment of the method comprises: generating second lyrics based on first lyrics sent by a client; generating a melody based on the second lyrics and a theme sent by the client; generating a song score based on the melody and the second lyrics; generating background music and dry sound based on the song score and the melody; and generating audio based on the dry sound and the background music. This embodiment achieves efficient song synthesis, and the user at the client can participate in the synthesis process, which makes participation more engaging and thereby improves the user experience.

1. An audio synthesis method, comprising:

generating second lyrics based on the first lyrics sent by the client;

generating a melody based on the second lyrics and the theme sent by the client;

generating a song score based on the melody and the second lyrics;

generating background music and dry sound based on the song score and the melody;

generating audio based on the dry sound and the background music.

2. The method of claim 1, wherein the generating second lyrics based on first lyrics sent by a client comprises:

inputting the first lyrics sent by the client to a pre-trained first deep learning network to obtain the lyrics to be processed;

sending the lyrics to be processed to the client;

and receiving the lyrics processed by the client as the second lyrics, wherein the processed lyrics are obtained by the client processing the lyrics to be processed.

3. The method of claim 1, wherein the generating a melody based on the second lyrics and a theme sent by the client comprises:

adjusting the rhyme foot of the second lyrics to generate rhyme-adjusted second lyrics;

and generating the melody based on a pre-trained second deep learning network, a variational autoencoder, the rhyme-adjusted second lyrics, and the theme.

4. The method of claim 1, wherein the generating a song score based on the melody and the second lyrics comprises:

generating a target melody based on separate processing of the melody fragments comprised in the melody;

filling the second lyrics into the target melody to generate a song score to be processed;

sending the song score to be processed to the client;

and receiving the song score processed by the client as the song score, wherein the processed song score is obtained by the client processing the song score to be processed.

5. The method of claim 1, wherein the generating of background music and dry sound based on the song score and the melody comprises:

generating the background music based on the melody, the song score, and a bidirectional long short-term memory (BiLSTM) neural network;

converting the song score into dry sound.

6. The method of any one of claims 1-5, wherein the generating audio based on the dry sound and the background music comprises:

adjusting the dry sound according to the song score to obtain the adjusted dry sound;

and mixing the adjusted dry sound with the background music to generate the audio.

7. The method of claim 6, wherein the adjusting the dry sound according to the song score to obtain the adjusted dry sound comprises:

in response to determining that there is a segment in the dry sound having a pitch lower than a corresponding pitch in the song score, raising the pitch of the segment;

in response to determining that there is a segment in the dry sound having a pitch higher than a corresponding pitch in the song score, lowering the pitch of the segment;

and in response to determining that there is a segment in the dry sound with a missing pitch, selecting the corresponding pitch in the song score as the pitch of the segment.

8. An audio synthesis apparatus, comprising:

a first generation unit configured to generate second lyrics based on the first lyrics sent by the client;

a second generating unit configured to generate a melody based on the second lyrics and a theme transmitted by the client;

a third generating unit configured to generate a song score based on the melody and the second lyrics;

a fourth generating unit configured to generate background music and dry sound based on the song score and the melody;

a fifth generating unit configured to generate audio based on the dry sound and the background music.

9. An electronic device, comprising:

one or more processors;

storage means for storing one or more programs;

the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-7.

10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.

Technical Field

Embodiments of the present disclosure relate to the field of computer technologies, and in particular, to an audio synthesis method, an audio synthesis apparatus, an electronic device, and a computer-readable medium.

Background

Song synthesis generally requires lyrics and a tune to which the lyrics correspond. However, writing lyrics or composing music usually requires time-consuming work by domain experts; authoring efficiency is low, and it is difficult for ordinary users to participate.

Disclosure of Invention

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

Some embodiments of the present disclosure propose methods, apparatuses, devices and computer readable media for audio synthesis to solve the technical problems mentioned in the background section above.

In a first aspect, some embodiments of the present disclosure provide an audio synthesis method, including: generating second lyrics based on the first lyrics sent by the client; generating a melody based on the second lyrics and the theme sent by the client; generating a song score based on the melody and the second lyrics; generating background music and dry sound based on the song score and the melody; and generating audio based on the dry sound and the background music.

In a second aspect, some embodiments of the present disclosure provide an audio synthesis apparatus, the apparatus comprising: a first generation unit configured to generate second lyrics based on the first lyrics sent by the client; a second generating unit configured to generate a melody based on the second lyrics and the theme transmitted by the client; a third generating unit configured to generate a song score based on the melody and the second lyrics; a fourth generating unit configured to generate background music and dry sound based on the music score and the melody; a fifth generating unit configured to generate audio based on the dry sound and the background music.

In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the first aspects.

In a fourth aspect, some embodiments of the disclosure provide a computer readable medium having a computer program stored thereon, wherein the program when executed by a processor implements a method as in any one of the first aspect.

At least one of the above embodiments of the present disclosure has the following beneficial effects. First, the first lyrics sent by the client are expanded to generate second lyrics; the first lyrics may be as simple as a single sentence. Then, a melody associated with the theme sent by the client is generated from the second lyrics and that theme. These two steps let the user take part in generating both the lyrics and the melody, making participation more engaging. Next, a song score is generated from the melody and the second lyrics, and background music and dry sound are generated from the song score and the melody. Finally, the final audio is generated from the dry sound and the background music. Embodiments of the present disclosure improve the efficiency of song creation while allowing the user to take part in the creative process, thereby improving the user experience.

Drawings

The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.

Fig. 1 is a schematic diagram of one application scenario of an audio synthesis method according to some embodiments of the present disclosure;

FIG. 2 is a flow diagram of some embodiments of an audio synthesis method according to the present disclosure;

FIG. 3 is a flow diagram of further embodiments of audio synthesis methods according to the present disclosure;

FIG. 4 is a schematic block diagram of some embodiments of an audio synthesis apparatus according to the present disclosure;

FIG. 5 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.

Detailed Description

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.

It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.

It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.

It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.

The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.

The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.

Fig. 1 is a schematic diagram 100 of one application scenario of an audio synthesis method according to some embodiments of the present disclosure.

As shown in fig. 1, a user of a client first writes first lyrics as desired. It should be emphasized that the first lyrics may be a short sentence of text, for example, "the weather is good today", as indicated by reference numeral 102. Meanwhile, the user can also choose a theme for the song, for example, "classical", as indicated by reference numeral 103. After receiving the first lyrics and the theme sent by the client, the server 101 may expand the first lyrics, i.e., "the weather is good today", to generate second lyrics, e.g., "the weather is good today, the sun is shining, and my mood is beautiful", as shown by reference numeral 104. Then, based on the second lyrics 104 obtained in the previous step and the user-defined theme 103, a melody 105 may be generated. Next, based on the melody 105 and the second lyrics 104, a song score 106 may be generated. Then, based on the song score 106 and the melody 105, background music 108 and dry sound 107 may be generated. Finally, the audio 109 is generated based on the dry sound 107 and the background music 108.

It is to be understood that the audio synthesis method may be performed by the server 101 described above. The server 101 may be hardware or software. When the server 101 is hardware, it may be any of various electronic devices with information processing capabilities, including but not limited to smartphones, tablets, e-book readers, laptop computers, desktop computers, servers, and the like. When the server 101 is software, it may be installed in the electronic devices listed above, and may be implemented, for example, as multiple pieces of software or software modules providing distributed services, or as a single piece of software or software module. No specific limitation is made here.

It should be understood that the number of servers in fig. 1 is merely illustrative. There may be any number of servers, as desired for implementation.

With continued reference to fig. 2, a flow 200 of some embodiments of an audio synthesis method according to the present disclosure is shown. The audio synthesis method comprises the following steps:

step 201, generating second lyrics based on the first lyrics sent by the client.

In some embodiments, an executing entity of the audio synthesis method (e.g., the server 101 shown in fig. 1) may expand the first lyrics provided by the client to obtain the second lyrics. Here, the first lyrics may be a brief sentence, for example, "the weather is good today". The second lyrics are multiple lines of lyrics expanded from the first lyrics, for example, "the weather is good today, the sun is shining, and my mood is beautiful". It should be emphasized that the expansion may be performed by a deep learning network. As an example, the first lyrics are input into a pre-trained recurrent neural network to obtain the second lyrics.

In some optional implementations of some embodiments, the executing body generating the second lyrics based on the first lyrics sent by the client may include:

In a first step, the execution body may pre-process the first lyrics provided by the client, for example, by converting them into a vector. The preprocessing result is then input into a pre-trained first deep learning network, whose output serves as the lyrics to be processed. Here, the first deep learning network may employ a Transformer model, and the length of the lyrics to be processed is limited.

In a second step, the lyrics to be processed are sent to the client, where the user can modify their content to produce the processed lyrics. Here, the processed lyric content is required to comply with legal regulations. Furthermore, the length of the processed lyrics may also be limited, for example, to no more than 200 words.

In a third step, the lyrics processed by the client are received as the second lyrics, where the processed lyrics are obtained by the client processing the lyrics to be processed.
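The three-step flow above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: `expand_lyrics` stands in for the pre-trained first deep learning network, and the fallback expansion string and the 200-word cap are hypothetical placeholders.

```python
MAX_LYRIC_LENGTH = 200  # hypothetical cap on the processed lyrics


def expand_lyrics(first_lyrics: str, model=None) -> str:
    """Expand a one-sentence seed lyric into candidate 'lyrics to be processed'."""
    if model is not None:
        candidate = model(first_lyrics)  # a trained network would go here
    else:
        # Placeholder expansion used when no trained model is available.
        candidate = first_lyrics + ", the sun is shining, my mood is bright"
    # Enforce the length limit on the lyrics to be processed.
    words = candidate.split()
    return " ".join(words[:MAX_LYRIC_LENGTH])


def receive_processed_lyrics(client_edit: str) -> str:
    """Accept the client's edited lyrics as the second lyrics, re-checking length."""
    if len(client_edit.split()) > MAX_LYRIC_LENGTH:
        raise ValueError("processed lyrics exceed the allowed length")
    return client_edit
```

In this sketch the server would call `expand_lyrics`, send the result to the client, and pass the client's edit through `receive_processed_lyrics`.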

Step 202, generating a melody based on the second lyric and the theme sent by the client.

In some embodiments, the execution body may generate a melody corresponding to the second lyrics in various ways according to the theme sent by the client. The theme sent by the client may be one of the following: classical, country, rock, jazz, pop, heavy metal, and R&B. Here, a melody is an organized sequence of notes of the same or different pitches, connected by specific pitch and rhythm relationships.

It should be emphasized that this melody is not necessarily a complete song melody as heard in daily life. The melody may be incomplete and may need subsequent adjustment before serving as a song melody. Here, the melody may be represented by a piece of music score (e.g., numbered musical notation or staff notation).

In some optional implementations of some embodiments, the execution body may adjust the rhyme foot of the second lyrics to generate rhyme-adjusted second lyrics. The rhyme-adjusted second lyrics are similar to the following (from a classical Chinese poem): "withered vines, old trees, crows at dusk (ya); a small bridge, flowing water, people's homes (jia); an ancient road, west wind, a lean horse (ma)". It can be seen that the last word of each line contains the final vowel "a".
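The rhyme check behind this adjustment can be sketched mechanically. The sketch below is a simplified, hypothetical check on romanized syllables: it compares only the last vowel letter of each line's final word, whereas a real system would compare full pinyin finals.

```python
VOWELS = set("aeiou")


def final_vowel(word: str) -> str:
    """Last vowel letter of a romanized word, e.g. 'jia' -> 'a'."""
    for ch in reversed(word.lower()):
        if ch in VOWELS:
            return ch
    return ""


def lines_rhyme(last_words) -> bool:
    """True when every line's final word shares the same final vowel."""
    finals = {final_vowel(w) for w in last_words}
    return len(finals) == 1 and "" not in finals
```

For the poem quoted above, `lines_rhyme(["ya", "jia", "ma"])` holds, matching the shared "a" final.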

Then, the rhyme-adjusted second lyrics and the theme can be preprocessed, for example, by converting them into vectors. The preprocessed result is input to the encoder of a variational autoencoder (VAE). The encoder's output is then sampled to finally generate the melody. A second deep learning network, e.g., an LSTM (long short-term memory network), may be used inside the encoder.
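The encode-then-sample step can be sketched as below. All components are placeholders: `encode` stands in for the trained LSTM encoder and `decode_to_melody` for a trained decoder; only the reparameterization-trick sampling between them reflects how a VAE latent is actually drawn.

```python
import math
import random

random.seed(0)  # deterministic sampling for the example


def encode(lyrics: str, theme: str, dim: int = 4):
    """Placeholder encoder: derive (mu, logvar) deterministically from the text."""
    rng = random.Random(hash((lyrics, theme)))
    mu = [rng.uniform(-1, 1) for _ in range(dim)]
    logvar = [rng.uniform(-2, 0) for _ in range(dim)]
    return mu, logvar


def sample_latent(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1)."""
    return [m + math.exp(0.5 * lv) * random.gauss(0, 1)
            for m, lv in zip(mu, logvar)]


def decode_to_melody(z, scale=(60, 62, 64, 65, 67, 69, 71)):
    """Placeholder decoder: map each latent component to a MIDI pitch in C major."""
    return [scale[int(abs(v) * 10) % len(scale)] for v in z]
```

A melody is then obtained by chaining the three calls: encode the lyrics and theme, sample a latent, and decode it to pitches.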

Step 203, generating a song score based on the melody and the second lyric.

In some embodiments, the execution subject may generate a song score from the melody and the second lyrics. Here, the melody may be adjusted according to its characteristics and combined into a new melody, and the second lyrics are then filled in at the positions corresponding to the new melody, producing a song score. A song score is a combination of a music score and lyrics, where the lyrics may be written below the corresponding notes. The new melody can be represented by a music score (e.g., numbered musical notation or staff notation).

In some optional implementations of some embodiments, the execution body may further process each melody fragment included in the melody obtained in step 202 separately, so as to generate a target melody. As an example, the melody obtained in step 202 may be divided into a melody fragment A, a melody fragment B, and a melody fragment C according to pitch, where the pitch of fragment A is higher than that of fragment B, which is in turn higher than that of fragment C. Here, fragment A may be regarded as the verse melody, fragment B as the chorus melody, and fragment C as the interlude melody. Finally, the verse melody, the chorus melody, and the interlude melody are combined to generate the target melody; for example, any of these fragments may be repeated several times before being combined. The target melody may be represented by a music score (e.g., numbered musical notation or staff notation).

Then, the second lyrics may be filled into the target melody, thereby generating a song score to be processed.
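The fragment processing and lyric filling above can be sketched as follows, assuming fragments are given as lists of MIDI pitches and ranked by average pitch (fragment A highest, as in the example). The combination order and repeat count are illustrative choices, not the patent's.

```python
def average_pitch(fragment):
    """Mean MIDI pitch of a melody fragment."""
    return sum(fragment) / len(fragment)


def build_target_melody(fragments, chorus_repeats=2):
    """Rank fragments by pitch (verse > chorus > interlude) and combine them."""
    by_pitch = sorted(fragments, key=average_pitch, reverse=True)
    verse, chorus, interlude = by_pitch[0], by_pitch[1], by_pitch[2]
    return verse + chorus * chorus_repeats + interlude


def fill_lyrics(target_melody, syllables):
    """Pair each note with one lyric syllable to form a to-be-processed score."""
    return list(zip(target_melody, syllables))
```

Usage: with fragments `[[72, 74], [65, 67], [55, 57]]`, the verse `[72, 74]` is followed by the chorus twice and then the interlude; filling lyrics attaches one syllable under each note.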

The song score to be processed is then sent to the client. The user of the client can adjust any unsatisfactory part of it. For example, when the user is unsatisfied with the lyrics corresponding to a melody fragment, those lyrics may be changed to lyrics the user prefers, or the melody fragment and its corresponding lyrics may be deleted directly.

Finally, the song score processed by the client is received as the song score, where the processed song score is obtained by the client processing the song score to be processed.

The advantage of this implementation is that user participation is added to the process of creating the song score, so that the finally generated song score better meets the user's needs.

And step 204, generating background music and dry sound based on the song score and the melody.

In some embodiments, the execution body may generate a piece of background music and a dry sound in various ways according to the song score and the melody. For example, guitar-accompanied background music may be generated with reference to the song score and the melody. There are likewise many ways to generate the dry sound; for example, a singing voice synthesis method may be used to generate the corresponding dry sound from a given song score.

In some alternative implementations of some embodiments, the execution entity may perform some preprocessing on the song score, for example, converting the notes in the song score into corresponding vectors. The preprocessing result is then input into a pre-trained bidirectional long short-term memory (BiLSTM) neural network, which generates the background music with reference to the melody.
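The note-to-vector preprocessing can be sketched as a one-hot encoding. The note vocabulary below is a hypothetical example; the BiLSTM itself, being pre-trained, is out of scope here.

```python
# Hypothetical note vocabulary covering one octave plus a rest symbol.
NOTE_VOCAB = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "REST"]


def note_to_vector(note: str):
    """One-hot encode a single note from the song score."""
    vec = [0.0] * len(NOTE_VOCAB)
    vec[NOTE_VOCAB.index(note)] = 1.0
    return vec


def preprocess_score(notes):
    """Vectorize the whole score: the sequence a BiLSTM would consume."""
    return [note_to_vector(n) for n in notes]
```

The resulting list of vectors is the sequential input a recurrent network expects, one timestep per note.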

Step 205, generating an audio based on the dry sound and the background music.

In some embodiments, the execution subject obtains the background music and the dry sound from step 204. The background music and the dry sound may be mixed to generate the audio. There are various ways to mix; for example, the background music and the dry sound may be mixed using a multitrack audio editing tool.
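The additive principle behind mixing can be sketched directly on sample streams. The gain values below are arbitrary illustrative choices; real mixing would be done in a multitrack editor or a DSP library.

```python
def mix(dry, backing, dry_gain=0.7, backing_gain=0.5):
    """Mix two equal-length float sample streams in [-1, 1] into one."""
    if len(dry) != len(backing):
        raise ValueError("streams must be the same length")
    out = []
    for d, b in zip(dry, backing):
        s = dry_gain * d + backing_gain * b
        out.append(max(-1.0, min(1.0, s)))  # hard clip to the valid range
    return out
```

Note the hard clip: summing two full-scale signals can exceed the valid range, so the sketch clamps each sample (a real mixer would apply a limiter or lower the gains instead).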

One of the above-described embodiments of the present disclosure has the following beneficial effects. First, the first lyrics sent by the client are expanded to generate second lyrics; the first lyrics may be as simple as a single sentence. Then, a melody associated with the theme sent by the client is generated from the second lyrics and that theme. These two steps let the user take part in generating both the lyrics and the melody, making participation more engaging. Next, a song score is generated from the melody and the second lyrics, background music and dry sound are generated from the song score and the melody, and finally the audio is generated from the dry sound and the background music. This embodiment improves the efficiency of song creation while letting the user take part in the creative process, thereby improving the user experience.

With further reference to fig. 3, a flow 300 of further embodiments of audio synthesis methods is shown. The process 300 of the audio synthesizing method includes the following steps:

step 301, generating second lyrics based on the first lyrics sent by the client.

Step 302, generating a melody based on the second lyric and the theme sent by the client.

Step 303, generating a song score based on the melody and the second lyric.

And step 304, generating background music and dry sound based on the song score and the melody.

Here, for the specific implementations and technical effects of steps 301-304, reference may be made to steps 201-204 in the embodiments corresponding to fig. 2, which are not repeated here.

And 305, adjusting the dry sound according to the song score to obtain the adjusted dry sound.

In some embodiments, the execution subject may adjust the dry sound obtained in step 304. It should be emphasized that the dry sound obtained in step 304 is often imperfect; for example, the pitch of some segments may not match the song score, or the rhythm may be too fast or too slow. The adjustment here may target the rhythm and pitch of the dry sound, specifically with reference to the tempo and pitch indicated in the song score.

Step 306, mixing the adjusted dry sound with the background music to generate the audio.

In some embodiments, the execution subject mixes the adjusted dry sound obtained in step 305 with the background music to finally generate the audio. For the mixing manner, reference may be made to step 205.

In some optional implementations of some embodiments, the execution body adjusting the dry sound according to the song score to obtain the adjusted dry sound may include the following steps:

In a first step, in response to determining that there is a segment in the dry sound having a pitch lower than the corresponding pitch in the song score, the pitch of the segment may be raised, for example, by one octave.

In a second step, in response to determining that there is a segment in the dry sound having a pitch higher than the corresponding pitch in the song score, the pitch of the segment may be lowered, for example, by one octave.

In a third step, in response to determining that there is a segment in the dry sound with a missing pitch, the corresponding pitch in the song score is selected as the pitch of the segment.
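The three rules above translate directly into code. The sketch assumes aligned per-segment pitch sequences given as MIDI note numbers, with `None` marking a missing pitch, and uses the one-octave (12-semitone) shift from the examples.

```python
def adjust_dry_sound(dry_pitches, score_pitches):
    """Apply the three per-segment pitch rules from the song score."""
    adjusted = []
    for d, s in zip(dry_pitches, score_pitches):
        if d is None:        # missing pitch: take it from the song score
            adjusted.append(s)
        elif d < s:          # too low: raise by one octave (12 semitones)
            adjusted.append(d + 12)
        elif d > s:          # too high: lower by one octave
            adjusted.append(d - 12)
        else:                # already matching the score
            adjusted.append(d)
    return adjusted
```

For example, a dry sound of `[48, 76, None, 60]` against a score of `[60, 64, 62, 60]` is raised, lowered, filled in, and left alone respectively.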

As can be seen from fig. 3, compared with some embodiments corresponding to fig. 2, the flow 300 of the audio synthesis method adds the step of adjusting the dry sound. By mixing the adjusted dry sound with the background music, the finally generated audio sounds more pleasant, thereby improving the user experience.

With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of an audio synthesis apparatus, which correspond to those of the method embodiments shown in fig. 2, and which may be applied in particular in various electronic devices.

As shown in fig. 4, the audio synthesis apparatus 400 of some embodiments includes: a first generating unit 401, a second generating unit 402, a third generating unit 403, a fourth generating unit 404, and a fifth generating unit 405. The first generating unit 401 is configured to generate second lyrics based on the first lyrics sent by the client. The second generating unit 402 is configured to generate a melody based on the second lyrics and the theme sent by the client. The third generating unit 403 is configured to generate a song score based on the melody and the second lyrics. The fourth generating unit 404 is configured to generate background music and dry sound based on the song score and the melody. The fifth generating unit 405 is configured to generate audio based on the dry sound and the background music.

In some optional implementations of some embodiments, the first generating unit 401 may be further configured to: inputting the first lyrics sent by the client to a pre-trained first deep learning network to obtain the lyrics to be processed; sending the lyrics to be processed to the client; and receiving the lyrics processed by the client as the second lyrics, wherein the processed lyrics are the lyrics processed by the client to be processed.

In some optional implementations of some embodiments, the second generating unit 402 may be further configured to: adjust the rhyme foot of the second lyrics to generate rhyme-adjusted second lyrics; and generate the melody based on a pre-trained second deep learning network, a variational autoencoder, the rhyme-adjusted second lyrics, and the theme.

In some optional implementations of some embodiments, the third generating unit 403 may be further configured to: generate a target melody based on separate processing of the melody fragments included in the melody; fill the second lyrics into the target melody to generate a song score to be processed; send the song score to be processed to the client; and receive the song score processed by the client as the song score, wherein the processed song score is obtained by the client processing the song score to be processed.

In some optional implementations of some embodiments, the fourth generating unit 404 may be further configured to: generate the background music based on the melody, the song score, and a bidirectional long short-term memory (BiLSTM) neural network; and convert the song score into the dry sound.

In some optional implementations of some embodiments, the fifth generating unit 405 may further include an adjusting unit and a mixing unit. Wherein the adjustment unit is configured to: and adjusting the dry voice according to the song score to obtain the adjusted dry voice. The mixing unit is configured to: and mixing the adjusted dry sound with the background music to generate the audio.

In some optional implementations of some embodiments, the adjusting unit may be further configured to: in response to determining that there is a segment in the dry sound having a pitch lower than the corresponding pitch in the song score, raise the pitch of the segment; in response to determining that there is a segment in the dry sound having a pitch higher than the corresponding pitch in the song score, lower the pitch of the segment; and in response to determining that there is a segment in the dry sound with a missing pitch, select the corresponding pitch in the song score as the pitch of the segment.

It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.

Referring now to fig. 5, a schematic diagram of an electronic device (e.g., the server of fig. 1) 500 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.

As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 502 or a program loaded from a storage means 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data necessary for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.

Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.

In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.

It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.

In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.

The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: generate second lyrics based on the first lyrics sent by the client; generate a melody based on the second lyrics and the theme sent by the client; generate a song score based on the melody and the second lyrics; generate background music and dry sound based on the song score and the melody; and generate audio based on the dry sound and the background music.

Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a first generation unit, a second generation unit, a third generation unit, a fourth generation unit, and a fifth generation unit. In some cases, the names of these units do not constitute a limitation on the units themselves; for example, the first generation unit may also be described as a "unit that generates second lyrics based on first lyrics sent by a client".

The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.

According to one or more embodiments of the present disclosure, there is provided an audio synthesizing method including: generating second lyrics based on the first lyrics sent by the client; generating a melody based on the second lyrics and the theme sent by the client; generating a song score based on the melody and the second lyrics; generating background music and dry sound based on the song score and the melody; and generating audio based on the dry sound and the background music.
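The five-step flow summarized above can be sketched end to end. Every function body below is merely a placeholder standing in for the networks described in the following paragraphs; the function names, the (pitch, duration) note representation, and the string-based lyrics are all illustrative assumptions, not the actual implementation.

```python
# Hypothetical end-to-end sketch of the disclosed pipeline:
# lyrics -> melody -> score -> (background, dry vocal) -> audio.

def generate_second_lyrics(first_lyrics):
    # Stand-in for the pre-trained first deep learning network
    # plus the client's manual edits.
    return first_lyrics + " (refined)"

def generate_melody(second_lyrics, theme):
    # Stand-in for rhyme-foot adjustment and the second network / VAE.
    # A melody is modeled here as a list of (pitch, duration) notes.
    return [(60, 1.0)] * len(second_lyrics.split())

def generate_score(melody, second_lyrics):
    # Pair each lyric token with one melody note.
    return list(zip(second_lyrics.split(), melody))

def generate_accompaniment(score, melody):
    background = [p for _, (p, _) in score]  # placeholder background track
    dry = [p for _, (p, _) in score]         # placeholder dry sound
    return background, dry

def synthesize(first_lyrics, theme):
    lyrics = generate_second_lyrics(first_lyrics)
    melody = generate_melody(lyrics, theme)
    score = generate_score(melody, lyrics)
    background, dry = generate_accompaniment(score, melody)
    return {"dry": dry, "background": background}

audio = synthesize("city lights at night", "pop")
```

Each placeholder would be replaced by the corresponding model described in the embodiments below; only the data flow between the five stages is meant to be accurate here.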

According to one or more embodiments of the present disclosure, the generating the second lyrics based on the first lyrics sent by the client includes: inputting the first lyrics sent by the client to a pre-trained first deep learning network to obtain lyrics to be processed; sending the lyrics to be processed to the client; and receiving processed lyrics from the client as the second lyrics, wherein the processed lyrics are obtained by the client processing the lyrics to be processed.

According to one or more embodiments of the present disclosure, the generating the melody based on the second lyrics and the theme sent by the client includes: adjusting the rhyme feet of the second lyrics to generate rhyme-adjusted second lyrics; and generating the melody based on a pre-trained second deep learning network, a variational autoencoder, the rhyme-adjusted second lyrics, and the theme.
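As a toy illustration of the rhyme-foot adjustment step, one can detect which lyric lines break the dominant rhyme by comparing the trailing vowel cluster of each line's final word. The vowel-signature rule below is a deliberate simplification and an assumption of ours; the actual system would rely on proper phonetic analysis of the lyrics.

```python
# Toy rhyme-foot check: find the lines whose final word does not share
# the dominant trailing-vowel signature. Simplified on purpose.

def rhyme_signature(word):
    # Take the trailing vowel cluster of a word as its rhyme signature.
    vowels = "aeiou"
    sig = ""
    for ch in reversed(word.lower()):
        if ch in vowels:
            sig = ch + sig
        elif sig:
            break
    return sig

def off_rhyme_lines(lines):
    sigs = [rhyme_signature(line.split()[-1]) for line in lines]
    dominant = max(set(sigs), key=sigs.count)
    return [i for i, s in enumerate(sigs) if s != dominant]

lyrics = ["the night is long", "we sing our song", "under the light"]
broken = off_rhyme_lines(lyrics)  # line 2 breaks the "o" rhyme
```

A real adjustment step would then rewrite the flagged lines (here, line 2) so that all endings rhyme; that rewriting is what the second deep learning network contributes.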

According to one or more embodiments of the present disclosure, the generating the song score based on the melody and the second lyrics includes: generating a target melody based on respective processing of the melody fragments included in the melody; filling the second lyrics into the target melody to generate a song score to be processed; sending the song score to be processed to the client; and receiving a processed song score from the client as the song score, wherein the processed song score is obtained by the client processing the song score to be processed.
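The "filling the second lyrics into the target melody" step can be sketched as aligning one lyric syllable with one melody note. The MIDI pitch numbers, beat durations, and one-note-per-syllable alignment are assumptions for illustration; the patent does not fix a note representation.

```python
# Hypothetical sketch: build a to-be-processed score by pairing each
# lyric syllable with one (pitch, duration) note of the target melody.

def fill_lyrics(target_melody, lyric_syllables):
    if len(target_melody) != len(lyric_syllables):
        raise ValueError("melody and lyrics must align one note per syllable")
    return [{"syllable": s, "pitch": p, "beats": d}
            for s, (p, d) in zip(lyric_syllables, target_melody)]

melody = [(60, 1.0), (62, 0.5), (64, 0.5), (67, 2.0)]
score = fill_lyrics(melody, ["shine", "on", "to", "night"])
```

The resulting score entries are what would be sent to the client for review, with the client's edited version received back as the final song score.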

According to one or more embodiments of the present disclosure, the generating the background music and the dry sound based on the song score and the melody includes: generating the background music based on the melody, the song score, and a bidirectional long short-term memory (Bi-LSTM) neural network; and converting the song score into the dry sound.

According to one or more embodiments of the present disclosure, the generating the audio based on the dry sound and the background music includes: adjusting the dry sound according to the song score to obtain an adjusted dry sound; and mixing the adjusted dry sound with the background music to generate the audio.
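The mixing step can be sketched as a sample-wise weighted sum of the two tracks with hard clipping. The per-track gains, the sample range of [-1.0, 1.0], and equal track lengths are all assumptions made for brevity; the disclosed method does not specify a mixing formula.

```python
# Minimal mixing sketch: weighted sum of the adjusted dry sound and
# the background music, hard-clipped to the valid sample range.

def mix(dry, background, dry_gain=0.7, bg_gain=0.5):
    mixed = [dry_gain * d + bg_gain * b for d, b in zip(dry, background)]
    return [max(-1.0, min(1.0, s)) for s in mixed]  # hard clip to [-1, 1]

out = mix([0.5, -0.8, 1.0], [0.2, -0.9, 1.0])
```

A production mixer would add limiting or soft clipping instead of the hard clip shown, but the data flow matches the "mix the adjusted dry sound with the background music" step above.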

According to one or more embodiments of the present disclosure, the adjusting the dry sound according to the song score to obtain the adjusted dry sound includes: in response to determining that there is a segment in the dry sound whose pitch is lower than the corresponding pitch in the song score, raising the pitch of the segment; in response to determining that there is a segment in the dry sound whose pitch is higher than the corresponding pitch in the song score, lowering the pitch of the segment; and in response to determining that there is a segment in the dry sound with a missing pitch, selecting the corresponding pitch in the song score as the pitch of the segment.
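The three adjustment rules above can be sketched directly: segments below the score pitch are raised, segments above it are lowered, and segments with no detected pitch take the score pitch. Representing pitches as MIDI note numbers and missing pitch as `None` is our assumed encoding.

```python
# Sketch of the three pitch-adjustment rules, one segment per entry.
# Each branch corresponds to one "in response to determining" clause.

def adjust_dry_vocal(dry_pitches, score_pitches):
    adjusted = []
    for sung, target in zip(dry_pitches, score_pitches):
        if sung is None:        # missing pitch: take the score pitch
            adjusted.append(target)
        elif sung < target:     # too low: raise to the score pitch
            adjusted.append(target)
        elif sung > target:     # too high: lower to the score pitch
            adjusted.append(target)
        else:                   # already on pitch: keep as-is
            adjusted.append(sung)
    return adjusted

fixed = adjust_dry_vocal([59, 65, None, 64], [60, 64, 62, 64])
```

All three correction branches converge on the score pitch here; in practice the raise/lower operations would be gradual pitch-shifts of the audio segment rather than a direct substitution.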

According to one or more embodiments of the present disclosure, there is provided an audio synthesizing apparatus including: a first generation unit configured to generate second lyrics based on the first lyrics sent by the client; a second generating unit configured to generate a melody based on the second lyrics and the theme transmitted by the client; a third generating unit configured to generate a song score based on the melody and the second lyrics; a fourth generating unit configured to generate background music and dry sound based on the music score and the melody; a fifth generating unit configured to generate audio based on the dry sound and the background music.

According to one or more embodiments of the present disclosure, the first generation unit 401 may be further configured to: input the first lyrics sent by the client to a pre-trained first deep learning network to obtain lyrics to be processed; send the lyrics to be processed to the client; and receive processed lyrics from the client as the second lyrics, wherein the processed lyrics are obtained by the client processing the lyrics to be processed.

According to one or more embodiments of the present disclosure, the second generation unit 402 may be further configured to: adjust the rhyme feet of the second lyrics to generate rhyme-adjusted second lyrics; and generate the melody based on a pre-trained second deep learning network, a variational autoencoder, the rhyme-adjusted second lyrics, and the theme.

According to one or more embodiments of the present disclosure, the third generation unit 403 may be further configured to: generate a target melody based on respective processing of the melody fragments included in the melody; fill the second lyrics into the target melody to generate a song score to be processed; send the song score to be processed to the client; and receive a processed song score from the client as the song score, wherein the processed song score is obtained by the client processing the song score to be processed.

According to one or more embodiments of the present disclosure, the fourth generation unit 404 may be further configured to: generate the background music based on the melody, the song score, and a bidirectional long short-term memory (Bi-LSTM) neural network; and convert the song score into the dry sound.

According to one or more embodiments of the present disclosure, the fifth generation unit 405 may further include an adjusting unit and a mixing unit. The adjusting unit is configured to adjust the dry sound according to the song score to obtain an adjusted dry sound. The mixing unit is configured to mix the adjusted dry sound with the background music to generate the audio.

According to one or more embodiments of the present disclosure, the adjusting unit may be further configured to: in response to determining that there is a segment in the dry sound whose pitch is lower than the corresponding pitch in the song score, raise the pitch of the segment; in response to determining that there is a segment in the dry sound whose pitch is higher than the corresponding pitch in the song score, lower the pitch of the segment; and in response to determining that there is a segment in the dry sound with a missing pitch, select the corresponding pitch in the song score as the pitch of the segment.

According to one or more embodiments of the present disclosure, there is provided an electronic device including: one or more processors; a storage device having one or more programs stored thereon which, when executed by one or more processors, cause the one or more processors to implement a method as described in any of the embodiments above.

According to one or more embodiments of the present disclosure, a computer-readable medium is provided, on which a computer program is stored, wherein the program, when executed by a processor, implements the method as described in any of the embodiments above.

The foregoing description is only of preferred embodiments of the present disclosure and an illustration of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the present disclosure.
