Musical composition generation and synthesis method and device, equipment, medium and product thereof

Document No.: 211047  Publication date: 2021-11-05

Reading note: This technology, "Musical composition generation and synthesis method and device, equipment, medium and product thereof", was designed and created by Huang Buqun and Peng Xuejie on 2021-08-31. Abstract: The application discloses a method for generating and synthesizing musical works, and a device, equipment, a medium and a product thereof, wherein the generating method comprises the following steps: obtaining accompaniment chord information and melody rhythm information corresponding to an accompaniment template, wherein the accompaniment chord information comprises a plurality of chords, and the melody rhythm information defines the rhythm of the undetermined notes, in the music melody to be obtained, that are synchronized with the chords; formatting a composition interface according to the melody rhythm information, so that the composition interface displays the duration information of the undetermined notes of the music melody; obtaining the music melody from the composition interface, wherein the music melody comprises a plurality of selected notes, each selected note being a note within the consonant interval corresponding to the chord rhythmically synchronized with it; and in response to a composition playing instruction, playing the musical composition containing the music melody. The method and the device can efficiently guide the user to create a music melody and form a musical work, enriching the means of assisted music creation and improving its efficiency.

1. A musical composition generating method, comprising the steps of:

obtaining accompaniment chord information and melody rhythm information corresponding to an accompaniment template, wherein the accompaniment chord information comprises a plurality of chords, and the melody rhythm information defines the rhythm of the undetermined notes, in the music melody to be obtained, that are synchronized with the chords;

formatting a composition interface according to the melody rhythm information, so that the composition interface displays the duration information of the undetermined notes of the music melody according to the melody rhythm information;

obtaining the music melody from the composition interface, wherein the music melody comprises a plurality of selected notes, each selected note being a note within the consonant interval corresponding to the chord that is rhythmically synchronized with it;

and in response to a composition playing instruction, playing the musical composition containing the music melody.

2. The method of generating musical composition according to claim 1, wherein obtaining accompaniment chord information and melody rhythm information corresponding to the accompaniment template comprises:

displaying an accompaniment template selection interface to list a plurality of candidate accompaniment templates;

receiving a user selection instruction to determine a target accompaniment template from the candidate accompaniment templates;

obtaining the accompaniment chord information and melody rhythm information corresponding to the target accompaniment template.

3. The method of claim 1, wherein formatting the composition interface according to the melody rhythm information to display the duration information of the undetermined notes of the music melody comprises the steps of:

displaying a composition interface, wherein the composition interface displays an array of note zones arranged along rhythm and scale dimensions, each note zone corresponding to a note of the scale dimension at a time indicated by the rhythm dimension;

and adjusting the occupied width of the corresponding note zones in the composition interface according to the durations of the undetermined notes of the music melody defined by the melody rhythm information, so as to display the duration information of the undetermined notes of the music melody.

4. The method of generating a musical composition according to claim 1 wherein the step of obtaining the musical melody from the composition interface comprises the steps of:

for the undetermined note at the current order of the music melody, determining the notes within the consonant interval corresponding to the chord rhythmically synchronized with it as candidate notes;

filtering the candidate notes according to a preset rule to obtain the remaining selectable notes;

displaying the note zones corresponding to the selectable notes at the position of the current order on the composition interface to form a note prompt zone;

receiving a selected note determined from the plurality of selectable notes within the note prompt zone, and advancing the music melody to the next order so as to iteratively determine its subsequent selected notes.

5. The method of generating musical compositions according to claim 4, wherein, in the step of filtering the candidate notes according to a preset rule to obtain the remaining selectable notes, the preset rule is configured to remove at least some notes from the plurality of candidate notes according to an earlier-ordered selected note.

6. The method of generating a musical composition according to claim 4, wherein the step of obtaining the music melody from the composition interface further comprises the subsequent step of:

in response to a reset event of any selected note, starting an order-by-order update flow for the later-ordered selected notes, so that the later-ordered selectable notes in the composition interface are automatically re-determined according to the earlier-ordered selected notes, wherein, if the re-determined selectable notes at an order position no longer include the originally determined selected note, the selected note at that position is randomly re-selected from the re-determined selectable notes.

7. The method of generating a musical composition according to claim 4, wherein the step of obtaining the music melody from the composition interface further comprises the subsequent step of:

in response to an automatic composition instruction triggered by a control in the composition interface or by a vibration sensor of the local device, automatically completing the note prompt zones corresponding to the undetermined notes of the music melody and the selected notes within those note prompt zones.

8. The method of generating a musical composition according to claim 4, wherein displaying the selectable notes at the position of the current order on the composition interface comprises the steps of:

coloring, at the position of the current order on the composition interface, the note zones corresponding to the selectable notes to form a note prompt zone, so as to complete the distinguishing display of the selectable notes;

and moving the composition interface along the rhythm dimension so as to move the note prompt zone of the current order to a preset focal position.

9. The method of generating a musical composition according to claim 8, wherein receiving a selected note determined from a plurality of selectable notes within the note prompt zone comprises the steps of:

in response to a selection operation on one of the note zones corresponding to the plurality of selectable notes in the note prompt zone, receiving the note corresponding to the selected note zone as the selected note of the current order of the music melody;

and highlighting the selected note zone, adding a lyric editing control in the note zone, and displaying in the lyric editing control the characters of the lyric text in the lyric cache area that are rhythmically synchronized with the note.

10. The method of generating a musical composition according to claim 9, wherein receiving a selected note determined from a plurality of selectable notes within the note prompt zone further comprises the steps of:

in response to an editing event on the characters in the lyric editing control of the note zone, replacing the edited characters so as to update the corresponding content in the lyric cache area according to the character-count correspondence;

and in response to a content update event of the lyric cache area, refreshing the characters in the lyric editing controls corresponding to the selected notes of the music melody in the composition interface.

11. The musical composition generating method according to claim 1, wherein playing the musical composition including the musical melody in response to a composition playing instruction, comprises the steps of:

in response to a composition playing instruction, acquiring a preset sound type;

acquiring the music melody to which the sound effect corresponding to the preset sound type is applied;

and playing the musical composition containing the music melody.

12. The musical composition generating method according to claim 11, wherein acquiring the music melody to which the corresponding sound effect is applied according to the preset sound type comprises the steps of:

determining that the preset sound type is a human voice type, indicating singing according to lyrics;

acquiring a preset lyric text corresponding to the human voice type;

and, adapting to the human voice type, constructing a sound effect of human vocal pronunciation of the lyric text, applying the sound effect to the music melody, and synthesizing, for the music melody, background music corresponding to the accompaniment template, wherein the background music is music played according to the accompaniment chord information.

13. The musical composition generating method according to claim 11, wherein acquiring the music melody to which the corresponding sound effect is applied according to the preset sound type comprises the steps of:

determining that the preset sound type is an instrument type, indicating performance by a specific type of musical instrument;

acquiring preset sound effect data corresponding to the instrument type;

and, adapting to the instrument type, constructing the sound effect of the corresponding musical instrument according to the sound effect data and applying the sound effect to the music melody.

14. The method of generating a musical composition according to claim 1, further comprising the subsequent steps of:

in response to a publish-and-submit instruction, acquiring text information input from a publish-editing interface, publishing the musical composition to a corresponding control of a browsable interface, and embedding, in the corresponding control, a player for playing the musical composition together with the text information.

15. The method of generating a musical composition according to claim 14, comprising, before the publish-and-submit instruction is issued, the step of:

in response to a determination event of a selected note, judging whether the selected note corresponds to the last undetermined note of the music melody; if so, activating a publish control for triggering the publish-and-submit instruction, and otherwise keeping the publish control inactive.

16. The method of generating a musical composition according to claim 1, further comprising the subsequent steps of:

displaying a word filling interface for receiving lyric input, the word filling interface providing lyric prompt information corresponding to the determined selected notes in the music melody;

and in response to a word-filling confirmation instruction, storing the lyrics input in the word-filling interface into a lyric cache area, and synchronizing them to the note zones of the corresponding selected notes in the composition interface for display.

17. The musical composition generation method of claim 16 wherein the step of displaying a word-filling interface for receiving lyric inputs comprises:

displaying a word-filling interface, dividing the selected notes among the lyric sentences according to the sentence-division information in the melody rhythm information of the music melody, and determining the word count information of each lyric sentence according to its total number of selected notes;

displaying a plurality of editing areas in the word-filling interface, one per lyric sentence;

and loading, for each editing area, the single-sentence text of the corresponding lyric sentence in the lyric text of the lyric cache area for display, and displaying, for each editing area, lyric prompt information including the maximum word count of the lyrics to be input for the corresponding lyric sentence.

18. The method of generating musical compositions according to claim 17, wherein loading, for each editing area, the single-sentence text of the corresponding lyric sentence and displaying, for each editing area, lyric prompt information including the maximum word count of the lyrics to be input, comprises the steps of:

acquiring the lyric text in the lyric cache area, and dividing the lyric text into the single-sentence texts of the lyric sentences according to the sentence-division information;

and displaying each single-sentence text in the text box of the corresponding editing area, and displaying the lyric prompt information in the prompt area of that editing area, wherein the lyric prompt information comprises the maximum word count of the lyric sentence and the current number of characters input in the text box.

19. The method of generating a musical composition according to claim 16, further comprising, after displaying the word-filling interface for receiving lyric input, the steps of:

in response to an intelligent reference instruction triggered on a single-sentence text in the lyric text, entering an intelligent search interface;

in response to keywords input from the intelligent search interface, displaying one or more recommended texts matching the keywords;

and in response to a selection instruction for one of the recommended texts, replacing the single-sentence text with the selected recommended text and synchronizing it to the lyric cache area.

20. The method of generating a musical composition according to claim 16, further comprising, after displaying the word-filling interface for receiving lyric input, the step of:

in response to an automatic word-filling instruction triggered by a control in the word-filling interface or by a vibration sensor of the local device, automatically completing the lyric text according to the selected notes of the music melody.

21. The musical piece generating method according to any one of claims 1 to 20, further comprising the steps of:

submitting the draft information corresponding to the musical composition to a server, wherein the draft information comprises an accompaniment template corresponding to the musical composition, a preset sound type and the musical melody.

22. The musical composition generating method according to any one of claims 1 to 20, wherein the playing speed of the musical composition is uniformly determined according to a preset tempo.

23. The musical composition generating method according to any one of claims 1 to 20, wherein each selected note is a chord tone within the consonant interval corresponding to the chord, specified in the preset accompaniment chord information, that is rhythmically synchronized with it.

24. The musical composition generating method according to any one of claims 1 to 20, wherein the chords are block chords and/or broken chords, the chords in the accompaniment chord information being organized accordingly, and each chord being rhythmically synchronized with the one or more selected notes determined in order.

25. A musical composition synthesis method, comprising the steps of:

in response to a music synthesis instruction submitted by an original user, determining draft information of the original user, wherein the draft information comprises the accompaniment template specified in the instruction, a preset sound type, and a music melody whose selected notes the original user has determined, the durations of the selected notes of the music melody being determined according to the melody rhythm information corresponding to the accompaniment template;

storing the draft information into a personal editing library of the original user for subsequent invocation;

synthesizing the corresponding sound effect into the music melody according to the preset sound type;

and synthesizing the background music formed according to the accompaniment chord information together with the music melody into a playable musical composition, and pushing the musical composition to the user.

26. The musical composition synthesis method according to claim 25, further comprising the steps of:

in response to an authorized access instruction, pushing the draft information to an authorized user authorized by the original user;

and receiving an updated version of the draft information submitted by the authorized user to replace its original version, regenerating the playable musical composition according to the updated version, and pushing the musical composition to the original user.

27. The method of synthesizing a musical composition according to claim 26 wherein the updated version of the draft information includes lyric text corresponding to the musical melody.

28. The method of synthesizing a musical composition according to claim 25, wherein, in the step of synthesizing the corresponding sound effect into the music melody according to the preset sound type, when the preset sound type is a human voice type, a pre-trained acoustic model is invoked to synthesize the lyric text carried in the draft information into a sound effect of a predetermined timbre, which is then synthesized into the music melody.

29. A musical composition generating apparatus, comprising:

a template acquisition module, configured to acquire accompaniment chord information and melody rhythm information corresponding to an accompaniment template, wherein the accompaniment chord information comprises a plurality of chords, and the melody rhythm information defines the rhythm of the undetermined notes, in the music melody to be acquired, that are synchronized with the chords;

a composition formatting module, configured to format a composition interface according to the melody rhythm information, so as to display the duration information of the undetermined notes of the music melody according to the melody rhythm information;

a melody acquisition module, configured to acquire the music melody from the composition interface, wherein the music melody comprises a plurality of selected notes, each selected note being a note within the consonant interval corresponding to the chord rhythmically synchronized with it;

and a composition playing module, configured to play, in response to a composition playing instruction, the musical composition containing the music melody.

30. A musical composition synthesis apparatus, comprising:

a draft acquisition module, configured to determine, in response to a music synthesis instruction submitted by an original user, draft information of the original user, wherein the draft information comprises the accompaniment template specified in the instruction, a preset sound type, and a music melody whose selected notes the original user has determined, the durations of the selected notes of the music melody being determined according to the melody rhythm information corresponding to the accompaniment template;

a draft storage module, configured to store the draft information into a personal editing library of the original user for subsequent invocation;

a sound effect synthesis module, configured to synthesize the corresponding sound effect into the music melody according to the preset sound type;

and a music synthesis module, configured to synthesize the background music formed according to the accompaniment chord information together with the music melody into a playable musical composition, and push the musical composition to the user.

31. A computer device, comprising a central processor and a memory, wherein the central processor is configured to invoke a computer program stored in the memory to perform the steps of the method according to any one of claims 1 to 28.

32. A computer-readable storage medium, characterized in that it stores, in the form of computer-readable instructions, a computer program implementing the method of any one of claims 1 to 28, the computer program, when invoked by a computer, performing the steps of the corresponding method.

33. A computer program product comprising computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method as claimed in any one of claims 1 to 28.

Technical Field

The present application relates to the field of audio processing technologies, and in particular, to a method for generating and synthesizing a musical composition, and a corresponding apparatus, a computer device, a computer-readable storage medium, and a computer program product.

Background

Most prior-art assisted music creation technologies merely automate input and output: users no longer need to compose with pen, paper and musical instruments, which offers some convenience, yet they must still rely on music theory knowledge to create effective musical works.

Existing software related to assisted music creation lacks business logic that incorporates music theory to guide users toward effective creation. Technically, it depends on acoustic and synthesis models drawn from artificial intelligence, and songs "created" by such technology rely on big data, being obtained only after training a "creative ability". The resulting works are therefore uneven in quality and often lack individuality and artistry. Moreover, because AI-created music offers no link for user participation, gains in user experience do not follow; such systems cannot truly attract user traffic to participate in creation and sharing, and no new business format can form. Their limitations are thus obvious.

These defects of the prior art result in low efficiency of assisted music creation and leave the intelligent capabilities of computer equipment untapped. Considerable room for exploration remains in the related fields, and the present application attempts such exploration.

Disclosure of Invention

A primary object of the present application is to solve at least one of the above problems by providing a musical composition generating method and a corresponding apparatus, computer device, computer-readable storage medium and computer program product, so as to implement assisted music creation.

Another object of the present application is to solve at least one of the above problems by providing a musical composition synthesis method and a corresponding apparatus, computer device, computer-readable storage medium and computer program product, so as to support assisted music creation.

To serve the various purposes of the present application, the following technical solutions are adopted:

a musical composition generating method adapted to one of the objects of the present application, comprising the steps of:

obtaining accompaniment chord information and melody rhythm information corresponding to an accompaniment template, wherein the accompaniment chord information comprises a plurality of chords, and the melody rhythm information defines the rhythm of the undetermined notes, in the music melody to be obtained, that are synchronized with the chords;

formatting a composition interface according to the melody rhythm information, so that the composition interface displays the duration information of the undetermined notes of the music melody according to the melody rhythm information;

obtaining the music melody from the composition interface, wherein the music melody comprises a plurality of selected notes, each selected note being a note within the consonant interval corresponding to the chord that is rhythmically synchronized with it;

and in response to a composition playing instruction, playing the musical composition containing the music melody.

In a further embodiment, obtaining accompaniment chord information and melody rhythm information corresponding to the accompaniment template comprises the following steps:

displaying an accompaniment template selection interface to list a plurality of candidate accompaniment templates;

receiving a user selection instruction to determine a target accompaniment template from the candidate accompaniment templates;

obtaining the accompaniment chord information and melody rhythm information corresponding to the target accompaniment template.
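By way of illustration only, the template data consulted in this step might be organized as in the following Python sketch; the template identifier, field names and values are assumptions of this sketch, not part of the disclosure.

```python
# Hypothetical accompaniment template store: each template carries the
# accompaniment chord information (a chord progression) and the melody
# rhythm information (durations, in beats, of the undetermined notes).
ACCOMPANIMENT_TEMPLATES = {
    "pop_ballad_4_4": {
        "chords": ["C", "G", "Am", "F"],              # accompaniment chord information
        "melody_rhythm": [1.0, 0.5, 0.5, 1.0, 1.0],   # rhythm of undetermined notes
        "tempo_bpm": 90,
    },
}

def get_template_info(template_id):
    """Return (accompaniment chord info, melody rhythm info) for a template."""
    t = ACCOMPANIMENT_TEMPLATES[template_id]
    return t["chords"], t["melody_rhythm"]

chords, rhythm = get_template_info("pop_ballad_4_4")
```

A real implementation would fetch this record from a server after the user picks a target template in the selection interface.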

In a further embodiment, formatting the composition interface according to the melody rhythm information to display the duration information of the undetermined notes of the music melody comprises the steps of:

displaying a composition interface, wherein the composition interface displays an array of note zones arranged along rhythm and scale dimensions, each note zone corresponding to a note of the scale dimension at a time indicated by the rhythm dimension;

and adjusting the occupied width of the corresponding note zones in the composition interface according to the durations of the undetermined notes of the music melody defined by the melody rhythm information, so as to display the duration information of the undetermined notes of the music melody.
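The width adjustment above can be sketched as a simple proportional mapping from note duration to zone width; the pixel scale factor is an illustrative assumption.

```python
def note_zone_widths(durations, beat_px=80):
    """Map each undetermined note's duration (in beats) to the pixel width
    of its note zone, so the interface visually conveys duration."""
    return [int(d * beat_px) for d in durations]

# A half-beat note occupies half the width of a one-beat note.
widths = note_zone_widths([1.0, 0.5, 0.5, 2.0])
```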

In a further embodiment, the step of obtaining the music melody from the composition interface includes the steps of:

for the undetermined note at the current order of the music melody, determining the notes within the consonant interval corresponding to the chord rhythmically synchronized with it as candidate notes;

filtering the candidate notes according to a preset rule to obtain the remaining selectable notes;

displaying the note zones corresponding to the selectable notes at the position of the current order on the composition interface to form a note prompt zone;

receiving a selected note determined from the plurality of selectable notes within the note prompt zone, and advancing the music melody to the next order so as to iteratively determine its subsequent selected notes.
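The candidate determination and filtering steps above can be sketched as follows. Three assumptions of this sketch, not drawn from the disclosure, are: equal-tempered MIDI pitch numbers, chord tones standing in for the notes of the consonant interval, and a maximum-leap constraint as one possible preset rule.

```python
# Chord tones as semitone offsets from the chord root, for two triad types.
CHORD_TONES = {"maj": {0, 4, 7}, "min": {0, 3, 7}}

SCALE = list(range(60, 73))  # one octave of MIDI pitches, C4..C5

def candidate_notes(chord_root, chord_quality):
    """Notes of the displayed scale that are consonant with the chord."""
    tones = CHORD_TONES[chord_quality]
    return [p for p in SCALE if (p - chord_root) % 12 in tones]

def filter_candidates(candidates, prev_note, max_leap=7):
    """Illustrative preset rule: drop candidates leaping more than a fifth
    from the earlier-ordered selected note; keep all if that empties the list."""
    if prev_note is None:
        return candidates
    return [p for p in candidates if abs(p - prev_note) <= max_leap] or candidates

cands = candidate_notes(60, "maj")                  # C major tones within C4..C5
selectable = filter_candidates(cands, prev_note=64)  # constrain the melodic leap
```

In the interface flow, `selectable` would be colored into the note prompt zone and the user's tap would fix the selected note before advancing to the next order.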

In a further embodiment, in the step of filtering the candidate notes according to a preset rule to obtain the remaining selectable notes, the preset rule is configured to remove at least some notes from the plurality of candidate notes according to an earlier-ordered selected note.

In a further embodiment, the step of obtaining the music melody from the composition interface further comprises the subsequent step of:

in response to a reset event of any selected note, starting an order-by-order update flow for the later-ordered selected notes, so that the later-ordered selectable notes in the composition interface are automatically re-determined according to the earlier-ordered selected notes, wherein, if the re-determined selectable notes at an order position no longer include the originally determined selected note, the selected note at that position is randomly re-selected from the re-determined selectable notes.
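The reset cascade above might be sketched as below; `selectable_fn` is an assumed callback standing in for the selectable-note computation of the earlier embodiment, taking the earlier-ordered selected notes and returning the selectable notes for the next order.

```python
import random

def redetermine_after_reset(selected, reset_index, selectable_fn):
    """After the note at reset_index is reset, walk every later order:
    re-derive its selectable notes from the notes before it, and if the
    originally determined selection is no longer selectable, pick a
    replacement at random from the re-determined selectable notes."""
    for i in range(reset_index + 1, len(selected)):
        options = selectable_fn(selected[:i])
        if selected[i] not in options:
            selected[i] = random.choice(options)
    return selected
```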

In a further embodiment, the step of obtaining the music melody from the composition interface further comprises the subsequent step of:

in response to an automatic composition instruction triggered by a control in the composition interface or by a vibration sensor of the local device, automatically completing the note prompt zones corresponding to the undetermined notes of the music melody and the selected notes within those note prompt zones.

In a further embodiment, displaying the selectable notes at the position of the current order on the composition interface comprises the steps of:

coloring, at the position of the current order on the composition interface, the note zones corresponding to the selectable notes to form a note prompt zone, so as to complete the distinguishing display of the selectable notes;

and moving the composition interface along the rhythm dimension so as to move the note prompt zone of the current order to a preset focal position.

In an expanded embodiment, receiving a selected note determined from a plurality of selectable notes within the note prompt zone comprises the steps of:

in response to a selection operation on one of the note zones corresponding to the plurality of selectable notes in the note prompt zone, receiving the note corresponding to the selected note zone as the selected note of the current order of the music melody;

and highlighting the selected note zone, adding a lyric editing control in the note zone, and displaying in the lyric editing control the characters of the lyric text in the lyric cache area that are rhythmically synchronized with the note.

In a further embodiment, receiving a selected note determined from a plurality of selectable notes within the note prompt zone further comprises the steps of:

in response to an editing event on the characters in the lyric editing control of the note zone, replacing the edited characters so as to update the corresponding content in the lyric cache area according to the character-count correspondence;

and in response to a content update event of the lyric cache area, refreshing the characters in the lyric editing controls corresponding to the selected notes of the music melody in the composition interface.

In a further embodiment, the step of playing the musical composition containing the music melody in response to a composition playing instruction comprises the steps of:

in response to a composition playing instruction, acquiring a preset sound type;

acquiring the music melody to which the sound effect corresponding to the preset sound type is applied;

and playing the musical composition containing the music melody.

In a further embodiment, the step of acquiring the music melody to which the corresponding sound effect is applied according to the preset sound type comprises the steps of:

determining that the preset sound type is a human voice type, indicating singing according to lyrics;

acquiring a preset lyric text corresponding to the human voice type;

and, adapting to the human voice type, constructing a sound effect of human vocal pronunciation of the lyric text, applying the sound effect to the music melody, and synthesizing, for the music melody, background music corresponding to the accompaniment template, wherein the background music is music played according to the accompaniment chord information.

In a further embodiment, the step of obtaining the music melody to which the corresponding sound effect is applied according to the preset sound type comprises the following steps:

judging and determining that the preset sound type is an instrument type, which represents performance by a specific type of musical instrument;

acquiring preset sound effect data corresponding to the type of the musical instrument;

adapting to the type of the musical instrument, constructing the sound effect of the corresponding musical instrument according to the sound effect data, and applying the sound effect to the music melody.
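By way of a non-limiting illustration, the dispatch by preset sound type described in the two embodiments above (human voice versus instrument) may be sketched as follows; all names (`SoundType`, `apply_sound_effect`) are hypothetical assumptions and not part of the disclosure:

```python
from enum import Enum

class SoundType(Enum):
    HUMAN_VOICE = "human_voice"
    PIANO = "piano"

def apply_sound_effect(melody_notes, sound_type, lyric_text=None):
    """Render each melody note with the effect for the preset sound type."""
    if sound_type is SoundType.HUMAN_VOICE:
        if not lyric_text:
            raise ValueError("human voice rendering requires a lyric text")
        # One sung character per note, per the word-count correspondence above.
        chars = list(lyric_text)[:len(melody_notes)]
        return [{"note": n, "voice": "vocal", "char": c}
                for n, c in zip(melody_notes, chars)]
    # Instrument types look up preset sound-effect data keyed by the instrument.
    return [{"note": n, "voice": sound_type.value} for n in melody_notes]
```

The actual embodiments would replace the returned dictionaries with synthesized audio data, but the branching structure is the same.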

In an extended embodiment, the method further comprises the following subsequent steps:

responding to a publish-and-submit instruction, acquiring text information input from a publish editing interface, publishing the musical composition to a corresponding control of a browsable interface, and embedding, in the corresponding control, a player for playing the musical composition together with the text information.

In a further embodiment, before the publish-and-submit instruction is responded to, the method comprises the following steps:

and responding to a determination event of a selected note, judging whether the selected note corresponds to the last undetermined note in the music melody; if so, activating a publish control for triggering the publish-and-submit instruction, and otherwise keeping the publish control in an inactivated state.
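The activation logic for the publish control described above can be illustrated with a minimal, assumed sketch (function name hypothetical):

```python
def publish_control_state(determined_notes, total_pending_notes):
    """Activate the publish control only once every pending note of the
    music melody has been determined by the user."""
    return "active" if determined_notes == total_pending_notes else "inactive"
```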

In an extended embodiment, the method further comprises the following subsequent steps:

displaying a word filling interface for receiving lyric input, the word filling interface providing lyric prompt information corresponding to the determined selected notes in the music melody;

and responding to a word-filling confirmation instruction, storing the lyrics input in the word-filling interface into the lyric cache, and synchronizing them to the note zones of the corresponding selected notes in the composition interface for display.

In a further embodiment, the step of displaying a word-filling interface for receiving input of lyrics comprises:

displaying a word filling interface, dividing the total number of selected notes corresponding to each lyric single sentence according to sentence dividing information in the melody rhythm information corresponding to the music melody, and determining word number information corresponding to each lyric single sentence according to the total number of the selected notes;

correspondingly displaying a plurality of editing areas in the word filling interface according to the single lyric sentence;

and loading, for each editing area, a single-sentence text for displaying the corresponding lyric single sentence of the lyric text in the lyric cache, and displaying, for each editing area, lyric prompt information including the maximum word count of the lyrics to be input for the corresponding single sentence.

In an embodiment, the step of loading, for each editing area, a single-sentence text displaying the corresponding lyric single sentence of the lyric text in the lyric cache, and displaying, for each editing area, lyric prompt information including the maximum word count of the lyrics to be input for the corresponding single sentence, comprises the following steps:

acquiring a lyric text in a lyric cache region, and dividing a single-sentence text corresponding to each lyric single sentence in the lyric text according to the sentence dividing information;

and displaying each single-sentence text in the text box of its corresponding editing area, and displaying the lyric prompt information in the prompt area of the editing area, wherein the lyric prompt information comprises the maximum word count of the lyric single sentence and the current number of characters input in the text box.
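A minimal sketch of the word-count computation described above, assuming the sentence-division information is given as cumulative note indices at which each lyric single sentence ends (all names hypothetical):

```python
def sentence_word_limits(selected_notes, sentence_breaks):
    """Split the selected notes into lyric single sentences and return, per
    sentence, the maximum number of characters that can be filled in
    (one character per note)."""
    sentences, start = [], 0
    for end in sentence_breaks:
        sentences.append(selected_notes[start:end])
        start = end
    if start < len(selected_notes):
        # Trailing notes after the last break form the final sentence.
        sentences.append(selected_notes[start:])
    return [len(s) for s in sentences]
```

Each returned count would be shown as the "maximum word count" prompt of the corresponding editing area.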

In a further embodiment, after displaying the word-filling interface for receiving input of lyrics, the method comprises the following steps:

responding to an intelligent reference instruction triggered by a single sentence text in the lyric text, and entering an intelligent search interface;

displaying one or more recommended texts matched with the keywords in response to the keywords input from the intelligent search interface;

and responding to a selection instruction for one of the recommended texts, replacing the single-sentence text with the selected recommended text, and synchronizing the replacement to the lyric cache.

In a further embodiment, after displaying the word-filling interface for receiving input of lyrics, the method comprises the following steps:

responding to an automatic word-filling instruction triggered by a control in the word-filling interface or by a vibration sensor of the local device, and automatically completing the lyric text according to the selected notes of the music melody.

In an extended embodiment, the method further comprises the following steps:

submitting the draft information corresponding to the musical composition to a server, wherein the draft information comprises an accompaniment template corresponding to the musical composition, a preset sound type and the musical melody.

In a preferred embodiment, the playing speed of the musical composition is uniformly determined according to a preset tempo.

In a preferred embodiment, the selected note is a chord tone within the consonant interval corresponding to the chord, specified in the preset accompaniment chord information, that is synchronized with it in rhythm.

In a preferred embodiment, the chord is a block chord and/or a broken chord, the chords in the accompaniment chord information are organized in chord progressions, and each chord is rhythmically synchronized with one or more sequentially determined selected notes.
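As an illustrative sketch of the preferred embodiments above, the chord tones offered as selectable notes for a note zone can be derived from the root and quality of the chord synchronized with it; the pitch-class table and interval map below are standard music-theory facts, while the function name is a hypothetical assumption:

```python
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
# Semitone offsets from the root for the two most common triad qualities.
CHORD_INTERVALS = {"maj": (0, 4, 7), "min": (0, 3, 7)}

def chord_tones(root, quality):
    """Return the pitch classes of the chord tones, i.e. the candidate
    selectable notes for a note zone synchronized with this chord."""
    base = PITCH_CLASSES.index(root)
    return [PITCH_CLASSES[(base + i) % 12] for i in CHORD_INTERVALS[quality]]
```

For example, a note zone synchronized with a C major chord would offer C, E, and G (in any octave) as its selectable notes.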

A musical composition synthesis method, adapted to one of the objects of the present application, comprising the following steps:

responding to a music synthesis instruction submitted by an original user, and determining draft information of the original user, wherein the draft information comprises the accompaniment template specified in the instruction, a preset sound type, and a music melody whose selected notes have been determined by the original user, the durations of the selected notes of the music melody being determined according to the melody rhythm information corresponding to the accompaniment template;

storing the draft information into a personal editing library of the original user for subsequent invocation;

synthesizing the corresponding sound effect into the music melody according to the preset sound type;

and synthesizing the background music formed according to the accompaniment chord information together with the music melody into a playable musical composition, and pushing the musical composition to the user.
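The four synthesis steps above can be sketched as follows, under the assumption of simple in-memory structures for the draft, the personal editing library, and the rendering callback (all names hypothetical):

```python
def synthesize_work(draft, edit_library, render_effect):
    """Store the draft, apply the sound effect to the melody, build the
    background music from the accompaniment chord information, and return
    the playable work to push to the user."""
    # Step 2: store the draft in the original user's personal editing library.
    edit_library.setdefault(draft["user"], []).append(draft)
    # Step 3: synthesize the sound effect into the melody per the sound type.
    melody_audio = render_effect(draft["melody"], draft["sound_type"])
    # Step 4: form background music from the template's chord information
    # and combine it with the rendered melody into the playable work.
    background = [("chord", c) for c in draft["template"]["chords"]]
    return {"melody": melody_audio, "background": background}
```

In a real deployment the library would be a persistent store and `render_effect` an audio-synthesis service, but the control flow mirrors the claimed steps.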

In an extended embodiment, the method comprises the following steps:

responding to an authorized access instruction, and pushing the draft information to an authorized user authorized by the original user;

and receiving an updated version of the draft information submitted by the authorized user to replace its original version, regenerating the playable musical composition according to the updated version, and pushing the regenerated musical composition to the original user.

In a preferred embodiment, the updated version of the draft information includes lyric text corresponding to the music melody.

In a preferred embodiment, in the step of synthesizing the corresponding sound effect into the music melody according to the preset sound type, when the preset sound type is a human voice type, a pre-trained acoustic model is called to synthesize the lyric text carried in the draft information into a sound effect with a preset timbre, and the sound effect is then synthesized into the music melody.

A musical composition generating apparatus adapted to one of the objects of the present application comprises: a template acquisition module, a composition formatting module, a melody acquisition module, and a music playing module. The template acquisition module is used for acquiring the accompaniment chord information and melody rhythm information corresponding to an accompaniment template, the accompaniment chord information comprising a plurality of chords, and the melody rhythm information limiting the rhythm of the to-be-determined notes, synchronized with the chords, in the music melody to be acquired. The composition formatting module is used for formatting a composition interface according to the melody rhythm information so that the composition interface displays the duration information of the to-be-determined notes of the music melody. The melody acquisition module is used for acquiring the music melody from the composition interface, the music melody comprising a plurality of selected notes, each selected note being a note within the consonant interval corresponding to the chord synchronized with it in rhythm. The music playing module is used for playing, in response to a composition playing instruction, the musical composition containing the music melody.

A musical composition synthesis apparatus adapted to one of the objects of the present application comprises: a draft acquisition module, a draft storage module, a sound effect synthesis module, and a music synthesis module. The draft acquisition module is used for responding to a music synthesis instruction submitted by an original user and determining draft information of the original user, wherein the draft information comprises the accompaniment template specified in the instruction, a preset sound type, and a music melody whose selected notes have been determined by the original user, the durations of the selected notes being determined according to the melody rhythm information corresponding to the accompaniment template. The draft storage module is used for storing the draft information into a personal editing library of the original user for subsequent invocation. The sound effect synthesis module is used for synthesizing the corresponding sound effect into the music melody according to the preset sound type. The music synthesis module is used for synthesizing the background music played according to the accompaniment chord information together with the music melody into a playable musical composition and pushing the musical composition to the user.

A computer device adapted to one of the objects of the present application includes a central processing unit and a memory, the central processing unit is configured to invoke and run a computer program stored in the memory to execute the steps of the musical piece generating method or the musical piece synthesizing method described in the present application.

A computer-readable storage medium stores, in the form of computer-readable instructions, a computer program implementing the musical composition generating method or the musical composition synthesizing method described above; when the computer program is invoked and run by a computer, the steps included in the corresponding method are executed.

A computer program product, provided to adapt to another object of the present application, comprises computer programs/instructions which, when executed by a processor, implement the steps of the method described in any of the embodiments of the present application.

Compared with the prior art, the application has the following advantages:

firstly, the present application provides the accompaniment template required for creating a musical composition. The accompaniment template carries the accompaniment chord information and melody rhythm information required by the musical composition, and the duration information of each undetermined note in the composition interface is formatted according to the rhythm defined by the melody rhythm information. The user is thereby guided to select each undetermined note in the composition interface according to the melody rhythm information and the corresponding chords, determining every selected note required by the music melody, so that the music melody, and further the musical composition, is constructed. Throughout this process the user can compose essentially without relying on music-theory knowledge, selecting each required note purely under the prompting of the technical solution of the present application. The creation process is simple and efficient, the means of music-assisted creation are enriched, and the long-standing failure of the music-assisted creation field to meet popular demand is remedied.

Secondly, the technical means adopted by the present application are easy to implement and inexpensive to run. For example, neither the manner of obtaining the accompaniment template nor that of obtaining the music melody at the composition interface depends on big data, so the solution can be implemented at the client, at the server, or divided between the two. The implementation is therefore flexible and low-cost, the time spent on creation is greatly reduced, the computing-resource advantages of computer equipment are fully exploited, the functions of music-assisted creation products are perfected, and the production efficiency of music-assisted creation is greatly improved.

In addition, the implementation of the technical solution of the present application redefines the state of music-assisted creation, making everyone a music maker. By decoupling the formulation of the accompaniment template from the creation of the music melody, user traffic can be activated, the creation and sharing of user works can be promoted, and a new Internet music ecology can be defined.

Drawings

The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:

FIG. 1 is a schematic flow chart diagram of an exemplary embodiment of a musical composition generation method of the present application;

FIG. 2 is a schematic layout of a composition interface of the present application;

FIG. 3 is a schematic flow chart illustrating a process of obtaining an accompaniment template according to the present application;

FIG. 4 is a schematic view of an interface corresponding to acquiring an accompaniment template according to the present application;

FIG. 5 is a schematic flow chart of a formatting composition interface according to the present application;

FIG. 6 is a schematic diagram of a note-hinting area displayed on a composition interface of the present application;

FIG. 7 is a schematic flow chart illustrating a music melody capturing process according to the present application;

FIGS. 8 and 9 are schematic diagrams of a composition interface illustrating the process of determining a selected note in the composition interface;

FIG. 10 is a schematic flow chart illustrating a dynamic shift process of the composition interface of the present application;

FIG. 11 is a flowchart illustrating a process of displaying lyrics in the note zone in response to note selection according to the present application;

FIG. 12 is a flow chart illustrating the process of updating lyrics in response to a text editing event in the note zone of the present application;

FIG. 13 is a schematic flow chart illustrating a process for playing a musical composition according to the present application;

FIG. 14 is a layout diagram of a composition interface of the present application in a state, mainly illustrating a control of a preset sound type;

FIG. 15 is a schematic flow chart illustrating audio synthesis according to the type of human voice;

FIG. 16 is a schematic flow chart of audio synthesis according to the type of musical instrument;

FIG. 17 is a flow chart illustrating an assisted lyric creation process according to the present application;

FIG. 18 is a schematic illustration of a word-filling interface of the present application;

FIG. 19 is a flow chart illustrating the process of constructing a word-filling interface according to the present application;

FIG. 20 is a flowchart illustrating a process for intelligently generating lyric text according to the present application;

FIGS. 21 and 22 are corresponding graphical user interfaces illustrating an intelligent search interface and a recommended text display page, respectively, during intelligent lyric text generation according to the present application;

FIG. 23 is a schematic flow chart of a method for synthesizing a musical composition according to the present application;

FIG. 24 is a schematic flow chart illustrating the support of cross-user collaborative authoring according to the present application;

FIGS. 25 and 26 are functional block diagrams of exemplary embodiments of a musical piece generating apparatus and a musical piece composing apparatus, respectively, according to the present application;

FIG. 27 is a schematic structural diagram of a computer device used in the present application.

Detailed Description

Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.

As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.

It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

As will be appreciated by those skilled in the art, "client," "terminal," and "terminal device" as used herein include both devices that are wireless signal receivers, which are devices having only wireless signal receivers without transmit capability, and devices that are receive and transmit hardware, which have receive and transmit hardware capable of two-way communication over a two-way communication link. Such a device may include: cellular or other communication devices such as personal computers, tablets, etc. having single or multi-line displays or cellular or other communication devices without multi-line displays; PCS (Personal Communications Service), which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "client," "terminal device" can be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "client", "terminal Device" used herein may also be a communication terminal, a web terminal, a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a Mobile phone with music/video playing function, and may also be a smart tv, a set-top box, and the like.

The hardware referred to by names such as "server", "client", and "service node" is essentially electronic equipment with the performance of a personal computer: a hardware device having the necessary components disclosed by the von Neumann architecture, such as a central processing unit (including an arithmetic unit and a controller), memory, input devices, and output devices. A computer program is stored in the memory; the central processing unit calls the program stored in external memory into internal memory for running, executes the instructions in the program, and interacts with the input and output devices, thereby completing specific functions.

It should be noted that the concept of "server" as referred to in this application can be extended to the case of a server cluster. According to the network deployment principle understood by those skilled in the art, the servers should be logically divided, and in physical space, the servers may be independent from each other but can be called through an interface, or may be integrated into one physical computer or a set of computer clusters. Those skilled in the art will appreciate this variation and should not be so limited as to restrict the implementation of the network deployment of the present application.

One or more technical features of the present application, unless expressly specified otherwise, may be deployed on a server and implemented by a client remotely invoking an online service interface provided by that server, or may be deployed and run directly on the client.

Unless specified in clear text, the neural network model referred to or possibly referred to in the application can be deployed in a remote server and used for remote call at a client, and can also be deployed in a client with qualified equipment capability for direct call.

Various data referred to in the present application may be stored in a server remotely or in a local terminal device unless specified in the clear text, as long as the data is suitable for being called by the technical solution of the present application.

Those skilled in the art will appreciate that, although the various methods of the present application are described based on the same concept so as to be common to each other, they may be performed independently unless otherwise specified. Likewise, each embodiment disclosed in the present application is proposed based on the same inventive concept, so concepts expressed identically, and concepts whose expressions differ only for convenience, should be understood equally.

The embodiments to be disclosed herein can be flexibly constructed by cross-linking related technical features of the embodiments unless the mutual exclusion relationship between the related technical features is stated in the clear text, as long as the combination does not depart from the inventive spirit of the present application and can meet the needs of the prior art or solve the deficiencies of the prior art. Those skilled in the art will appreciate variations therefrom.

The musical composition generation method of the present application can be programmed as a computer program product and deployed to run in a terminal device and/or a server, so that, after the computer program product runs in the form of a web page program or an application program, a client can access its exposed user interface for human-computer interaction. Referring to fig. 1, in an exemplary embodiment, the method includes the following steps:

step S1100, obtaining accompaniment chord information and melody rhythm information corresponding to the accompaniment template, wherein the accompaniment chord information comprises a plurality of chords, and the melody rhythm information is used for limiting the rhythm of to-be-determined notes which are synchronous with the chords in the music melody to be obtained:

in order to provide a musical composition creation environment in which the user of the computer program product of the present application can compose and fill in lyrics, after the computer program product is run, the relevant information required for creating the music melody of a musical composition, including the accompaniment chord information and the melody rhythm information, needs to be determined, so as to realize a guided process of composing and word-filling for the user.

The musical composition is of a pure music type or a singing type. A pure music composition generally comprises background music and a music melody: the background music is chord music generally formed by playing according to the chords, to which other auxiliary sounds such as drumbeats may be added as a person skilled in the art may flexibly determine, while the music melody is the main melody of the musical composition. Compared with the former, the music melody of a singing-type musical composition is generally a vocal melody formed by human singing.

In the application, the background music is played according to the chord predefined in the accompaniment chord information.

The information required for producing the music melody can be determined according to the pre-prepared accompaniment template. Each accompaniment template corresponds to one background music, accompaniment chord information according to the background music and melody rhythm information which prescribes the rhythm corresponding relation between the melody and the accompaniment chord information, therefore, the accompaniment chord information and the melody rhythm information corresponding to the accompaniment templates can be obtained, and the background music can be obtained if necessary.

The accompaniment chord information predefines one or more chord progressions expanded in time sequence, each chord progression comprising a plurality of chords. The accompaniment chord information can be manually compiled in advance, and the background music can be prepared according to the accompaniment chord information to form background music data stored for later use, so that the background music data and the music melody produced by the user subsequently form the corresponding musical composition.

The melody rhythm information predefines the rhythm of each undetermined note that the music melody of the musical composition requires. Specifically, the music melody is composed of a plurality of sequentially organized notes, each note being in an undetermined state before the user's creation, and the melody rhythm information predefines the duration of each undetermined note, thereby constraining the rhythm of the music melody. When every note of the music melody has been selected, the plurality of sequentially organized selected notes constitute the complete music melody.
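As a minimal sketch under assumed data structures, the melody rhythm information may be represented as an ordered list of durations (in beats), one per undetermined note; combining it with the user's selected pitches yields the complete music melody (all names hypothetical):

```python
def build_melody(selected_pitches, rhythm_durations):
    """Pair each selected pitch with its predefined duration to form the
    complete, sequentially organized music melody."""
    if len(selected_pitches) != len(rhythm_durations):
        raise ValueError("every undetermined note needs exactly one selected pitch")
    return [{"pitch": p, "beats": d}
            for p, d in zip(selected_pitches, rhythm_durations)]
```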

A rhythm correspondence exists between the melody rhythm information and the accompaniment chord information, and this correspondence follows the relevant conventions of music theory. Therefore, once the music melody prepared according to the melody rhythm information is produced, it remains rhythmically synchronized with the background music played according to the accompaniment chord information, and the two can be synthesized according to the rhythm correspondence. Similarly, the melody rhythm information can also serve as the basis for filling the music melody with lyrics, so that the lyrics and the background music keep a rhythmically synchronized relationship.

It should be noted that a chord may be a block chord or a broken chord; alternatively, within one musical composition, some chords may be block chords while others are broken chords. For the same chord progression, broken chords may be applied to the verse portion of the music melody and block chords to the refrain portion. These are all techniques that can be flexibly applied according to compositional principles in the music creation process, and such flexible variations do not depart from the scope covered by the inventive spirit of the present application.

When the rhythm correspondence between the chords and the melody is maintained, the chords in the chord progression and the undetermined notes in the music melody may correspond flexibly in time. For example, with a 4/4 time signature, each chord may correspond to 4 beats, 2 beats, or 1 beat. Therefore, the rhythm correspondence between chords and melody notes referred to in this application should not be understood as a strictly equal-duration relationship; that is, the duration of one chord may overlap the durations of one or more melody notes, as may be specifically formulated in the melody rhythm information.
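The flexible, non-isometric correspondence described above can be illustrated by locating, for each melody note, the chord sounding at that note's onset; the representation (note durations in beats, chords as (name, beats) spans) is an assumption for illustration only:

```python
from itertools import accumulate

def chord_at_onsets(note_durations, chord_spans):
    """Return, for each melody note, the chord sounding at its onset.
    Chord and note durations need not match one-to-one: one chord may
    span several notes."""
    # Onset time of each note, in beats from the start.
    onsets = [0.0] + list(accumulate(note_durations))[:-1]
    # End time of each chord span.
    boundaries, t = [], 0.0
    for name, beats in chord_spans:
        t += beats
        boundaries.append((t, name))
    result = []
    for onset in onsets:
        for end, name in boundaries:
            if onset < end:
                result.append(name)
                break
    return result
```

In the 4/4 example above, a chord lasting 4 beats would thus synchronize with every melody note whose onset falls within those 4 beats.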

It should also be noted that in a musical composition there is sometimes no need for one-to-one synchronization between chords and melody notes. For example, for pauses in the melody, no selected notes need be determined; conversely, one or several melody notes may be sung without an adapted chord, in which case the corresponding selected notes still need to be determined even though no chord is rhythmically synchronized with them. In view of this, the melody rhythm information may include pause marks as needed to represent rests, or may represent, in a certain format, undetermined notes belonging to an unaccompanied singing section. Those skilled in the art can flexibly adapt this.

The accompaniment chord information and the melody rhythm information can be stored locally or in a remote server, so that after a client user designates the accompaniment template, the corresponding accompaniment chord information and melody rhythm information can be invoked. Because the two are associated with the same accompaniment template, they can be packaged into the same message body during storage and/or transmission, for example a message structure in the same XML or TXT format, which is then invoked at the client; of course, they can also be stored and transmitted separately without affecting the embodiment of the inventive spirit of the present application.
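A hedged sketch of packaging both pieces of template information into a single message body; JSON is used here purely for illustration (the text mentions XML or TXT formats), and all field names are hypothetical:

```python
import json

def pack_template(chords, rhythm):
    """Package the accompaniment chord information and melody rhythm
    information of one accompaniment template into one message body."""
    return json.dumps({"accompaniment_chords": chords, "melody_rhythm": rhythm})

def unpack_template(message):
    """Recover both pieces of information at the client from the message body."""
    body = json.loads(message)
    return body["accompaniment_chords"], body["melody_rhythm"]
```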

In addition, it can be understood that once the accompaniment chord information corresponding to the accompaniment template is determined and the background music is formed, the corresponding key signature, time signature and tempo are also determined. This information may likewise be explicitly recorded in the melody rhythm information and shown in the function selection area 210 of the composition interface of fig. 2 for the client user's reference, thereby reinforcing the guiding function of this application.

Step S1200, a composition interface is formatted according to the melody rhythm information, so that the composition interface displays the duration information of the pending notes of the music melody according to the melody rhythm information:

in the client device, a composition interface needs to be presented. It may adopt various suitable layouts, for example the two-dimensional coordinate layout of an embodiment disclosed later in this application, as shown in fig. 2. In the composition interface, a note hinting area 220 may be laid out for each pending note of the music melody; the note hinting area 220 lets the user select the pending note, so that a series of selected notes corresponding to the music melody is obtained in the composition interface and the whole music melody is constructed. A description prompt area 240, see fig. 6, may also be provided in the composition interface as needed.

In the melody rhythm information, the rhythm of each sequentially arranged pending note may be marked by a duration value, so that in the composition interface each note hinting area 220 can be marked according to the duration of its pending note, thereby displaying the duration information of the pending notes. The form of the duration marks can be designed flexibly for different layouts of the composition interface, as long as the design conveys the duration information and thus provides guidance for the user's composing.

Step S1300, obtaining the music melody from the composition interface, where the music melody includes a plurality of selected notes, each selected note being a note within a consonant interval of the chord synchronized with it in rhythm:

the user of the client device may address each pending note of the music melody in turn at the composition interface, determine a selected note from its corresponding note hinting area 220, and construct the finished music melody by determining the sequence of selected notes. It will be appreciated that the plurality of selected notes forming the music melody are arranged in order, presented in the note selection area 230 for displaying selected notes, and are typically acquired chronologically, although individual adjustment of them is not excluded.

The user may determine selected notes for only part of the pending notes specified by the melody rhythm information and discard the rest, thereby forming what the user regards as a complete music melody; generally, however, the user is expected to determine selected notes for all of the rhythm information in the melody rhythm information, so that the music melody matches the background music more completely.

The note hinting area 220 provides a plurality of selectable notes for the user to choose from, and each note hinting area 220 essentially corresponds in rhythm to one chord of the accompaniment chord information; that is, the playing duration of a selected note determined from the note hinting area 220 is covered by the playing duration of that chord, in which case the chord and the selected note are rhythmically synchronized.

In this application, the notes synchronized with a chord in rhythm are constrained so that each finally determined selected note is a note within a consonant interval of the chord synchronized with it. This lowers the demands on the user's knowledge of music theory and produces harmonious, well-blended musical works more efficiently.

The consonant intervals include: the most perfect consonances, namely the perfect unison and the perfect octave, which blend completely with the chord tone; the perfect consonances, namely the perfect fifth and the perfect fourth, which blend well with the chord tone; and the imperfect consonances, namely the major and minor thirds and the major and minor sixths, which blend less completely. Following this principle, the user may prefer notes within the consonant intervals applied by this application when determining the corresponding selected notes.
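The consonance principle above can be sketched as a filter over a note range; the semitone distances encode the consonance classes just listed, while the use of the chord root as the sole reference tone and the MIDI note numbering are simplifying assumptions:

```python
# Pitch-class distances (in semitones) from the chord root that form
# consonant intervals: unison/octave (0), minor/major third (3, 4),
# perfect fourth (5), perfect fifth (7), minor/major sixth (8, 9).
CONSONANT_INTERVALS = {0, 3, 4, 5, 7, 8, 9}

def candidate_notes(root_midi, note_range):
    """Notes whose interval from the chord root (mod octave) is consonant."""
    return [n for n in note_range
            if (n - root_midi) % 12 in CONSONANT_INTERVALS]

# Candidate notes for a C chord (root = middle C, MIDI 60) within one octave.
print(candidate_notes(60, range(60, 73)))
# [60, 63, 64, 65, 67, 68, 69, 72]
```

Dissonant degrees such as the tritone (6 semitones) are excluded, which is how the constraint relieves the user of music-theory decisions.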

Of course, in order to produce still better works, later embodiments of this application further disclose that the selectable notes can be refined, beyond the consonant-interval constraint, according to their music-theoretic relationship with the previously ordered selected notes; this is not expanded upon here.

Step S1400, responding to a composition playing instruction, and playing the musical composition containing the music melody:

after the user at the client has constructed the music melody by determining the series of selected notes, the musical composition may be further constructed on the basis of that melody and played.

A corresponding composition playing control, or a corresponding composition publishing control, may be configured in the composition interface; after the user triggers such a control, a composition playing instruction is issued, and the musical composition is played in response to that instruction.

In an improved embodiment, the selected notes of the music melody may be played in sequence with a predetermined instrument sound effect, thereby rendering the melody as if performed on that instrument; in this case the played audio does not carry or synthesize the background music corresponding to the accompaniment template.

In another improved embodiment, on the basis of the preceding embodiment, the background music corresponding to the accompaniment template and the music melody may be combined into one, so that the played audio contains both; the two remain rhythmically coordinated in accordance with the melody rhythm information.

In another improved embodiment, before the musical composition is played, a corresponding lyric text may be provided for the music melody, and an audio effect of a human voice singing the lyric text according to the melody is then synthesized from the lyric text and the melody, so that this audio can be played.

In a further improved embodiment, the audio containing the sung lyric text can additionally be combined with the background music of the accompaniment template, so that a musical composition with both accompaniment and singing is obtained and played.

In the above improved embodiments, wherever synthesis technology for musical compositions is involved, the step of synthesizing a singing effect from the music melody and the lyric text, the step of mixing the result with the background music, and the like can be implemented with various pre-trained acoustic models known to those skilled in the art. Such synthesis is generally deployed in a remote server, with the client invoking the relevant functions through an interface the server provides; in some cases the composition interface itself may be realized by a server page. However, as the performance of computer devices improves and transfer learning of the neural network modules used for synthesis becomes feasible, the client is increasingly capable of performing the synthesis itself, in which case the musical composition can be synthesized directly at the client.

In one embodiment, the musical composition may be played at a uniformly set tempo. Since the background music corresponding to the accompaniment template is fixed, the tempo can be defined as a constant in the melody rhythm information and displayed to the user. Similarly, the key and time signature required by the musical composition are fixed once the background music is determined, and can likewise be displayed for the user's information.

From the above disclosure of the typical embodiment of the musical composition generating method and its variations, it can be understood that user-defined melody creation becomes more convenient and efficient, with the following specific advantages:

firstly, this application provides an accompaniment template for the creation of musical compositions, carrying the accompaniment chord information and melody rhythm information the composition requires. The duration information for each pending note in the composition interface is formatted according to the rhythm defined by the melody rhythm information, and the user is guided to select each pending note of the music melody in the composition interface according to that rhythm and the corresponding chords, thereby determining each selected note, constructing the music melody and, further, the musical composition. Throughout this process the user needs essentially no knowledge of music theory: each note the melody requires is chosen simply by following the prompts of this application's technical scheme. The creation process is simple and efficient, the means of assisted music creation are enriched, and the long-standing failure of the assisted-creation field to meet popular demand is remedied.

Secondly, the technical means adopted by this application are easy to implement and inexpensive to run. For example, neither obtaining the accompaniment template nor obtaining the music melody at the composition interface relies on big data; the scheme can be implemented at the client, at the server, or with the work divided between the two. Implementation is therefore flexible and low-cost, the computing-resource advantages of the equipment are fully exploited while creation costs drop substantially, and the production efficiency of assisted music creation is greatly improved.

In addition, implementing the technical scheme of this application redefines assisted music creation so that everyone can be a musician: the formulation of accompaniment templates is decoupled from the creation of melodies, user traffic can be activated, the creation and sharing of user works is promoted, and a new internet music ecology is defined.

As shown in fig. 3 and 4, in a variation of the musical composition generating method of the present application, step S1100 of obtaining accompaniment chord information and melody rhythm information corresponding to the accompaniment template includes the following steps:

step S1110, displaying an accompaniment template selection interface to list a plurality of candidate accompaniment templates:

a database may be constructed in advance to hold accompaniment-template information; as mentioned above, each accompaniment template corresponds to background music, accompaniment chord information and melody rhythm information stored in the backend. When displayed, a plurality of accompaniment templates may be listed as shown in fig. 4 to form the candidate accompaniment templates, each with an audition playing control. When the playing control of a candidate template is touched, the corresponding background music is fetched from the backend and played, so that the user can perceive the musical style of that candidate template by listening and decide whether to select it.

Step S1120, receiving a user selection instruction to determine a target accompaniment template from the candidate accompaniment templates:

after settling on a candidate accompaniment template, the user can touch its "use" control to select it, whereupon it is determined as the target accompaniment template.

Step S1130, obtaining accompaniment chord information and melody rhythm information corresponding to the target accompaniment template:

if the corresponding accompaniment chord information and melody rhythm information are stored locally, they can be read directly; if they are stored in a remote server, they can be obtained directly over the network.

This embodiment provides a plurality of accompaniment templates for the user to choose from and decouples the accompaniment templates from the music melody. Accompaniment templates can be prepared in advance for selection as needed, giving music creation richer template sources and enabling the sharing and re-use of accompaniment-template resources without relying on user-defined templates. This further improves the convenience of creating musical works and helps raise the efficiency of assisted creation.

Referring to fig. 5 and 6, in a variation of the musical composition generating method of the present application, step S1200 of formatting the composition interface according to the melody rhythm information, so that it displays the duration information of the pending notes of the music melody, includes the following steps:

step S1210, displaying a composition interface, wherein the composition interface displays a list of note locations with rhythm and scale as dimensions, and each note location corresponds to one note in the scale dimension corresponding to a certain time under the indication of the rhythm dimension:

in this embodiment, as shown in fig. 6, the composition interface is constructed around a two-dimensional coordinate system presented in the client's graphical user interface. Its two dimensions are a rhythm dimension and a scale dimension: the horizontal axis unfolds chronologically along the rhythm dimension, in the order of the notes of the music melody, while the vertical axis unfolds along the scale dimension, presenting the various scale degrees, typically at least those of the bass, middle and treble registers.

As shown in fig. 6, under this two-dimensional structure the whole composition interface can be divided by auxiliary lines into a grid of note locations: each note sequence position corresponds to a column of note locations, each column contains a note location for every degree of the scale, and each note location corresponds to one note. Since the horizontal axis represents time, the horizontal width of each note location essentially represents the duration of the note at that location.

Step S1220, according to the durations of the pending notes of the music melody defined by the melody rhythm information, adjusting the widths occupied by the corresponding note-location columns in the composition interface so as to display the duration information of the pending notes:

because the melody rhythm information defines the duration of every pending note of the music melody, the width occupied by each pending note's column of note locations in the composition interface can be adjusted to match that duration. The layout of the composition interface is thus formatted according to the durations in the melody rhythm information, forming within part of the interface a composition area for producing the music melody; this area displays the duration information of the pending notes, so the user can intuitively grasp how long each note of the melody lasts, which aids understanding of the melody's rhythm and makes composing more convenient.
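The width adjustment described above can be sketched as follows; the pixels-per-beat constant and the flat layout structure are illustrative assumptions:

```python
# Illustrative layout constant: horizontal pixels allotted per beat.
PIXELS_PER_BEAT = 40

def column_layout(note_durations_beats):
    """Return (x_offset, width) in pixels for each pending note's column,
    so column width is proportional to the note's duration."""
    layout, x = [], 0
    for beats in note_durations_beats:
        width = int(beats * PIXELS_PER_BEAT)
        layout.append((x, width))
        x += width
    return layout

# A 2-beat note gets a column twice as wide as a 1-beat note.
print(column_layout([1, 1, 2, 0.5, 0.5]))
# [(0, 40), (40, 40), (80, 80), (160, 20), (180, 20)]
```

Because width tracks duration, the user reads the melody's rhythm directly from the geometry of the grid.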

In this embodiment, the duration information of the pending notes of the music melody is automatically and visually rendered into the composition interface according to the melody rhythm information of the accompaniment template, strengthening the guiding effect of the assisted creation, making the rhythm of the melody easier for the user to grasp, and enabling the user, together with singing, to create the melody more efficiently.

Referring to fig. 7, 8 and 9, in a variation of the musical composition generating method of the present application, to guide the user through the creation of the music melody, this embodiment strengthens the guiding function of assisted creation by mapping each pending note of the melody to selectable notes from which the user chooses. In step S1300, obtaining the music melody from the composition interface includes the following steps:

step S1310, for the currently addressed pending note of the music melody, determining as candidate notes the notes within the consonant intervals of the chord synchronized with that pending note in rhythm:

as mentioned above, acquisition of the music melody may proceed in the order of the pending notes, each of which corresponds to a position in the two-dimensional coordinate system of the composition interface. In the composition interface, specifically its composition area, the selected notes can therefore be obtained in sequence, starting from the first pending note of the melody.

For each currently addressed pending note of the music melody, this embodiment may, according to the timing provided by the melody rhythm information, determine the chord of the accompaniment chord information that is synchronized with that pending note in rhythm, then determine the notes within that chord's consonant intervals and take them as the candidate notes.

Step S1320, filtering the candidate notes according to a preset rule to obtain remaining selectable notes:

the notes within the consonant intervals, which usually number several, may be filtered further to improve creation efficiency and achieve a better creative result.

Those skilled in the art can flexibly set the preset rules required for filtering, according to their own understanding of music theory and with the aim of improving assisted-creation efficiency, so that the candidate notes remaining after filtering serve as the selectable notes finally displayed. It will therefore be appreciated that these preset rules may be theoretical or empirical.

The preset rules may be based on whether a selectable note at the current position is acoustically coordinated with the selected note or notes determined at earlier positions, so that the selectable notes at the current position vary with the earlier selections. For the first pending note of the music melody, which has no preceding selected note, such a rule may simply not be applied.

Applying the preset rules typically deletes at least some notes from the plurality of candidate notes according to the selected notes ordered before the current position, refining the candidates into the final selectable notes. Of course, depending on the content of a particular rule, it is sometimes unnecessary to delete any candidate note.
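One possible preset rule is an empirical smoothness heuristic that bounds the melodic leap from the previously selected note; this particular rule, its leap bound, and its fallback behavior are assumptions for illustration, not requirements of the method:

```python
def filter_candidates(candidates, prev_selected, max_leap=7):
    """Keep candidates within `max_leap` semitones of the previous
    selected note (an illustrative empirical rule)."""
    if prev_selected is None:
        # First pending note: no preceding selection, rule not applied.
        return list(candidates)
    kept = [n for n in candidates if abs(n - prev_selected) <= max_leap]
    # Depending on the rule, filtering may remove everything; fall back
    # to the unfiltered set rather than leave the user with no choices.
    return kept or list(candidates)

# With the previous note at MIDI 60, the octave leap to 72 is filtered out.
print(filter_candidates([60, 63, 64, 67, 72], prev_selected=60))
# [60, 63, 64, 67]
```

Swapping in a different rule, theoretical or empirical, only changes the body of this filter, which is why the embodiment leaves the rule to the practitioner.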

Step S1330, displaying the note locations corresponding to the selectable notes at the position of the current order on the composition interface to form a note hinting area:

as shown in fig. 8, the pending note at the current position has, as mentioned above, a corresponding place in the composition interface; specifically, in the foregoing example of a composition area represented in a two-dimensional coordinate system, the composition area contains, along the horizontal axis, a column of note locations for the pending note at the current position. The plurality of selectable notes can be rendered visually in that column, for example by displaying the note locations corresponding to the selectable notes in color, and together they define the note hinting area 220 for the pending note at the current position. The note hinting area 220 thus comprises a plurality of colored note locations, each indicating a corresponding selectable note, and these may be contiguous or non-contiguous across the scale.

Step S1340, receiving the selected note determined from the plurality of selectable notes in the note hinting area, and advancing the music melody to the next position so as to determine its subsequent selected notes cyclically:

for the pending note at the current position, the user can select one of the note locations provided in the corresponding note hinting area 220, thereby determining a selected note, which forms the note required at that position of the music melody. In some embodiments, a melody buffer may store the sequence of selected notes generated while the melody is being created; when the selected note for a pending note is produced, it is appended to the end of the sequence, as shown in the note selection area 230 of figs. 8 and 9.

As shown in figs. 8 and 9, after one selected note is determined, the music melody is advanced to the next position, the pending note there becomes the new current position, and the process of this embodiment is repeated to determine its selected note. This continues until every pending note of the music melody has been determined and the creation of the melody is complete.
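The note-by-note acquisition loop described above can be sketched as follows, with the chooser callback standing in for the user's touch in the real interface; all helper names and the toy data are illustrative assumptions:

```python
def build_melody(pending_chords, candidates_for, filter_rule, choose):
    """Acquire the melody one pending note at a time: compute that
    position's selectable notes, then ask the chooser to pick one."""
    melody = []
    for chord in pending_chords:                  # one entry per pending note
        options = filter_rule(candidates_for(chord),
                              melody[-1] if melody else None)
        melody.append(choose(options))            # user picks from the hint area
    return melody

# Toy demo: always pick the lowest selectable note, with no filtering.
cands = {"C": [60, 64, 67], "G": [62, 67, 71]}
melody = build_melody(["C", "C", "G"],
                      cands.__getitem__,
                      lambda opts, prev: opts,
                      min)
print(melody)   # [60, 60, 62]
```

Replacing the `choose` callback with an interface event handler yields the interactive flow; replacing it with a random pick yields the automatic composition variant described later.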

In a modified embodiment, while the selected notes are being obtained cyclically, if several consecutive pending notes are synchronized with the same chord in rhythm, the candidate notes of the later pending notes need not be recomputed: the candidate notes determined for the earlier pending note are reused and, combined with the selected note at the preceding position, refined into the selectable notes for the current position, which can improve program efficiency.

This embodiment achieves complete guidance of the user's melody composition through a simple flow, with a small amount of code, simple logic and high performance; the user need only select, step by step, the notes the melody requires, without commanding rich music theory, which greatly lowers the threshold for composing a melody.

While the selected notes of the music melody are being obtained, this embodiment constrains each pending note's selected note relative both to the chord synchronized with it in rhythm and to the selected notes ordered before it, thereby harmonizing the selected notes of the melody with the chords of the background music and with their neighboring selected notes, and improving the quality of the melody the user creates.

In addition, while guiding the user through the creation of the melody, the note hinting area 220 is displayed in the composition interface, with the note locations of the selectable notes shown inside it, highlighting the guiding effect of assisted creation and further improving the efficiency with which the user creates the musical composition.

In a flexible embodiment of the musical composition generating method of the present application, step S1300 of obtaining the music melody from the composition interface includes the following subsequent step:

step S1350, in response to a reset event of any selected note, starting an update process for the selected notes ordered after it, so that those later selected notes are automatically re-determined on the composition interface according to the selected notes ordered before them, wherein if the re-determined selectable notes at a position do not include the originally determined selected note, a selectable note is picked at random from the re-determined selectable notes to re-determine the selected note at that position:

following on from the previous embodiment, when the user has already determined part of the pending notes of the music melody and needs to reset one of the determined selected notes, the user is allowed to determine a new selected note by choosing another note location in the column, shown in the note selection area 230, to which that selected note belongs, thereby triggering a reset event.

A reset event for a selected note not located at the end of the music melody could, in theory, make the tonal relationship between that note and the subsequent selected notes contradict the preset rules of the previous embodiment; it is therefore appropriate, in response to the reset event, to re-determine the other selected notes ordered after the reset one.

To re-determine the later selected notes, they can be reset one by one, starting from the position immediately after the reset note, following the procedure for setting selected notes in the previous embodiment. If the note hinting area 220 of the current position still includes the note location of the previously determined selected note, neither that note location nor the selected note needs to change. Conversely, if the note hinting area 220 of the current position no longer includes the note location of the previously determined selected note, the selectable notes are determined anew and the new selected note is chosen from among the newly determined note locations only.
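The chained re-determination can be sketched as follows; `options_at` is a hypothetical callback yielding the selectable notes at a position given its preceding selected note, and the random re-pick mirrors the behavior described above:

```python
import random

def reseat_after_reset(melody, reset_index, new_note, options_at, rng=random):
    """After one selected note is reset, walk forward: keep each later
    note if it is still among the freshly computed selectable notes,
    otherwise re-pick it at random from the new options."""
    melody = list(melody)
    melody[reset_index] = new_note
    for i in range(reset_index + 1, len(melody)):
        options = options_at(i, melody[i - 1])
        if melody[i] not in options:          # original choice now invalid
            melody[i] = rng.choice(options)   # re-pick from new options
    return melody

# Toy rule: each position allows notes within 4 semitones of the previous one.
def options_at(i, prev):
    return [prev - 2, prev, prev + 2, prev + 4]

# Resetting position 1 to 64 leaves the later notes valid, so they are kept.
print(reseat_after_reset([60, 62, 64, 66], 1, 64, options_at))
# [60, 64, 64, 66]
```

The cascade stops changing notes as soon as the remaining choices are still valid, which is what keeps the user's later work intact where possible.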

Indeed, in other embodiments a selectable note may be chosen by other rules, for example selecting as the new selected note the selectable note whose note location is closest to the original one.

This embodiment further allows the user to adjust individual selected notes after completing their determination; the resulting changes propagate automatically, re-determining the subsequent selected notes of the melody in a chain reaction without the user having to reset them one by one. This simplifies the user's modification of the melody, greatly improves editing efficiency and enriches the means of assisted music creation.

In another flexible embodiment of the musical composition generating method of the present application, step S1300 of obtaining the music melody from the composition interface includes the following subsequent step:

step S1360, in response to an automatic composition instruction triggered by a control in the composition interface or by a vibration sensor of the local device, automatically completing the note hinting areas 220 corresponding to the undetermined pending notes of the music melody and the selected notes within them:

building on the foregoing embodiments, the present application can also achieve automatic creation of the music melody: after the user triggers an automatic composition instruction, the corresponding selected notes can be determined for the as-yet-undetermined pending notes of the melody.

The automatic composition instruction may be triggered through a control in the composition interface, or by recognizing a motion pattern from the vibration sensor, for example recognizing that the device is being shaken back and forth, the interaction commonly known as "shake-shake".

After the automatic composition instruction is triggered, in response to it, this embodiment applies the foregoing flow for determining selected notes in order: for each current position of the music melody, the notes within the consonant intervals of the chord synchronized with it in rhythm are determined, the preset rules are applied to filter out the selectable notes, the note locations of the selectable notes are displayed, the selectable note of one note location is then determined at random as the selected note for the current position, and the process advances to the next position to obtain the subsequent selected notes, and so on, until every pending note has been given a selected note. The automatic composition process is thereby realized and the whole music melody determined.
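The rule-driven automatic composition flow can be sketched as a loop of random picks over each position's selectable notes; the option generator below is a toy assumption standing in for the consonance-and-rule computation of the earlier steps:

```python
import random

def auto_compose(num_pending, options_at, seed=None):
    """Resolve every pending note by picking one of its selectable
    notes at random; no trained model is involved."""
    rng = random.Random(seed)   # seedable for reproducible demos
    melody = []
    for i in range(num_pending):
        prev = melody[-1] if melody else None
        melody.append(rng.choice(options_at(i, prev)))
    return melody

# Toy option generator: a fixed consonant set, narrowed by the previous note.
def options_at(i, prev):
    base = [60, 63, 64, 65, 67, 69, 72]
    if prev is None:
        return base
    return [n for n in base if abs(n - prev) <= 7] or base

melody = auto_compose(8, options_at, seed=42)
print(melody)
```

Because the whole flow is table lookups and a random choice per note, its cost is linear in the number of pending notes, consistent with the small computation claimed above.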

The automatic creation process of this embodiment differs from the logic of prior-art artificial-intelligence generation: the creation of the melody is completed automatically purely from the relationship between the melody rhythm information and the chords, together with the preset rule associating later selected notes with earlier ones. The whole process is rule-driven, with a small amount of computation and high efficiency. It further enriches the means of assisted music creation, improves the efficiency of automatic assisted creation, clears the path toward everyone being a musician, and facilitates the construction of a new internet music ecology.

Referring to fig. 10, in a variation of the musical composition generating method of the present application, step S1330 of displaying the selectable notes at the positions corresponding to the current order on the composition interface includes the following steps:

Step S1331, coloring, on the composition interface, the note zones corresponding to the selectable notes at the position corresponding to the current order to form a note prompt area, thereby representing and displaying the selectable notes:

As mentioned above, after the selectable notes corresponding to the current order in the music melody are determined according to the aforementioned process of sequentially determining the selected notes, the note zones corresponding to those selectable notes at the current order are colored to form the note prompt area 220, completing the representation of the selectable notes.

Step S1332, moving the composition interface along the rhythm dimension so that the note prompt area corresponding to the current order is moved to a predetermined key position:

For convenience of user operation, the composition interface may shift automatically. As can be seen from the layout change between fig. 8 and fig. 9, the composition interface is automatically moved along the rhythm dimension of the two-dimensional coordinate system so that the note prompt area 220 corresponding to the current order is brought to a preset key position that is most convenient for continuous touch selection. For example, the key position may be placed near the longitudinal axis of the mobile terminal serving as the client, so that the user can continuously touch with one finger and thus quickly and continuously select the note zones at each order.

Further considering the cultivation of, and adaptation to, the user's operating habits, this embodiment displays the note zones by coloring and moves the composition interface automatically, so that the user can select the note zones required by the music melody more conveniently. The user does not need to move the composition interface manually, and can quickly obtain information from the colored note zones to make selections, further improving the efficiency of assisted creation.

Referring to fig. 11, in a variation of the musical composition generating method of the present application that improves upon the embodiment of sequentially determining the associated selected notes, step S1340 of receiving a selected note determined from the plurality of selectable notes in the note prompt area 220 includes the following steps:

Step S1341, responding to a selection operation on one of the note zones corresponding to the plurality of selectable notes in the note prompt area, receiving the selectable note corresponding to the selected note zone as the selected note of the current order of the music melody:

After the user performs a selection operation on a note zone in the note prompt area 220 at the current order, a note-selection event is triggered, and the selectable note indicated by the selected note zone is determined as the selected note corresponding to the pending note at the current order of the music melody.

Step S1342, highlighting the selected note zone, adding a lyric editing control to the note zone, and displaying in the lyric editing control the characters of the lyric text in the lyric cache area that are synchronized with the note in rhythm:

For the selected note zone, this embodiment highlights it relative to the other, unselected note zones. The highlighting can also be implemented, for example, by rendering the other note zones in gray scale while keeping the selected note zone as a highlighted area.

Further, a lyric editing control, which may be a text box, is added to the selected note zone. Characters of the lyric text are then displayed in the lyric editing control to indicate the character content of the lyric text whose rhythm is synchronized with the selected note corresponding to that note zone. The lyric text can be stored in advance in a lyric cache area; after the lyric editing control is added, the rhythm-synchronized characters of the lyric text are loaded from the lyric cache area into the control. If no such lyric text exists in the lyric cache area, default placeholder characters may be used, for example syllables such as "la" or "o", which may be synchronized into the lyric cache area. The lyric text may be authored or edited through other embodiments disclosed later in this application, which will not be elaborated here.

The characters displayed by each lyric editing control can default to a single word, such as a single Chinese character or a single English word; other embodiments may allow multiple words, but generally too many words should not appear, so as to fit the duration of the note. In this embodiment, taking Chinese lyrics as an example, it is suitable to display a single character in each lyric editing control. A character displayed in a lyric editing control indicates that the character at that position is to be sung at the pitch of the selected note indicated by that position. In some embodiments, when the text in the lyric editing control of a note zone is emptied, the selected note there is understood to be sung as a sustained pronunciation of the character corresponding to the previous selected note. Thus the selected notes and the sung characters of the lyric text may correspond one to one, or may stand in more complex, flexibly varied relationships, which the skilled person can implement according to this principle.

In this embodiment, when a user determines a selected note, the corresponding note zone is highlighted and a lyric editing control is added to it to display the rhythm-synchronized characters of the lyric text. The correspondence between the music melody and the lyric text is thus displayed more intuitively, achieving a more intuitive navigation effect; the user can subsequently modify the corresponding character content individually through the lyric editing control, with accurate alignment, further improving the efficiency of assisted music creation.

Referring to fig. 12, in a variation of the musical composition generating method of the present application, step S1340 of receiving a selected note determined from the plurality of selectable notes in the note prompt area further includes the following steps:

Step S1343, responding to an editing event acting on the characters in the lyric editing control of a note zone, replacing the corresponding edited characters so as to update the corresponding content in the lyric cache area according to the corresponding word-count relationship:

Based on the previous embodiment, the user is allowed to edit the characters in the lyric editing control of a selected note's note zone. As described there, the text length of the lyric editing control can be restricted to a single character, in which case the user may modify only that single character. As a functional extension, the user may also input multiple characters in one lyric editing control, which this embodiment then processes accordingly.

In this embodiment, when the user edits the text in the lyric editing control of a selected note's note zone, a corresponding editing event is triggered.

In response to the editing event, the text content in the lyric editing control is obtained; according to its word count, the content of the corresponding length in the rhythm-synchronized lyric text in the lyric cache area is located and replaced with the edited text. Modifying the characters in the lyric editing control thereby also replaces and modifies the rhythm-synchronized lyric text in the lyric cache area.

Step S1344, in response to a content update event of the lyric cache area, refreshing the characters in the lyric editing controls corresponding to the selected notes of the music melody in the composition interface:

The act of modifying the content of the lyric cache area triggers a content update event of the lyric cache area. In response to this event, the updated text content can be obtained, along with the range of positions it covers, i.e. the several consecutively selected notes corresponding to that range. The text content in the lyric editing controls of the note zones where those consecutively selected notes are located can therefore be adaptively modified according to the updated text content.

Specifically, under the specification of this embodiment, each selected note is synchronized with a single character, so no matter how many characters the user inputs in one lyric editing control, those characters replace the content of the corresponding word count in the lyric cache area; in the end, however, each lyric editing control displays only the number of characters required by its selected note, showing the updated corresponding character.

After the content in the lyric cache area has been replaced by multiple characters input from one lyric editing control, the surplus updated characters are correspondingly distributed to replace the characters displayed in the note zones of the subsequent selected notes, ensuring that the characters displayed by the lyric editing control in each selected note's note zone remain synchronized with the rhythm-synchronized characters in the lyric cache area.
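Under the one-character-per-note rule described above, the cache update and the redistribution to subsequent note zones can be sketched as follows. The data shapes (a string cache, one control per note) are illustrative assumptions; the embodiment's actual cache structure is not specified:

```python
def apply_edit(lyric_cache, note_index, new_text):
    """Replace characters in the rhythm-synchronized lyric cache, starting
    at the edited note's position, one character per selected note."""
    chars = list(lyric_cache)
    for offset, ch in enumerate(new_text):
        pos = note_index + offset
        if pos < len(chars):
            chars[pos] = ch  # surplus characters overwrite later positions
    return "".join(chars)

def refresh_controls(lyric_cache, note_count):
    """Each lyric editing control re-reads exactly one character, so surplus
    edited characters flow into the zones of the subsequent selected notes."""
    return [lyric_cache[i] if i < len(lyric_cache) else "" for i in range(note_count)]
```

Editing two characters in the control of the second note thus updates positions two and three of the cache, and the refresh pass shows the second new character in the third note's control.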

In an alternative embodiment, if one selected note is allowed to correspond to two characters, the two characters are processed and displayed in the same way according to the order of each selected note. If a selected note needs to be defined as a sustained-sounding extension of the character corresponding to the previous selected note, a placeholder can represent the character for that note; for example, characters such as "+" or "#" can serve as placeholders, and once such a character appears in a lyric editing control, this embodiment parses it so that the sound of that selected note follows the character corresponding to the previous selected note.
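The placeholder convention can be resolved at playback time with a single pass over the characters. "+" and "#" are the placeholders mentioned above; the list shape is an illustrative assumption:

```python
def resolve_sustains(chars):
    """Resolve placeholder characters ('+' or '#') so that a selected note
    sings a sustained pronunciation of the previous note's character."""
    out, last = [], ""
    for ch in chars:
        if ch in "+#":
            out.append(last)  # sustain the previous character's pronunciation
        else:
            out.append(ch)
            last = ch
    return out
```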

According to this embodiment, modifications to the lyric text can be conveniently discovered and made through the composition interface, and a whole sentence or long run of lyrics can be modified through the lyric editing control of a single note zone. This provides a more flexible and convenient control for the word-filling operation in music creation and a more convenient way for users to form musical compositions, enriching the technical means of assisted music creation and correspondingly improving operating efficiency.

Referring to fig. 13 and 14, in a variation of the musical composition generating method of the present application, step S1400 of playing, in response to a composition playing instruction, a musical composition including the music melody includes the following steps:

Step S1410, responding to the composition playing instruction, acquiring a preset sound type:

In this embodiment, a play control is provided on the composition interface, and the user can trigger a composition playing instruction by touching it. As shown in fig. 14, the composition interface may further provide a function selection area containing a preset control for the sound type applied to the music melody. The preset control may default the sound type to that of an instrument such as a piano, indicating that the music melody created by the user will subsequently be played with a piano sound effect. Likewise, the preset control may let the user select other instrument types, or a human-voice type; when the human-voice type is selected, it indicates that the music melody created by the user will be sung according to the lyric text in the lyric cache area, using a sound effect corresponding to human-voice pronunciation.

Step S1420, obtaining the music melody to which the corresponding sound effect is applied, according to the preset sound type:

It is understood that for each preset sound type a corresponding sound effect can be generated, with which the music melody is played or sung. Generally, the sound-effect data of instrument types is simple, can be stored at the client, and can be invoked directly there. A human-voice sound effect, by contrast, must be produced by audio synthesis technology: it can be synthesized locally if the client supports this, otherwise the corresponding lyric text and the music melody regulated by the melody rhythm information must be sent to a remote server to be generated and then retrieved. Either way, a music melody to which the sound effect of the preset sound type is applied is finally obtained.
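The dispatch between the two cases might look like the following sketch. The sound-type names and the returned dictionaries are illustrative assumptions, not the embodiment's actual data formats or network protocol:

```python
def render_melody(melody, sound_type, lyric_cache=None):
    """Hypothetical dispatch: instrument sound effects are applied locally
    from prestored sound-effect data; a human-voice effect requires audio
    synthesis (locally if supported, otherwise via a remote server)."""
    if sound_type == "voice":
        # would send the lyric cache and melody to a synthesis back end
        return {"source": "synthesized", "notes": melody, "lyrics": lyric_cache}
    # instrument type: look up prestored local sound-effect data
    return {"source": "local", "notes": melody, "timbre": sound_type}
```

Keeping the instrument path entirely local is what allows the purer, wait-free playback described for instrument types below.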

Step S1430, playing the musical composition including the musical melody:

After the music melody with the applied sound effect is obtained, its playback can begin, presenting the musical composition of the present application. If the music melody has been synthesized in advance with the background music of the selected accompaniment template, the musical composition necessarily presents both the music melody with its sound effect and the background music, the two being defined and rhythm-synchronized by the melody rhythm information; otherwise, if no prior synthesis with background music was performed, the music melody with its corresponding sound effect can simply be played on its own. Generally speaking, when the preset sound type is an instrument, background music need not be synthesized in advance, giving a purer playing effect; when the preset sound type is a human voice, the background music can be synthesized in, giving a more expressive playing effect. Of course, the skilled person can set this flexibly. It should be noted that whether or not the finally played content includes background music, the musical composition referred to in the present application is formed as long as it includes the music melody created according to the present application.

This embodiment allows the music melody created by the user to be matched with a corresponding sound type, so that the corresponding sound effect can conveniently be generated and applied to the music melody, forming the musical composition the user expects and providing a rich means of assisted music creation.

Referring to fig. 15, in a variation of the musical composition generating method of the present application, step S1420 of obtaining the music melody to which the corresponding sound effect is applied according to the preset sound type includes the following steps:

Step S1421, determining that the preset sound type is a human-voice type representing singing of the lyrics by a human voice:

It is recognized that the preset sound type set by the user in the function selection area is a human-voice type representing singing of the lyrics by a human voice, so as to start the business logic processed for that type.

Step S1422, obtaining a preset lyric text corresponding to the human-voice type:

As described in the previous embodiments, the lyric text of the music melody of the present application is generally stored in the lyric cache area, from which it can be retrieved directly. In some embodiments, if no corresponding lyric text exists in the lyric cache area, the user may even be allowed to import a pre-arranged lyric text; the skilled person may implement this flexibly.

Step S1423, adapting to the human-voice type, constructing a sound effect of human-voice pronunciation of the lyric text, applying the sound effect to the music melody, and synthesizing for the music melody the background music corresponding to the accompaniment template, the background music being music played according to the chord information:

As mentioned above, for the human-voice type, the lyric text and the music melody specified by the melody rhythm information can be sent to the remote server, which invokes a pre-trained acoustic model, obtains from it the sound effect of pronouncing the lyric text, and processes it into singing audio data according to the music melody, thereby applying the human-voice singing sound effect to the music melody.

Further, the music melody with the applied sound effect is synthesized with the background music using a preset audio synthesis model, that is, the singing audio data is mixed with the background music. Since the two are rhythm-synchronized through the melody rhythm information, the synthesized product is a musical composition with coordinated vocals and music, ready for playback.

This embodiment adapts to the requirement of singing with a human-voice type: the lyric text, the music melody, and the background music can be synthesized according to the user's settings, creating the corresponding song as a musical composition in one step, simplifying the user's song-creation process and improving the efficiency of assisted song creation.

Referring to fig. 16, in a variation of the musical composition generating method of the present application, step S1420 of obtaining the music melody to which the corresponding sound effect is applied according to the preset sound type includes the following steps:

Step S1421', determining that the preset sound type is an instrument type representing performance by a specific kind of instrument:

In parallel with the previous embodiment, it is recognized that the preset sound type set by the user in the function selection area is an instrument type played by a specific kind of instrument, such as a piano, so as to start the business logic processed for the instrument type.

Step S1422', obtaining preset sound-effect data corresponding to the instrument type:

The sound-effect data can be prestored at the client, so the corresponding data can be invoked directly. The sound-effect data is used to simulate the sounding effect of the corresponding instrument.

Step S1423', adapting to the instrument type, constructing the sound effect of the corresponding instrument from the sound-effect data, and applying the sound effect to the music melody:

Considering the purity of instrumental playback, the sound effect of the corresponding instrument can be constructed from the sound-effect data and applied to the music melody for playback, without sending the music melody to the remote server for the more complicated synthesis operation, i.e. without synthesizing background music.

This embodiment simplifies the background processing when playing the music melody with an instrument: the user obtains the playing effect of the music melody without waiting for background synthesis, can quickly review and improve the melody he or she has created, and the efficiency of assisted creation is improved.

In an embodiment of the present application, the method for generating musical compositions further comprises the following subsequent steps:

Step S1600, responding to a publishing submission instruction, acquiring the text information input from a publishing editing interface, publishing the musical composition to a corresponding control of a browsable interface, and embedding in the corresponding control a player for playing the musical composition together with the text information:

In the present application, this step can be added to publish the musical composition created by the user to a database accessible to the public or to authorized users. Specifically, a publishing control may be provided in a graphical user interface of the computer program product of the present application, and touching it triggers the corresponding publishing submission instruction. The graphical user interface may be the composition interface, for example in its function selection area, or any other interface not mentioned, as long as the client user can access it.

In response to the publishing submission instruction, a corresponding publishing editing interface is displayed by pop-up or by switching, in which the user inputs the text information to be published, usually title and introduction information for the musical composition, and then confirms submission, whereupon the musical composition and the edited text information are submitted to a remote server. The message the user sends to the remote server when submitting the musical composition generally includes the music melody with its applied sound effect, the accompaniment template or characteristic information from which the server can obtain it, the preset sound type, and other information, and may even include the corresponding lyric text, depending on the implementation logic of the remote server.

The remote server obtains the information submitted by the user, synthesizes a playable musical composition from the specific content provided, constructs a summary message body in a preset format embedding the musical composition and the corresponding text information, possibly even including the lyrics, and then publishes the summary message body to a database accessible to users.

The remote server can provide a browsable interface through the computer program product of the present application, from which a user can load the summary message bodies produced by the remote server. After obtaining a summary message body, the computer program product constructs a corresponding control on the browsable interface and embeds a player in it for automatically playing the musical composition packaged in the summary message body, while the lyric text and other text information in the summary message body are displayed in the control. The control may also open a detail page in response to the user's touch, which the skilled person can implement flexibly.

This embodiment allows a user to publish works to the public network for other users to access, so that created musical compositions can be browsed and shared. This can activate the user traffic of the computer program product realized by the present application, improve the operating efficiency of the remote server deployed for it, deepen and expand the ecology of Internet music services, and help make everyone a musician.

In an embodiment of the present application, before step S1600 of responding to the publishing submission instruction, the method for generating musical compositions includes the following steps:

Step S1500, responding to a determination event of a selected note, judging whether the selected note corresponds to the last pending note in the music melody; if so, activating a publishing control used for triggering the publishing submission instruction, and if not, keeping the publishing control in an inactive state:

To avoid the remote-server resource consumption caused by users hastily publishing works, this embodiment, in keeping with the previous one, monitors the user's melody-creation process. Specifically, the embodiment monitors the user's determination of any selected note of the music melody; in response to the determination event, it judges whether that selected note is the last pending note in the melody, and if so activates the publishing control, so that the user can touch it to publish the musical composition. If not, the publishing control is kept in the inactive state, and the user cannot publish the unfinished musical composition through it.
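The gating logic of step S1500 can be sketched as a small counter; the class and method names are hypothetical:

```python
class PublishGate:
    """Keeps the publishing control inactive until the last pending note
    of the melody has been determined (a sketch of step S1500)."""

    def __init__(self, total_pending):
        self.total = total_pending  # number of pending notes in the melody
        self.determined = 0
        self.active = False

    def note_determined(self):
        """Called on each note-determination event; returns the control state."""
        self.determined += 1
        self.active = self.determined >= self.total  # last note -> activate
        return self.active
```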

This embodiment further exercises control over the user's creation process: the publishing control is released to the user only after the whole music melody has been completed. This ensures efficient use of server resources and a degree of control over the overall quality of the musical compositions users create, so that the works published to the platform meet a certain level of completeness, improving the platform's overall quality of works and ensuring the healthy operation of the server.

Referring to fig. 17, in a variation of the musical composition generating method of the present application, the method further includes the following steps:

Step S1701, displaying a word-filling interface for receiving lyric input, the word-filling interface providing lyric prompt information corresponding to the selected notes already determined in the music melody:

referring to fig. 18, in the present embodiment, a word filling control is added to the word making interface, so as to allow a user to touch the word filling control to trigger a word filling instruction, so that a word filling interface can be displayed for receiving input of a lyric text. Of course, other more convenient entry setting modes can be adopted, as long as the word-filling interface can be accessed through a certain entry, and the implementation of the application is not influenced.

In the displayed word-filling interface, lyric prompt information can be shown to prompt the user to fill in words according to certain rules, realizing a word-filling navigation function. The lyric prompt information may include word-count information for the lyric text, and may also include other information such as auto-completion text for lyrics the user has already input. Typically, the lyric prompt information is provided according to the sequence of selected notes the user has determined, and prompts can be omitted for notes not yet determined.

Step S1702, responding to a word-filling confirmation instruction, storing the lyrics input in the word-filling interface into the lyric cache area, and synchronizing them to the note zones of the corresponding selected notes in the composition interface for display:

After the user inputs a lyric text in the word-filling interface, he or she can touch a "finish" save control provided there, triggering the corresponding word-filling confirmation instruction. In response, this embodiment stores the input lyrics into the lyric cache area, which in turn triggers a content update event; as disclosed in the foregoing embodiments, the characters in the lyric editing control of each rhythm-corresponding note zone in the composition interface are then updated with the new content, achieving synchronization between the lyric text the user input in the word-filling interface and the lyric text displayed in the note zones of the composition interface.

This embodiment further provides a word-filling interface for the user to create lyrics, with lyric prompt information that guides the user's lyric creation and improves the user's word-filling efficiency.

Referring to fig. 18 and 19, in a variation of the musical composition generating method of the present application, step S1701 of displaying a word-filling interface for receiving input of lyrics includes:

Step S1710, displaying the word-filling interface, dividing out the total number of selected notes corresponding to each lyric single sentence according to the sentence-dividing information in the melody rhythm information corresponding to the music melody, and determining the word-count information corresponding to each lyric single sentence from that total number:

To facilitate word filling by the user, the melody rhythm information corresponding to the music melody may further include sentence-dividing information for the melody, instructing how the lyric text is divided into a plurality of lyric single sentences. The sentence-dividing information is itself a kind of lyric prompt information; it need not be marked by explicit text, and can be presented simply by the division of the lyrics into single sentences.

Accordingly, the selected notes of the music melody can be segmented according to the sentence-dividing information in the melody rhythm information, yielding several subsequences of selected notes, each corresponding to one lyric single sentence. The lyric single sentences of the lyric text are thereby determined, as is the total number of selected notes corresponding to each, and hence the word-count information of each lyric single sentence. For example, if a single character is specified for each selected note, the maximum input word count of a lyric single sentence is the total number of selected notes in its corresponding subsequence, so the word-count information of each lyric single sentence is a fixed value. Prompted by this fixed value, the user's train of thought in lyric creation becomes clearer.
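The segmentation and the per-sentence word limits can be sketched as follows, assuming the sentence-dividing information arrives as break indices into the selected-note sequence (an assumption; the embodiment does not specify its encoding):

```python
def split_sentences(selected_notes, break_points):
    """Segment the melody's selected notes into per-sentence subsequences
    using the sentence-dividing information (given here as break indices)."""
    sentences, start = [], 0
    for end in break_points + [len(selected_notes)]:
        sentences.append(selected_notes[start:end])
        start = end
    return sentences

def max_word_counts(sentences):
    # one character per selected note -> maximum input length per lyric sentence
    return [len(s) for s in sentences]
```

The fixed value shown to the user for each editing area in the word-filling interface is then simply the length of the corresponding subsequence.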

Step S1720, correspondingly displaying a plurality of editing areas in the word-filling interface according to the lyric single sentences:

To present the lyric prompt information carried by the sentence-dividing information in the melody rhythm information, one editing area is arranged in the word-filling interface for each lyric single sentence, each editing area independently receiving the input of its corresponding lyric single sentence. The sentence division of the lyric text is thereby displayed, serving as lyric prompt information.

Step S1730, loading a single-sentence text for displaying a single lyric sentence corresponding to the lyric text in the lyric cache area for each editing area, and displaying lyric prompt information including the maximum word number information of the lyric to be input in the corresponding lyric single sentence for each editing area:

Because a default-generated lyric text may already exist in the lyric cache area, and the lyrics edited in the word-filling interface are ultimately written back to the lyric cache area, after the editing areas corresponding to the lyric single sentences are arranged, the single-sentence text of the corresponding lyric single sentence in the lyric cache area is loaded and displayed for each editing area. Furthermore, the maximum word count of the lyric to be input for the corresponding lyric single sentence can be displayed in each editing area as lyric prompt information, prompting the user that the number of words input for that single-sentence text should not exceed the fixed value given by the maximum word count.

In one embodiment, the method may further comprise the following steps:

step S1731, obtaining lyric texts in a lyric cache area, and dividing single sentence texts corresponding to each lyric single sentence in the lyric texts according to the sentence dividing information;

step S1732, displaying each single sentence text in a text box of a corresponding editing area, and displaying the lyric prompt information in a prompt area of the editing area, where the lyric prompt information includes maximum word count information of the lyric single sentence and total input count information in the current text box:

That is, to facilitate user input, a text box is loaded in each editing area to display and edit the corresponding single-sentence text, and a prompt area can be set in each editing area to display the maximum word count of the corresponding lyric single sentence together with the total number of words currently input in the text box. The lyric prompt information thus plays its role, guiding the user to fill in words more efficiently and dynamically. In a preferred embodiment, the text box is further constrained so that the number of words input cannot exceed the total number of selected notes corresponding to the lyric single sentence, i.e. cannot exceed the fixed value given by the maximum word count, preventing the user from mistakenly entering too many words.
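The text-box constraint and prompt-area text could be sketched as below; both function names are hypothetical, and a "word" is taken to be a single character, as is natural for Chinese lyrics:

```python
def constrain_input(text, max_words):
    # Truncate any input beyond the per-sentence maximum word count.
    return text[:max_words]

def prompt_text(text, max_words):
    # Prompt-area string: current input total alongside the fixed maximum.
    current = len(constrain_input(text, max_words))
    return f"{current}/{max_words}"
```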

According to this embodiment, the word-filling interface is laid out according to the sentence-dividing information contained in the melody rhythm information and the word-count information derived from the selected notes of the music melody, displaying an editing area and lyric prompt information for each lyric single sentence in one-to-one correspondence. The word-filling interface therefore has a stronger guiding function and, by flexibly providing the corresponding guidance information, adapts to user-defined music melodies, enabling the user to fill in words quickly and conveniently and improving the efficiency of assisted music creation.

Referring to fig. 20, 21 and 22, in an embodiment of a variation of the musical composition generating method of the present application, after the step S1701 displaying a word-filling interface for receiving input of lyrics, the method includes the following steps:

step S1703, responding to an intelligent reference instruction triggered by a single sentence text in the lyric text, and entering an intelligent search interface:

The intelligent reference instruction for a single-sentence text can be triggered in a number of ways. In one common way, when the cursor is positioned in the text box corresponding to a single-sentence text, an AI word-filling control is displayed on the word-filling interface, and after the user touches the control, the intelligent search interface is entered. Other ways can be flexibly implemented by those skilled in the art.

Step S1704, responding to the keywords input from the intelligent search interface, displaying one or more recommended texts matched with the keywords:

as shown in fig. 21, the user inputs one or more keywords in a search box provided in the intelligent search interface, which triggers the background to retrieve, through a preset search engine or an artificial-intelligence word-filling interface, one or more recommended texts semantically associated with the keywords, as shown in fig. 22. The recommended texts may be generated according to preset rules; for example, they may obey the constraint of the maximum input word count of the corresponding single-sentence text, and the recommended texts may also be constrained to share the final vowel of the keyword, so that each recommended text rhymes and the lyrics sound more vivid when sung by a human voice.
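The constraints on the recommended texts might be sketched as follows. This is a deliberately naive stand-in for the search engine or AI word-filling interface: it keeps only candidate lines that fit within the sentence's maximum word count and end on the same vowel as the keyword, so the surviving recommendations rhyme (a real system would match rhyme finals properly):

```python
VOWELS = "aeiou"

def last_vowel(word):
    # Naive rhyme key: the last vowel letter in the word.
    for ch in reversed(word.lower()):
        if ch in VOWELS:
            return ch
    return ""

def recommend(candidates, keyword, max_words):
    # Keep candidates within the word-count limit that rhyme with the keyword.
    rhyme = last_vowel(keyword)
    return [c for c in candidates
            if len(c.split()) <= max_words and last_vowel(c) == rhyme]
```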

The search engine or the artificial intelligence word filling interface can be flexibly implemented by adopting tools well known to those skilled in the art, which is not repeated in this application.

Step S1705, in response to a selection instruction of one of the recommended texts, replacing the single sentence text with the selected recommended text to synchronize the single sentence text to the lyric cache:

After the user selects one of the recommended texts, a selection instruction is triggered. In response, the selected recommended text replaces the display content in the text box of the corresponding editing area and is also synchronized to the lyric cache area to replace the corresponding single-sentence text. Of course, the replacement of the lyric single sentence in the lyric cache area may instead be executed when the user triggers the word-filling confirmation instruction, without affecting the spirit of the invention.

It will be further appreciated that as the lyric phrase in the lyric cache is replaced, the text in the note zone where the selected note corresponding to the lyric phrase is located in the composition interface is also refreshed accordingly.

According to the embodiment, an intelligent reference function for a single lyric sentence is provided for a user word filling process, and auxiliary creation means are enriched, so that when the user creates lyrics, the advantages of big data can be fully utilized to further improve the word filling efficiency, shorten the word filling time and improve the production efficiency of music works.

In an embodiment of the present application, after the step S1701 of displaying the word-filling interface for receiving the input of the lyrics, the method for generating the musical composition includes the following steps:

step S1706, responding to an automatic word filling instruction triggered by a control in a word filling interface or a vibration sensor of the local device, and automatically completing the lyric text according to the selected note of the music melody:

Further, this embodiment can provide a quickly generated lyric text for a music melody composed by the user. Specifically, in the word-filling interface, the lyric text can be created automatically for the user's music melody by recognizing an automatic word-filling instruction generated when the user touches a preset control, or an automatic word-filling instruction generated, via the vibration sensor, when the user performs a "shake" gesture.

In a specific implementation, following the logic of the previous embodiment, a recommended text is retrieved one by one for the single-sentence text of each lyric single sentence corresponding to the music melody, and one recommended text is then randomly selected to replace the content in the text box of the corresponding editing area.
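That flow might look like the following sketch, where `search_fn` stands in for the recommended-text retrieval of the previous embodiment and all names are illustrative:

```python
import random

def auto_fill(sentences, max_words_per_sentence, search_fn, rng=random):
    """For each lyric single sentence, fetch recommendations and pick one at random.

    Falls back to the existing sentence text when nothing is recommended.
    """
    filled = []
    for sentence, max_words in zip(sentences, max_words_per_sentence):
        recommendations = search_fn(sentence, max_words)
        filled.append(rng.choice(recommendations) if recommendations else sentence)
    return filled
```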

The user therefore does not need to write the lyric text from scratch: the lyric text can be obtained merely through the touch control or the shake gesture, after which only simple revision of each recommended text is needed. The lyric text is thus generated rapidly, further improving the efficiency of assisted music creation.

In an embodiment of the present application, the method for generating musical composition further includes the following steps:

step S1800, submitting draft information corresponding to the musical composition to a server, wherein the draft information comprises an accompaniment template corresponding to the musical composition, a preset sound type and the music melody:

Based on the various embodiments and variations listed above, the user may, during the creation of the music melody, submit draft information corresponding to the musical composition to the server. The draft information comprises the accompaniment template corresponding to the musical composition, a preset sound type, the music melody created by the user and, optionally, the lyric text created by the user.
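As an illustration only, the draft information could be modeled as a simple record; the field names here are assumptions, and the accompaniment template is carried as an identifier since the template itself usually resides on the server:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DraftInfo:
    accompaniment_template_id: str       # template stored server-side; only its id travels
    sound_type: str                      # e.g. a human-voice type or an instrument type
    melody: List[dict]                   # selected notes with their durations
    lyrics: Optional[List[str]] = None   # optional per-sentence lyric text
```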

The remote server acquires the draft information submitted by the user and stores it into the user's draft box, i.e. the user's personal editing library, for subsequent calling.

The embodiment can realize the protection of filing of user's musical composition, protects user's creative achievement, makes the user can improve the musical composition of its creation constantly, can further realize many people collaborative creation relevant musical composition through sharing draft information even, further promotes sharing and interaction between the user, promotes supplementary creation efficiency.

Referring to fig. 23, the musical composition synthesizing method of the present application can be programmed as a computer program product and is mainly deployed in a server for operation, supporting the operation of the computer program product implemented according to the musical composition generating method of the present application. In an exemplary embodiment, the method includes the following steps:

step S2100, determining draft information of the original user in response to a music synthesis instruction triggered by the original user, where the draft information includes an accompaniment template specified in the instruction, a preset sound type, and a music melody whose selected notes are determined by the original user, the duration of the selected notes being determined according to melody rhythm information corresponding to the accompaniment template:

After an original user creates a musical composition using the musical composition generating method of the present application, the composition can be submitted to the remote server, i.e. the server implementing the musical composition synthesizing method of the present application, thereby triggering a music synthesis instruction. In one embodiment, the music synthesis instruction may instead be triggered after the original user issues a submit instruction.

In response to the music synthesis instruction triggered by the original user, the draft information of the original user is determined from the instruction. The draft information includes the accompaniment template specified in the instruction, a preset sound type and a music melody whose selected notes are determined by the original user, the duration of the selected notes being determined according to melody rhythm information corresponding to the accompaniment template. Since the accompaniment template is usually stored in the remote server, the draft information of the original user may carry only an identifier of the accompaniment template.

In other alternative embodiments, the draft information may further include a lyric text submitted by the original user, and the lyric text may also be automatically generated by the original user using a computer program product.

Step S2200, storing the draft information into the personal editing library of the original user for subsequent calling:

the server creates a corresponding personal editing library, i.e. a draft box, for each user in advance, so that the draft information can be stored in the personal editing library corresponding to the original user for calling.

Step S2300, synthesizing the corresponding sound effect into the music melody according to a preset sound type:

Since the technical implementation of synthesizing the musical composition is deployed in the server of the present method, the server is responsible for preparing the sound-effect data of the played or sung version of the music melody by applying the corresponding sound effect according to the preset sound type specified in the draft information, and then synthesizing the sound-effect data into the music melody.

In one embodiment, when the preset sound type is a human-voice type, a pre-trained acoustic model is called to synthesize the lyric text carried in the draft information into a sound effect with a preset timbre, which is then synthesized into the music melody. As for the designation of the timbre, the musical composition generating method of the present application may likewise allow the user to specify it during creation; the corresponding timbre data may be provided by the server, so that the server can ultimately apply the corresponding timbre data for synthesis.

Step S2400, synthesizing background music formed by playing according to the accompaniment chord information and the music melody into playable musical compositions, and pushing the musical compositions to the user:

The background music corresponding to the accompaniment template is stored in the server and is played according to the accompaniment chord information, so that it has a rhythmically synchronized relationship with the music melody submitted by the user. After the corresponding sound effect has been synthesized into the user's music melody, the melody is further mixed with the background music, yielding the corresponding musical composition at the server side. The musical composition can then be pushed to the original user for playing.
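A minimal mixing sketch, assuming both tracks have already been rendered as equal-rate sample sequences and are rhythmically aligned (the real server-side synthesis would of course operate on encoded audio):

```python
def mix_tracks(melody, background):
    """Sum two sample sequences, padding the shorter one with silence."""
    length = max(len(melody), len(background))
    pad = lambda track: track + [0.0] * (length - len(track))
    return [m + b for m, b in zip(pad(melody), pad(background))]
```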

This exemplary embodiment provides server-side synthesis and archiving support for the musical composition created by the original user, so that the original user can obtain the corresponding composition simply by customizing the music melody, improving the efficiency of assisted music creation.

Referring to fig. 24, in an alternative embodiment of a musical composition synthesizing method according to the present application, a collaboration between multiple users can be realized, allowing the multiple users to jointly complete the creation of a musical composition, and the method further includes the following steps:

step S2500, responding to an authorized access instruction, pushing the draft information to an authorized user authorized by the original user:

The original user can share the draft information with an authorized user, with whom the authority to access the draft information is associated; when the authorized user accesses the draft information, an authorized access instruction is triggered.

In response to the authorized access instruction, after the server authentication is passed, the draft information can be pushed to the authorized user, and the authorized user can perform corresponding editing on the basis of the draft information, including modifying the music melody and/or lyric text therein, and the like, depending on the authorized scope of the original user.

Step S2600, receiving an updated version of the draft information submitted by an authorized user to replace an original version thereof, regenerating the playable musical composition according to the updated version, and pushing the musical composition to the originating user:

After the authorized user finishes editing and modifying the music melody and/or the lyric text, an updated version of the draft information is formed and submitted to the server. Upon receiving the updated version, the server replaces the corresponding original version in the original user's draft box, regenerates the playable musical composition according to the updated version and pushes it to the original user.

This embodiment further provides a technical approach for collaborative creation of musical works among multiple users, so that multiple users can collaboratively edit the same work and jointly complete it. Professional and non-professional users can thus cooperate on song creation, which further activates communication among user groups and user interaction on the music platform, improves the operating efficiency of the related server cluster, and is more conducive to the development of the internet music ecosystem.

Referring to fig. 25, the musical composition generating apparatus provided in the present application, adapted for functional deployment of the musical composition generating method of the present application, includes: a template acquisition module 1100, a composition formatting module 1200, a melody acquisition module 1300, and a music playing module 1400. The template acquisition module 1100 is configured to obtain accompaniment chord information and melody rhythm information corresponding to an accompaniment template, where the accompaniment chord information includes a plurality of chords, and the melody rhythm information is used to define the rhythm of the to-be-determined notes synchronized with the chords in the music melody to be obtained; the composition formatting module 1200 is configured to format a composition interface according to the melody rhythm information, so that the composition interface displays duration information of the to-be-determined notes of the music melody according to the melody rhythm information; the melody acquisition module 1300 is configured to obtain the music melody from the composition interface, where the music melody includes a plurality of selected notes, the selected notes being notes within the consonant interval corresponding to the chord rhythmically synchronized with them; the music playing module 1400 is configured to respond to a composition playing instruction and play a musical composition containing the music melody.

In a further embodiment, the template obtaining module 1100 includes: the template display sub-module is used for displaying an accompaniment template selection interface so as to list a plurality of candidate accompaniment templates; the template selection submodule is used for receiving a user selection instruction and determining a target accompaniment template from the candidate accompaniment templates; and the template analysis submodule is used for acquiring accompaniment chord information and melody rhythm information corresponding to the target accompaniment template.

In a deepened embodiment, the composition formatting module 1200 includes: the composition layout submodule is used for displaying a composition interface, and the composition interface displays a list of note positions with rhythm and scale as dimensions, wherein each note position corresponds to one note in the scale dimension corresponding to a certain time under the indication of the rhythm dimension; and the layout adjusting submodule adjusts the occupation width of corresponding note regions in the composition interface according to the corresponding time values of all the to-be-determined notes in the music melody defined by the melody rhythm information so as to display the time value information of the to-be-determined notes of the music melody.

In a further embodiment, the melody acquisition module 1300 includes: a note candidate sub-module, configured to determine, for the current order of the music melody, the notes within the consonant interval corresponding to the chord rhythmically synchronized with the to-be-determined note of the current order as candidate notes; a note filtering sub-module, configured to filter the candidate notes according to a preset rule to obtain the remaining selectable notes; a note presenting sub-module, configured to display, on the composition interface at the position corresponding to the current order, the note positions corresponding to the selectable notes so as to form a note prompt area 220; and a note selection sub-module, configured to receive a selected note determined from the plurality of selectable notes in the note prompt area 220 and advance the music melody to the next order so as to cycle through its subsequent selected notes.
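The candidate-note determination could be sketched as below, simplified so that the consonant interval of a chord is reduced to its chord tones across a playable range; the chord table and MIDI range are illustrative assumptions:

```python
CHORD_TONES = {            # pitch classes, C = 0
    "C":  [0, 4, 7],       # C E G
    "Am": [9, 0, 4],       # A C E
    "F":  [5, 9, 0],       # F A C
    "G":  [7, 11, 2],      # G B D
}

def candidate_notes(chord, low=60, high=72):
    """MIDI note numbers in [low, high] whose pitch class belongs to the chord."""
    tones = set(CHORD_TONES[chord])
    return [n for n in range(low, high + 1) if n % 12 in tones]
```

For a C-major chord over one octave from middle C, this offers the notes C4, E4, G4 and C5 as candidates.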

In a further embodiment, in the note filtering sub-module, the preset rule is configured to remove at least some individual notes from the plurality of candidate notes according to a previously ordered selected note.

In a further embodiment, the melody acquisition module 1300 further includes: a note adjusting sub-module, configured to respond to a reset event of any selected note by starting an update process for the selected notes ordered after it, so that the selectable notes ordered after that selected note on the composition interface are automatically re-determined according to the selected notes ordered before them; where, if the re-determined selectable notes at an ordered position no longer include the originally determined selected note, the selected note at that position is randomly re-determined from the re-determined selectable notes.
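The reset-update flow might be sketched as follows, with `candidates_fn` standing in for the chord and filtering logic of the note candidate and note filtering sub-modules; every name here is illustrative:

```python
import random

def update_after_reset(selected, reset_index, new_note, candidates_fn, rng=random):
    """Re-determine every selected note ordered after a reset note.

    candidates_fn(prefix) yields the selectable notes for the position
    following the given prefix of earlier selected notes. The original
    choice is kept if still selectable; otherwise one is drawn at random.
    """
    selected = list(selected)
    selected[reset_index] = new_note
    for i in range(reset_index + 1, len(selected)):
        options = candidates_fn(selected[:i])
        if selected[i] not in options:
            selected[i] = rng.choice(options)
    return selected
```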

In a further embodiment, the melody acquisition module 1300 further includes: an automatic composing sub-module, configured to respond to an automatic composing instruction triggered by a control in the composition interface or by a vibration sensor of the local device by automatically completing the note prompt areas 220 corresponding to the to-be-determined notes of the music melody and the selected notes of the music melody.

In a further embodiment, the note-rendering sub-module includes: a second-level coloring and presenting module, configured to color a note zone corresponding to the selectable note at a position corresponding to the current sequence on the composition interface to form a note prompt area 220, so as to complete representation and display of the selectable note; and the shift highlight secondary module is used for moving the composition interface along the rhythm dimension direction so as to move the note prompt area 220 corresponding to the current sequence to a preset important position.

In an expanded embodiment, the note selection sub-module includes: a note selection secondary module, configured to receive, in response to a selection operation among the note positions corresponding to the plurality of selectable notes in the note prompt area 220, the note corresponding to the selected note position as the selected note of the current order of the music melody; and a highlight marking secondary module, configured to highlight the selected note position, add a lyric editing control to that note position, and display in the lyric editing control the characters of the lyric text in the lyric cache area that are rhythmically synchronized with it.

In a further embodiment, the note selection sub-module further comprises: an editing storage secondary module, configured to respond to an editing event on the characters in the lyric editing control of a note position by replacing the corresponding edited characters so as to update the corresponding content in the lyric cache area according to the word-count correspondence; and an association refreshing secondary module, configured to respond to a content update event of the lyric cache area by refreshing the characters in the lyric editing controls corresponding to the selected notes of the music melody in the composition interface.

In a further embodiment, the music playing module 1400 includes: the type determining submodule is used for responding to a work playing instruction and acquiring a preset sound type; the sound effect processing submodule is used for acquiring the music melody to which the corresponding sound effect is applied according to the preset sound type; and the composition playing sub-module is used for playing the musical composition containing the musical melody.

In one embodiment, the sound effect processing sub-module includes: a singing determination secondary module, configured to determine that the preset sound type is a human-voice type representing singing according to lyrics; a lyric determination secondary module, configured to acquire the preset lyric text corresponding to the human-voice type; and a voice adding secondary module, configured to construct, in adaptation to the human-voice type, the sound effect of the human voice singing the lyric text, apply the sound effect to the music melody, and synthesize into the music melody the background music corresponding to the accompaniment template, the background music being music played according to the accompaniment chord information.

In another embodiment, the sound effect processing sub-module includes: a playing determination secondary module, configured to determine that the preset sound type is an instrument type representing performance by a specific type of instrument; a sound effect acquisition secondary module, configured to acquire preset sound-effect data corresponding to the instrument type; and a musical sound adding secondary module, configured to construct, in adaptation to the instrument type, the sound effect of the corresponding instrument according to the sound-effect data and apply it to the music melody.

In an extended embodiment, the apparatus further comprises: and the issuing and submitting module is used for responding to an issuing and submitting instruction, acquiring the text information input from an issuing and editing interface, issuing the musical composition to a corresponding control of a browsable interface, and implanting a player for playing the musical composition and the text information into the corresponding control.

In a further embodiment, the apparatus further includes the following modules that operate prior to the release submission module: and the composition monitoring module is used for responding to the determined event of the selected note, judging whether the selected note corresponds to the last undetermined note in the music melody, if so, activating an issuing control used for triggering the issuing submission instruction, and otherwise, keeping the issuing control in an inactivated state.

In an extended embodiment, the apparatus further comprises: a word filling display module for displaying a word filling interface for receiving lyric input, wherein the word filling interface provides lyric prompt information corresponding to the determined selected notes in the music melody; and the word filling confirmation module is used for responding to the word filling confirmation instruction, storing the lyrics input in the word filling interface into a lyric cache region, and synchronizing the lyrics to the note region where the corresponding selected note is positioned in the composition interface for displaying.

In a further embodiment, the word-filling display module includes: a word-filling layout sub-module, configured to display a word-filling interface, divide the total number of selected notes corresponding to each lyric single sentence according to the sentence-dividing information in the melody rhythm information corresponding to the music melody, and determine the word-count information of each lyric single sentence according to that total number; an area layout sub-module, configured to correspondingly display a plurality of editing areas in the word-filling interface according to the lyric single sentences; and a single-sentence loading sub-module, configured to load and display, for each editing area, the single-sentence text of the corresponding lyric single sentence of the lyric text in the lyric cache area, and to display, for each editing area, lyric prompt information including the maximum word count of the lyric to be input in the corresponding lyric single sentence.

In an embodiment, the single sentence loading submodule includes: the single sentence acquisition secondary module is used for acquiring a lyric text in the lyric cache region and dividing a single sentence text corresponding to each lyric single sentence in the lyric text according to the sentence dividing information; and the single sentence loading secondary module is used for displaying each single sentence text in a text box of the corresponding editing area, and displaying the lyric prompting information in a prompting area of the editing area, wherein the lyric prompting information comprises the maximum word number information of the lyric single sentence and the total input number information in the current text box.

In a further embodiment, the apparatus further comprises the following modules operating after the word-filling display module: the intelligent reference module is used for responding to an intelligent reference instruction triggered by a single sentence text in the lyric text and entering an intelligent search interface; the intelligent association module is used for responding to the keywords input from the intelligent search interface and displaying one or more recommended texts matched with the keywords; and the recommendation updating module is used for responding to a selected instruction of one of the recommended texts, replacing the single sentence text with the selected recommended text and synchronizing the single sentence text to the lyric cache region.

In a further embodiment, the apparatus further comprises the following modules operating after the word-filling display module: and the automatic word filling module is used for responding to an automatic word filling instruction triggered by a control in a word filling interface or a vibration sensor of the local equipment and automatically completing the lyric text according to the selected musical notes of the music melody.

In an extended embodiment, the apparatus further comprises: and the draft submitting module is used for submitting draft information corresponding to the musical works to the server, and the draft information comprises the accompaniment template corresponding to the musical works, the preset sound type and the musical melody.

In a preferred embodiment, the playing speed of the musical composition is uniformly determined according to a preset tempo.

In a preferred embodiment, the selected note is a chord tone within the consonant interval corresponding to the chord, specified in the preset accompaniment chord information, that is rhythmically synchronized with it.

In a preferred embodiment, the chord is a block chord and/or a broken chord, and the chords in the accompaniment chord information are organized in sequence, each chord being rhythmically synchronized with the one or more selected notes determined in order.

Referring to fig. 26, the apparatus for synthesizing a musical composition provided by the present application, adapted to the method for synthesizing a musical composition of the present application for functional deployment, includes the following modules: a draft acquiring module 2100, a draft storing module 2200, a sound effect synthesizing module 2300, and a music synthesizing module 2400. The draft acquiring module 2100 is configured to determine draft information of an original user in response to a music composition instruction triggered and submitted by the original user, where the draft information includes the accompaniment template specified in the instruction, a preset sound type, and a music melody whose selected notes are determined by the original user, the durations of the selected notes being determined according to the melody rhythm information corresponding to the accompaniment template. The draft storing module 2200 is configured to store the draft information in the personal editing library of the original user for subsequent retrieval. The sound effect synthesizing module 2300 is configured to synthesize the corresponding sound effect into the music melody according to the preset sound type. The music synthesizing module 2400 is configured to combine the background music, formed by playing the accompaniment chord information, with the music melody into a playable musical composition, and push the musical composition to the user.
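The four synthesis-side modules amount to a linear pipeline over the draft record. The sketch below is an assumption-laden stand-in meant to show the data flow only: the rendering stubs tag data rather than producing audio, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DraftInfo:
    template: str        # accompaniment template id
    sound_type: str      # preset sound type, e.g. "piano" or "vocal"
    melody: list         # user-selected notes (MIDI numbers)

def render_melody(draft: DraftInfo) -> list:
    # Stand-in for the sound effect synthesizing module's timbre rendering.
    return [(draft.sound_type, note) for note in draft.melody]

def render_backing(template: str) -> list:
    # Stand-in for playing the template's accompaniment chord information.
    return [("backing", template)]

def synthesize_work(draft: DraftInfo, personal_library: list) -> list:
    personal_library.append(draft)            # draft storing module
    melody_audio = render_melody(draft)       # sound effect synthesizing module
    backing = render_backing(draft.template)  # background music
    return backing + melody_audio             # music synthesizing module (mix)
```

In this view the draft acquiring module corresponds to constructing `DraftInfo` from the user's instruction, and the returned list stands in for the playable composition pushed back to the user.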

In an extended embodiment, the apparatus further comprises: an authorized access module, which is used for responding to an authorized access instruction and pushing the draft information to an authorized user authorized by the original user; and a draft replacing module, which is used for receiving an updated version of the draft information submitted by the authorized user to replace the original version, regenerating the playable musical composition according to the updated version, and pushing the regenerated musical composition to the original user.

In a preferred embodiment, the updated version of the draft information includes lyric text corresponding to the music melody.

In a preferred embodiment, in the sound effect synthesizing module, when the preset sound type is a human voice type, a pre-trained acoustic model is called to synthesize the lyric text carried in the draft information into a sound effect of a predetermined timbre, and the sound effect is then synthesized into the music melody.
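Before any acoustic model can sing the lyrics, each syllable must be paired with a melody note. A minimal alignment check, with hypothetical names, might look like the following; the one-syllable-per-note assumption is an illustration, not a requirement stated in this disclosure.

```python
# Hedged sketch: pair lyric syllables with melody notes one-to-one, as a
# singing-synthesis acoustic model would typically require as input.
def align_lyrics(syllables: list[str], notes: list[int]) -> list[tuple[str, int]]:
    if len(syllables) != len(notes):
        raise ValueError("lyric syllable count must match melody note count")
    return list(zip(syllables, notes))
```

The aligned pairs would then be handed to the acoustic model, whose output is mixed into the music melody as the embodiment describes.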

In order to solve the above technical problem, an embodiment of the present application further provides a computer device. As shown in fig. 27, which schematically illustrates its internal structure, the computer device includes a processor, a computer-readable storage medium, a memory, and a network interface connected by a system bus. The computer-readable storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database may store control information sequences, and the computer-readable instructions, when executed by the processor, cause the processor to implement a musical composition generating/synthesizing method. The processor of the computer device provides computing and control capability and supports the operation of the whole computer device. The memory of the computer device may store computer-readable instructions which, when executed by the processor, cause the processor to perform the musical composition generating/synthesizing method of the present application. The network interface of the computer device is used for connecting and communicating with a terminal. Those skilled in the art will appreciate that the architecture shown in fig. 27 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.

In this embodiment, the processor is configured to execute the specific functions of each module and its sub-modules in figs. 25 and 26, and the memory stores the program codes and various data required for executing these modules or sub-modules. The network interface is used for data transmission to and from a user terminal or a server. The memory in this embodiment stores the program codes and data necessary for executing all the modules/sub-modules in the musical composition generating/synthesizing apparatus of the present application, and the server can call these program codes and data to execute the functions of all the sub-modules.

The present application also provides a storage medium having stored thereon computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the musical piece generating/composing method of any of the embodiments of the present application.

The present application also provides a computer program product comprising computer programs/instructions which, when executed by one or more processors, implement the steps of the method as described in any of the embodiments of the present application.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments of the present application can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when the computer program is executed, the processes of the embodiments of the methods can be included. The storage medium may be a computer-readable storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).

In conclusion, the method and the device can efficiently guide the user to create the music melody to form the music works, enrich the auxiliary music creation means and improve the auxiliary music creation efficiency.

Those of skill in the art will appreciate that the various operations, methods, steps, measures, and schemes in the processes, acts, or solutions discussed in this application, including those already known in the prior art, can be interchanged, modified, rearranged, decomposed, combined, or deleted.

The foregoing is only a partial embodiment of the present application, and it should be noted that, for those skilled in the art, several modifications and refinements can be made without departing from the principle of the present application, and these modifications and refinements should also be regarded as falling within the protection scope of the present application.
