Music generation method and device, electronic equipment and storage medium

Document No.: 570033    Publication date: 2021-05-18

Note: this technology, "Music generation method and device, electronic equipment and storage medium", was designed and created by Wang Jia and Gao Linming on 2021-01-22. Its main content is as follows: the embodiment of the application discloses a music generation method, a music generation device, electronic equipment and a storage medium, wherein the method comprises the following steps: acquiring original heart rate data of a fetus, and determining audio parameters of music to be generated according to the original heart rate data; extracting a preset number of heart rate value points from the original heart rate data; sequentially traversing the heart rate value points, and determining the musical note corresponding to each heart rate value point according to a preset mapping relation between heart rate values and musical notes; selecting target notes from the determined notes according to a preset melody generation rule, and forming a note melody from the target notes; generating an accompaniment according to a preset harmony rule; and generating a staff according to the note melody and the accompaniment, and generating playable music according to the staff in combination with the audio parameters. The purpose of converting the fetal heart rate into music and a staff is thereby achieved.

1. A music generation method, comprising:

acquiring original heart rate data of a fetus, and determining audio parameters of music to be generated according to the original heart rate data;

extracting a preset number of heart rate value points from the original heart rate data;

sequentially traversing the heart rate value points, and determining the musical note corresponding to each heart rate value point according to a preset mapping relation between heart rate values and musical notes;

selecting target notes from the determined notes according to a preset melody generation rule, and forming note melodies according to the target notes;

generating an accompaniment according to a preset harmony rule;

and generating a staff according to the note melody and the accompaniment, and generating playable music according to the staff and the audio parameters.

2. The method of claim 1, wherein the audio parameters include at least playing speed, playing intensity and music tonality;

correspondingly, determining the audio parameters of the music to be generated according to the original heart rate data comprises the following steps:

determining an average heart rate value and a fetal heart rate baseline from the raw heart rate data;

determining the playing speed of the music to be generated according to the average heart rate value and a preset playing speed calculation formula;

determining the music key of the music to be generated according to the average heart rate value and the mapping relation between the preset heart rate range and the music key;

and determining the playing intensity of the music to be generated according to the fetal heart rate baseline and the mapping relation between the preset heart rate range and the playing intensity.

3. The method of claim 1, wherein extracting a preset number of heart rate value points from the raw heart rate data comprises:

sequentially traversing the heart rate value points included in the original heart rate data, and judging whether each heart rate value point exceeds a heart rate threshold interval in the traversing process;

if a certain heart rate value point exceeds the heart rate threshold interval, modifying the heart rate value corresponding to the heart rate value point;

and after traversing is finished, extracting a preset number of heart rate value points from the original heart rate data according to a preset time interval.

4. The method of claim 3, further comprising:

and if the number of the extracted heart rate value points is less than the preset number, adjusting the preset time interval, and re-extracting the heart rate value points according to the adjusted time interval.

5. The method of claim 1, wherein the melody generation rules include the number of segments of the melody, the number of notes included in each segment, the note duration of the first note in each segment, and a rule that, when consecutive identical notes are encountered, the notes after the first are treated as sustained notes.

6. The method of claim 1, further comprising, before composing the note melody from the target notes:

and judging whether the interval between two adjacent notes exceeds a preset threshold value or not, and performing octave-up or octave-down processing on the notes according to the judgment result.

7. The method of claim 1, wherein generating playable music from the staff in combination with the audio parameters comprises:

and converting the staff and the audio parameters into playable music through a transcoding tool.

8. A music generating apparatus, comprising:

the data acquisition and parameter determination module is used for acquiring the original heart rate data of the fetus and determining the audio parameters of the music to be generated according to the original heart rate data;

the data extraction module is used for extracting a preset number of heart rate value points from the original heart rate data;

the traversal module is used for sequentially traversing the heart rate value points and determining the musical notes corresponding to each heart rate value point according to the preset mapping relation between the heart rate value and the musical notes;

the melody generating module is used for selecting target notes from the determined notes according to a preset melody generating rule and forming note melodies according to the target notes;

the accompaniment generating module is used for generating the accompaniment according to a preset harmony rule;

and the staff and music generation module is used for generating staff according to the note melody and the accompaniment and generating playable music by combining the audio parameters according to the staff.

9. An electronic device, comprising:

one or more processors;

a storage device for storing one or more programs,

the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the music generation method of any of claims 1-7.

10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out a music generation method according to any one of claims 1 to 7.

Technical Field

The present application relates to the field of data processing, and in particular, to a music generation method and apparatus, an electronic device, and a storage medium.

Background

The fetal heart rate is the heart rate of a fetus, and its normal value is generally 110-160 beats per minute. With the development of fetal heart monitoring technology, a mother can see a fetal heart rate trend graph and directly observe the heartbeat of the baby in her womb. However, the heart rate trend graph is only a cold, impersonal chart, from which the mother cannot intuitively experience the beauty of new life.

Disclosure of Invention

The embodiment of the application provides a music generation method, a music generation device, electronic equipment and a storage medium, so as to achieve the purpose of generating a staff and playable music by utilizing fetal heart rate data.

In a first aspect, an embodiment of the present application provides a music generation method, where the method includes:

acquiring original heart rate data of a fetus, and determining audio parameters of music to be generated according to the original heart rate data;

extracting a preset number of heart rate value points from the original heart rate data;

sequentially traversing the heart rate value points, and determining the musical note corresponding to each heart rate value point according to a preset mapping relation between heart rate values and musical notes;

selecting target notes from the determined notes according to a preset melody generation rule, and forming note melodies according to the target notes;

generating an accompaniment according to a preset harmony rule;

and generating a staff according to the note melody and the accompaniment, and generating playable music according to the staff and the audio parameters.

In a second aspect, an embodiment of the present application provides a music generating apparatus, including:

the data acquisition and parameter determination module is used for acquiring the original heart rate data of the fetus and determining the audio parameters of the music to be generated according to the original heart rate data;

the data extraction module is used for extracting a preset number of heart rate value points from the original heart rate data;

the traversing module is used for sequentially traversing the heart rate value points and determining the musical notes corresponding to each heart rate value point according to the preset mapping relation between the heart rate value and the musical notes;

the melody generating module is used for selecting target notes from the determined notes according to a preset melody generating rule and forming note melodies according to the target notes;

the accompaniment generating module is used for generating the accompaniment according to a preset harmony rule;

and the staff and music generation module is used for generating staff according to the note melody and the accompaniment and generating playable music according to the staff and by combining the audio parameters.

In a third aspect, an embodiment of the present application further provides an electronic device, including:

one or more processors;

a storage device for storing one or more programs,

when the one or more programs are executed by the one or more processors, the one or more processors implement the music generation method according to any of the embodiments of the present application.

In a fourth aspect, the present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a music generation method according to any embodiment of the present application.

In the embodiment of the application, some heart rate value points are extracted, the musical note corresponding to each heart rate value point is determined according to the preset mapping relation between heart rate values and musical notes, some of these notes are selected to form the note melody, and the staff is then generated in combination with the accompaniment; the final music is generated according to the staff and the audio parameters. In this way, the purpose of converting the fetal heart rate into music and a staff is achieved, and the rhythm of the fetus's emotional changes can be perceived from the music.

Drawings

FIG. 1a is a schematic flow chart of a music generation method in a first embodiment of the present application;

FIG. 1b is a schematic representation of the 20 minute fetal heart rate trend in the first embodiment of the present application;

fig. 1c is a schematic view of a piano key in the first embodiment of the present application;

FIG. 1d is a schematic diagram of a staff of a music piece according to the first embodiment of the present application;

fig. 2 is a schematic flow chart of a music generation method in a second embodiment of the present application;

FIG. 3 is a logic diagram of a music generation method in a third embodiment of the present application;

fig. 4 is a schematic structural diagram of a music generating apparatus in a fourth embodiment of the present application;

fig. 5 is a schematic structural diagram of an electronic device implementing a music generation method in a fifth embodiment of the present application.

Detailed Description

The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures.

At present, in the prior art, music playing is mainly controlled through heart rate variation, and heart rate data cannot be directly converted into a staff and music. Based on this, the inventors have creatively proposed a music generation method. See the examples below for specific methods.

Fig. 1a is a flowchart of a music generation method according to a first embodiment of the present application, which is applicable to a case where fetal heart rate data is converted into music, and the method may be executed by a music generation apparatus, which may be implemented in software and/or hardware, and may be integrated in an electronic device, for example, on a server or a computer device.

As shown in fig. 1a, the music generating method specifically includes the following steps:

s101, obtaining original heart rate data of a fetus, and determining audio parameters of music to be generated according to the original heart rate data.

In the application, the original heart rate data of the fetus is fetal heart rate trend data collected over a preset duration, for example 20 minutes. Illustratively, referring to fig. 1b, which shows a schematic diagram of the fetal heart rate trend over 20 minutes, it can be seen that the raw heart rate data (i.e. the fetal heart rate trend data) consists of a plurality of heart rate value points, and each heart rate value point corresponds to one heart rate value.

In the embodiment of the application, the audio parameters at least comprise playing speed, playing intensity and music tonality. Correspondingly, the audio parameters of the music to be generated are determined according to the original heart rate data, and the method comprises the following four steps:

(1) from the raw heart rate data, an average heart rate value and a fetal heart rate baseline are determined.

The average heart rate value is the mean of the heart rate values of all the heart rate value points included in the original heart rate data; the fetal heart rate baseline refers to the average heart rate over a period of more than 10 minutes without fetal movement or uterine contraction. In an alternative embodiment, to improve calculation efficiency, the determined average heart rate value may be used directly as the fetal heart rate baseline.

(2) And determining the playing speed of the music to be generated according to the average heart rate value and a preset playing speed calculation formula.

In the embodiment of the present application, the playing speed of the music is expressed as a quarter-note beat marking, i.e. the number of quarter-note beats played per minute; using the quarter note as the standard, a marking of 60 denotes 60 quarter notes per minute, a marking of 80 denotes 80 quarter notes per minute, and so on.

The preset performance speed calculation formula is as follows: playing speed = ROUNDUP(average heart rate value × 0.6), where 0.6 is the interval value for converting heart rate into the music index, and ROUNDUP() is an upward rounding function.

Based on the above, for example, if the average heart rate value is 120 bpm, then 120 × 0.6 = 72, and ROUNDUP(72) = 80 (rounding up to the next multiple of ten), so the playing speed is 80, meaning that 80 quarter notes are played every minute, i.e. 80 beats per minute with the quarter note as one beat.
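As a purely illustrative sketch of the above formula (not code from the patent), the following Java method computes the playing speed, assuming from the 120 bpm -> 80 example that ROUNDUP rounds up to the next multiple of ten; the class and method names are hypothetical.

```java
public final class TempoMapper {

    // Interval value used in the formula "playing speed = ROUNDUP(average heart rate × 0.6)".
    private static final double HEART_RATE_TO_TEMPO_FACTOR = 0.6;

    /**
     * Computes the playing speed in quarter notes per minute.
     * Assumption: ROUNDUP() rounds up to the next multiple of ten, matching 120 bpm -> 80.
     */
    public static int playingSpeed(double averageHeartRate) {
        double raw = averageHeartRate * HEART_RATE_TO_TEMPO_FACTOR; // e.g. 120 * 0.6 = 72
        return (int) (Math.ceil(raw / 10.0) * 10.0);                // 72 -> 80
    }

    public static void main(String[] args) {
        System.out.println(playingSpeed(120)); // prints 80: 80 quarter-note beats per minute
    }
}
```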

(3) And determining the music tonality of the music to be generated according to the average heart rate value and the mapping relation between the preset heart rate range and the music tonality.

In the embodiment of the present application, music tonality is the general term for the tonic and mode of a piece; for example, a major key with C as the tonic is called "C major", and a minor key with A as the tonic is called "A minor". By analogy, general music theory distinguishes 24 keys in this way. Taking basic music theory knowledge into consideration, only 12 keys are selected in the embodiment of the application: the seven natural major keys A, B, C, D, E, F and G, and the five flat (b) major keys Ab, Bb, Db, Eb and Gb, arranged in the specific order A / Ab / B / Bb / C / D / Db / E / Eb / F / G / Gb.

In the embodiment of the application, the conventional fetal heart rate range (110-160 bpm) can be divided into sub-ranges, each of which is mapped to one of the above keys in order: 110-115 corresponds to A major; 116-120 corresponds to Ab major; 121-125 corresponds to B major; 126-130 corresponds to Bb major; and so on. In this way, the mapping relation between heart rate ranges and music tonality is obtained, and subsequently the music tonality can be determined directly according to this mapping relation.

Illustratively, if the 20-minute fetal heart report gives an average fetal heart rate of 130 bpm, the tonality is assigned to Bb major according to this rule.
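A minimal sketch, assuming the 5 bpm wide bins and key order described above (only the bins quoted in the text are confirmed; the rest follow the same pattern by assumption), of how the average heart rate could be mapped to a key. Class and method names are illustrative.

```java
public final class TonalityMapper {

    // The twelve keys in the order given above: A / Ab / B / Bb / C / D / Db / E / Eb / F / G / Gb.
    private static final String[] KEYS = {
            "A", "Ab", "B", "Bb", "C", "D", "Db", "E", "Eb", "F", "G", "Gb"
    };

    /** Maps an average fetal heart rate to a major key, assuming 5 bpm bins starting at 110-115 -> A. */
    public static String musicTonality(int averageHeartRate) {
        int clamped = Math.max(110, Math.min(160, averageHeartRate)); // keep within the normal range
        int index = (clamped <= 115) ? 0 : Math.min(1 + (clamped - 116) / 5, KEYS.length - 1);
        return KEYS[index] + " major";
    }

    public static void main(String[] args) {
        System.out.println(musicTonality(130)); // prints "Bb major", matching the example above
    }
}
```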

(4) And determining the playing intensity of the music to be generated according to the fetal heart rate baseline and the mapping relation between the preset heart rate range and the playing intensity.

Illustratively, the performance intensity is divided in advance into weak (MP), medium (MF) and strong (F) according to the fetal heart rate baseline (BL): when the baseline is in the range of 110-120 bpm, the performance intensity is MP; when the baseline is in the range of 121-140 bpm, the performance intensity is MF; and when the baseline is in the range of 141-160 bpm, the performance intensity is F. Subsequently, the playing intensity of the music to be generated can be determined simply from the range in which the heart rate baseline falls.
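In the same illustrative spirit, the baseline-to-intensity lookup above reduces to a simple comparison; the boundaries follow the ranges just listed, and the method name is an assumption.

```java
public final class IntensityMapper {

    /** Maps the fetal heart rate baseline (bpm) to a performance intensity mark per the ranges above. */
    public static String performanceIntensity(int baseline) {
        if (baseline <= 120) {
            return "MP"; // weak: baseline 110-120 bpm
        } else if (baseline <= 140) {
            return "MF"; // medium: baseline 121-140 bpm
        } else {
            return "F";  // strong: baseline 141-160 bpm
        }
    }
}
```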

It should be noted that, in the embodiment of the present application, setting the audio parameters ensures the auditory effect of the music that is subsequently generated in combination with these parameters.

S102, extracting a preset number of heart rate value points from the original heart rate data.

Because the original heart rate data (i.e. the fetal heart rate trend data) consists of a plurality of heart rate value points, a preset number of heart rate value points can be extracted directly from the original heart rate data and added to a heart rate sampling queue, and the music generation operations can then be performed on the points in this queue. The preset number is determined according to the music to be generated, and is, for example, 1200. It should be noted that, as can be seen from the heart rate trend graph in fig. 1b, there are four heart rate value points per second, so the 20-minute raw heart rate data includes 4800 heart rate value points. The preset number may therefore also be any value between 1200 and 4800, and is not limited herein.

S103, sequentially traversing the heart rate value points, and determining the musical note corresponding to each heart rate value point according to the preset mapping relation between heart rate values and musical notes.

In the embodiment of the application, when the mapping relation between heart rate values and notes is predetermined, a playable pitch range is first defined: according to musical performance practice, the extreme high and low registers are excluded, leaving a suitable middle register. For example, referring to fig. 1c, which shows a schematic diagram of the piano keys, the area inside the rectangular frame is the usable range; when the heart rate ranges from min to max bpm, the minimum heart rate min is mapped to the leftmost (lowest) note of the selected interval and the maximum heart rate max is mapped to the rightmost (highest) note, where min represents the minimum value and max represents the maximum value.

Illustratively, if the fetal heart rate ranges from a minimum of 110 bpm to a maximum of 160 bpm, traversing the piano keys yields, for example: 160 -> E (e4), 159 -> E (e4b), 158 -> D (d4), 157 -> D (db), 156 -> C (c4), and so on. In the embodiment of the present application, 160 -> E (e4) is taken as an example: E (e4) represents a note, and in fig. 1c the piano key it denotes is the black key between D and E in the four-line octave group.

Through the above correspondence, the mapping relation between heart rate values and notes can be determined. Therefore, by traversing all the heart rate value points in the queue, the note corresponding to each point's heart rate value can be determined according to this mapping relation.
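As one possible reading of the example above (160 -> e4, 159 -> e4b, 158 -> d4, ...), each 1 bpm step corresponds to one semitone, with the maximum heart rate mapped to the highest note of the chosen register. The following Java sketch builds such a lookup using MIDI note numbers; it is an assumption-laden illustration, not the patent's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public final class NoteMapper {

    // Note names of one chromatic octave, lowest to highest, using flats only for simplicity.
    private static final String[] CHROMATIC = {
            "c", "db", "d", "eb", "e", "f", "gb", "g", "ab", "a", "bb", "b"
    };

    /**
     * Builds a heart-rate-to-note mapping, assuming one semitone per bpm and that the maximum
     * heart rate maps to the given top MIDI note (e.g. 64 for E4).
     */
    public static Map<Integer, String> buildMapping(int minHeartRate, int maxHeartRate, int topMidiNote) {
        Map<Integer, String> mapping = new LinkedHashMap<>();
        for (int hr = maxHeartRate; hr >= minHeartRate; hr--) {
            int midi = topMidiNote - (maxHeartRate - hr);             // one semitone lower per bpm below max
            mapping.put(hr, CHROMATIC[midi % 12] + (midi / 12 - 1));  // MIDI 64 -> "e4", 63 -> "eb4", ...
        }
        return mapping;
    }

    public static void main(String[] args) {
        // With a top note of MIDI 64 (E4): 160 -> e4, 159 -> eb4, 158 -> d4, 157 -> db4, 156 -> c4.
        System.out.println(buildMapping(110, 160, 64));
    }
}
```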

S104, selecting target notes from the determined notes according to a preset melody generation rule, and forming note melodies according to the target notes.

In the embodiment of the present application, the melody generation rule includes the number of segments of the melody, the number of notes included in each segment, the note duration of the first note in each segment, and the rule that, when consecutive identical notes are encountered, the notes after the first are treated as sustained notes. Illustratively, the number of segments is 8, i.e. the generated melody includes 8 lines, and each segment includes 19 notes. For any segment, once the number of target notes determined for that segment reaches 19, the notes of that segment are complete and a new line is started to determine the target notes of the next segment, until the target notes of all 8 segments have been determined; the note melody can then be composed from these target notes. It should be noted that, because several consecutive heart rate value points may have identical heart rate values and therefore yield several consecutive identical notes, in the process of selecting target notes from the determined notes the first such note is retained and the rest are treated as sustained notes, and a tie (sustain line) is added to mark the identical notes within that range. In the embodiment of the present application, in order to ensure the musical effect, at most three consecutive identical notes are allowed to be sustained.

It should be noted that generating the melody from the time intervals between adjacent heartbeats is not adopted in the present application, because a melody generated in that way is relatively simple and its musical effect is poor.
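The segment-and-tie rule above can be sketched as follows; this is a hedged Java illustration in which the "~" tie marker and the handling of leftover notes are assumptions, not the patent's code.

```java
import java.util.ArrayList;
import java.util.List;

public final class MelodyBuilder {

    private static final int SEGMENTS = 8;            // number of melody lines, as described above
    private static final int NOTES_PER_SEGMENT = 19;  // notes per segment, as described above
    private static final int MAX_CONSECUTIVE = 3;     // at most three consecutive identical notes

    /** Groups candidate notes into segments, turning repeated notes into ties and dropping excess repeats. */
    public static List<List<String>> buildMelody(List<String> candidateNotes) {
        List<List<String>> segments = new ArrayList<>();
        List<String> current = new ArrayList<>();
        String previous = null;
        int repeats = 0;

        for (String note : candidateNotes) {
            repeats = note.equals(previous) ? repeats + 1 : 1;
            previous = note;
            if (repeats > MAX_CONSECUTIVE) {
                continue;                                    // skip further repetitions of the same note
            }
            current.add(repeats == 1 ? note : note + "~");   // mark 2nd/3rd repeats as sustained (tied)
            if (current.size() == NOTES_PER_SEGMENT) {
                segments.add(current);
                if (segments.size() == SEGMENTS) {
                    break;                                   // all eight segments are filled
                }
                current = new ArrayList<>();
            }
        }
        return segments;
    }
}
```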

And S105, generating the accompaniment according to the preset harmony rule.

In the embodiment of the application, in order to reduce the difficulty of generating the accompaniment while keeping it graceful, "F G Em Am Dm7 G C" is optionally adopted as the harmony rule. It should be noted that the number of accompaniment segments is equal to the number of note melody segments. Furthermore, since the generated staff can be played on a piano, pedal marks may be added when generating the accompaniment to give the rhythm a sense of rise and fall; the adding rule is "\sustainOn accompaniment \sustainOff", where \sustainOn is the pedal-down mark and \sustainOff is the pedal-up mark.
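Purely as an illustration of the pedal-mark rule above (a plain-text rendering of the chords, not real LilyPond chord syntax and not the patent's code), an accompaniment segment could be wrapped like this:

```java
public final class AccompanimentBuilder {

    // The chord progression used as the harmony rule above, in plain text form for illustration.
    private static final String[] PROGRESSION = {"F", "G", "Em", "Am", "Dm7", "G", "C"};

    /** Wraps one accompaniment segment with pedal marks per the rule "\sustainOn accompaniment \sustainOff". */
    public static String withPedal(String accompanimentSegment) {
        return "\\sustainOn " + accompanimentSegment + " \\sustainOff";
    }

    public static void main(String[] args) {
        System.out.println(withPedal(String.join(" ", PROGRESSION)));
        // prints: \sustainOn F G Em Am Dm7 G C \sustainOff
    }
}
```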

And S106, generating a staff according to the note melody and the accompaniment, and generating playable music according to the staff and the audio parameters.

Illustratively, referring to fig. 1d, which shows a schematic diagram of the staff of one segment generated from the note melody and the accompaniment, rectangular frame 1 contains the score corresponding to the note melody composed of 19 notes, and rectangular frame 2 contains the score of the accompaniment generated according to the harmony rule.

Generating playable music from the staff in combination with the audio parameters includes: converting the staff and the audio parameters into playable music through a transcoding tool. Illustratively, the generated staff and audio parameters are saved in a fixed format (e.g. the .ly format), and the lilypond score tool is invoked to generate a music file in .mid format; the .mid file is then converted into playable music, for example into an .mp3 file. In a specific conversion, the .mid music file is transcoded into the .wav format using AudioSystem, and then converted into the .mp3 format using the JAVE class library.
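A minimal sketch of the first transcoding step, assuming the LilyPond command-line tool is installed and that the generated .ly source contains a \midi block so that a MIDI file is emitted; the later .mid -> .wav -> .mp3 steps (AudioSystem, JAVE) are omitted here, and the file name is hypothetical.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public final class ScoreTranscoder {

    /** Writes the score text to a .ly file and invokes lilypond to render it in the given directory. */
    public static void renderScore(String lilypondSource, Path workDir) throws IOException, InterruptedException {
        Path lyFile = workDir.resolve("fetal_music.ly"); // hypothetical file name
        Files.writeString(lyFile, lilypondSource, StandardCharsets.UTF_8);

        Process process = new ProcessBuilder("lilypond", "-o", workDir.toString(), lyFile.toString())
                .inheritIO()
                .start();
        if (process.waitFor() != 0) {
            throw new IOException("lilypond exited with a non-zero status");
        }
    }
}
```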

In the embodiment of the application, the fetal heart rate data are converted into elegant playable music, so that anxiety of the pregnant woman during pregnancy can be relieved.

In the embodiment of the application, some heart rate value points are extracted, the musical note corresponding to each heart rate value point is determined according to the preset mapping relation between heart rate values and musical notes, some of these notes are selected to form the note melody, and the staff is then generated in combination with the accompaniment; the final music is generated according to the staff and the audio parameters. In this way, the purpose of converting the fetal heart rate into music and a staff is achieved, and the rhythm of the fetus's emotional changes can be perceived from the music.

Fig. 2 is a flowchart of a music generating method according to a second embodiment of the present application, and this embodiment is optimized based on the foregoing embodiment, and referring to fig. 2, the method includes:

s201, obtaining original heart rate data of a fetus, and determining audio parameters of music to be generated according to the original heart rate data.

S202, sequentially traversing the heart rate value points included in the original heart rate data, and judging whether each heart rate value point exceeds a heart rate threshold interval in the traversing process.

In the embodiment of the application, since the normal fetal heart rate lies between 110 and 160 bpm, the heart rate threshold interval may be set to [110, 160]. Because abnormal situations in which the fetal heart rate value is greater than 160 or less than 110 can be caused by the mother's emotional changes, movement and the like, the heart rate value points included in the original heart rate data need to be traversed in sequence and the abnormal points marked; that is, during the traversal it is judged whether the heart rate value corresponding to each heart rate value point exceeds the heart rate threshold interval.

S203, if a certain heart rate value point exceeds the heart rate threshold interval, modifying the heart rate value corresponding to the heart rate value point.

If, during the traversal, the heart rate value corresponding to a certain heart rate value point is found to exceed the heart rate threshold interval, the heart rate value of that point is modified. Illustratively, if the heart rate value corresponding to a heart rate value point is less than 110, it is modified to 110; if it is greater than 160, it is modified to 160. It should be noted that modifying the heart rate values of abnormal points provides a guarantee for subsequently generating graceful music.
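The modification described above amounts to clamping each value into the threshold interval; a short sketch, assuming the interval [110, 160]:

```java
public final class HeartRateSanitizer {

    private static final int MIN_NORMAL = 110; // lower bound of the heart rate threshold interval
    private static final int MAX_NORMAL = 160; // upper bound of the heart rate threshold interval

    /** Returns a copy of the raw values with every point clamped into the [110, 160] interval. */
    public static int[] sanitize(int[] rawHeartRates) {
        int[] result = new int[rawHeartRates.length];
        for (int i = 0; i < rawHeartRates.length; i++) {
            result[i] = Math.max(MIN_NORMAL, Math.min(MAX_NORMAL, rawHeartRates[i]));
        }
        return result;
    }
}
```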

And S204, after traversing is finished, extracting a preset number of heart rate value points from the original heart rate data according to a preset time interval.

Illustratively, a preset number of heart rate value points may be extracted from the raw heart rate data by taking one point every 5 seconds.

It should be noted that, if the number of the extracted heart rate value points is smaller than the preset number, the preset time interval is adjusted, and the heart rate value points are extracted again according to the adjusted time interval, so as to ensure that enough heart rate value points are extracted. For example, if the number of heart rate value points extracted every 5 seconds is less than the preset number, the time interval is adjusted to 3 seconds, and the heart rate value points are extracted again every 3 seconds.
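A sketch of the extraction with interval adjustment, assuming four heart rate value points per second as in fig. 1b; the retry goes from 5-second to 3-second spacing as in the example above, and the class and method names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

public final class HeartRateSampler {

    private static final int POINTS_PER_SECOND = 4; // four heart rate value points per second (fig. 1b)

    /** Samples one point per interval; if too few points result, retries with the shorter interval. */
    public static List<Integer> sample(int[] heartRates, int presetCount,
                                       int intervalSeconds, int reducedIntervalSeconds) {
        List<Integer> points = sampleEvery(heartRates, intervalSeconds);   // e.g. one point every 5 seconds
        if (points.size() < presetCount) {
            points = sampleEvery(heartRates, reducedIntervalSeconds);      // e.g. retry every 3 seconds
        }
        return points;
    }

    private static List<Integer> sampleEvery(int[] heartRates, int intervalSeconds) {
        List<Integer> points = new ArrayList<>();
        for (int i = 0; i < heartRates.length; i += intervalSeconds * POINTS_PER_SECOND) {
            points.add(heartRates[i]);
        }
        return points;
    }
}
```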

S205, sequentially traversing the heart rate value points, and determining the musical note corresponding to each heart rate value point according to the preset mapping relation between heart rate values and musical notes.

S206, selecting target notes from the determined notes according to a preset melody generation rule, and forming note melodies according to the target notes.

Wherein the melody generation rule includes the number of segments of the melody, the number of notes included in each segment, the note duration of the first note in each segment, and the rule that, when consecutive identical notes are encountered, the notes after the first are treated as sustained notes. The process of selecting target notes from the determined notes can be referred to in the above embodiments and is not repeated here.

In the embodiment of the present application, in order to ensure the effect of the generated note melody, before the note melody is composed from the target notes, the method further includes: judging whether the interval between two adjacent target notes exceeds a preset threshold value, and performing octave-up or octave-down processing on the target notes according to the judgment result. Optionally, the preset threshold value is 2, and the interval between two target notes refers to the number of notes lying between them. For example, for two adjacent target notes C and G, the three notes D, E and F lie between them, so the interval exceeds the preset threshold value; the target note C is then raised by an octave and the target note G is lowered by an octave. In another alternative embodiment, F is taken as the center: a target note lower than F is raised by an octave, and a target note higher than F is lowered by an octave. For example, the target note C is to the left of (lower than) F and is raised by an octave, while the target note G is to the right of (higher than) F and is lowered by an octave.
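The F-centered variant above can be sketched with MIDI note numbers as follows; treating an octave as 12 semitones and the reference F as a fixed MIDI number are assumptions made only for illustration.

```java
public final class OctaveAdjuster {

    private static final int MAX_NOTES_BETWEEN = 2; // preset threshold on the number of notes between two target notes
    private static final int OCTAVE = 12;           // an octave is 12 semitones in MIDI numbering

    /**
     * If two adjacent target notes are too far apart, raise the note below the reference F by an
     * octave and lower the note above it by an octave; otherwise leave both unchanged.
     */
    public static int[] adjust(int firstNote, int secondNote, int referenceF, int notesBetween) {
        if (notesBetween <= MAX_NOTES_BETWEEN) {
            return new int[]{firstNote, secondNote};   // interval is acceptable, no change
        }
        int adjustedFirst = firstNote < referenceF ? firstNote + OCTAVE
                : (firstNote > referenceF ? firstNote - OCTAVE : firstNote);
        int adjustedSecond = secondNote < referenceF ? secondNote + OCTAVE
                : (secondNote > referenceF ? secondNote - OCTAVE : secondNote);
        return new int[]{adjustedFirst, adjustedSecond};
    }
}
```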

And S207, generating the accompaniment according to the preset harmony rule.

And S208, generating a staff according to the note melody and the accompaniment, and generating playable music according to the staff and the audio parameters.

In the embodiment of the application, adjusting the heart rate values of abnormal heart rate value points provides a guarantee for subsequently generating graceful music; by adjusting the time interval at which heart rate value points are extracted, it can be ensured that enough points are obtained; and by performing octave-up or octave-down processing on the target notes, the pitch range of the note melody can be optimized, further ensuring the beauty of the generated music.

Fig. 3 is a logic flow diagram of a music generating method according to a third embodiment of the present application, and this embodiment is optimized based on the above embodiments, referring to fig. 3, where the method includes:

first, 20-minute heart rate data points (i.e., heart rate value points) are acquired, and then, the music tonality is set, for example, the music tonality of the music to be generated is determined according to the average heart rate value of the 20-minute heart rate data points and the preset mapping relationship between the heart rate range and the music tonality, and a specific process can be referred to the above embodiment. It should be noted that, while determining the music tonality, the playing speed and the playing intensity of the music to be generated can also be determined according to the 20-minute heart rate data points, and the specific process can be referred to the above embodiments.

Further, according to the point-taking frequency (i.e. the sampling time interval), sampling points are taken from the 20-minute heart rate data points and placed in a heart rate sampling queue. The length of the sampling queue is then calculated, that is, the number of heart rate data points in the queue is determined, and it is judged whether the number of collected heart rate data points is greater than the preset number. If not, the point-taking frequency is increased (i.e. the sampling time interval is shortened), and heart rate data points are re-collected and placed in the sampling queue; if yes, the heart rate data points in the sampling queue are traversed to generate the note melody, as described in the above embodiments.

Furthermore, the accompaniment is generated according to the preset harmony rule, and the staff is then generated according to the note melody and the accompaniment.

Finally, the staff, together with audio parameters such as the music tonality, is transcoded into a music file in .mid format, and the .mid file is then converted into a playable .mp3 music file using a transcoding tool.

In the embodiment of the application, some heart rate value points are extracted, the musical note corresponding to each heart rate value point is determined according to the preset mapping relation between heart rate values and musical notes, some of these notes are selected to form the note melody, and the staff is then generated in combination with the accompaniment; the final music is generated according to the staff and the audio parameters. In this way, the purpose of converting the fetal heart rate into music and a staff is achieved, and the rhythm of the fetus's emotional changes can be perceived from the music.

Fig. 4 is a schematic structural diagram of a music generating apparatus according to a fourth embodiment of the present application, where this embodiment is applicable to a case where fetal heart rate data is converted into music, and referring to fig. 4, the apparatus includes:

the data acquisition and parameter determination module 401 is configured to acquire original heart rate data of a fetus, and determine an audio parameter of music to be generated according to the original heart rate data;

a data extraction module 402, configured to extract a preset number of heart rate value points from the original heart rate data;

the traversal module 403 is configured to sequentially traverse the heart rate value points, and determine the note corresponding to each heart rate value point according to a preset mapping relationship between heart rate values and notes;

a melody generating module 404, configured to select a target note from the determined notes according to a preset melody generation rule, and form a note melody according to the target note;

an accompaniment generating module 405, configured to generate an accompaniment according to a preset harmony rule;

the staff and music generating module 406 is configured to generate staff according to the note melody and the accompaniment, and generate playable music according to the staff in combination with the audio parameters.

On the basis of the above embodiment, optionally, the audio parameters at least include performance speed, performance intensity and music tonality;

correspondingly, the data acquisition and parameter determination module comprises:

the mean value and baseline determining unit is used for determining a mean heart rate value and a fetal heart rate baseline according to the original heart rate data;

the first parameter determining unit is used for determining the playing speed of the music to be generated according to the average heart rate value and a preset playing speed calculation formula;

the second parameter determining unit is used for determining the music tonality of the music to be generated according to the average heart rate value and the mapping relation between the preset heart rate range and the music tonality;

and the third parameter determining unit is used for determining the playing intensity of the music to be generated according to the fetal heart rate baseline and the preset mapping relation between the heart rate range and the playing intensity.

On the basis of the foregoing embodiment, optionally, the data extraction module includes:

the traversal and judgment unit is used for sequentially traversing the heart rate value points included in the original heart rate data and judging whether each heart rate value point exceeds a heart rate threshold interval in the traversal process;

the heart rate value modification unit is used for modifying the heart rate value corresponding to a certain heart rate value point if the heart rate value point exceeds a heart rate threshold interval;

and the extraction unit is used for extracting a preset number of heart rate value points from the original heart rate data according to a preset time interval after traversing is finished.

On the basis of the above embodiment, optionally, the apparatus further includes:

and the time interval adjusting module is used for adjusting the preset time interval if the number of the extracted heart rate value points is less than the preset number, and re-extracting the heart rate value points according to the adjusted time interval.

On the basis of the above embodiment, optionally, the melody generation rule includes the number of segments of the melody, the number of notes included in each segment, the note duration of the first note in each segment, and the rule that, when consecutive identical notes are encountered, the notes after the first are treated as sustained notes.

On the basis of the above embodiment, optionally, the method further includes:

and the note processing module is used for, before the note melody is composed from the target notes, judging whether the interval between two adjacent notes exceeds a preset threshold value, and performing octave-up or octave-down processing on the notes according to the judgment result.

On the basis of the above embodiment, optionally, the staff and music generating module is specifically configured to:

and converting the staff and audio parameters into playable music through a transcoding tool.

The music generation device provided by the embodiment of the application can execute the music generation method provided by any embodiment of the application, and has corresponding functional modules and beneficial effects of the execution method.

Fig. 5 is a schematic structural diagram of an electronic device provided in a fifth embodiment of the present application. As shown in fig. 5, the electronic device provided in the embodiment of the present application includes: one or more processors 502 and memory 501; the processor 502 in the electronic device may be one or more, and one processor 502 is taken as an example in fig. 5; the memory 501 is used to store one or more programs; the one or more programs are executed by the one or more processors 502, causing the one or more processors 502 to implement a music generation method as in any one of the embodiments of the present application.

The electronic device may further include: an input device 503 and an output device 504.

The processor 502, the memory 501, the input device 503 and the output device 504 in the electronic apparatus may be connected by a bus or other means, and fig. 5 illustrates the connection by the bus as an example.

The storage device 501 in the electronic device is used as a computer-readable storage medium for storing one or more programs, which may be software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the music generation method provided in the embodiments of the present application. The processor 502 executes various functional applications and data processing of the electronic device by executing software programs, instructions and modules stored in the storage 501, that is, implements the music generation method in the above method embodiments.

The storage device 501 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory 501 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 501 may further include memory located remotely from the processor 502, which may be connected to devices through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The input device 503 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus. The output device 504 may include a display device such as a display screen.

And, when the one or more programs included in the above-described electronic device are executed by the one or more processors 502, the programs perform the following operations:

acquiring original heart rate data of a fetus, and determining audio parameters of music to be generated according to the original heart rate data;

extracting a preset number of heart rate value points from the original heart rate data;

sequentially traversing the heart rate value points, and determining the musical note corresponding to each heart rate value point according to the preset mapping relation between heart rate values and musical notes;

selecting target notes from the determined notes according to a preset melody generation rule, and forming note melodies according to the target notes;

generating an accompaniment according to a preset harmony rule;

and generating a staff according to the note melody and the accompaniment, and generating playable music according to the staff and the audio parameters.

Of course, it will be understood by those skilled in the art that when one or more programs included in the electronic device are executed by the one or more processors 502, the programs may also perform related operations in the music generation method provided in any of the embodiments of the present application.

One embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program performs a music generation method comprising:

acquiring original heart rate data of a fetus, and determining audio parameters of music to be generated according to the original heart rate data;

extracting a preset number of heart rate value points from the original heart rate data;

sequentially traversing the heart rate value points, and determining the musical note corresponding to each heart rate value point according to the preset mapping relation between heart rate values and musical notes;

selecting target notes from the determined notes according to a preset melody generation rule, and forming note melodies according to the target notes;

generating an accompaniment according to a preset harmony rule;

and generating a staff according to the note melody and the accompaniment, and generating playable music according to the staff and the audio parameters.

Optionally, the program, when executed by a processor, may be further configured to perform the method provided in any of the embodiments of the present application.

The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM), a flash Memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. A computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take a variety of forms, including, but not limited to: an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, Radio Frequency (RF), etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including, for example, a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).

It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present application and the technical principles employed. It will be understood by those skilled in the art that the present application is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the application. Therefore, although the present application has been described in more detail with reference to the above embodiments, the present application is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the appended claims.
