Music score processing method and device and computer equipment

Document No.: 1939844 · Publication date: 2021-12-07

Reading note: This technique, "Music score processing method and device and computer equipment" (曲谱处理方法、装置和计算机设备), was designed and created by 丁小玉 on 2021-09-09. Its main content is as follows. The application relates to a music score processing method and apparatus and a computer device. The method comprises the following steps: acquiring a music score to be processed, the music score comprising at least one music unit, where a music unit is at least one of a note unit, a measure unit and a vocal part unit; detecting a first trigger operation performed on the music score to be processed; identifying the target music unit at the position corresponding to the trigger position of the first trigger operation; and acquiring the coordinate information and duration information corresponding to the target music unit in the music score to be processed, and playing target audio data according to that coordinate and duration information, the target audio data being the audio data, among the audio data of the music score, whose coordinates and duration correspond to the target music unit. The method improves the flexibility of music score playback.

1. A method of processing a score, the method comprising:

acquiring a music score to be processed, wherein the music score to be processed comprises at least one music unit, and the music unit comprises at least one of a note unit, a measure unit and a vocal part unit;

detecting a first trigger operation based on the music score to be processed;

identifying a target music unit at a corresponding position according to the trigger position of the first trigger operation;

and acquiring coordinate information and duration information corresponding to the target music unit in the music score to be processed, and playing target audio data according to the coordinate information and duration information, wherein the target audio data is the audio data, among the audio data of the music score to be processed, whose coordinates and duration correspond to the target music unit.

2. The method of claim 1, wherein the target audio data comprises at least one of user historical performance audio data, real person performance audio data, standard performance audio data, and custom performance audio data.

3. The method of claim 1, further comprising:

receiving input current performance exercise data while playing the target audio data;

acquiring music attribute information of a music symbol associated with the target music unit;

analyzing the current performance exercise data according to the music attribute information of the music symbol associated with the target music unit.

4. The method of claim 3, wherein the music symbol comprises at least one of a note, an accidental, a key signature, a time signature, a tie, a slur, a glissando, a tuplet, a chord, a dynamics marking, a pianissimo marking, a piano marking, a mezzo piano marking, a mezzo forte marking, a forte marking, a fortissimo marking, a fortississimo marking, a sforzando marking, a crescendo marking, a diminuendo marking, a forte-piano marking, a staccato marking, a staccatissimo marking, a tenuto marking, a fermata, an accent, a tremolo marking, a mordent, a turn, an appoggiatura, an acciaccatura, an ottava marking, a quindicesima marking, a repeat sign, a coda sign, and a pedal mark.

5. The method of claim 3, further comprising:

acquiring information of a marked performance error-prone area in the music score to be processed;

generating prompt information before the current performance exercise reaches the performance error-prone area.

6. The method of claim 1, further comprising:

and acquiring the beat information of the associated beat of the target music unit in the music score to be processed, and adjusting the beat striking speed of a metronome according to the beat information.

7. The method of any one of claims 1 to 6, further comprising:

detecting a second trigger operation based on the music score to be processed;

identifying a target music symbol at a corresponding position according to the trigger position of the second trigger operation;

and calling target music attribute information of the target music symbol.

8. The method of claim 7, further comprising:

displaying the target music attribute information in the music score to be processed; and/or

and receiving a modification instruction of the target music attribute information, and modifying the target music attribute information according to the modification instruction.

9. A score processing apparatus, characterized in that the apparatus comprises:

the music score acquisition module is used for acquiring a music score to be processed, wherein the music score to be processed comprises at least one music unit, and the music unit comprises at least one of a note unit, a measure unit and a vocal part unit;

the trigger receiving module is used for detecting a first trigger operation based on the music score to be processed;

the unit identification module is used for identifying a target music unit at a corresponding position according to the trigger position of the first trigger operation;

and the unit playing module is used for acquiring the coordinate information and duration information corresponding to the target music unit in the music score to be processed, and playing target audio data according to the coordinate information and duration information, wherein the target audio data is the audio data, among the audio data of the music score to be processed, whose coordinates and duration correspond to the target music unit.

10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 8 are implemented when the computer program is executed by the processor.

Technical Field

The present application relates to the field of computer software technologies, and in particular, to a method and an apparatus for processing a music score, and a computer device.

Background

With the development of computer software technology, computer-aided music learning software has emerged. Computer-assisted music learning software may be used to assist users in music creation, practice, and the like.

Conventional computer-aided music learning software generally describes music data based on MIDI (Musical Instrument Digital Interface); however, a music score in MIDI-based software only carries standard, fixed performance information. Playback therefore means playing the whole standardized piece: the music score in the software is presented to the user for practice as one fixed whole, and even if it can be broken into phrases, the split must be fixed in advance, so the user cannot flexibly select the content to practice from the music score according to his or her own needs.

Disclosure of Invention

In view of the foregoing, it is desirable to provide a music score processing method, apparatus and computer device capable of improving flexibility of music score playing.

A method of processing a music score, the method comprising:

acquiring a music score to be processed, wherein the music score to be processed comprises at least one music unit, and the music unit comprises at least one of a note unit, a measure unit and a vocal part unit;

detecting a first trigger operation based on a music score to be processed;

identifying a target music unit at a corresponding position according to the trigger position of the first trigger operation;

and acquiring coordinate information and duration information corresponding to the target music unit in the music score to be processed, and playing target audio data according to the coordinate information and duration information, wherein the target audio data is the audio data, among the audio data of the music score to be processed, whose coordinates and duration correspond to the target music unit.

In one embodiment, the target audio data includes at least one of user historical performance audio data, real person performance audio data, standard performance audio data, and custom performance audio data.

In one embodiment, the method further comprises: receiving input current performance exercise data while playing target audio data; acquiring music attribute information of a music symbol associated with a target music unit; the current performance exercise data is analyzed according to music attribute information of the music symbols associated with the target music units.

In one embodiment, the music symbol includes at least one of a note, an accidental, a key signature, a time signature, a tie, a slur, a glissando, a tuplet, a chord, a dynamics marking, pianissimo, piano, mezzo piano, mezzo forte, forte, fortissimo, fortississimo, sforzando, crescendo, diminuendo, forte-piano, staccato, staccatissimo, tenuto, fermata, accent, tremolo, mordent, turn, appoggiatura, acciaccatura, ottava, quindicesima, repeat and coda signs, and pedal marks.

In one embodiment, the method further comprises: acquiring information of a marked performance error-prone area in the music score to be processed; and generating prompt information before the current performance exercise reaches the performance error-prone area.

In one embodiment, the method further comprises: acquiring the beat information of associated beats of the target music unit in the music score to be processed, and adjusting the beat striking speed of the metronome according to the beat information.

In one embodiment, the method further comprises: detecting a second trigger operation based on the music score to be processed; identifying a target music symbol at a corresponding position according to the trigger position of the second trigger operation; and calling target music attribute information of the target music symbol.

In one embodiment, the method further comprises: and displaying the target music attribute information in the music score to be processed.

In one embodiment, the method further comprises: and receiving a modification instruction of the target music attribute information, and modifying the target music attribute information according to the modification instruction.

A score processing apparatus, the apparatus comprising:

the music score acquisition module is used for acquiring a music score to be processed, wherein the music score to be processed comprises at least one music unit, and the music unit comprises at least one of a note unit, a measure unit and a vocal part unit;

the trigger receiving module is used for detecting a first trigger operation based on the music score to be processed;

the unit identification module is used for identifying the target music unit at the corresponding position according to the trigger position of the first trigger operation;

and the unit playing module is used for acquiring the coordinate information and duration information corresponding to the target music unit in the music score to be processed, and playing target audio data according to the coordinate information and duration information, wherein the target audio data is the audio data, among the audio data of the music score to be processed, whose coordinates and duration correspond to the target music unit.

A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of processing a music score when executing the computer program.

With the music score processing method, apparatus and computer device described above, the music score is split into music units at the granularity of vocal parts, measures or notes; the music unit at the position of a detected trigger operation is identified, and the audio data whose coordinates and duration match that unit is determined from the unit's corresponding coordinate information and duration information in the music score and played. In this way the user can flexibly select any note, measure or vocal part to play back as needed, which improves the flexibility of music score playback.

Drawings

FIG. 1 is a diagram of an application environment of a music score processing method according to an embodiment;

FIG. 2 is a schematic flow chart diagram of a method for processing a music score according to one embodiment;

FIG. 3 is a block diagram showing the structure of a music score processing apparatus according to an embodiment;

FIG. 4 is a diagram illustrating an internal structure of a computer device according to an embodiment.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.

The music score processing method provided by the application can be applied to the application environment shown in fig. 1. Specifically, the terminal 102 acquires a music score to be processed, the music score to be processed including at least one music unit, the music unit including at least one of a note unit, a measure unit, and a vocal unit; receiving a first trigger operation based on a music score to be processed; identifying a target music unit at a corresponding position according to the trigger position of the first trigger operation; and acquiring unit position information corresponding to the target music unit in the music score to be processed, and playing target audio data according to the unit position information, wherein the target audio data is audio data corresponding to the target music unit.

The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, electronic organ devices, electronic piano devices, smart watches, and portable wearable devices.

In one embodiment, as shown in fig. 2, a music score processing method is provided. The method is described here as applied to the terminal in fig. 1, and includes the following steps:

step S202: and acquiring a music score to be processed, wherein the music score to be processed comprises at least one music unit, and the music unit comprises at least one of a note unit, a measure unit and a sound unit.

The music score to be processed refers to a digital music score to be operated and selected by a user, and the music score can be a staff, a numbered musical notation and other music scores containing music information.

One or more music units may be included in the music score to be processed. A music unit may be a note unit, a bar unit or a vocal part unit. A note unit may be made up of at least one note, where the note may be associated with or nested in any vocal part or bar. A bar unit may be made up of at least one bar, where the bar may be associated with or nested in any vocal part. A vocal part unit may be made up of at least one vocal part, which may be a left-hand part characterizing left-hand playing, a right-hand part characterizing right-hand playing, or any other part of a polyphonic piece.
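
As a rough illustration, assuming each unit is stored as a simple record carrying its coordinate and duration information (names and fields below are illustrative, not prescribed by the method), the nesting of parts, bars and notes could be sketched in Python as:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NoteUnit:
    pitch: str              # e.g. "C4"
    start_beat: float       # duration information: where the note begins on the time axis
    length_beats: float     # duration information: how long the note lasts
    x: float = 0.0          # coordinate information: position on the part's track
    y: float = 0.0

@dataclass
class BarUnit:
    index: int
    notes: List[NoteUnit] = field(default_factory=list)

@dataclass
class VocalPartUnit:
    name: str               # e.g. "left hand" or "right hand"
    bars: List[BarUnit] = field(default_factory=list)

@dataclass
class ScoreToProcess:
    parts: List[VocalPartUnit] = field(default_factory=list)
```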

Specifically, the terminal device may obtain a music score specified by the user as a music score to be processed according to the selection of the user.

In one embodiment, different music units may be presented in the display screen by different forms of representation, e.g., different vocal units are presented in different colors.

In one embodiment, the music score to be processed may have a plurality of different layouts, or a layout adapted to the terminal device may be generated dynamically for different terminal devices, so as to fit different devices and different display states such as landscape and portrait orientation. The music score to be processed may also be zoomed in or out, and the position information of each music unit in the zoomed score is converted accordingly, so that the display adapts to the screen.
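
A minimal sketch of the coordinate conversion on zooming, assuming each unit stores an axis-aligned bounding box in screen pixels (an illustrative assumption, not part of the original description):

```python
def rescale_layout(units, old_zoom: float, new_zoom: float) -> None:
    """Convert each music unit's position when the score zoom level changes.

    `units` is assumed to be an iterable of objects carrying x, y, width and
    height attributes expressed in screen pixels at the old zoom level.
    """
    factor = new_zoom / old_zoom
    for unit in units:
        unit.x *= factor
        unit.y *= factor
        unit.width *= factor
        unit.height *= factor
```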

Step S204: a first instruction triggered based on a to-be-processed music score is received.

The first trigger operation is used to trigger playback. It may be a selection the user performs on the music score to be processed through the terminal display interface, such as clicking, sliding or dragging, or a selection expressed by voice or the like. The trigger operation can be realized with a mouse, a keyboard, a touch screen, a voice input plug-in, and the like.

Specifically, the user may perform a trigger operation based on the terminal screen, selecting the music unit to be played or practiced through operations such as clicking and dragging, or may perform a trigger operation based on voice and specify the position of the music unit to be played. The user can select one or more music units as needed. For example, the user may trigger selection of bars 5-8 of the left-hand part, or of a chord composed of three notes in bar 1 of the right-hand part, and so on. The terminal device detects that the user has performed this selection of a music unit and takes the operation as the first trigger operation.

Step S206: and identifying the target music unit at the corresponding position according to the trigger position of the first trigger operation.

Specifically, for the screen-based trigger operation, the trigger position on the screen can be scaled according to the size ratio between the screen and the music score, so as to calculate the position correspondence between the screen trigger position and each music unit in the score; the music unit at the trigger position is then identified and taken as the target music unit. For the voice-based trigger operation, the voice input by the user can be parsed to obtain the position information it contains, and the target music unit is located in the music score according to that position information.
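
A minimal sketch of the screen-to-score mapping, assuming a single uniform scale factor and per-unit bounding boxes (both illustrative assumptions):

```python
def find_target_unit(tap_x: float, tap_y: float, scale: float, units):
    """Map a screen trigger position to the music unit whose box contains it.

    `scale` is the ratio between screen pixels and score-layout coordinates;
    `units` is an iterable of objects with x, y, width and height attributes.
    Returns the first matching unit, or None if the tap lands on empty space.
    """
    score_x, score_y = tap_x / scale, tap_y / scale
    for unit in units:
        if (unit.x <= score_x <= unit.x + unit.width
                and unit.y <= score_y <= unit.y + unit.height):
            return unit
    return None
```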

Step S208: and acquiring corresponding coordinate information and time value information of the target music unit in the music score to be processed, and playing target audio data according to the coordinate information and the time value information, wherein the target audio data is audio data corresponding to the target music unit.

The coordinate information is the position coordinate of the target music unit on the vocal-part track of the music score. The duration information is the time point or time range of the target music unit on the time track of the music score. Specifically, the music score may be composed of note units, bar units and vocal part units arranged along a time axis, with the vocal parts running parallel to that axis. For example, each bar unit has a corresponding time period in the score (represented by its duration information) and a corresponding vocal part (represented by its coordinate information).

Specifically, when the terminal device recognizes that the user has triggered and selected a target music unit, it acquires the coordinate information and duration information configured for that unit in the music score to be processed, determines, from the audio corresponding to the music score, the audio data at the position matching the coordinate information and duration information, and plays the determined audio data as the target audio data, starting from the time point at which the target audio data begins. During playback, a cursor may track the current playing position along the music score in real time, that is, the cursor is displayed at the position currently being played.
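
A hedged sketch of the playback step, assuming the score's audio is available as sample buffers keyed by vocal part and that the unit's duration information is expressed in seconds; the audio output callable is left abstract (all names are illustrative assumptions):

```python
def select_target_audio(samples, sample_rate: int,
                        start_seconds: float, end_seconds: float):
    """Cut out the span of the score audio covering the unit's duration range."""
    start = int(start_seconds * sample_rate)
    end = int(end_seconds * sample_rate)
    return samples[start:end]

def play_target_unit(unit, part_audio: dict, sample_rate: int, play_fn):
    """Pick the audio track of the unit's vocal part (its coordinate information)
    and play only the samples covering its duration information.

    `part_audio` maps a part name to its sample buffer, and `play_fn` is whatever
    audio-output callable the application supplies; both are assumptions here.
    """
    samples = part_audio[unit.part_name]
    clip = select_target_audio(samples, sample_rate,
                               unit.start_seconds, unit.end_seconds)
    play_fn(clip, sample_rate)
```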

With the music score processing method above, the music score is split into music units at the granularity of vocal parts, bars or notes; the music unit at the position of a detected trigger operation is identified, and the audio data whose coordinates and duration match that unit is determined from the unit's corresponding coordinate information and duration information in the score and played. In this way the user can flexibly select any one or more notes, bars or vocal parts to play back as needed, which improves the flexibility of music score playback.

In one embodiment, the target audio data includes at least one of user historical performance audio data, real person performance audio data, standard performance audio data, and custom performance audio data.

In this embodiment, the target audio data may be the user's historical performance audio data saved from previous practice of the music unit (which may include correct and/or incorrect performance data from that history), audio data of a real-person performance recorded in advance, standard MIDI audio data, or custom audio data, and the like. By supporting the matching of these different kinds of audio data, the user can flexibly select, compare and learn from the played audio, meeting the need for personalized practice.

In one embodiment, while the target audio data is played, the keys corresponding to the currently playing audio are marked in real time on a virtual piano keyboard in the display interface. Illustratively, if the user chooses to play both the historical correct performance data and the historical incorrect performance data from the user's historical performance audio, the keys corresponding to each can be marked on the virtual keyboard in real time, so that the user can compare and learn.

In one embodiment, the method further comprises: receiving input current performance exercise data while playing target audio data; acquiring music attribute information of a music symbol associated with a target music unit; the current performance exercise data is analyzed according to music attribute information of the music symbols associated with the target music units.

In one embodiment, music symbols include, but are not limited to, at least one of: notes, accidentals (temporary sharps and flats), key signatures, time signatures, ties, slurs, glissandi, tuplets, chords, dynamics markings, pianissimo, piano, mezzo piano, mezzo forte, forte, fortissimo, fortississimo, sforzando, crescendo, diminuendo, forte-piano, staccato, staccatissimo, tenuto, fermata, accent, tremolo, mordent, turn, appoggiatura, acciaccatura, ottava, quindicesima, repeat and coda signs, and pedal marks.

The music attribute information may include a symbol name, symbol meaning, symbol remark information, information on who added the symbol, a symbol attribute value, coordinate information of the symbol in the music score, duration information of the symbol in the music score, fingering information, scope information, association information with a music unit, association information with other music symbols, and the like.
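
Purely as an illustration of the shape of this information, such a record might be sketched as follows (field names are assumptions, not fixed by the method):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MusicSymbolAttributes:
    name: str                                       # symbol name, e.g. "crescendo"
    meaning: str                                    # symbol meaning shown to the user
    remark: str = ""                                # symbol remark information
    added_by: str = ""                              # who added the symbol
    value: Optional[float] = None                   # symbol attribute value, e.g. a tempo
    coordinates: Tuple[float, float] = (0.0, 0.0)   # coordinate information in the score
    duration: Optional[Tuple[float, float]] = None  # duration information in the score
    fingering: str = ""                             # fingering information, if any
    scope: Optional[Tuple[float, float]] = None     # range the symbol governs
    linked_unit_id: Optional[str] = None            # associated music unit
    linked_symbol_ids: Tuple[str, ...] = ()         # associated other symbols
```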

In this embodiment, while the target audio data is played, the performance data input by the user through an input device is received synchronously as the current performance exercise data. The current performance exercise data can be analyzed in real time against the music attribute information of each music symbol associated with the target music unit, and analysis feedback information can be generated in real time, so that the user can promptly correct deficiencies in the current performance according to the real-time feedback.

Illustratively, after analyzing the user's current performance exercise data, proficiency, accuracy, etc. of the user's performance may also be recorded to generate analysis feedback information so that the user may be able to autonomously select unfamiliar, inaccurate musical units for more exercises based on the analysis feedback information.

In one embodiment, the music symbols associated with the target music unit may include, in addition to the notes themselves, other symbols capable of characterizing music attributes, such as time signature symbols, dynamics symbols, sustain symbols or slow-down symbols. Specifically, the terminal device may acquire the music attribute information of each music symbol associated with the target music unit separately, for example the note attribute values (pitch, duration, and so on) of the notes in the target music unit, the beat attribute value (beats per bar) of the time signature bound to the target music unit, and the scope information of the dynamics, sustain or slow-down symbols, and analyze the user's current performance exercise data by combining the music information represented by these music symbols.

In this embodiment, because the music score contains music information beyond pitch and duration, feedback beyond pitch and tempo can be given. Besides analyzing whether the pitch and duration of each note in the current performance exercise data match the target music unit, it is possible to analyze whether the dynamics, sustain, speed and so on of the current performance exercise data match the music attributes represented by other music symbols such as dynamics symbols, sustain symbols and slow-down symbols, which improves the comprehensiveness and accuracy of the analysis and yields a more accurate evaluation.
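
A minimal sketch of such an analysis, assuming both the target unit and the practice input are reduced to per-note dictionaries with pitch, onset, duration and velocity, and that the dynamics marking is approximated by an expected velocity range (all assumptions made for illustration):

```python
def analyse_practice(target_notes, played_notes,
                     velocity_range=(40, 90), tolerance=0.1):
    """Compare current performance exercise data with the target unit note by note.

    Each note is a dict with keys "pitch", "onset", "duration" and "velocity".
    `velocity_range` stands in for the loudness implied by the dynamics symbol,
    and `tolerance` (in beats) stands in for the timing/duration check.
    Returns per-note issues plus a simple accuracy figure.
    """
    issues, correct = [], 0
    for idx, (want, got) in enumerate(zip(target_notes, played_notes)):
        note_issues = []
        if want["pitch"] != got["pitch"]:
            note_issues.append("wrong pitch")
        if abs(want["onset"] - got["onset"]) > tolerance:
            note_issues.append("timing off")
        if abs(want["duration"] - got["duration"]) > tolerance:
            note_issues.append("duration off")
        low, high = velocity_range
        if not low <= got["velocity"] <= high:
            note_issues.append("dynamics outside the marked range")
        if not note_issues:
            correct += 1
        issues.append((idx, note_issues))
    accuracy = correct / len(target_notes) if target_notes else 1.0
    missing = max(len(target_notes) - len(played_notes), 0)
    return {"accuracy": accuracy, "issues": issues, "missing_notes": missing}
```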

In one embodiment, the method further comprises: acquiring information of a marked performance error-prone area in the music score to be processed; and generating prompt information before the current performance exercise reaches the performance error-prone area.

In this embodiment, by analyzing errors or deficiencies in the user's historical performance exercise data, the positions of performance error-prone areas in the music score to be processed are determined and recorded. When the current performance exercise approaches a recorded performance error-prone area, a voice, image or text prompt may be generated to alert the user.
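
One possible realization, assuming the error-prone areas are recorded as beat ranges and that a fixed look-ahead window decides when to prompt (both assumptions):

```python
def pending_prompts(current_beat: float, error_regions, lookahead_beats: float = 2.0):
    """Return prompt messages for marked error-prone areas the practice is about to reach.

    `error_regions` is assumed to be a list of (start_beat, end_beat, note) tuples
    recorded from the analysis of the user's historical performance data.
    """
    prompts = []
    for start, _end, note in error_regions:
        if current_beat < start <= current_beat + lookahead_beats:
            prompts.append(f"Upcoming error-prone passage at beat {start:g}: {note}")
    return prompts
```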

In one embodiment, the method further comprises: acquiring the beat information of the associated beat of the target music unit in the music score to be processed, and adjusting the playing speed of the metronome according to the beat information.

In this embodiment, starting a metronome during playback or performance is also supported. The terminal device may automatically adjust the beat striking speed of the metronome according to the beat information associated with the target music unit in the music score, so that the beat striking speed matches the playing speed of the target audio data being played. The associated beat information may differ depending on which music unit the user selects, and the beat striking speed is adjusted adaptively according to that unit's beat information.
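
A minimal sketch of the speed adjustment, assuming the associated beat information reduces to a beats-per-minute value and the tick output is an arbitrary callback (both illustrative):

```python
import threading

class Metronome:
    """Toy metronome whose beat striking speed follows the score's beat information."""

    def __init__(self, tick_fn):
        self._tick_fn = tick_fn   # callable invoked on every beat (e.g. plays a click)
        self._interval = 0.5      # seconds between beats; 0.5 s corresponds to 120 BPM
        self._timer = None

    def set_tempo(self, beats_per_minute: float) -> None:
        """Adjust the beat striking speed from the associated beat information."""
        self._interval = 60.0 / beats_per_minute

    def start(self) -> None:
        self._tick_fn()
        self._timer = threading.Timer(self._interval, self.start)
        self._timer.daemon = True
        self._timer.start()

    def stop(self) -> None:
        if self._timer is not None:
            self._timer.cancel()
```

For example, calling set_tempo(90) changes the interval between beats to 60/90 ≈ 0.67 seconds, so the striking speed follows the tempo of the unit being played.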

In one embodiment, the method further comprises: receiving a second trigger operation based on the music score to be processed; identifying a target music symbol at a corresponding position according to the trigger position of the second trigger operation; and calling target music attribute information of the target music symbol.

Specifically, the second trigger operation may be used to trigger a music score editing instruction. The terminal device can identify the music symbol at the position corresponding to the trigger position of the user's second trigger operation as the target music symbol. The target music symbol may be any symbol capable of representing music attribute information; see the list of music symbols above, which is not repeated here. The corresponding music attribute information is then called up from the data table as the target music attribute information according to the identified target music symbol. The music attribute information includes, but is not limited to, at least one of a symbol name, symbol meaning, symbol remark information, information on who added the symbol, a symbol attribute value, coordinate information of the symbol in the music score, duration information of the symbol in the music score, fingering information, scope information, association information with a music unit, and association information with other music symbols.
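
A hedged sketch of this lookup, assuming the symbols carry bounding boxes and the data table is a mapping from symbol identifiers to attribute records (both assumptions):

```python
def retrieve_symbol_info(tap_x: float, tap_y: float, scale: float,
                         symbols, attribute_table: dict):
    """Identify the music symbol at the trigger position and call up its attributes.

    `symbols` is assumed to be an iterable of objects with id, x, y, width and
    height attributes; `attribute_table` plays the role of the data table and
    maps a symbol id to its attribute record (e.g. MusicSymbolAttributes above).
    """
    score_x, score_y = tap_x / scale, tap_y / scale
    for symbol in symbols:
        if (symbol.x <= score_x <= symbol.x + symbol.width
                and symbol.y <= score_y <= symbol.y + symbol.height):
            return attribute_table.get(symbol.id)
    return None
```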

In this embodiment, music symbols in the music score may be identified and selected, and a user may obtain music attribute information corresponding to each music symbol according to a requirement.

In one embodiment, the method further comprises: and displaying the target music attribute information in the music score to be processed.

By displaying the music attribute information of the selected music symbol, this embodiment makes it convenient for the user to understand and learn the knowledge related to the music symbols in the music score. For example, if the selected target music symbol is a note, the fingering information (music attribute information) of the note may be called up and displayed in the music score.

In one embodiment, the method further comprises: and receiving a modification instruction of the target music attribute information, and modifying the target music attribute information according to the modification instruction.

In this embodiment, the music attribute information can be edited and modified. For example, by modifying the coordinate information and duration information of a note in the music score, the note can be placed into any music unit; or, by modifying the association information between a dynamics symbol and a music unit, the scope of the dynamics symbol in the music score can be adjusted. In addition, the information on who added a music symbol, its remark information and so on can be modified, so that a custom music score can be edited.
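
A minimal sketch of applying such a modification instruction, treating the instruction as a mapping from field names to new values (an assumption about its form):

```python
def apply_modification(attributes, modification: dict):
    """Apply a modification instruction to a target music attribute record.

    `attributes` can be any object whose fields match the instruction keys,
    e.g. the MusicSymbolAttributes sketch above; unknown fields are rejected so
    a malformed instruction cannot silently add attributes.
    """
    for field_name, new_value in modification.items():
        if not hasattr(attributes, field_name):
            raise ValueError(f"unknown attribute field: {field_name}")
        setattr(attributes, field_name, new_value)
    return attributes
```

For example, apply_modification(note_attrs, {"coordinates": (12.0, 3.5)}) would move a note to a different position in the score, and changing "scope" would adjust the range a dynamics symbol governs, matching the editing cases described above.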

It should be understood that, although the steps in the flowchart of fig. 2 are shown in order as indicated by the arrows, the steps are not necessarily performed in order as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in fig. 2 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performance of the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternately with other steps or at least a portion of the sub-steps or stages of other steps.

In one embodiment, as shown in fig. 3, there is provided a music score processing apparatus including: a music score obtaining module 310, a trigger receiving module 320, a unit identifying module 330 and a unit playing module 340, wherein:

a music score obtaining module 310, configured to obtain a music score to be processed, where the music score to be processed includes at least one music unit, and the music unit includes at least one of a note unit, a measure unit, and a vocal part unit;

a trigger receiving module 320, configured to detect a first trigger operation performed based on a music score to be processed;

a unit identification module 330, configured to identify a target music unit at a corresponding position according to a trigger position of the first trigger operation;

the unit playing module 340 is configured to obtain the coordinate information and duration information corresponding to the target music unit in the music score to be processed, and play target audio data according to the coordinate information and duration information, where the target audio data is the audio data, among the audio data of the music score to be processed, whose coordinates and duration correspond to the target music unit.

In one embodiment, the unit playing module 340 is further configured to receive input current performance exercise data while playing the target audio data; acquiring music attribute information of a music symbol associated with a target music unit; the current performance exercise data is analyzed according to music attribute information of the music symbols associated with the target music units.

In one embodiment, the unit playing module 340 is further configured to obtain information of the performance error-prone areas marked in the music score to be processed, and to generate prompt information before the current performance exercise reaches a performance error-prone area.

In one embodiment, the unit playing module 340 is further configured to obtain the beat information of the associated beat of the target music unit in the music score to be processed, and adjust the beat striking speed of the metronome according to the beat information.

In one embodiment, the trigger receiving module 320 is further configured to detect a second trigger operation performed based on the music score to be processed; the unit identification module 330 is further configured to identify a target music symbol at a corresponding position according to the trigger position of the second trigger operation, and retrieve target music attribute information of the target music symbol.

In one embodiment, the unit identification module 330 is further configured to present the target music attribute information in the music score to be processed.

In one embodiment, the unit identifying module 330 is further configured to receive a modification instruction for the target music attribute information, and modify the target music attribute information according to the modification instruction.

For the specific definition of the score processing device, reference may be made to the above definition of the score processing method, which is not described herein again. The various modules in the music score processing device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.

In one embodiment, a computer device is provided. The computer device may be a terminal, which may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, electronic organ devices, electronic piano devices, smart watches, and portable wearable devices. Its internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a music score processing method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a track ball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse, and the like.

Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.

In one embodiment, a computer device is provided, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring a music score to be processed, wherein the music score to be processed comprises at least one music unit, and the music unit comprises at least one of a note unit, a measure unit and a vocal part unit; detecting a first trigger operation based on the music score to be processed; identifying a target music unit at a corresponding position according to the trigger position of the first trigger operation; and acquiring coordinate information and duration information corresponding to the target music unit in the music score to be processed, and playing target audio data according to the coordinate information and duration information, wherein the target audio data is the audio data, among the audio data of the music score to be processed, whose coordinates and duration correspond to the target music unit.

In one embodiment, the processor, when executing the computer program, further performs the steps of: receiving input current performance exercise data while playing target audio data; acquiring music attribute information of a music symbol associated with a target music unit; the current performance exercise data is analyzed according to music attribute information of the music symbols associated with the target music units.

In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring information of a marked performance error-prone area in the music score to be processed; and generating prompt information before the current performance exercise reaches the performance error-prone area.

In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring the beat information of associated beats of the target music unit in the music score to be processed, and adjusting the beat striking speed of the metronome according to the beat information.

In one embodiment, the processor, when executing the computer program, further performs the steps of: detecting a second trigger operation based on the music score to be processed; identifying a target music symbol at a corresponding position according to the trigger position of the second trigger operation; and calling target music attribute information of the target music symbol.

In one embodiment, the processor, when executing the computer program, further performs the steps of: and displaying the target music attribute information in the music score to be processed.

In one embodiment, the processor, when executing the computer program, further performs the steps of: and receiving a modification instruction of the target music attribute information, and modifying the target music attribute information according to the modification instruction.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).

It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.

The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.

The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
