Method and device for recording songs, electronic equipment and storage medium

Document No.: 909787    Publication date: 2021-02-26

Note: This technology, "Method and device for recording songs, electronic equipment and storage medium" (录制歌曲的方法、装置、电子设备及存储介质), was created by 郝舫, 张跃 and 白云飞 on 2019-08-22. Abstract: The embodiments of the present application provide a method and an apparatus for recording songs, an electronic device and a storage medium. The method includes: acquiring a language type selected by a user, together with accompaniment information and tune information of a song to be recorded; acquiring first voice information input by the user, the first voice information containing the lyric content of the song to be recorded; converting the first voice information into second voice information corresponding to the language type; and generating a corresponding song based on the second voice information and the accompaniment information and tune information of the song to be recorded. In the embodiments of the present application, the lyric information input by the user can be converted into different languages, so that songs in different languages can be generated. This adds a new way of recording songs, better meets the actual needs of the user and improves the user experience.

1. A method of recording songs, comprising:

acquiring a language type selected by a user, and accompaniment information and tune information of a song to be recorded;

acquiring first voice information input by the user, wherein the first voice information comprises the lyric content of the song to be recorded;

converting the first voice information into second voice information corresponding to the language type;

and generating a corresponding song based on the second voice information and the accompaniment information and the tune information of the song to be recorded.

2. The method of claim 1, wherein converting the first speech information into second speech information corresponding to the language type comprises:

recognizing the first voice information to obtain text information of the lyric content contained in the first voice information;

and converting the text information into second voice information corresponding to the language type.

3. The method of claim 2, wherein converting the text information into the second voice information corresponding to the language type comprises:

acquiring sound characteristic information of the user;

and obtaining second voice information corresponding to the language type based on the sound characteristic information and the text information.

4. The method of claim 3, wherein acquiring the sound characteristic information of the user comprises:

performing sound feature extraction on the first voice information to obtain the sound characteristic information of the user;

and/or,

and determining the sound characteristic information of the user based on the sound characteristic library of the user.

5. The method according to claim 3 or 4, wherein the sound characteristic information includes at least one of timbre, pitch and loudness.

6. The method according to claim 1, wherein generating the corresponding song based on the second voice information and the accompaniment information and the tune information of the song to be recorded comprises:

acquiring first sound parameter information of the second voice information and second sound parameter information of the tune information;

performing sound processing on the second voice information based on the first sound parameter information and the second sound parameter information to obtain processed second voice information;

and generating a corresponding song based on the processed second voice information and the accompaniment information.

7. An apparatus for recording songs, comprising:

the data acquisition module is used for acquiring the language type selected by the user, and the accompaniment information and the tune information of the song to be recorded;

the lyric obtaining module is used for obtaining first voice information input by the user, wherein the first voice information comprises the lyric content of the song to be recorded;

the voice conversion module is used for converting the first voice information into second voice information corresponding to the language type;

and the song generating module is used for generating corresponding songs based on the second voice information and the accompaniment information and the tune information of the songs to be recorded.

8. The apparatus according to claim 7, wherein the speech conversion module, when converting the first speech information into the second speech information corresponding to the language type, is specifically configured to:

recognizing the first voice information to obtain text information of the lyric content contained in the first voice information;

and converting the text information into second voice information corresponding to the language type.

9. An electronic device, comprising:

a processor; and

a memory configured to store machine-readable instructions that, when executed by the processor, cause the processor to perform the method of any of claims 1-6.

10. A computer-readable storage medium, characterized in that

the storage medium has stored thereon at least one instruction, at least one program, a code set or an instruction set, which is loaded and executed by a processor to implement the method according to any one of claims 1 to 6.

Technical Field

The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for recording songs, an electronic device, and a storage medium.

Background

With the rapid development of communication and information technologies, and of music products in particular, users can record songs through such products and share the recordings. At present, when recording a song, a user mostly selects a corresponding music accompaniment and sings along with the accompaniment and the displayed lyrics. Clearly, this way of recording songs is rather limited; it holds little appeal for users seeking novelty and cannot meet users' actual needs.

Disclosure of Invention

The present application aims to solve at least one of the above-mentioned technical drawbacks, in particular the limited way of recording songs.

In a first aspect, an embodiment of the present application provides a method for recording a song, where the method includes:

acquiring a language type selected by a user, and accompaniment information and tune information of a song to be recorded;

acquiring first voice information input by a user, wherein the first voice information comprises the lyric content of a song to be recorded;

converting the first voice information into second voice information corresponding to the language type;

and generating a corresponding song based on the second voice information, and the accompaniment information and the tune information of the song to be recorded.

In an optional embodiment of the first aspect, converting the first speech information into second speech information corresponding to a language type includes:

recognizing the first voice information to obtain text information of the lyric content contained in the first voice information;

the text information is converted into second voice information corresponding to the language type.

In an alternative embodiment of the first aspect, converting the text information into the second voice information corresponding to the language type includes:

acquiring sound characteristic information of a user;

and obtaining second voice information corresponding to the language type based on the sound characteristic information and the text information.

In an optional embodiment of the first aspect, the obtaining the sound feature information of the user includes:

performing sound feature extraction on the first voice information to obtain sound feature information of the user;

and/or,

and determining the sound characteristic information of the user based on the sound characteristic library of the user.

In an alternative embodiment of the first aspect, the sound characteristic information comprises at least one of timbre, pitch and loudness.

In an optional embodiment of the first aspect, generating a corresponding song based on the second voice information and the accompaniment information and the tune information of the song to be recorded includes:

acquiring first sound parameter information of second voice information and second sound parameter information of tune information;

performing sound processing on the second voice information based on the first sound parameter information and the second sound parameter information to obtain processed second voice information;

and generating a corresponding song based on the processed second voice information and the accompaniment information.

In a second aspect, an embodiment of the present application provides an apparatus for recording songs, the apparatus including:

the data acquisition module is used for acquiring the language type selected by the user, and the accompaniment information and the tune information of the song to be recorded;

the lyric acquisition module is used for acquiring first voice information input by a user, wherein the first voice information comprises the lyric content of a song to be recorded;

the voice conversion module is used for converting the first voice information into second voice information corresponding to the language type;

and the song generating module is used for generating corresponding songs based on the second voice information, the accompaniment information and the tune information of the songs to be recorded.

In an optional embodiment of the second aspect, when the voice conversion module converts the first voice information into the second voice information corresponding to the language type, the voice conversion module is specifically configured to:

recognizing the first voice information to obtain text information of the lyric content contained in the first voice information;

the text information is converted into second voice information corresponding to the language type.

In an optional embodiment of the second aspect, when the voice conversion module converts the text information into the second voice information corresponding to the language type, the voice conversion module is specifically configured to:

acquiring sound characteristic information of a user;

and obtaining second voice information corresponding to the language type based on the sound characteristic information and the text information.

In an optional embodiment of the second aspect, when acquiring the sound feature information of the user, the voice conversion module is specifically configured to:

performing sound feature extraction on the first voice information to obtain sound feature information of the user;

and/or,

and determining the sound characteristic information of the user based on the sound characteristic library of the user.

In an alternative embodiment of the second aspect, the sound characteristic information comprises at least one of timbre, pitch and loudness.

In an optional embodiment of the second aspect, when the song generating module generates a corresponding song based on the second voice information, and the accompaniment information and the tune information of the song to be recorded, the song generating module is specifically configured to:

acquiring first sound parameter information of second voice information and second sound parameter information of tune information;

performing sound processing on the second voice information based on the first sound parameter information and the second sound parameter information to obtain processed second voice information;

and generating a corresponding song based on the processed second voice information and the accompaniment information.

In a third aspect, an embodiment of the present application provides an electronic device, including: a processor; and a memory configured to store machine readable instructions that, when executed by the processor, cause the processor to perform the method of any one of the first aspects.

In a fourth aspect, there is provided a computer readable storage medium having stored thereon at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement the method of any one of the first aspect.

The technical scheme provided by the embodiment of the application has the following beneficial effects:

in the embodiment of the application, when a user records a song, the lyric content can be input by voice and converted into second voice information expressed in the language type selected by the user; a corresponding song can then be generated based on the second voice information and the accompaniment information and tune information selected by the user. Obviously, in the embodiment of the application, the lyric information input by the user can be converted into different languages, so that songs in different languages can be generated, which adds a new way of recording songs, better meets the actual needs of the user, and improves the user experience.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.

Fig. 1 is a schematic flowchart of a method for recording songs according to an embodiment of the present application;

fig. 2 is a schematic structural diagram of an apparatus for recording songs according to an embodiment of the present application;

fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.

Detailed Description

Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present invention.

As used herein, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.

To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.

The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.

An embodiment of the present application provides a method for recording a song, as shown in fig. 1, the method includes:

step S110, acquiring the language type selected by the user, and the accompaniment information and the tune information of the song to be recorded.

The language type selected by the user refers to the language in which the user wants the song to be recorded; for example, if the user wants to record an English song, English is the language type selected by the user. The accompaniment information refers to the audio of the instrumental performance that accompanies the singing, that is, the instrumental audio of the song to be recorded. The tune information refers to information describing key elements of the lyrics in the song to be recorded, such as pitch, rhythm, beat, dynamics and timbre.
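
To make the three inputs of step S110 concrete, the following minimal Python sketch groups them into a single structure; the field names and example values are illustrative only and are not prescribed by the method.

```python
from dataclasses import dataclass

@dataclass
class RecordingRequest:
    """Inputs gathered in step S110 (illustrative field names)."""
    language_type: str        # language the user wants the song recorded in, e.g. "en-US"
    accompaniment_path: str   # audio of the instrumental performance that accompanies the singing
    tune_path: str            # tune information: pitch, rhythm, beat, dynamics, timbre, etc.

request = RecordingRequest(
    language_type="en-US",
    accompaniment_path="accompaniment.wav",
    tune_path="tune.mid",
)
```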

In the embodiment of the present application, the way language types are divided may be preset according to actual needs, and is not limited in the embodiments of the present application. For example, language types may be divided by country, for example into English, Chinese, French, and the like. Of course, in practical applications, after the language types are divided by country, each language type may be divided further, for example English into American English and British English, and Chinese into Mandarin and dialects.

In practical applications, the specific way in which the language type selected by the user is obtained is not limited in the embodiments of the present application. As an alternative implementation, after a language type selection request triggered by the user is received, a list of identification information of selectable language types is presented to the user. The list may show identifiers of the various language types, such as their names, and the user can select the language in which the song is to be recorded.

In addition, in practical applications, if there are many language types, the identification information list may display the identifiers of the various language types in a certain order, and the user may spend a long time searching for the desired language type. Based on this, in the embodiment of the application, a function option can be provided to the user; through this option the user can mark the language types that he or she is interested in or selects frequently, and when the identification information list is displayed, only the identifiers of those language types are shown. The user can then quickly find the desired language type, which further improves the user experience.
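
As a simple sketch of the function option described above, the identification list shown to the user can be filtered down to the language types the user has marked as frequently selected; the names below are illustrative only.

```python
def build_language_list(all_languages, preferred=None):
    """Return the identification list of language types shown to the user.

    If the user has marked preferred (frequently selected) language types,
    only those are displayed; otherwise all language types are shown in a
    fixed order.
    """
    if preferred:
        return [lang for lang in all_languages if lang in preferred]
    return sorted(all_languages)

supported = ["English (US)", "English (UK)", "Mandarin", "French"]
print(build_language_list(supported, preferred={"Mandarin", "English (US)"}))
```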

The specific sources of the accompaniment information and the tune information of the song to be recorded are not limited in the embodiment of the application, and can be randomly selected from a preconfigured database or selected by a user based on a displayed information list.

The method shown in the embodiments of the present application may be executed by a terminal device or by a server; the executing entity is not limited. If the method is executed by the server, then after the terminal device learns the language type selected by the user, or the accompaniment selected by the user through the displayed accompaniment information list, it can send the selected language type information or the selected accompaniment information to the server, so that the server knows which language type and which accompaniment the user has selected.

Step S120, obtaining first voice information input by the user, where the first voice information includes lyric contents of a song to be recorded.

In practical applications, when a user records a song, the user may want the song to have specific lyric content; in this case, the lyrics can be input by voice. If the method shown in the embodiments of the present application is executed by the server, then after the terminal device obtains the lyric content input by the user, it can send the lyric content to the server, so that the server obtains the lyric content of the song to be recorded.

Step S130, converting the first voice information into second voice information corresponding to the language type.

In practical applications, once the first voice information input by the user is acquired, the lyric content of the song to be recorded is obtained; at this point, the lyrics in the first voice information can be converted into second voice information expressed in the language type selected by the user.

In one example, suppose the language type selected by the user is English, the language the user currently speaks is Chinese, and the lyric content of the song to be recorded is "The weather is really good today". The user speaks this sentence in Chinese, which is the first voice information; the first voice information can then be converted into second voice information, namely the same sentence spoken in English.
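
Step S130 can be viewed as a three-stage pipeline: recognize the spoken lyrics, translate the recognized text into the selected language, and synthesize speech in that language. The sketch below assumes generic recognize, translate and synthesize callables, since the embodiments do not prescribe particular engines.

```python
def convert_voice(first_voice_audio, target_language, recognize, translate, synthesize):
    """Convert first voice information into second voice information (step S130).

    recognize(audio)                  -> lyric text in the language the user spoke
    translate(text, target_language)  -> lyric text in the selected language type
    synthesize(text, target_language) -> second voice information (audio)
    """
    lyric_text = recognize(first_voice_audio)
    translated_text = translate(lyric_text, target_language)
    return synthesize(translated_text, target_language)
```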

Step S140, generating a corresponding song based on the second voice information, and the accompaniment information and the tune information of the song to be recorded.

In practical applications, since the second voice information is obtained from the first voice information, and the first voice information includes the lyric content of the song to be recorded, the obtained second voice information also includes that lyric content. In other words, the lyric content of the song to be recorded is now known, and the corresponding song can be generated based on the second voice information and the accompaniment information and tune information selected by the user.

In the embodiment of the application, when a user records a song, the lyric content can be input by voice and converted into second voice information expressed in the language type selected by the user; a corresponding song can then be generated based on the second voice information and the accompaniment information and tune information selected by the user. Obviously, in the embodiment of the application, the lyric information input by the user can be converted into different languages, so that songs in different languages can be generated, which adds a new way of recording songs, better meets the actual needs of the user, and improves the user experience.

In an embodiment of the present application, converting the first voice information into second voice information corresponding to a language type includes:

recognizing the first voice information to obtain text information of the lyric content contained in the first voice information;

the text information is converted into second voice information corresponding to the language type.

In practical applications, after the first voice information input by the user is obtained, speech recognition can be performed on it to obtain the corresponding text content. Since what the first voice information expresses is the lyric content of the song to be recorded, the recognized text information is the text information corresponding to the lyric content. For example, if the first voice information is the sentence "The weather is really good today" spoken in Chinese, the text information obtained by recognition is that sentence written in Chinese.

Further, after the text information corresponding to the content of the lyrics is obtained, the text information may be translated into a language type selected by the user to obtain translated text information, and then the translated text information may be converted into voice information.

In an example, if the language type selected by the user is English and the recognized text information is the Chinese text meaning "The weather is really good today", that text may be translated into the English sentence "The weather is really good today", which is then converted into voice information.
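
As one possible (assumed, not mandated) realization of the recognition and translation steps, the sketch below uses the open-source SpeechRecognition package for recognition and leaves the machine-translation backend as a placeholder.

```python
import speech_recognition as sr  # pip install SpeechRecognition

def recognize_lyrics(wav_path, spoken_language="zh-CN"):
    """Recognize the lyric content contained in the first voice information."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the whole recording
    return recognizer.recognize_google(audio, language=spoken_language)

def translate_lyrics(text, target_language):
    """Placeholder: plug in any machine-translation service here."""
    raise NotImplementedError

# lyric_text = recognize_lyrics("first_voice.wav")   # Chinese lyric text
# translated = translate_lyrics(lyric_text, "en")    # e.g. "The weather is really good today"
```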

In an embodiment of the present application, converting the text information into second voice information corresponding to the language type includes:

acquiring sound characteristic information of a user;

and obtaining second voice information corresponding to the language type based on the sound characteristic information and the text information.

A person's voice is produced by the vibration of the vocal cords, driven by the contraction of the throat muscles and shaped by the resonance of the oral and nasal cavities; the sound characteristic information refers to information describing these vocal characteristics.

Here, timbre refers to the characteristic that different sounds always differ in their waveforms; pitch refers to how high or low the sound frequency is; and loudness increases with the amplitude of the sound.

In practical application, the sound characteristics of the user can be acquired, and when the recognized text information is converted into the second voice information, the second voice information corresponding to the language type can be obtained based on the sound characteristics and the text information of the user.
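
A minimal sketch of this step, assuming a speaker-conditioned text-to-speech model; the embodiments mention using the user's sound features but do not name a specific model or API, so the model object and parameter names below are hypothetical.

```python
def synthesize_second_voice(text_information, language_type, sound_features, tts_model):
    """Generate second voice information from the translated lyric text.

    tts_model is assumed to be any text-to-speech system that can be
    conditioned on a speaker representation (sound_features), so that the
    synthesized voice approximates the user's actual voice.
    """
    return tts_model.synthesize(
        text=text_information,
        language=language_type,
        speaker_embedding=sound_features,  # hypothetical parameter name
    )
```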

In the embodiment of the application, because the sound feature information of the user is taken into account when the text information is converted into voice information, the obtained second voice information can better approximate the user's actual voice, which further improves the user experience.

In the embodiment of the present application, acquiring the sound feature information of the user includes:

performing sound feature extraction on the first voice information to obtain sound feature information of the user;

and/or,

and determining the sound feature information of the user based on the sound feature library of the user.

In practical applications, there may be multiple specific implementation manners for acquiring the sound characteristic information of the user, and the following detailed description is made for different manners.

Mode 1: and performing sound feature extraction on the first language information to obtain sound feature information of the user.

In practical applications, the specific way of performing sound feature extraction on the first voice information is not limited in the embodiments of the present application; for example, a neural network model for sound feature extraction may be used to extract sound features from the first voice information, so as to obtain the sound feature information of the user.
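
The embodiments do not fix a particular feature extractor. As a crude, self-contained stand-in, the sketch below averages MFCCs over the first voice information using librosa; a real system would more likely use a trained speaker-encoder neural network.

```python
import librosa
import numpy as np

def extract_sound_features(first_voice_path):
    """Very rough voice descriptor: mean and std of MFCCs over the recording."""
    y, sr = librosa.load(first_voice_path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)             # shape (20, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])   # fixed-size vector
```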

Mode 2: and determining the sound characteristic information of the user based on the sound characteristic library of the user.

In practical applications, if a sound feature library of the user is stored in advance and contains the sound feature information of the user, the sound feature information of the user can be obtained directly from that library.

In practical applications, the sound feature library may further include sound feature information of the user corresponding to different language types, and at this time, the sound feature information of the user corresponding to the selected language type may be directly obtained based on the sound feature library.

If the sound feature library of the user contains sound feature information of the user for different language types, that information may have been obtained, for example with a neural network model, from songs the user previously recorded in those languages, and then stored. In practical applications, whenever the user records a song in a given language, sound feature extraction can be performed on that recording to obtain the user's sound feature information for that language, and the result is stored in the user's sound feature library.

In practical applications, because different languages have different phonetic characteristics, the sound feature information of the same person may differ from one language to another. If the obtained sound feature information of the user is that of the selected language type, the second voice information obtained from this sound feature information and the text information can better approximate the user's actual voice.

In practical applications, only one of the two optional embodiments may be adopted or the two optional embodiments may be combined when determining the sound characteristic information of the user.

For example, when determining the sound feature information of the user, it may first be determined whether the sound feature library of the user contains sound feature information for the language selected by the user; if so, the sound feature information can be determined directly from the library, and if not, sound feature extraction can be performed on the first voice information to obtain the sound feature information of the user.
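
The combination of the two modes just described amounts to a lookup with a fallback, sketched below; modeling the library as a dict keyed by language type is an assumption made for illustration.

```python
def get_user_sound_features(first_voice_path, language_type, feature_library,
                            extract_sound_features):
    """Mode 2 first (library lookup), falling back to Mode 1 (extraction).

    feature_library: dict mapping language type -> stored sound feature vector
    extract_sound_features: callable that performs sound feature extraction on
                            the first voice information (e.g. the sketch above)
    """
    if language_type in feature_library:
        return feature_library[language_type]             # Mode 2: use stored features
    features = extract_sound_features(first_voice_path)   # Mode 1: extract from first voice info
    feature_library[language_type] = features             # optionally store for next time
    return features
```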

In this embodiment of the application, generating a corresponding song based on the second voice information, and the accompaniment information and the tune information of the song to be recorded includes:

acquiring first sound parameter information of second voice information and second sound parameter information of tune information;

performing sound processing on the second voice information based on the first sound parameter information and the second sound parameter information to obtain processed second voice information;

and generating a corresponding song based on the processed second voice information and the accompaniment information.

The first sound parameter information includes at least one of the pitch, the audio frequency or the timbre of each character or word in the second voice information, and the second sound parameter information includes at least one of the pitch, the audio frequency or the timbre of the tune.

In practical applications, when the first sound parameter information of the second voice information is obtained, the second voice information may be processed, for example by dividing it into voice segments corresponding to individual characters or words, and the pitch, audio frequency or timbre of each character or word can then be obtained from its segment. Correspondingly, when the second sound parameter information of the tune is obtained, it can first be determined whether this information is stored in advance; if so, it can be obtained directly, and if not, the tune may be processed to obtain the pitch, audio frequency or timbre of the tune information.
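
A sketch of obtaining the first sound parameter information, here just the frame-wise pitch of the second voice information, using librosa's pYIN pitch tracker; the same routine could be applied to an audio rendering of the tune to obtain the second sound parameter information. Splitting the audio into per-character or per-word segments would additionally require alignment, which is omitted here, and the file names are assumptions.

```python
import librosa

def frame_pitch(audio_path):
    """Frame-wise fundamental frequency (Hz) of a recording; NaN where unvoiced."""
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    return f0, sr

# first_sound_params, _ = frame_pitch("second_voice.wav")   # pitch of the second voice information
# second_sound_params, _ = frame_pitch("tune.wav")          # pitch of the tune (rendered as audio)
```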

Further, after the first sound parameter information of the second voice information and the second sound parameter information of the tune are acquired, the second voice information may be subjected to sound processing based on the acquired parameter information to obtain the processed second voice information. The specific way of obtaining the processed second voice information may be configured in advance according to actual needs and is not limited in the embodiments of the present application; for example, the first sound parameter information of the second voice information may be modified based on the second sound parameter information of the tune, such as adjusting the audio frequency of the second voice information based on the audio frequency of the tune. Correspondingly, after the processed second voice information is obtained, it can be synthesized with the accompaniment to generate the corresponding song.
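
As one assumed realization of the sound processing and synthesis just described, the sketch below shifts the vocal track toward the median pitch of the tune and then mixes it with the accompaniment using librosa and soundfile (neither of which is mandated by the embodiments); a real system would correct pitch per note or per word rather than globally.

```python
import librosa
import numpy as np
import soundfile as sf

def process_and_mix(second_voice_path, accompaniment_path, vocal_f0, tune_f0,
                    out_path="song.wav"):
    """Globally pitch-shift the vocals toward the tune, then mix with the accompaniment."""
    vocals, sr = librosa.load(second_voice_path, sr=44100, mono=True)
    accomp, _ = librosa.load(accompaniment_path, sr=44100, mono=True)

    # Semitone offset between the median vocal pitch and the median tune pitch.
    n_steps = 12.0 * np.log2(np.nanmedian(tune_f0) / np.nanmedian(vocal_f0))
    vocals = librosa.effects.pitch_shift(vocals, sr=sr, n_steps=float(n_steps))

    # Pad to a common length and mix the processed vocals with the accompaniment.
    length = max(len(vocals), len(accomp))
    mix = np.zeros(length)
    mix[:len(vocals)] += vocals
    mix[:len(accomp)] += accomp
    mix /= max(1.0, np.max(np.abs(mix)))  # avoid clipping
    sf.write(out_path, mix, sr)
    return out_path
```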

An embodiment of the present application provides an apparatus for recording songs, and as shown in fig. 2, the apparatus 20 for recording songs may include: a data acquisition module 210, a lyric acquisition module 220, a speech conversion module 230, and a song generation module 240, wherein,

the data acquisition module is used for acquiring the language type selected by the user, and the accompaniment information and the tune information of the song to be recorded;

the lyric acquisition module is used for acquiring first voice information input by a user, wherein the first voice information comprises the lyric content of a song to be recorded;

the voice conversion module is used for converting the first voice information into second voice information corresponding to the language type;

and the song generating module is used for generating corresponding songs based on the second voice information, the accompaniment information and the tune information of the songs to be recorded.

In an optional embodiment of the present application, when the voice conversion module converts the first voice information into the second voice information corresponding to the language type, the voice conversion module is specifically configured to:

recognizing the first voice information to obtain text information of the lyric content contained in the first voice information;

the text information is converted into second voice information corresponding to the language type.

In an optional embodiment of the present application, when the voice conversion module converts the text information into the second voice information corresponding to the language type, the voice conversion module is specifically configured to:

acquiring sound characteristic information of a user;

and obtaining second voice information corresponding to the language type based on the sound characteristic information and the text information.

In an optional embodiment of the present application, when acquiring the sound feature information of the user, the voice conversion module is specifically configured to:

performing sound feature extraction on the first voice information to obtain sound feature information of the user;

and/or,

and determining the sound characteristic information of the user based on the sound characteristic library of the user.

In an alternative embodiment of the present application, the sound characteristic information includes at least one of timbre, pitch and loudness.

In an optional embodiment of the present application, when the song generation module generates a corresponding song based on the second voice information and the accompaniment information and the tune information of the song to be recorded, the song generation module is specifically configured to:

acquiring first sound parameter information of second voice information and second sound parameter information of tune information;

performing sound processing on the second voice information based on the first sound parameter information and the second sound parameter information to obtain processed second voice information;

and generating a corresponding song based on the processed second voice information and the accompaniment information.

The apparatus for recording songs in the embodiments of the present application can perform the method for recording songs provided in the embodiments of the present application, and its implementation principle is similar. The actions performed by each module of the apparatus correspond to the steps of the method for recording songs in the embodiments of the present application; for a detailed functional description of each module, reference may be made to the description of the corresponding method for recording songs given above, which is not repeated here.

Based on the same principle as the method shown in the embodiment of the present application, an embodiment of the present application provides an electronic device, which includes: a memory and a processor; the memory is configured to store machine-readable instructions that, when executed by the processor, cause the processor to perform the method of recording songs as described above. Compared with the prior art: in the embodiment of the application, the lyric information input by the user can be converted into different languages, so that songs in different languages can be generated, the way of recording the songs is increased, the actual requirements of the user can be better met, and the user experience is improved.

Based on the same principles as the method shown in the embodiments of the present application, the embodiments of the present application provide a computer-readable storage medium storing at least one instruction, at least one program, code set, or set of instructions, which is loaded and executed by a processor to implement the method for recording songs as described above. Compared with the prior art: in the embodiment of the application, the lyric information input by the user can be converted into different languages, so that songs in different languages can be generated, the way of recording the songs is increased, the actual requirements of the user can be better met, and the user experience is improved.

For the terms and implementation principles of the computer-readable storage medium in the present application, reference may be made to the method for recording songs in the embodiments of the present application; details are not repeated here.

An embodiment of the present application provides an electronic device, as shown in fig. 3, an electronic device 2000 shown in fig. 3 includes: a processor 2001 and a memory 2003. Wherein the processor 2001 is coupled to a memory 2003, such as via a bus 2002. Optionally, the electronic device 2000 may also include a transceiver 2004. It should be noted that the transceiver 2004 is not limited to one in practical applications, and the structure of the electronic device 2000 is not limited to the embodiment of the present application.

The processor 2001 is applied in the embodiment of the present application to implement the functions of the modules shown in fig. 2.

The processor 2001 may be a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules and circuits described in connection with this disclosure. The processor 2001 may also be a combination of devices implementing computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.

Bus 2002 may include a path that conveys information between the aforementioned components. The bus 2002 may be a PCI bus or an EISA bus, etc. The bus 2002 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.

The memory 2003 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.

The memory 2003 is used to store application program code for performing the aspects of the present application and is controlled in execution by the processor 2001. The processor 2001 is configured to execute application program code stored in the memory 2003 to implement the actions of the apparatus for recording songs provided by the embodiment shown in fig. 2.

It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowchart may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be performed in turns or alternately with other steps, or with at least some of the sub-steps or stages of other steps.

The foregoing describes only some embodiments of the present invention. It should be noted that those skilled in the art can make various improvements and refinements without departing from the principle of the present invention, and these improvements and refinements shall also fall within the protection scope of the present invention.
