Score generation method, storage medium and terminal device

Document No.: 1506821    Publication date: 2020-02-07

Reading note: this technology, "Score generation method, storage medium and terminal device", was created by Dou Zeyun on 2018-07-26. The application discloses a score generation method, a storage medium, and a terminal device, wherein the method comprises: receiving a multimedia file input by a user, the multimedia file comprising at least a video and/or an image; converting the multimedia file into a first context vector; inputting the first context vector into a preset neural network to obtain audio information corresponding to the multimedia file; and generating the score corresponding to the multimedia file according to the audio information. By inputting the first context vector corresponding to the multimedia file into the preset neural network, generating the audio information of the multimedia file through that network, and then generating the corresponding score from the audio information, the application scores the multimedia file through a neural network and thereby automatically generates original audio as the score. This improves, on the one hand, how well the score matches the multimedia file and, on the other, the convenience and speed of scoring the multimedia file.

1. A score generation method, comprising:

receiving a multimedia file input by a user, wherein the multimedia file at least comprises a video and/or an image;

converting the multimedia file into a first context vector;

inputting the first context vector into a preset neural network to obtain audio information corresponding to the multimedia file;

and generating the score corresponding to the multimedia file according to the audio information.

2. The score generation method as claimed in claim 1, wherein the converting the multimedia file into the first context vector specifically comprises:

inputting the multimedia file into a preset first coding neural network, and coding through the first coding neural network to obtain a first context vector.

3. The score generation method of claim 1, wherein converting the multimedia file into the first context vector further comprises:

extracting the videos contained in the multimedia file, and, when videos are extracted, extracting a plurality of image frames from each extracted video according to a preset strategy;

and replacing each video with all the image frames extracted from it, so as to update the multimedia file.

4. The score generation method as claimed in claim 3, wherein the replacing each video with all the image frames extracted from it to update the multimedia file specifically comprises:

acquiring the sequence, within the corresponding video, of all the image frames extracted from each video, and determining the playing sequence corresponding to each video according to a preset playing sequence;

determining the playing sequence of each image frame according to the video sequence and the playing sequence of each video, and updating the preset playing sequence according to the playing sequence of each image frame;

and splicing all image frames and images contained in the multimedia file according to the updated playing sequence to obtain an image file, and replacing the multimedia file with the image file to update the multimedia file.

5. The score generation method as claimed in claim 1, wherein the multimedia file further includes text information, and the inputting the first context vector into a preset neural network to obtain the audio information corresponding to the multimedia file specifically comprises:

converting the text information into a second context vector, and updating the first context vector according to the first context vector and the second context vector;

and inputting the updated first context vector into a preset neural network to obtain corresponding audio information.

6. The score generation method as claimed in claim 5, wherein the converting the text information into a second context vector and updating the first context vector according to the first context vector and the second context vector specifically comprises:

inputting the word vector corresponding to the text information into a second coding neural network for coding to obtain a second context vector;

and splicing the second context vector with the first context vector to obtain a third context vector, and updating the first context vector by adopting the third context vector.

7. The score generation method as claimed in any one of claims 1-6, wherein the inputting the first context vector into a preset neural network to obtain the audio information corresponding to the multimedia file specifically comprises:

inputting the first context vector into a preset main melody neural network and an accompaniment neural network respectively;

and the main melody neural network and the accompaniment neural network respectively generate corresponding main melody and accompaniment melody according to preset target duration so as to obtain the audio information corresponding to the first context vector.

8. The score generation method as claimed in claim 7, wherein the generating the score corresponding to the multimedia file according to the audio information specifically comprises:

and synthesizing the main melody and the accompaniment melody to obtain the score corresponding to the multimedia file.

9. A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of the score generation method according to any one of claims 1 to 8.

10. A terminal device, comprising: a processor, a memory, and a communication bus, wherein the memory stores a computer readable program executable by the processor;

the communication bus enables connection and communication between the processor and the memory;

and the processor, when executing the computer readable program, implements the steps in the score generation method according to any one of claims 1-8.

Technical Field

The present application relates to the field of intelligent terminal technologies, and in particular, to a score generation method, a storage medium, and a terminal device.

Background

With the rapid development of artificial intelligence in recent years, neural network technology has been widely researched and applied in this field, and its strong capability is reflected in tasks ranging from neural-network-based image classification and text classification to text generation and speech synthesis. On intelligent devices, the technology is used in areas such as photography, photo processing, and personal voice assistants. However, many functions on smart devices could still benefit from neural network technology. For example, an intelligent terminal may configure a corresponding audio file (i.e., a score) for an image or video while playing it, so as to enhance the mood of the playback. In the prior art, however, a terminal device usually requires the user to manually select, before playback, a score matching the image so that it can be played synchronously with the image or video; this manual selection is relatively subjective, and mismatches between the played image and the score easily occur. Therefore, how to apply neural network technology to match music to images or videos has become a focus of attention.

Disclosure of Invention

The technical problem to be solved by the present application is to provide a score generation method, a storage medium, and a terminal device for generating a score for a multimedia file through a neural network, aiming at the defects of the prior art.

The technical scheme adopted by the application is as follows:

a score generation method, comprising:

receiving a multimedia file input by a user, wherein the multimedia file at least comprises a video and/or an image;

converting the multimedia file into a first context vector;

inputting the first context vector into a preset neural network to obtain audio information corresponding to the multimedia file;

and generating the score corresponding to the multimedia file according to the audio information.

The method for generating the score, wherein the converting the multimedia file into the first context vector specifically comprises:

inputting the multimedia file into a preset first coding neural network, and coding through the first coding neural network to obtain a first context vector.

The method for generating a score, wherein the converting the multimedia file into the first context vector further comprises:

extracting the videos contained in the multimedia file, and, when videos are extracted, extracting a plurality of image frames from each extracted video according to a preset strategy;

and replacing each video with all the image frames extracted from it, so as to update the multimedia file.

The method for generating the score, wherein the replacing each video with all the image frames extracted from it to update the multimedia file specifically comprises:

acquiring the sequence, within the corresponding video, of all the image frames extracted from each video, and determining the playing sequence corresponding to each video according to a preset playing sequence;

determining the playing sequence of each image frame according to the video sequence and the playing sequence of each video, and updating the preset playing sequence according to the playing sequence of each image frame;

and splicing all image frames and images contained in the multimedia file according to the updated playing sequence to obtain an image file, and replacing the multimedia file with the image file to update the multimedia file.

The method for generating the score, wherein the multimedia file further includes text information, and the inputting the first context vector into a preset neural network to obtain audio information corresponding to the multimedia file specifically includes:

converting the text information into a second context vector, and updating the first context vector according to the first context vector and the second context vector;

and inputting the updated first context vector into a preset neural network to obtain corresponding audio information.

The method for generating the score, wherein the converting the text information into a second context vector and updating the first context vector according to the first context vector and the second context vector specifically includes:

inputting the text information into a second coding neural network for coding to obtain the second context vector;

and splicing the second context vector with the first context vector to obtain a third context vector, and updating the first context vector by adopting the third context vector.

The method for generating the score, wherein the inputting the first context vector into a preset neural network to obtain the audio information corresponding to the multimedia file specifically includes:

inputting the first context vector into a preset main melody neural network and an accompaniment neural network respectively;

and the main melody neural network and the accompaniment neural network respectively generate corresponding main melody and accompaniment melody according to preset target duration so as to obtain the audio information corresponding to the first context vector.

The method for generating the score, wherein the generating the score corresponding to the multimedia file according to the audio information specifically comprises:

and synthesizing the main melody and the accompaniment melody to obtain the score corresponding to the multimedia file.

A computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement steps in a score generation method as described in any above.

A terminal device, comprising: a processor, a memory, and a communication bus, wherein the memory stores a computer readable program executable by the processor;

the communication bus enables connection and communication between the processor and the memory;

and the processor, when executing the computer readable program, implements the steps in the score generation method as described in any of the above.

Advantageous effects: compared with the prior art, the present application provides a score generation method, a storage medium, and a terminal device, the method comprising: receiving a multimedia file input by a user, wherein the multimedia file comprises at least a video and/or an image; converting the multimedia file into a first context vector; inputting the first context vector into a preset neural network to obtain audio information corresponding to the multimedia file; and generating the score corresponding to the multimedia file according to the audio information. By inputting the first context vector corresponding to the multimedia file into the preset neural network, generating the audio information of the multimedia file through that network, and then generating the corresponding score from the audio information, the application scores the multimedia file through a neural network and thereby automatically generates original audio as the score. This improves, on the one hand, how well the score matches the multimedia file and, on the other, the convenience and speed of scoring the multimedia file.

Drawings

Fig. 1 is a flowchart of an embodiment of a score generation method provided in the present application.

Fig. 2 is a flowchart of step S20 in an embodiment of the score generation method provided in the present application.

Fig. 3 is a flowchart of step S22 in an embodiment of the score generation method provided in the present application.

Fig. 4 is a schematic structural diagram of a preferred embodiment of a terminal device provided in the present application.

Detailed Description

The present application provides a score generation method, a storage medium, and a terminal device. In order to make the purpose, technical solution, and effects of the present application clearer, the application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein merely illustrate the present application and are not intended to limit it.

As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.

It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

The following further describes the content of the application by describing the embodiments with reference to the attached drawings.

As shown in fig. 1, a method for generating a score provided in this embodiment includes:

s10, receiving a multimedia file input by a user, wherein the multimedia file at least comprises a video and/or an image.

Specifically, the multimedia file is input by the user, and may also be selected by the user from an image library on the terminal device. The multimedia file contains at least one video or image; that is, it cannot be empty, and at least one picture or one video must be present. Of course, the multimedia file may contain both videos and images, and several of each. In addition, the multimedia file may further include text information indicating the style of the score, for example "aesthetic" (唯美). The text information may also be subject to an upper limit: when text information is received, its byte count is obtained and compared with the upper limit, and if the byte count exceeds the limit, the user is prompted to modify or re-enter the text information. If re-entered or modified first text information is received, it replaces the original text information. If none is received, the text information is segmented and part-of-speech tagged, auxiliary parts of speech are filtered out, and words of designated parts of speech (such as nouns and adjectives) are retained; updated text information is then generated from the retained words. If the updated text information satisfies the upper limit, it is kept; otherwise, the user is prompted that the input is invalid.
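The length check and part-of-speech filtering described above can be sketched as follows; the byte limit, the retained part-of-speech set, and the external segmenter interface are illustrative assumptions, not values fixed by this application:

```python
# Hypothetical sketch of the text-information check. MAX_BYTES and KEEP_POS
# are assumed values; word segmentation / POS tagging is done externally.
MAX_BYTES = 32                      # assumed upper limit on the text information
KEEP_POS = {"noun", "adjective"}    # designated parts of speech to retain

def check_text(text, tagged_words):
    """tagged_words: (word, part_of_speech) pairs from an external tagger."""
    if len(text.encode("utf-8")) <= MAX_BYTES:
        return text                                # within the upper limit
    # Filter out auxiliary parts of speech, keep designated ones.
    kept = [w for w, pos in tagged_words if pos in KEEP_POS]
    updated = " ".join(kept)                       # regenerated text information
    if updated and len(updated.encode("utf-8")) <= MAX_BYTES:
        return updated
    return None                                    # prompt: input is invalid
```

A `None` result corresponds to the "prompt the user that the input is invalid" branch in the text above.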

Meanwhile, in this embodiment, when the multimedia file input by the user is received, the input order of each video and/or image contained in the file is recorded and displayed to the user, so that the user can confirm the playing sequence of each video and/or image. Of course, after the input order is displayed, the user's adjustments to the playing sequence can be received; the playing sequence is updated with the adjusted order, and the updated playing sequence serves as the preset playing sequence of the multimedia file, so that the videos and/or images can be sorted accordingly. In practice, the videos and/or images contained in the multimedia file can be displayed on the interface in input order, drag operations performed by the user on them can be received, the display order can be updated according to those drags, and the final display order is used as the preset playing sequence of the multimedia file.

S20, converting the multimedia file into a first context vector.

Specifically, the first context vector is sequence information determined from the multimedia file and corresponding to it; the sequence information covers all the content of the multimedia file. The first context vector is the input item of the preset neural network; that is, it is input to the preset neural network so that audio information can be obtained through that network. The first context vector may be obtained through an encoding neural network.

For example, the converting the multimedia file into the first context vector specifically comprises: inputting the multimedia file into a preset first coding neural network, and encoding it with that network to obtain the first context vector. The first coding neural network is obtained by pre-training and is used to convert the multimedia file into the first context vector. In this embodiment, the first coding neural network may adopt a convolutional neural network (CNN); that is, the first context vector corresponding to the multimedia file is obtained through the convolutional network. The training of the first coding neural network consists of deep learning over a number of training samples to produce the model; the deep-learning process is the same as for existing neural networks and is not detailed here.
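As a toy illustration of how an encoding network maps a variable-length input to a fixed-length context vector, the following stand-in uses a single 1-D convolution followed by global average pooling; the kernels are placeholders, not trained weights, and real embodiments would use a full CNN:

```python
# Minimal stand-in for the first coding neural network: one 1-D convolution
# per kernel, then global average pooling, yielding one context-vector entry
# per kernel regardless of the input length.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def encode(pixels, kernels):
    """pixels: flattened image file; returns the 'first context vector'."""
    vec = []
    for kernel in kernels:
        fmap = conv1d(pixels, kernel)
        vec.append(sum(fmap) / len(fmap))  # global average pooling
    return vec
```

The pooling step is what makes the output length depend only on the number of kernels, mirroring how the context vector has a fixed size for any multimedia file.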

Meanwhile, in this embodiment, before the first context vector is obtained, the videos contained in the multimedia file need to be converted into image frames, so that the multimedia file contains only pictures; the picture-only multimedia file is then used as the input item of the first coding neural network. Accordingly, as shown in fig. 2, the converting the multimedia file into the first context vector further comprises:

s21, extracting videos contained in the multimedia file, and extracting a plurality of image frames from each extracted video according to a preset strategy when the videos are extracted;

and S22, replacing each video with all the image frames extracted from it, so as to update the multimedia file.

Specifically, the preset strategy is set in advance, and the picture frames contained in each video are extracted from it according to that strategy. The preset strategy may be random extraction; it may be determined by image contrast and/or brightness, for example extracting image frames whose brightness falls within a preset range; or it may be determined by the content a frame carries, for example using an existing opencv-based algorithm to identify frames carrying human-shaped or building-shaped objects, so that meaningful frames are extracted with higher probability. In addition, after the frames are extracted according to the preset strategy, the extracted frames and their count can be acquired, and the image count compared with an upper limit on the number of images; if the count exceeds the limit, the extracted frames can be screened so that the count satisfies it. The screening may follow a preset screening condition, for instance screening by picture quality and keeping high-quality frames, or screening by color tone, for example selecting frames from warm tones to cool tones or from cool tones to warm tones, where the tone can be determined from the average of the yellow components of all pixels in the frame.

In addition, in a variant of this embodiment, before extracting image frames from each video, a second number of frames to be extracted may be determined from the first number of images contained in the multimedia file and the upper limit on the image count; the number of frames to extract from each video is then determined from the second number and the number of videos, and the corresponding number of frames is extracted from each video according to a preset extraction strategy, for example by random extraction within each video. An equal-division principle, among others, may be adopted when determining how many frames each video must supply from the second number and the number of videos.
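The frame-budget allocation above (equal division of the remaining image budget across videos, then uniform sampling as one possible preset strategy) might be sketched as follows; `IMAGE_LIMIT` is an assumed upper limit on the total image count, not a value given by this application:

```python
# Hypothetical frame-budget allocation and uniform sampling.
IMAGE_LIMIT = 20  # assumed upper limit on the number of images

def frames_per_video(first_number, video_count):
    """first_number: images already in the file. The 'second number' is the
    remaining image budget, split equally across the videos."""
    second_number = IMAGE_LIMIT - first_number
    return second_number // video_count

def sample_frames(total_frames, n):
    """One possible preset strategy: pick n frame indices uniformly."""
    step = total_frames / n
    return [int(i * step) for i in range(n)]
```

Random sampling or content-based selection (as described earlier) could replace `sample_frames` without changing the allocation step.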

Further, after the image frames are extracted, each video is replaced by the image frames extracted from it so as to update the multimedia file; that is, the updated multimedia file holds the received images and/or the extracted image frames. After the extracted frames replace the corresponding videos, their playing sequence needs to be determined according to the preset playing sequence, so that the images and/or frames can be sorted accordingly. Correspondingly, as shown in fig. 3, the replacing each video with all the image frames extracted from it to update the multimedia file specifically includes:

s221, acquiring video sequences of all image frames extracted from each video in the corresponding video, and determining a playing sequence corresponding to each video according to a preset playing sequence;

s222, determining the playing sequence of each image frame according to the video sequence and the playing sequence of each video, and updating the preset playing sequence according to the playing sequence of each image frame;

s223, all image frames and images contained in the multimedia file are spliced according to the updated playing sequence to obtain an image file, and the image file is adopted to replace the multimedia file so as to update the multimedia file.

Specifically, after the image frames are extracted from each video, the playing sequence of each frame within its video can be determined from its frame number. Given a video's position in the preset playing sequence, all the frames extracted from that video are inserted at the video's position in frame-number order: the extracted frames are arranged by frame number into an image-frame sequence, the sequence replaces its source video, and the video's playing position is recorded as the playing position of the sequence. The playing order of every individual frame can then be determined from the sequence's position and each frame's number within it. Finally, according to the first playing sequence of the multimedia file formed by the images and image frames, the images and frames are spliced into one image file; for example, each image and each frame extracted from the videos is spliced into one horizontal picture in the first playing sequence to obtain the image file. In addition, after the image file is obtained, its length can be checked against the input-length requirement of the first coding neural network; if it falls short, zero-valued pixels can be appended to pad the file so that the padded image file satisfies the network's input requirement.
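The horizontal splicing and zero-pixel padding step can be sketched like this, treating each image as a list of pixel rows; the fixed input width is an assumed property of the first coding neural network, not a value the application specifies:

```python
# Hypothetical horizontal splice with zero padding. Each image is a list of
# rows of pixel values, all images sharing the same height; WIDTH is the
# assumed fixed input width of the first coding neural network.
WIDTH = 16

def splice_horizontal(images):
    height = len(images[0])
    rows = [[] for _ in range(height)]
    for img in images:                 # concatenate in playing-sequence order
        for r in range(height):
            rows[r].extend(img[r])
    for r in range(height):            # pad with zero pixels up to WIDTH
        rows[r].extend([0] * (WIDTH - len(rows[r])))
    return rows
```

The zero padding mirrors the "add 0 pixels behind the image file" step so that the spliced picture always matches the encoder's expected input length.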

And S30, inputting the first context vector into a preset neural network to obtain the audio information corresponding to the multimedia file.

Specifically, the first context vector is obtained from the multimedia file; when the file contains text information, it is obtained from the context vector corresponding to the image file assembled from the videos and/or images together with the context vector corresponding to the text information. Therefore, when the multimedia file contains text information, the context vector of the text is determined through the second coding neural network corresponding to text, and the image file's context vector and the text's context vector are spliced to obtain the first context vector. Correspondingly, when the multimedia file includes text information, the inputting the first context vector into a preset neural network to obtain audio information corresponding to the multimedia file specifically includes:

converting the text information into a second context vector, and updating the first context vector according to the first context vector and the second context vector;

and inputting the updated first context vector into a preset neural network to obtain corresponding audio information.

Specifically, the second context vector corresponding to the text information may be obtained by a preset second coding neural network, which may adopt a recurrent neural network (RNN); the second context vector corresponding to the text information is determined through that recurrent network. It should be noted that before converting the text information into its second context vector, a character/word dictionary must be established in advance: each character/word in the dictionary has corresponding id information, and that id maps to vector information, which may be a character/word vector initialized randomly or pre-trained on a large general text corpus. For example, if the text information is "唯美" ("aesthetic"), with the vector for "唯" being [0.02, 0.14, 0.45] and the vector for "美" being [0.77, 0.22, 0.11], then the word vector corresponding to "唯美" is [0.02, 0.14, 0.45, 0.77, 0.22, 0.11].
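The character-to-vector lookup in this example can be reproduced directly; the embedding values below simply mirror the figures quoted in the text, whereas in practice they would be randomly initialized or pre-trained:

```python
# Character/word dictionary mapping each character to its vector, using the
# values from the example above (normally learned embeddings).
EMBEDDINGS = {
    "唯": [0.02, 0.14, 0.45],
    "美": [0.77, 0.22, 0.11],
}

def text_to_vector(text):
    """Concatenate the per-character vectors into one word vector."""
    vec = []
    for ch in text:
        vec.extend(EMBEDDINGS[ch])
    return vec
```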

For example, the converting the text information into a second context vector and updating the first context vector according to the first context vector and the second context vector specifically includes:

inputting the word vector corresponding to the text information into a second coding neural network for coding to obtain a second context vector;

and splicing the second context vector with the first context vector to obtain a third context vector, and updating the first context vector by adopting the third context vector.

Specifically, before the text information is input into the second coding neural network, the ID corresponding to each word in the text information can be determined from a preset text dictionary, the vector corresponding to the text information is generated from those IDs, and that vector is used as the input item of the second coding neural network to obtain the second context vector. Once the second context vector is obtained, it may be spliced with the first context vector to obtain a third context vector, and the first context vector is updated with the third. In practice, the two vectors may be spliced along different dimensions; for example, when both are m × n matrices, they may be spliced into 2m × n, m × 2n, 2 × m × n, and so on. In this embodiment, the second context vector and the first context vector are preferably spliced in the column direction, and the one with fewer rows is padded with zeros to match the row count.
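The preferred column-direction splicing with zero-row padding can be sketched as follows for plain nested-list matrices (a sketch only; real embodiments would operate on tensors):

```python
# Splice two matrices side by side (column direction); the matrix with
# fewer rows is padded with zero rows first, as the embodiment prefers.
def splice_columns(first, second):
    rows = max(len(first), len(second))
    def pad(mat):
        width = len(mat[0])
        return mat + [[0] * width for _ in range(rows - len(mat))]
    a, b = pad(first), pad(second)
    return [a[r] + b[r] for r in range(rows)]
```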

Meanwhile, in this embodiment, the preset neural network may include two neural networks, namely a main melody neural network and an accompaniment neural network, and inputting the first context vector into the preset neural network means inputting the first context vector into the main melody neural network and the accompaniment neural network respectively. Correspondingly, the inputting the first context vector into a preset neural network to obtain the audio information corresponding to the multimedia file specifically includes:

inputting the first context vector into a preset main melody neural network and an accompaniment neural network respectively;

and generating, by the main melody neural network and the accompaniment neural network respectively, a corresponding main melody and accompaniment melody according to a preset target duration, so as to obtain the audio information corresponding to the first context vector.

Specifically, the main melody neural network and the accompaniment neural network are two separate neural networks, for example RNN-type neural networks, and they can generate the main melody and the accompaniment melody for the first context vector according to the target duration. For example, if the main melody neural network and the accompaniment neural network generate notes spaced at intervals of 500 ms, then 120 notes constitute one minute of music content, and the rhythm of the music content is determined according to whether the notes within each measure are identical or are null notes (rests), thereby obtaining the audio information.
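The relationship between target duration and note count in this example can be sketched as follows (the 500 ms spacing comes from the text; the decoding-loop shape and the function names are assumptions):

```python
# Fixed spacing between generated notes, as in the example above.
NOTE_INTERVAL_MS = 500

def notes_for_duration(duration_seconds):
    """Number of notes each network must generate for the target score length."""
    return int(duration_seconds * 1000 / NOTE_INTERVAL_MS)

def generate_melody(step_fn, duration_seconds, context):
    """Hypothetical decoding loop: `step_fn` emits one note id per step
    (None could model a rest / null note) until the target count is reached."""
    notes, state = [], context
    for _ in range(notes_for_duration(duration_seconds)):
        note, state = step_fn(state)
        notes.append(note)
    return notes

print(notes_for_duration(60))  # 120 notes for one minute of music
```

Running the same loop once with the main melody network's step function and once with the accompaniment network's yields the two note sequences of equal length.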

And S40, generating the score corresponding to the multimedia file according to the audio information.

Specifically, the audio information comprises a main melody and an accompaniment melody, and the main melody and the accompaniment melody are synthesized to obtain the score corresponding to the multimedia file. In addition, the audio information may be note information or spectrogram information. If note information is obtained, a corresponding instrument timbre may be determined for the notes contained in the main melody and the accompaniment melody, and the score of the multimedia file is generated according to the timbre and the notes; if spectrogram information is obtained, the spectrograms may be directly synthesized into a waveform file to obtain the score corresponding to the multimedia file. In practical applications, when the audio information is note information, the instrument timbre configured for the note information may be selected according to a preset rule, where the preset rule is set in advance. For example, when the audio information comprises 120 notes and the target duration of the score is 1 minute, 2 notes need to be played per second; accordingly, the score may be in 4/4 time, with one instrument timbre used every 10 measures.
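The timbre-assignment rule in this example can be sketched as follows (a sketch assuming 4 beats per measure, one note per beat, and a hypothetical timbre list; only the "120 notes, new timbre every 10 measures" figures come from the text):

```python
def assign_timbres(num_notes, beats_per_measure=4, measures_per_timbre=10,
                   timbres=("piano", "violin", "flute")):
    """Assign an instrument timbre to each note, cycling through the
    timbre list once every `measures_per_timbre` measures."""
    assignments = []
    for i in range(num_notes):
        measure = i // beats_per_measure          # which measure this note falls in
        timbre = timbres[(measure // measures_per_timbre) % len(timbres)]
        assignments.append(timbre)
    return assignments

result = assign_timbres(120)
print(result[0], result[119])  # piano flute
```

With 120 notes the piece spans 30 measures, so the timbre changes twice: at measure 10 and at measure 20.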

Based on the above-mentioned score generation method, the present application further provides a computer-readable storage medium, wherein the computer-readable storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps in the score generation method according to the above-mentioned embodiment.

Based on the above score generation method, the present application further provides a terminal device, as shown in fig. 4, including at least one processor (processor) 20; a display screen 21; and a memory (memory) 22, and may further include a communication Interface (Communications Interface) 23 and a bus 24. The processor 20, the display 21, the memory 22 and the communication interface 23 can communicate with each other through the bus 24. The display screen 21 is configured to display a user guidance interface preset in the initial setting mode. The communication interface 23 may transmit information. The processor 20 may call logic instructions in the memory 22 to perform the methods in the embodiments described above.

Furthermore, the logic instructions in the memory 22 may be implemented in software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.

The memory 22, as a computer-readable storage medium, may be configured to store a software program or a computer-executable program, such as program instructions or modules corresponding to the methods in the embodiments of the present disclosure. The processor 20 executes the functional applications and data processing, i.e. implements the methods in the above-described embodiments, by running the software program, instructions or modules stored in the memory 22.

The memory 22 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and an application program required for at least one function, and the storage data area may store data created according to the use of the terminal device, and the like. Further, the memory 22 may include a high-speed random access memory and may also include a non-volatile memory, for example various media that can store program code, such as a USB disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk; transient storage media are also possible.

In addition, the specific processes by which the instructions in the storage medium are loaded and executed by the processor of the mobile terminal are described in detail in the method above and are not repeated herein.

Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
