Method for generating analysis file of album file and terminal equipment

Document No.: 1863128 | Publication date: 2021-11-19

Note: This technique, "Method for generating analysis file of album file and terminal equipment," was created by Kang Kai and Sun Lei on 2021-08-18. The application discloses a method for generating an analysis file of an album file, and a terminal device, used to solve the problem that the analysis file of an album file is difficult to produce. The method comprises: acquiring an album file containing a plurality of single songs; extracting specified audio features from the album file, the specified audio features describing the interval between two adjacent single songs; extracting the starting playing time of each single song from the album file based on the specified audio features; and generating an analysis file of the album file, according to the format requirement of a preset analysis file template, based on the file name of the album file and the extracted starting playing time of each single song.

1. A method for generating an analysis file of an album file is characterized by comprising the following steps:

acquiring an album file containing a plurality of single songs;

extracting specified audio features from the album file; the specified audio features are used for describing the interval between two adjacent single songs;

extracting the starting playing time of each single song from the album file based on the specified audio features;

and generating an analysis file of the album file, according to the format requirement of a preset analysis file template, based on the file name of the album file and the extracted starting playing time of each single song.

2. The method of claim 1, wherein extracting specified audio features from the album file comprises:

acquiring waveform data of the album file;

acquiring bytes representing the audio magnitude of a mute state from the waveform data;

and if the number of bytes representing the audio magnitude of the mute state within a specified duration satisfies a preset condition, determining that the specified audio feature has been extracted.

3. The method according to claim 2, wherein the specified duration comprises n unit durations, n being a positive integer greater than 1, the preset condition comprises n consecutive unit durations being in a mute state, and determining whether the number of bytes representing the audio magnitude of the mute state within the specified duration satisfies the preset condition comprises:

counting, for each unit duration, the ratio of bytes representing the audio magnitude of the mute state;

if the ratio exceeds a preset ratio, determining that the unit duration is in the mute state;

if n consecutive unit durations are all in the mute state, determining that the preset condition is satisfied;

otherwise, determining that the preset condition is not satisfied.

4. The method as claimed in claim 1, wherein extracting the starting playing time of each single song from the album file based on the specified audio feature comprises:

acquiring an end time point of the specified audio feature on a playing time axis of the album file;

recording the ending time point as the starting playing time of the next single song;

wherein the starting playing time of the first single song of the album file is set to a default value.

5. The method as claimed in claim 1, wherein generating the analysis file of the album file according to the format requirement of the preset analysis file template, based on the file name of the album file and the extracted starting playing time of each single song, comprises:

modifying the name of the preset analysis file template to the file name of the album file; and

generating description information of the album file in the analysis file based on the file name of the album file; and

generating the description information of each single song in the analysis file based on the starting playing time of each single song.

6. The method according to claim 5, wherein the description information of the single song includes a title of the single song, and generating the single song title of each single song in the analysis file comprises:

naming each single song according to its playing order in the album.

7. The method of any of claims 1-6, wherein after generating the analysis file of the album file, the method further comprises:

displaying the single song titles in the analysis file in a user interface;

in response to a naming request for a target single song triggered on the user interface, acquiring a custom title of the target single song as the final title of the target single song; and

updating the single song title of the target single song in the analysis file with the final title.

8. The method according to claim 5, wherein the description information of the single song includes a single song author name, and generating the single song author name of each single song in the analysis file comprises:

for each single song, naming the single song author according to the single song's playing order in the album.

9. The method as claimed in any one of claims 1-5 and claim 8, wherein after generating the analysis file of the album file, the method further comprises:

displaying the single song author names in the analysis file in a user interface;

in response to a naming request for the author name of a target single song triggered on the user interface, acquiring a custom author name of the target single song as the final author name of the target single song; and

updating the single song author name of the target single song in the analysis file with the final author name.

10. A terminal device, comprising:

a display, a processor, and a memory;

the display is used for displaying information;

the memory is configured to store instructions executable by the processor;

the processor is configured to execute the instructions to implement the method for generating an analysis file of an album file as claimed in any one of claims 1 to 9.

Technical Field

The present application relates to the field of data processing technologies, and in particular, to a method for generating an analysis file of an album file and a terminal device.

Background

As the quality of audio source files on the market gradually improves, the album file format is attracting the attention of more and more music enthusiasts. More and more users tend to pack their favorite music into an album file without losing sound quality.

In the prior art, an album file cannot be played without an analysis file. Therefore, when a user creates an album file, the user also needs to create a matching analysis file to be used together with it. In practical applications, creating an analysis file is difficult.

Disclosure of Invention

The application aims to provide a method for generating an analysis file of an album file, and a terminal device, to solve the problem that the analysis file of an album file is difficult to produce.

In a first aspect, the present application provides a method for generating an analysis file of an album file, where the method includes:

acquiring an album file containing a plurality of single songs;

extracting specified audio features from the album file; the specified audio features are used for describing the interval between two adjacent single songs;

extracting the starting playing time of each single song from the album file based on the specified audio features;

and generating an analysis file of the album file, according to the format requirement of a preset analysis file template, based on the file name of the album file and the extracted starting playing time of each single song.

In some embodiments, said extracting specified audio features from said album file comprises:

acquiring waveform data of the album file;

acquiring bytes representing the audio magnitude of a mute state from the waveform data;

and if the number of bytes representing the audio magnitude of the mute state within a specified duration satisfies a preset condition, determining that the specified audio feature has been extracted.

In some embodiments, the specified duration includes n unit durations, n being a positive integer greater than 1, the preset condition includes n consecutive unit durations being in a mute state, and determining whether the number of bytes representing the audio magnitude of the mute state within the specified duration satisfies the preset condition includes:

counting, for each unit duration, the ratio of bytes representing the audio magnitude of the mute state;

if the ratio exceeds a preset ratio, determining that the unit duration is in the mute state;

if n consecutive unit durations are all in the mute state, determining that the preset condition is satisfied;

otherwise, determining that the preset condition is not satisfied.
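As a concrete illustration, the silence-gap detection described above can be sketched as follows. This is a hypothetical sketch, not the patent's implementation: it assumes decoded integer PCM samples, and the threshold, ratio, unit duration, and n (`SILENCE_THRESHOLD`, `SILENCE_RATIO`, `UNIT_SECONDS`, `N_UNITS`) are all illustrative values rather than values taken from the application.

```python
# Illustrative values only; a real implementation would tune these.
SILENCE_THRESHOLD = 200  # max absolute amplitude still counted as "mute"
SILENCE_RATIO = 0.9      # preset ratio a unit duration must exceed
UNIT_SECONDS = 0.5       # length of one "unit duration"
N_UNITS = 4              # n consecutive mute units mark a track gap

def unit_is_silent(samples, ratio=SILENCE_RATIO, threshold=SILENCE_THRESHOLD):
    """A unit duration is mute if the share of near-zero samples exceeds the preset ratio."""
    silent = sum(1 for s in samples if abs(s) <= threshold)
    return silent / len(samples) > ratio

def find_gaps(samples, sample_rate):
    """Return (start, end) times in seconds of runs of >= N_UNITS mute unit durations."""
    unit_len = int(sample_rate * UNIT_SECONDS)
    flags = [unit_is_silent(samples[i:i + unit_len])
             for i in range(0, len(samples) - unit_len + 1, unit_len)]
    gaps, run_start = [], None
    for idx, silent in enumerate(flags):
        if silent and run_start is None:
            run_start = idx                      # a mute run begins
        elif not silent and run_start is not None:
            if idx - run_start >= N_UNITS:       # long enough: record the gap
                gaps.append((run_start * UNIT_SECONDS, idx * UNIT_SECONDS))
            run_start = None
    if run_start is not None and len(flags) - run_start >= N_UNITS:
        gaps.append((run_start * UNIT_SECONDS, len(flags) * UNIT_SECONDS))
    return gaps
```

For example, on a synthetic signal of loud samples, then two seconds of silence, then loud samples again, `find_gaps` returns a single gap covering the silent span.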

In some embodiments, extracting the starting playing time of each single song from the album file based on the specified audio feature includes:

acquiring an end time point of the specified audio feature on a playing time axis of the album file;

recording the ending time point as the starting playing time of the next single song;

wherein the starting playing time of the first single song of the album file is set to a default value.
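Mapping gap end points to per-song start times can then be sketched as below; this is an illustrative helper, where `gaps` is assumed to be a list of (start, end) silence intervals in seconds on the playing time axis, and 0.0 stands in for the default first-track value.

```python
def track_start_times(gaps, first_track_default=0.0):
    """The end point of each silence gap on the playing time axis is recorded
    as the starting playing time of the next single song; the first single
    song of the album gets a default starting time."""
    return [first_track_default] + [gap_end for _gap_start, gap_end in gaps]
```

For two detected gaps ending at 3.0 s and 6.5 s, this yields start times of 0.0, 3.0, and 6.5 seconds for three single songs.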

In some embodiments, generating the analysis file of the album file according to the format requirement of the preset analysis file template, based on the file name of the album file and the extracted starting playing time of each single song, includes:

modifying the name of the preset analysis file template to the file name of the album file; and

generating description information of the album file in the analysis file based on the file name of the album file; and

generating the description information of each single song in the analysis file based on the starting playing time of each single song.
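The application does not name a specific analysis-file format; a common one for lossless albums is the CUE sheet, so the hedged sketch below fills a minimal CUE-style template with the album name and the extracted start times. The default titles and author names by play order, and the MM:SS:FF timestamp helper, are assumptions for illustration only.

```python
def seconds_to_index(t):
    """Format seconds as MM:SS:FF (75 frames per second), the timestamp style used by CUE sheets."""
    m, rem = divmod(t, 60)
    s, frac = divmod(rem, 1)
    return f"{int(m):02d}:{int(s):02d}:{int(frac * 75):02d}"

def build_parse_file(album_name, start_times, audio_file):
    """Fill a minimal CUE-style template: description information of the album
    first, then one TRACK entry per single song, named by play order."""
    lines = [f'TITLE "{album_name}"', f'FILE "{audio_file}" WAVE']
    for i, t in enumerate(start_times, start=1):
        lines += [
            f"  TRACK {i:02d} AUDIO",
            f'    TITLE "Track {i:02d}"',      # default title by play order
            f'    PERFORMER "Artist {i:02d}"',  # default author name by play order
            f"    INDEX 01 {seconds_to_index(t)}",
        ]
    return "\n".join(lines)
```

A user could later overwrite the default `TITLE`/`PERFORMER` lines with custom values, which matches the interactive renaming flow described in the following embodiments.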

In some embodiments, generating the single song title of each single song in the analysis file includes:

naming each single song according to its playing order in the album.

In some embodiments, after generating the analysis file of the album file, the method further comprises:

displaying the single song titles in the analysis file in a user interface;

in response to a naming request for a target single song triggered on the user interface, acquiring a custom title of the target single song as the final title of the target single song; and

updating the single song title of the target single song in the analysis file with the final title.

In some embodiments, the description information of the single song includes the author name of the single song, and generating the author name of each single song in the analysis file includes:

for each single song, naming the single song author according to the single song's playing order in the album.

In some embodiments, after generating the analysis file of the album file, the method further comprises:

displaying the single song author names in the analysis file in a user interface;

in response to a naming request for the author name of a target single song triggered on the user interface, acquiring a custom author name of the target single song as the final author name of the target single song; and

updating the single song author name of the target single song in the analysis file with the final author name.

In a second aspect, the present application provides a terminal device, comprising:

a display, a processor, and a memory;

the display is used for displaying information;

the memory is configured to store instructions executable by the processor;

the processor is configured to execute the instructions to implement the method for generating an analysis file of an album file as described in any of the above first aspects.

In a third aspect, the present application provides a computer-readable storage medium, wherein when instructions in the computer-readable storage medium are executed by a terminal device, the terminal device is enabled to execute the method for generating an analysis file of an album file according to any one of the first aspect.

In a fourth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the method for generating an analysis file of an album file as described in any one of the above first aspects.

Based on the above method for generating an analysis file of an album file, the embodiment of the application provides an analysis file template with a fixed format, so that a user can create an analysis file more conveniently and quickly, improving the user experience. Meanwhile, the starting playing time of each single song can be obtained from the audio features between two single songs, so that the album file can be split and a complete album parsed into a plurality of single songs. As a result, the duration of each single song can be displayed when the album file is played; when skipping, the user switches a single song instead of the whole album; and the single songs in the album can be added to playlists, collected, and played in random order, so that random playback need not play the complete album and can jump directly to the next single song. This method of generating the analysis file lowers the format requirements on the analysis file, and the user can create the analysis file easily.

Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.

Drawings

In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments of the present application will be briefly described below. It is obvious that the drawings described below are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained based on these drawings without creative effort.

Fig. 1A is a schematic structural diagram of a terminal device according to an embodiment of the present application;

fig. 1B is a block diagram of a software structure of a terminal device according to an embodiment of the present application;

fig. 1C is a schematic view of an application scenario provided in the embodiment of the present application;

fig. 1D is a schematic diagram of an analysis file template format according to an embodiment of the present application;

fig. 2 is a schematic diagram illustrating a method for generating an analysis file of an album file according to an embodiment of the present application;

fig. 3 is a flowchart for determining whether an analysis file exists according to an embodiment of the present application;

FIG. 4 is a flowchart of extracting audio features according to an embodiment of the present application;

fig. 5 is a flowchart illustrating a process of determining that an audio magnitude meets a preset condition according to an embodiment of the present application;

fig. 6 is a flowchart of generating an analysis file according to an embodiment of the present application;

fig. 7 is a system flowchart of a method for generating a parse file of an album file according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. The embodiments described are some, but not all embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Also, in the description of the embodiments of the present application, "/" indicates an "or" relationship; for example, A/B may indicate A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, "a plurality" means two or more in the description of the embodiments of the present application.

The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of embodiments of the application, "a plurality" means two or more unless otherwise indicated.

With the development of computer technology, music enthusiasts can use corresponding music production tools to self-produce album files. Lossless albums in particular offer high sound quality and are popular with users. However, an album file cannot be played without an analysis file, so when a user self-produces a lossless album file, the user also needs to self-produce the related analysis file to be used together with it. In practical applications, however, it is often difficult for users to produce analysis files themselves, mainly because the format requirements on the analysis file are strict, and non-professionals often cannot meet them.

In view of this, the present application provides a method for generating an analysis file of an album file, and a terminal device. In the embodiment of the application, an album file containing a plurality of single songs is first acquired, and the starting playing time of each single song is extracted from the album file via the audio features between two adjacent single songs, so that the album file can be split and a complete album parsed into a plurality of single songs; an analysis file containing the single song information is then generated according to the format requirement of an analysis file template. This way of generating the analysis file lowers the format requirements on the analysis file, lets the user create the analysis file more conveniently and quickly, and improves the user experience.

After the inventive concept of the present application is introduced, the terminal device provided in the present application will be described below. Fig. 1A shows a schematic structural diagram of a terminal device 100. It should be understood that the terminal device 100 shown in fig. 1A is only an example, and the terminal device 100 may have more or less components than those shown in fig. 1A, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.

A block diagram of a hardware configuration of the terminal device 100 according to an exemplary embodiment is exemplarily shown in fig. 1A. As shown in fig. 1A, the terminal device 100 includes: a Radio Frequency (RF) circuit 110, a memory 120, a display unit 130, a camera 140, a sensor 150, an audio circuit 160, a Wireless Fidelity (Wi-Fi) module 170, a processor 180, a bluetooth module 181, and a power supply 190.

The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and may receive downlink data of a base station and then send the downlink data to the processor 180 for processing; the uplink data may be transmitted to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like.

The memory 120 may be used to store software programs and data. The processor 180 performs various functions of the terminal device 100 and data processing by executing software programs or data stored in the memory 120. The memory 120 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. The memory 120 stores an operating system that enables the terminal device 100 to operate. The memory 120 in the present application may store an operating system and various application programs, and may also store program code for executing the method for generating the analysis file of the album file according to the embodiment of the present application.

The display unit 130 may be used to receive input numeric or character information and generate signal input related to user settings and function control of the terminal device 100, and particularly, the display unit 130 may include a touch screen 131 disposed on the front surface of the terminal device 100 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.

The display unit 130 may also be used to display a Graphical User Interface (GUI) of information input by or provided to the user and various menus of the terminal apparatus 100. Specifically, the display unit 130 may include a display screen 132 disposed on the front surface of the terminal device 100. The display screen 132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 130 may be used to display an interface of album file information or single song information in the present application.

The touch screen 131 may cover the display screen 132, or the touch screen 131 and the display screen 132 may be integrated to implement the input and output functions of the terminal device 100, and after the integration, the touch screen may be referred to as a touch display screen for short. In the present application, the display unit 130 may display the application programs and the corresponding operation steps.

The camera 140 may be used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals which are then passed to the processor 180 for conversion into digital image signals.

The terminal device 100 may further comprise at least one sensor 150, such as an acceleration sensor 151, a distance sensor 152, a fingerprint sensor 153, a temperature sensor 154. The terminal device 100 may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.

The audio circuitry 160, speaker 161, and microphone 162 may provide an audio interface between the user and the terminal device 100. The audio circuit 160 may transmit the electrical signal converted from the received audio data to the speaker 161, which converts the electrical signal into a sound signal for output. The terminal device 100 may also be provided with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data; the audio data is then output to the RF circuit 110 to be transmitted to, for example, another terminal device, or output to the memory 120 for further processing. In the present application, the microphone 162 may collect audio data so that the user may create album files, and the speaker 161 may be used to play the single songs.

Wi-Fi belongs to a short-distance wireless transmission technology, and the terminal device 100 can help a user to send and receive e-mails, browse webpages, access streaming media and the like through the Wi-Fi module 170, and provides wireless broadband internet access for the user.

The processor 180 is a control center of the terminal device 100, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing software programs stored in the memory 120 and calling data stored in the memory 120. In some embodiments, processor 180 may include one or more processing units; the processor 180 may also integrate an application processor, which mainly handles operating systems, user interfaces, applications, etc., and a baseband processor, which mainly handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 180. In the present application, the processor 180 may run an operating system, an application program, a user interface display, a touch response, and the method for generating the analysis file of the album file according to the embodiment of the present application. Further, the processor 180 is coupled with the display unit 130.

And the bluetooth module 181 is configured to perform information interaction with other bluetooth devices having a bluetooth module through a bluetooth protocol. For example, the terminal device 100 may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) having a bluetooth module via the bluetooth module 181, so as to perform data interaction.

The terminal device 100 also includes a power supply 190 (such as a battery) for powering the various components. The power supply may be logically connected to the processor 180 through a power management system to manage charging, discharging, power consumption, etc. through the power management system. The terminal device 100 may further be configured with a power button for powering on and off the terminal device, and locking the screen.

Fig. 1B is a block diagram of a software configuration of the terminal device 100 according to the embodiment of the present application.

The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may be divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer, from top to bottom, respectively.

The application layer may include a series of application packages.

As shown in fig. 1B, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.

The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.

As shown in FIG. 1B, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.

The window manager is used for managing window programs. The window manager can obtain the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.

The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, dialed and answered calls, browsing history and bookmarks, phone books, short messages, etc.

The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying a picture.

The phone manager is used to provide the communication function of the terminal device 100. Such as management of call status (including on, off, etc.).

The resource manager provides various resources, such as localized strings, icons, pictures, layout files, video files, etc., to the application.

The notification manager allows the application to display notification information (e.g., message digest of short message, message content) in the status bar, can be used to convey notification-type messages, and can automatically disappear after a short dwell without user interaction. Such as a notification manager used to inform download completion, message alerts, etc. The notification manager may also be a notification that appears in the form of a chart or scroll bar text at the top status bar of the system, such as a notification of a background running application, or a notification that appears on the screen in the form of a dialog window. For example, text information is prompted in the status bar, a prompt tone is given, the terminal device vibrates, an indicator light flickers, and the like.

The Android Runtime comprises a core library and a virtual machine. The Android runtime is responsible for scheduling and managing an Android system.

The core library comprises two parts: one part comprises the functions that the Java language needs to call, and the other part is the core library of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine performs functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.

The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.

The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.

The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.

The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, composition, layer processing, and the like.

The 2D graphics engine is a drawing engine for 2D drawing.

The kernel layer is a layer between hardware and software. The kernel layer comprises at least a display driver, a camera driver, an audio driver, and a sensor driver.

The terminal device 100 in the embodiment of the present application may be an electronic device including, but not limited to, a mobile terminal, a tablet computer, and the like.

Some brief descriptions are given below of application scenarios to which the technical solution of the embodiment of the present application can be applied. It should be noted that the application scenarios described below are only used for describing the embodiment of the present application and are not limiting. In specific implementation, the technical solution provided by the embodiment of the present application can be flexibly applied according to actual needs.

Reference is made to fig. 1C, which is a schematic diagram of an application scenario provided in an embodiment of the present application. The application scenario comprises a collecting device 101, a terminal device 102, a network 103, and an intelligent terminal device 104. The collecting device 101 is a device for collecting music, such as an audio capture card, a recording pen, or a microphone. The terminal device 102 includes, but is not limited to, electronic devices such as a desktop computer, a mobile computer, a tablet computer, and a smart television. The intelligent terminal device 104 includes, but is not limited to, a digital conference desktop intelligent terminal, a video phone, a conference terminal, a personal computer with built-in multimedia functions, a palm computer, a smart phone, and the like.

The collecting device 101 and the terminal device 102 are connected through a wireless or wired network. The embodiment of the application is applicable to a scenario in which a user records music with the collecting device 101 to generate an album file, makes an analysis file through the terminal device 102, and uploads the analysis file to the network 103 for the remote intelligent terminal device 104 to play.

Of course, the method provided in the embodiment of the present application is not limited to the application scenario shown in fig. 1C, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. Functions that can be implemented by each device of the application scenario shown in fig. 1C will be described in the following method embodiment, and will not be described in detail herein.

In the method for generating the parse file of the album file according to the embodiment of the present application, the content of a preset parse file template needs to be replaced, so a parse file template with the correct format needs to be created in advance.

The parse file template format provided by the embodiment of the present application is described below with reference to fig. 1D, so as to be easily understood by those skilled in the art. Referring to fig. 1D:

FILE: refers to the file name of the album file.

TRACK (single song): each single song has a corresponding TRACK, which identifies the starting position of that single song's information in the parse file.

Album Title: the TITLE before the first TRACK indicates the album title.

Single-song Title: the TITLE in the line below each TRACK is the single-song title of that TRACK.

Album Performer (author): the PERFORMER before the first TRACK indicates the album author.

Single-song Performer: the PERFORMER in the line below each TRACK indicates the single-song author.

INDEX: each INDEX corresponds to a TRACK and indicates the starting playing time of that TRACK.
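Putting these fields together, a parse file in this template format might look like the following minimal sketch. The field order and the WAVE keyword are assumptions based on common CUE-sheet conventions, and the placeholder names are taken from the template example discussed later in this embodiment:

```python
# Hypothetical sketch of the parse-file (CUE-style) layout described above.
# Field order and the WAVE keyword are assumed, not taken from fig. 1D itself.
template_lines = [
    'PERFORMER "A"',            # album author, appearing before the first TRACK
    'TITLE "2007.B"',           # album title, appearing before the first TRACK
    'FILE "2007.B.flac" WAVE',  # file name of the album file
    '  TRACK 01 AUDIO',         # start of the first single song's information
    '    TITLE "B"',            # single-song title
    '    PERFORMER "B1"',       # single-song author
    '    INDEX 01 00:00:00',    # starting playing time of this TRACK
]
template = "\n".join(template_lines)
```

Each additional single song would repeat the TRACK block with its own TITLE, PERFORMER, and INDEX lines.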

In order to facilitate understanding of the method for generating the parsed file of the album file according to the embodiment of the present application, the following description is further provided with reference to the accompanying drawings.

Fig. 2 is a flowchart illustrating a method for generating an analysis file of an album file according to an embodiment of the present disclosure. The method is particularly suitable for lossless albums. As shown in fig. 2, the method comprises the following steps:

step 201: and acquiring an album file containing a plurality of single songs.

In some embodiments, it may be necessary to determine that the album file has no analysis file before generating one. If the album file already has an analysis file, no analysis file needs to be generated; if it does not, one needs to be generated. Fig. 3 is a schematic flow chart of determining whether an analysis file exists, which specifically includes the following steps:

First, an album file containing a plurality of single songs is obtained, and then, in step 301, music A in the album file is played. In step 302, the database information is consulted to obtain the complete path of music A. Then, in step 303, it is determined whether a file with the same name as music A and the suffix .cue exists in the path of music A. If so, the album file already has an analysis file, and in step 304 the operation ends. If not, the album file has no analysis file, and in step 305 an analysis file of the album file is generated according to the method provided by the embodiment of the present application.
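The existence check in steps 302 to 305 can be sketched as follows. This is a Python illustration only; the embodiment itself consults a database for the complete path, which is assumed here to be already known:

```python
import os

def has_parse_file(album_path: str) -> bool:
    """Return True if a .cue file with the same name as the album file
    already exists in the album file's directory (steps 302-303)."""
    base, _ = os.path.splitext(album_path)  # strip e.g. ".flac"
    return os.path.exists(base + ".cue")
```

For example, `has_parse_file("/music/2007.B.flac")` checks whether `/music/2007.B.cue` exists.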

Step 202: specified audio features are extracted from the album file.

The specified audio feature is used to describe the interval between two adjacent single songs. In one possible implementation, the audio feature may be a mute duration. Extracting the specified audio feature based on the mute duration may be implemented as: acquiring waveform data of the album file, and acquiring from the waveform data the bytes whose audio magnitude represents a mute state; if the number of bytes representing the mute state within a specified duration satisfies a preset condition, the audio feature is determined to be extracted. The specified duration refers to the song-switching time from the end of the previous piece of music to the start of the next, and can be set by the user or be a fixed value. The specified duration comprises n unit durations, where n is a positive integer greater than 1.

The manner of extracting the specified audio feature based on the unit durations within the mute duration may be implemented as the steps shown in fig. 4:

the album file is first played and, in step 401, the Visualizer (visualization tool) is opened. The Visualizer is then connected to the album file being played in step 402.

After the visualization tool is started, an executable instruction can be entered in the visualization tool interface to attach the album file being played to the visualization tool, so that the visualization tool is connected to the album file being played.

In step 403, waveform data of the album file is acquired.

In step 404, bytes representing the audio magnitude of the mute state are obtained from the waveform data.

In step 405, it is determined whether the number of bytes representing the mute state within the specified duration satisfies the preset condition. If so, in step 406, the specified audio feature is extracted; if not, step 404 continues to be executed until the album file finishes playing.

The preset condition comprises that n consecutive unit durations are all in a mute state and that the number of bytes representing the mute state within the specified duration exceeds a preset number threshold.

In some embodiments, determining whether the number of bytes representing the mute state within the specified duration satisfies the preset condition may be implemented by counting, for each unit duration, the ratio of bytes whose audio magnitude represents the mute state, and comparing the counted ratio with a preset ratio. The specific steps are shown in fig. 5:

In step 501, the percentage of bytes per unit duration whose audio magnitude represents the mute state is counted.

In step 502, the counted ratio is compared with a preset ratio. If the counted ratio exceeds the preset ratio, step 503 is executed and the current unit duration is determined to be in a mute state; if it does not, step 501 is executed again.

In step 504, it is determined whether n consecutive unit durations are all in a mute state. If so, in step 505, the preset condition is satisfied; if not, in step 506, the preset condition is not satisfied.
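The two-level check of fig. 5 can be sketched as follows. This is a toy Python model in which the waveform arrives as one list of magnitude bytes per unit duration; the magnitude threshold, the preset ratio, and n are illustrative values, not values from the embodiment:

```python
SILENT_MAGNITUDE = 2  # |byte| at or below this counts as mute (illustrative)
PRESET_RATIO = 0.9    # step 502: required share of mute bytes per unit duration
N_UNITS = 4           # n consecutive mute unit durations (the specified duration)

def second_is_mute(samples):
    """Steps 501-503: does the ratio of mute-magnitude bytes within one
    unit duration exceed the preset ratio?"""
    mute = sum(1 for s in samples if abs(s) <= SILENT_MAGNITUDE)
    return mute / len(samples) > PRESET_RATIO

def has_gap(seconds):
    """Step 504: are there n consecutive unit durations all in a mute state?"""
    run = 0
    for samples in seconds:
        run = run + 1 if second_is_mute(samples) else 0
        if run >= N_UNITS:
            return True
    return False
```

A gap between two single songs is detected only when the per-unit ratio test and the consecutive-duration test both pass, mirroring the two conditions stated above.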

In another embodiment, the audio segment to be examined may be preprocessed and subjected to a discrete Fourier transform to obtain a two-dimensional spectrogram signal, which is fed into a neural network to compute the audio feature to be generated. The generated audio feature is then compared with the specified audio feature, and if the similarity exceeds a specified threshold, the specified audio feature is determined to be extracted.
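The spectrogram step of this variant can be sketched as follows (Python with NumPy; the frame length and hop size are illustrative choices, and the neural-network comparison itself is omitted):

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Split the audio segment into overlapping frames and apply a DFT per
    frame, yielding the two-dimensional (time x frequency) spectrogram
    described above. frame_len and hop are illustrative parameters."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # One magnitude spectrum per frame -> a 2-D array.
    return np.abs(np.array([np.fft.rfft(f) for f in frames]))
```

The resulting 2-D array is what would then be fed to the neural network for feature computation.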

Therefore, through the above methods, the specified audio features can be extracted from the album file, so that single songs can be cut out of the album file according to the specified audio features.

Step 203: based on the specified audio characteristics, the start playing time of each single song is extracted from the album file.

The starting playing time of the first single song of the album file can be set to a default value, such as the starting playing time of the album file itself. Then, the starting playing time of each subsequent single song may be determined by acquiring the end time point of the specified audio feature on the playing time axis of the album file and recording that end time point as the starting playing time of the next single song. For example, if the 4 seconds between two single songs are in a mute state, the time point at the end of the 4th second is used as the starting playing time of the second single song. In fig. 1D, for example, INDEX 01 of the first single song TRACK01 is 00:00:00, and INDEX 01 of the second single song TRACK02 is 04:19:28.

Of course, in another embodiment, any time point in the time interval occupied by the specified audio feature may be taken as the starting playing time of the next single song; this is likewise applicable to the embodiment of the present application.
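The conversion from an extracted time point to the INDEX notation of fig. 1D can be sketched as follows (assuming, as in standard CUE sheets, that the third field counts frames at 75 per second; this frame convention is an assumption, not stated in the embodiment):

```python
def to_index(seconds: float) -> str:
    """Format a starting playing time as the mm:ss:ff INDEX notation used
    in the parse file (ff = frames, assumed here to be 1/75 s)."""
    total_frames = round(seconds * 75)
    mm, rest = divmod(total_frames, 75 * 60)
    ss, ff = divmod(rest, 75)
    return f"{mm:02d}:{ss:02d}:{ff:02d}"

def track_starts(album_start, silence_ends):
    """The first single song starts at the album start (a default value);
    each later single song starts where the preceding silent gap ends."""
    return [to_index(t) for t in [album_start] + list(silence_ends)]
```

For instance, a gap ending 259 + 28/75 seconds into the album yields the INDEX value 04:19:28 shown in fig. 1D.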

Therefore, in this way, the starting playing time of each single song can be extracted according to the specified audio features and the album file can be split, so that the duration of each single song in the album can be displayed during playback, and the user can switch to the next single song, instead of the whole album, when skipping songs.

Step 204: and generating an analysis file of the album file according to the format requirement of a preset analysis file template based on the file name of the album file and the initial playing time of each extracted single song.

In some embodiments, after the album file is played, the parsing file is generated according to the format requirement of the preset parsing file template, which may be implemented as the steps shown in fig. 6:

in step 601, the name of the preset parsing file template is modified to the file name of the album file.

In step 602, the description information of the album file in the parse file is generated based on the file name of the album file.

In step 603, the description information of each single song in the parsed file is generated based on the starting playing time of each single song.

In the embodiment of the present application, no order of execution is required for the above three steps: they may be executed in the order of steps 601 to 603, or simultaneously, or with step 602 first, then step 601, then step 603, and so on.

The description information of a single song comprises its title. The single-song title of each single song in the parse file may be assigned according to the playing order of the single songs in the album file. The description information of a single song also comprises the single-song author name, which may likewise be assigned according to the playing order of the single songs in the album file. The playing order is determined according to the starting playing time of each single song.

Therefore, in this way, a new analysis file can be generated according to the preset analysis file template, which solves the problem that the analysis file is difficult to generate.

For example, suppose the album file M contains 3 single songs, with the file name M of the album file, the album title M, the album author name M1, the single-song titles X, Y, Z, the single-song author names X1, Y1, Z1, and the single-song starting playing times 00:00:00, 04:22:18, and 08:25:34. The analysis file is then generated according to the template format shown in fig. 1D. The specific operations are: replace "B" in FILE "2007.B.flac" in the template format shown in fig. 1D with the file name M of the album file; replace "B" in TITLE "2007.B" with the album title M; and replace "A" in PERFORMER "A" with the album author name M1. Then, in the order of the single songs' starting playing times: replace "B", "B1", and "00:00:00" in TITLE "B", PERFORMER "B1", and INDEX 01 00:00:00 below TRACK01 with the single-song title X, the author name X1, and the starting playing time 00:00:00; replace "C", "C1", and "04:19:28" in TITLE "C", PERFORMER "C1", and INDEX 01 04:19:28 below TRACK02 with the single-song title Y, the author name Y1, and the starting playing time 04:22:18; and replace "D", "D1", and "08:29:40" in TITLE "D", PERFORMER "D1", and INDEX 01 08:29:40 below TRACK03 with the single-song title Z, the author name Z1, and the starting playing time 08:25:34.
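The substitution just described can be sketched programmatically. This Python illustration simply rebuilds the lines in the same layout rather than performing string replacement on a stored template, but the resulting content is the same:

```python
def build_parse_file(file_name, album_title, album_author, tracks):
    """Fill the template fields described above: album-level PERFORMER,
    TITLE, and FILE, then one TRACK block (TITLE, PERFORMER, INDEX) per
    single song. `tracks` is a list of (title, author, start_time) tuples
    in playing order."""
    lines = [
        f'PERFORMER "{album_author}"',
        f'TITLE "{album_title}"',
        f'FILE "{file_name}" WAVE',
    ]
    for i, (title, author, start) in enumerate(tracks, start=1):
        lines += [
            f'  TRACK {i:02d} AUDIO',
            f'    TITLE "{title}"',
            f'    PERFORMER "{author}"',
            f'    INDEX 01 {start}',
        ]
    return "\n".join(lines)

# The 3-single-song album M from the example above.
cue = build_parse_file(
    "M.flac", "M", "M1",
    [("X", "X1", "00:00:00"), ("Y", "Y1", "04:22:18"), ("Z", "Z1", "08:25:34")])
```

The file extension and the WAVE keyword are conventional CUE-sheet assumptions, not values specified by the embodiment.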

In addition, in order to distinguish the single songs in the analysis file so that the user can quickly find the desired one, the user may modify the title and author name of a single song at the corresponding position, which brings convenience to the user. In some embodiments, after the analysis file of the album file is generated, the single-song title and single-song author name in the analysis file may be presented in a user interface; in response to a naming request for a target single song and its author name triggered on the user interface, a custom title and a custom author name of the target single song are acquired as its final title and final author name; and the single-song title and single-song author name of the target single song in the analysis file are updated with the final title and final author name.

The method for generating an analysis file of an album file according to the embodiment of the present application is further described in detail with reference to fig. 7, and the specific steps are as follows:

in step 701, music within the album file is played.

In step 702, it is determined whether a parse file exists within the album file. If there is an analysis file in the album file, in step 703, playing a single song; if no parse file exists in the album file, go to step 704.

In step 704, the complete album file is played.

In step 705, the Visualizer is opened.

In step 706, the Visualizer is connected to the album file being played.

In step 707, a waveform listener is set to acquire waveform data of the album file.

In step 708, an audio magnitude array is obtained from the waveform data.

In step 709, data of each audio magnitude of the audio magnitude array is calculated.

Because the audio magnitude values contain errors, the audio magnitude array may contain other audio magnitude data besides the data representing the mute state; but if the other data are few, the overall mute state is not affected. Thus, in step 710, the bytes per second representing the mute state are obtained, and the percentage of such bytes per second is counted. Then, in step 711, the percentage is compared with a preset ratio. If it exceeds the preset ratio, the current second is in a mute state and step 712 is executed next; otherwise step 710 continues to be executed.

In step 712, it is determined whether the continuous duration of the mute state exceeds a set duration threshold. If so, step 713 is executed; otherwise, step 710 continues to be executed.

In step 713, the end time points A1 to An at which the set duration threshold is reached are recorded.

In step 714, it is determined whether the playing of the single song is finished. If not, steps 710 to 714 are repeatedly executed until playing finishes; if playing is finished, the user is prompted in step 715 whether to save. If the user does not save, the operation ends at step 716. If the user saves, then in step 717 the preset parse file template is renamed.

In step 718, the renamed parse file template is opened and the fields in the parse file template are saved as String types.

In step 719, A1 (through An) is written to the starting play position of TRACK01 (through TRACKn).

In step 720, the generated new String is written into the parse file.

In step 721, the parsed file is saved.
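Steps 717 through 721 can be sketched as follows. This is a Python illustration of renaming the template and writing the generated text; on the actual terminal device these operations would run against its own file system:

```python
import os

def save_parse_file(template_path, album_path, new_text):
    """Steps 717-721: rename the preset parse file template to the album
    file's name (with a .cue suffix) and write the generated contents
    into it. The .cue suffix follows the existence check described earlier."""
    cue_path = os.path.splitext(album_path)[0] + ".cue"
    os.replace(template_path, cue_path)  # step 717: rename the template
    with open(cue_path, "w", encoding="utf-8") as f:
        f.write(new_text)                # step 720: write the new String
    return cue_path                      # step 721: the saved parse file
```

After this, the saved .cue file sits next to the album file, so the check of fig. 3 finds it on the next playback.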

Based on the foregoing description, by extracting the mute duration between two single songs in the album file, the starting playing time of each single song is extracted and the album file is split. A new analysis file is generated from the preset analysis file template, so that the duration of each single song in the album file can be displayed during playback; the user can switch a single song, instead of the whole album, when skipping songs; single songs in the album can be added to playlists, favorites, and shuffle play; and during shuffle play the next single song can be played at random without playing the whole album, which brings convenience to the user. This finally solves the problem that making an analysis file for an album file is difficult.

The embodiments provided in the present application are only a few examples of the general concept of the present application, and do not limit the scope of the present application. Any other embodiments extended according to the scheme of the present application without inventive efforts will be within the scope of protection of the present application for a person skilled in the art.

As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.

These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
