Music genre classification method based on deep learning

Document No.: 1965016  Publication date: 2021-12-14

A music genre classification method based on deep learning, designed by 刘金良, 曹杰, 王昌辉, 申冬琴, 张佳禹, 靖慧, 马丽娜, 罗婕 on 2021-08-31.

Abstract: The invention provides a music genre classification method based on deep learning. The method first preprocesses a target audio to obtain its visual features and audio features; through 10-fold cross-validation, the feature data of the target audio is put into each candidate model in turn for training, and the model with the best generalization ability is selected; the optimal model is then retrained on all of the data and the optimal parameters are retained; finally, recorded audio or an original audio file is preprocessed and fed into the neural network using the optimal parameters for classification prediction, and the classifier gives the classification result.

1. A music genre classification method based on deep learning is characterized by comprising the following steps:

step S1, preprocessing a target audio to obtain visual features and audio features of the target audio;

step S2, through 10-fold cross-validation, putting the feature data of the target audio into each candidate model in turn for training, and selecting the model with the best generalization ability;

step S3, retraining the optimal model on all of the data and retaining the optimal parameters;

and step S4, preprocessing the recorded audio or original audio file, feeding the preprocessed audio into the neural network using the optimal parameters for classification prediction, and having the classifier give the final classification result.

2. The music genre classification method based on deep learning of claim 1, wherein the step S1 specifically comprises the following steps:

step S1.1, cutting the target audio into segments about 3 s in length;

step S1.2, applying a pre-emphasis filter to each audio segment to amplify the high frequencies;

step S1.3, after pre-emphasis, dividing the audio segment into short-time frames and applying a window function to each frame;

step S1.4, performing an N-point fast Fourier transform (FFT) on each frame to calculate the frequency spectrum, and calculating the power spectrum;

step S1.5, applying a triangular filterbank to the power spectrum to extract frequency bands on the Mel scale, finally forming a visual feature data tensor;

step S1.6, extracting a plurality of audio features of different audio dimensions from the target audio segments obtained in step S1.1, retaining the mean and variance of each feature, and finally forming an audio feature data tensor.

3. The music genre classification method based on deep learning of claim 1, wherein the step S2 specifically comprises the following steps:

step S2.1, dividing the audio feature data and the visual feature data of the target audio into 10 disjoint equal parts, taking one part for testing and the other 9 parts for training in turn, and taking the average error as the final evaluation index;

step S2.2, applying step S2.1 to each candidate model, and selecting the model with the smallest generalization error as the final model.

4. The music genre classification method based on deep learning of claim 1, wherein the step S4 specifically comprises the following steps:

step S4.1, preprocessing the target audio to be predicted as in step S1, and then feeding it into the neural network using the optimal parameters for classification prediction;

step S4.2, the classifier of the neural network gives the probability of each genre; after passing through the network model, each segment votes for one genre, and the genre with the most votes is selected; when the probabilities of the top 3 genres are all low, the method provides the top-3 probabilities to the user and judges the result to be poor.

5. The method of claim 3, wherein the optimal model with the least generalization error comprises a visual feature processing module, an audio feature processing module and a classifier.

Technical Field

The invention relates to the technical field of audio information retrieval, and mainly discloses a music genre classification method based on deep learning.

Background

With the rise of music streaming services, tens of thousands of digital audio files are uploaded to the internet. A key feature of these services is playlists, which are usually grouped by genre. Different music genres do not have strict boundaries, but music of the same genre shares similar characteristics, and by analyzing these characteristics a human can label the genre of many musical works.

In general, existing methods focus only on the visual features of the target audio and ignore the audio information of the music itself, which is unreasonable for the music genre classification task. Meanwhile, existing methods provide no solution for the case in which the classification probabilities are all low.

Disclosure of Invention

Purpose of the invention: the invention provides a music genre classification method based on deep learning, which effectively predicts the genre of a target audio by making full use of its visual features and audio features, and provides corresponding handling for different classification-probability situations.

In order to achieve this purpose, the technical solution adopted by the invention is as follows:

a music genre classification method based on deep learning comprises the following steps:

step S1, preprocessing a target audio to obtain visual features and audio features of the target audio;

step S2, through 10-fold cross-validation, putting the feature data of the target audio into each candidate model in turn for training, and selecting the model with the best generalization ability;

step S3, retraining the optimal model on all of the data and retaining the optimal parameters;

and step S4, preprocessing the recorded audio or original audio file, feeding the preprocessed audio into the neural network using the optimal parameters for classification prediction, and having the classifier give the final classification result.

Further, in step S1, the specific steps of preprocessing the target audio to obtain its visual features and audio features are as follows:

step S1.1, in order to increase the amount of data, cutting the target audio into segments about 3 s in length;

step S1.2, applying a pre-emphasis filter to each audio segment to amplify the high frequencies;

step S1.3, after pre-emphasis, dividing the audio segment into short-time frames, and applying a window function, such as a Hamming window, to each frame;

step S1.4, performing an N-point fast Fourier transform (FFT) on each frame to calculate the spectrum, also called the short-time Fourier transform (STFT), where N is typically 512 or 256, and calculating the power spectrum (periodogram) from the corresponding formula;

step S1.5, applying a triangular filterbank, typically 40 filters, to the power spectrum to extract frequency bands on the Mel scale, finally forming a visual feature data tensor of shape (None, 128, 130, 1);

step S1.6, extracting a plurality of audio features of different audio dimensions from the target audio segments obtained in step S1.1, retaining the mean and variance of each feature, and finally forming an audio feature data tensor of shape (None, m).

Further, in step S2, through 10-fold cross-validation, the feature data of the target audio is put into each candidate model in turn for training; each candidate model takes both the visual features and the audio features as input, thereby taking into account both the visual features and the audio information of the music itself. The specific steps for selecting the model with the best generalization ability are as follows:

step S2.1, dividing the audio feature data and the visual feature data of the target audio into 10 disjoint equal parts, taking one part for testing and the other 9 parts for training in turn, and taking the average error as the final evaluation index;

step S2.2, applying step S2.1 to each candidate model, and selecting the model with the smallest generalization error as the final model.

Further, in step S4, the recorded audio or original audio file is preprocessed and then fed into the neural network using the optimal parameters for classification prediction, and the classifier gives the final classification result. The specific steps are as follows:

step S4.1, preprocessing the target audio to be predicted as in step S1, and then feeding it into the neural network using the optimal parameters for classification prediction;

step S4.2, the classifier of the neural network gives the probability of each genre; after passing through the network model, each segment "votes" for one genre (generally, the genre with the highest classification probability), and the genre with the most votes is selected. When the probabilities of the top 3 genres are all low, the method provides the top-3 probabilities to the user and judges the result to be poor.

Beneficial effects:

The method makes full use of the visual features and audio features of the target audio to effectively predict its genre, provides corresponding handling for different classification-probability situations, and improves the user experience.

Drawings

FIG. 1 is a general flowchart of a music genre classification method based on deep learning according to the present invention;

FIG. 2 is a diagram of a deep neural network architecture provided by the present invention;

FIG. 3 is a result graph for the case of low classification confidence provided by the present invention;

FIG. 4 is a confusion matrix obtained on the GTZAN data set provided by the present invention;

FIG. 5 is a diagram of a triangular filterbank provided by the present invention;

FIG. 6 is a Mel spectrogram diagram of various genres provided by the present invention.

Detailed Description

The invention will be further described by the following specific embodiments provided in conjunction with the accompanying drawings.

With reference to FIG. 1, a music genre classification method based on deep learning specifically comprises the following steps:

Step S1: first, the target audio is loaded as source data and divided into windows of approximately 3 seconds. Specifically, 66149 sample points are retained for every three-second segment, and any short trailing segment is discarded. This step greatly increases the amount of data and simplifies subsequent transformations (such as the Mel spectrogram). The Mel spectrograms of different genres shown in FIG. 6 differ noticeably in texture, so the deep learning model can learn the distinguishing features and classify accordingly. In addition, the data order is shuffled before the data set is split so that the model can better learn the characteristics of each genre. The method uses two different kinds of features: visual features and audio features. A brief code sketch of the segmentation step is given below, followed by the specific steps for extracting the visual features.
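A minimal sketch of the 3-second segmentation, assuming librosa and a 22050 Hz sample rate (66149 samples is roughly 3 s at that rate); the function name is hypothetical:

```python
import librosa

SEGMENT_LEN = 66149  # samples kept per ~3-second window (about 3 s at 22050 Hz)

def split_into_segments(path, sr=22050):
    """Load the target audio and cut it into fixed-length ~3 s segments."""
    y, _ = librosa.load(path, sr=sr)
    n = len(y) // SEGMENT_LEN          # the short trailing remainder is discarded
    return [y[i * SEGMENT_LEN:(i + 1) * SEGMENT_LEN] for i in range(n)]
```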

The first step is to apply a pre-emphasis filter to the signal to amplify the high frequencies. The pre-emphasis filter is useful in several respects: (1) it balances the spectrum, since high frequencies usually have smaller magnitudes than low frequencies; (2) it avoids numerical problems during the Fourier transform; (3) it may also improve the signal-to-noise ratio (SNR). The pre-emphasis filter can be applied to the signal x using a first-order filter given by the following equation:

y(t)=x(t)-αx(t-1)

Typical values for the filter coefficient α are 0.95 or 0.97.
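A minimal NumPy sketch of this first-order filter (the function name is illustrative):

```python
import numpy as np

def pre_emphasis(signal, alpha=0.97):
    # y(t) = x(t) - alpha * x(t-1); the first sample is passed through unchanged.
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])
```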

After pre-emphasis, we need to divide the signal into short-time frames. The rationale for this step is that the frequencies in a signal vary over time, so in most cases it makes no sense to Fourier-transform the entire signal, as we would lose the frequency contours of the signal over time. To avoid this, we can safely assume that the frequencies in the signal are stationary over a short time. Thus, by performing a Fourier transform over each short-time frame and concatenating adjacent frames, we obtain a good approximation of the frequency contours of the signal. Typical frame sizes range from 20 ms to 40 ms, with a 50% (+/-10%) overlap between frames. The frame size is 23.22 ms in this embodiment.

After slicing the signal into frames, we apply a window function, e.g., a Hamming window, to each frame. The Hamming window has the following form, where N is the window length:

w(n) = 0.54 - 0.46 cos(2πn / (N - 1)), 0 ≤ n ≤ N - 1
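A minimal sketch of the framing and windowing steps; the 23.22 ms frame size and 50% overlap follow the embodiment, while the function name and the handling of the last partial frame are assumptions:

```python
import numpy as np

def frame_and_window(signal, sr=22050, frame_ms=23.22, overlap=0.5):
    """Split a signal into overlapping short-time frames and apply a Hamming window."""
    frame_len = int(round(sr * frame_ms / 1000.0))   # 512 samples at 22050 Hz
    hop = int(round(frame_len * (1.0 - overlap)))
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    frames = np.stack([signal[i * hop:i * hop + frame_len] for i in range(n_frames)])
    # np.hamming implements w(n) = 0.54 - 0.46 cos(2*pi*n / (N - 1)).
    return frames * np.hamming(frame_len)
```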

An N-point fast Fourier transform (FFT) is performed on each frame to compute the spectrum, also known as the short-time Fourier transform (STFT), where N is typically 512 or 256, and the power spectrum (periodogram) is also computed. Finally, a triangular filterbank, typically 40 filters, is applied to the power spectrum on the Mel scale, as shown in FIG. 5, to extract the frequency bands, ultimately forming the visual feature data tensor of shape (None, 128, 130, 1).
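A compact sketch of this stage using librosa, which computes the STFT power spectrogram and applies the triangular Mel filterbank internally; the hop length is an assumption chosen so that a 66149-sample segment yields about 130 frames, and n_mels = 128 is chosen to match the stated tensor shape:

```python
import librosa
import numpy as np

def mel_features(segment, sr=22050, n_fft=512, hop_length=512, n_mels=128):
    # Power spectrogram -> triangular Mel filterbank, handled inside librosa.
    mel = librosa.feature.melspectrogram(y=segment, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)    # log scale for better contrast
    return mel_db[..., np.newaxis]                   # shape (128, ~130, 1)
```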

For the extraction of the audio features, the specific steps are as follows:

Audio features of a plurality of different audio dimensions, such as timbre texture features (chroma, spectral centroid, spectral roll-off, etc.), are extracted from the target audio segments obtained in step S1.1. The mean and variance of each feature are retained, resulting in an audio feature data tensor of shape (None, m), where m is set to 55 in this embodiment.
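A hedged sketch of this extraction with librosa; the listed features are illustrative examples only, and the exact feature set that yields the 55-dimensional vector of the embodiment is not reproduced here:

```python
import librosa
import numpy as np

def audio_feature_vector(segment, sr=22050):
    # Mean and variance of a few example timbre-texture features.
    feats = [
        librosa.feature.chroma_stft(y=segment, sr=sr),
        librosa.feature.spectral_centroid(y=segment, sr=sr),
        librosa.feature.spectral_rolloff(y=segment, sr=sr),
        librosa.feature.zero_crossing_rate(y=segment),
        librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=13),
    ]
    stats = [np.concatenate([f.mean(axis=1), f.var(axis=1)]) for f in feats]
    return np.concatenate(stats)
```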

Thus, we obtain one (None, 128, 130, 1) visual feature data tensor and one (None, 55) audio feature data tensor. Next, the two feature tensors are divided into 10 disjoint equal parts; one part is taken in turn as the test set and the other nine parts as the training set, giving 10 different splits of the data with which to evaluate the generalization ability of a model. The errors obtained over the 10 splits are averaged and used as the measure of the model's generalization ability, and the best model is selected for the next stage. This procedure is called 10-fold cross-validation. Its main purpose is to select the number of layers of the model, the activation function of the neurons, and the number of neurons per layer (the so-called hyper-parameters). The hyper-parameters are therefore continuously adjusted according to the resulting average error in order to obtain the best current model structure. A cross-validation sketch is given below.
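A minimal sketch of the 10-fold cross-validation loop; build_model is assumed to return a compiled Keras model that takes the visual and audio tensors as its two inputs and reports accuracy, and the epoch and batch-size values are placeholders:

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate(build_model, X_visual, X_audio, y, n_splits=10):
    """Mean test error over the 10 folds, used to compare candidate models."""
    errors = []
    for train_idx, test_idx in KFold(n_splits=n_splits, shuffle=True).split(y):
        model = build_model()
        model.fit([X_visual[train_idx], X_audio[train_idx]], y[train_idx],
                  epochs=50, batch_size=32, verbose=0)
        _, acc = model.evaluate([X_visual[test_idx], X_audio[test_idx]],
                                y[test_idx], verbose=0)
        errors.append(1.0 - acc)                     # error of this fold
    return float(np.mean(errors))
```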

In this embodiment, the model used comprises an audio feature processing module (AFE), a visual feature processing module (VFE) and a classifier. With reference to FIG. 2, the specific structure is as follows:

To better process the Mel spectrogram of the audio, the VFE module is built around parallel convolutional layers and comprises 3 two-dimensional convolutional layers, 1 parallel convolutional layer (using max pooling and average pooling, respectively), and a 2-layer recurrent neural network (RNN). Instead of using only one convolutional path and performing a single pooling operation on a fourth convolutional layer, this embodiment uses parallel convolutional layers with different pooling operations. The main advantage of the parallel convolutional layer is that it provides more statistical information to the subsequent layers and further improves the discriminative ability of the model. Except for the first convolutional layer, which has 64 kernels of equal size, each of the other convolutional layers has 128 kernels. Each convolution kernel has a size of 3 × 3 and a stride of 1 and forms a mapping with all the underlying features: the kernel is overlaid at the corresponding location of the input, each value in the kernel is multiplied by the value of the corresponding pixel in the input, and the sum of these products gives the value of the target pixel in the output; this operation is repeated for all positions of the input. After each convolution, batch normalization (BN) and a rectified linear unit (ReLU) are applied. We also add a max pooling operation (on only one branch for the parallel convolutional layer) to reduce the number of parameters; in addition, it helps to widen the receptive field and introduce non-linearity. The pooling filter sizes are 2 × 2 with stride 2 and 3 × 3 with stride 3 for the first and second pooling operations, respectively, and 4 × 4 with stride 4 for the other pooling operations. The role of the convolutional and pooling layers is to map the raw data into the hidden-layer feature space. The VFE module uses a 2-layer RNN of gated recurrent units (GRUs) to summarize the temporal patterns of the three two-dimensional convolutional layers and the parallel convolutional layer. However, not all outputs of the parallel convolutional layer are fed into the RNN; only the output of the max-pooling branch is passed to the RNNs, considering that humans may pay more attention to prominent rhythms over a short time when recognizing a music genre. Finally, a vector of length 160 is output, consisting of the output of the GRU and the output of the average-pooling branch of the parallel convolution. Instead of simply adding the two outputs, they are concatenated to avoid losing information; in this way, more features with low-level information are retained. A sketch of one possible realization of this branch follows.
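A Keras sketch of the VFE branch under the description above; the GRU widths, the reshaping of the max-pool branch into a sequence, and the use of global average pooling on the other branch are assumptions chosen so that the output length works out to 160:

```python
from tensorflow.keras import Model, layers

def build_vfe(input_shape=(128, 130, 1)):
    """Sketch of the visual branch: 3 conv blocks, a parallel-pooling conv block, 2 GRUs."""
    inp = layers.Input(shape=input_shape)
    x = inp
    # Three stacked 2-D convolutions (64 kernels first, then 128), each followed by
    # BatchNorm + ReLU and a max-pool of 2x2/s2, 3x3/s3 and 4x4/s4 respectively.
    for filters, pool in [(64, 2), (128, 3), (128, 4)]:
        x = layers.Conv2D(filters, 3, padding='same')(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.MaxPooling2D(pool_size=pool, strides=pool)(x)
    # Parallel convolution block: one shared conv, then max- and average-pool branches.
    p = layers.Conv2D(128, 3, padding='same')(x)
    p = layers.BatchNormalization()(p)
    p = layers.ReLU()(p)
    max_branch = layers.MaxPooling2D(pool_size=2, strides=2)(p)
    avg_branch = layers.GlobalAveragePooling2D()(p)      # 128-d summary branch
    # Only the max-pool branch feeds the RNN: treat time as the sequence axis and
    # fold frequency x channels into the feature axis.
    f, t, c = (int(d) for d in max_branch.shape[1:])
    seq = layers.Permute((2, 1, 3))(max_branch)
    seq = layers.Reshape((t, f * c))(seq)
    seq = layers.GRU(64, return_sequences=True)(seq)
    seq = layers.GRU(32)(seq)
    # Concatenating 32 (GRU) + 128 (average-pool branch) gives the 160-d output.
    out = layers.Concatenate()([seq, avg_branch])
    return Model(inp, out, name='vfe')
```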

The AFE module consists of five dense layers with 1024, 512, 256, 128 and 64 units, respectively. To address the over-fitting observed in the experiments, a Dropout layer with rate 0.4 is added after each BN layer. Finally, the AFE module outputs a vector of length 64.
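A corresponding sketch of the AFE branch; the ReLU activation between BN and Dropout is an assumption, since the text specifies only the layer widths, BN and Dropout:

```python
from tensorflow.keras import Model, layers

def build_afe(input_dim=55):
    """Sketch of the audio branch: five dense layers with Dropout(0.4) after each BN."""
    inp = layers.Input(shape=(input_dim,))
    x = inp
    for units in (1024, 512, 256, 128, 64):
        x = layers.Dense(units)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.Dropout(0.4)(x)
    return Model(inp, x, name='afe')   # 64-d output vector
```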

The VFE module, the AFE module and the classifier constitute the entire network model. The outputs of the two modules are concatenated to form a feature vector of length 224. The fully connected (FC) layer typically plays the role of a "classifier" in a neural network, but to reduce the number of parameters only one FC layer with a SoftMax function is used for classification here. Compared with traditional multi-layer fully connected classifiers, the correspondence between feature maps and classes is easier to interpret and less prone to over-fitting. Since the last layer uses the SoftMax function, the classification probability of each genre is obtained.
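Putting the two branches together with the single SoftMax classifier, as a sketch that reuses build_vfe and build_afe from above; the optimizer and loss are assumptions, since they are not stated in the text:

```python
from tensorflow.keras import Model, layers

def build_full_model(n_genres=10):
    """Concatenate the 160-d VFE output and the 64-d AFE output (224 features)
    and classify with one fully connected SoftMax layer."""
    vfe, afe = build_vfe(), build_afe()
    merged = layers.Concatenate()([vfe.output, afe.output])
    out = layers.Dense(n_genres, activation='softmax')(merged)
    model = Model(inputs=[vfe.input, afe.input], outputs=out)
    # Training configuration below is a placeholder, not specified by the embodiment.
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```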

10-fold cross-validation selects the model with the smallest generalization error as the final model, which is then retrained on the entire training set to obtain the optimal model, and the model parameters are saved for genre classification of the target audio.

When classifying a target audio in practice, the preprocessing of step S1 is performed on the recorded audio or original audio file to obtain the (None, 128, 130, 1) visual feature data tensor and the (None, 55) audio feature data tensor, which are then fed into the model to obtain the classification probability of each genre.

Recall that the target audio has been segmented into several consecutive segments. We therefore use a voting system: after passing through the network model, each segment "votes" for one genre (generally, the genre with the highest classification probability), and we select the genre that receives the most votes, which improves the classification accuracy.
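A minimal sketch of the majority vote over the per-segment SoftMax outputs (the function name is illustrative):

```python
import numpy as np

def majority_vote(segment_probs):
    # segment_probs: (n_segments, n_genres) array of SoftMax outputs; each segment
    # votes for its highest-probability genre and the most-voted genre wins.
    votes = np.argmax(segment_probs, axis=1)
    return int(np.bincount(votes, minlength=segment_probs.shape[1]).argmax())
```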

The last layer of the classifier we constructed is the SoftMax layer. This means that it does not directly output the detected class but rather a probability for each class; this is what we call the classification confidence. For example, as shown in FIG. 3, we can reject votes from segments with low classification confidence, and if there is no clear winner we reject the vote altogether. If no genre obtains more than a certain score (70%), the song is judged to be poorly classified, and only the top-3 classification results are presented for the user to choose from, thus avoiding mislabeling the song and allowing it to be further classified with the user's feedback.
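One possible reading of this decision rule, as a sketch: the 70% threshold is treated as the winning genre's share of the segment votes, and the averaged top-3 probabilities are reported otherwise; both choices are interpretations rather than details stated in the text:

```python
import numpy as np

def decide(segment_probs, genres, threshold=0.70):
    """Return a single genre label when the vote is decisive, else the top-3 candidates."""
    votes = np.argmax(segment_probs, axis=1)
    counts = np.bincount(votes, minlength=len(genres))
    winner = int(counts.argmax())
    if counts[winner] / len(votes) >= threshold:
        return genres[winner]                       # confident single label
    mean_probs = segment_probs.mean(axis=0)         # otherwise report top 3 to the user
    top3 = np.argsort(mean_probs)[::-1][:3]
    return [(genres[int(i)], float(mean_probs[i])) for i in top3]
```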

FIG. 4 shows the confusion matrix obtained on the GTZAN data set in this embodiment. In machine learning, a confusion matrix (also known as an error matrix) is a matrix used to visualize the performance of an algorithm. Each column represents the predicted class and each row represents the actual class. All correct predictions lie on the diagonal, so it is easy to see visually where the errors occur, since they all lie off the diagonal. The confusion matrix therefore supports richer analysis than accuracy alone.

The above description covers only the preferred embodiments of the present invention. It should be noted that various modifications and adaptations will be apparent to those skilled in the art without departing from the principles of the invention, and these are intended to fall within the scope of the invention.
