Sound event detection and positioning method based on deep learning

Publication No. 1353108, published 2020-07-24

Abstract: This invention, "Sound event detection and positioning method based on deep learning," was designed by 齐子禛, 黄青华, 鲁乃达 and 房伟伦 on 2020-03-16. The method comprises the following steps. Step one: partition the data set. Step two: preprocessing, i.e. perform feature extraction on the data set of sound signals to obtain the Log-Mel spectrogram and GCC-PHAT. Step three: construct a deep learning model, i.e. drawing on the ResNet framework, build a network architecture combining ResNet and an RNN, with pooling, regularization and normalization modules interposed between layers to optimize feature extraction and increase nonlinearity. Step four: two-step training, i.e. first train the SED task to obtain the best model and feed its training result as a feature into the training of the DOA task; then train the DOA task to obtain the final best model. The invention first extracts features suited to this task, improving robustness to reverberation, and proposes a new framework structure to solve the problem of accuracy decreasing as the network deepens, ultimately improving prediction accuracy.

1. A sound event detection and positioning method based on deep learning is characterized by specifically comprising the following steps:

step one, dividing the data set, namely dividing the data set into a training set, a validation set and a test set according to a fixed proportion;

step two, preprocessing, namely performing feature extraction on the data set of sound signals to obtain a Log-Mel spectrogram suitable for SED training and GCC-PHAT, which is fast to compute and offers some robustness to reverberation;

step three, constructing a deep learning model, namely, drawing on the ResNet residual network framework, widely used in computer vision to counter the accuracy degradation caused by deepening the layer count, constructing a network architecture combining ResNet and an RNN, with pooling, regularization and normalization modules interposed between layers to optimize feature extraction and increase nonlinearity;

step four: two-step training, namely first training the SED task to obtain the best model and feeding its training result as a feature into the training of the DOA task; the DOA task is then trained to obtain the final best training model.

Technical Field

The invention relates to a sound event detection and positioning method based on deep learning, applicable to robotics, natural science, environmental monitoring, navigation and other technical fields.

Background

In recent years, with the development of digital signal processing and neural network technology, sound localization technology has advanced considerably. For example, Soumitro et al. proposed a single-source DOA (direction of arrival) estimation method based on a CNN (convolutional neural network): a short-time Fourier transform is applied to the microphone signals, the phase component is used as the input of the CNN, and posterior probabilities are obtained at the output through three convolutional layers, two fully connected layers and a softmax activation function. Experiments show that the method can outperform the steered response power with phase transform (SRP-PHAT) in noisy and reverberant acoustic environments. However, it is not suitable for multi-source environments, and the estimated angle of the sound source cannot be localized in space. To accommodate a multi-source environment, the authors also proposed an improved approach that solves multi-source DOA estimation over multiple time frames on a mixed-time structured data set, and verified that M-1 convolutional layers are required for optimal performance of DOA estimation with M microphones. This network structure adapts to a wide range of noise environments, but its estimation performance is poor in dynamic acoustic environments, and its computational complexity grows as the number of microphones increases.

Sound event localization and detection (SELD) is a joint task of detecting each active sound event and estimating its spatial location. In 2017, Sharath Adavanne et al. proposed using a combination of an RNN (recurrent neural network) and a CNN, the CRNN (convolutional recurrent neural network), for DOA estimation: multi-channel audio is taken as input, spectrograms of all channels are first extracted, and the CRNN then sequentially maps the phase and magnitude of the spectrogram to two outputs. Later, Sharath Adavanne et al. combined SED (sound event detection) with DOA estimation for sound localization under the CRNN network: the first output is sound event detection, a multi-label multi-class task, and the other output is DOA estimation, localized in 3D Cartesian coordinates with the microphone as origin. This is a regression-based localization method; the recall rate is improved, but the error rate is clearly higher than that of classification-based DOA estimation.

On the basis of the CRNN, Yin C et al. changed the framework and the input: the task of training SED and DOA simultaneously was turned into a two-step task of training SED first and then DOA, avoiding the mutual interference of the two loss values during training; the SED training result is fed into the DOA training as a mask, and the network structure is deepened at the same time. In addition, the extracted features were changed from the original magnitude-phase spectrogram to the Log-Mel spectrogram and GCC-PHAT (generalized cross-correlation with phase transform), which are better suited to network training. This greatly improved on the earlier Sharath Adavanne network, but it was found that the results became unstable and the accuracy decreased when the network was deepened further. After Yin C et al. proposed two-step training, other researchers also adopted the idea: Kyoungjin Noh et al. proposed a three-stage training procedure that adds an SAD (sound activity detection) stage and stacked network layers to the SED and DOA training.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a sound event detection and positioning method based on deep learning. It addresses two problems of existing deep learning models for sound event detection and localization: poor robustness to reverberation, and the accuracy drop caused by deepening the network. The method first performs the SED part, detecting the onset and offset of sound events and associating a text label with each detected event; it then performs the DOA training part, computing the error of the localized sound source position. The method further reduces the SED error rate and improves the DOA estimation accuracy.

In order to achieve the above object, the idea of the present invention is:

firstly, a data set containing sound signals is divided into a training set, a validation set and a test set; preprocessing is then carried out to extract features suited to this task; next, a neural network structure suited to this task is constructed and trained; finally, the model with the lowest SED and DOA error rates is obtained through training.

According to the inventive concept, the technical scheme adopted by the invention is as follows:

a sound event detection and positioning method based on deep learning specifically comprises the following steps:

step one, dividing the data set, namely dividing the data set into a training set, a validation set and a test set according to a fixed proportion;

step two, preprocessing, namely performing feature extraction on the data set of sound signals to obtain a Log-Mel spectrogram suitable for SED training and GCC-PHAT, which is fast to compute and offers some robustness to reverberation;

step three, constructing a deep learning model, namely, drawing on the ResNet (residual network) framework, widely used in computer vision to counter the accuracy degradation caused by deepening the layer count, constructing a network architecture combining ResNet and an RNN (recurrent neural network), with pooling, regularization and normalization modules interposed between layers to optimize feature extraction and increase nonlinearity;

step four: two-step training, namely first training the SED task to obtain the best model and feeding its training result as a feature into the training of the DOA task; the DOA task is then trained to obtain the final best training model.

Compared with the prior art, the invention has the following outstanding advantages and substantive features:

the method adopts a preprocessing technique to extract features suited to this task, which improves robustness to reverberation, and proposes a neural network framework combining ResNet and RNN, which solves the accuracy drop caused by deepening the network and ultimately improves the prediction accuracy.

Drawings

FIG. 1 is a flowchart of a deep learning based method for detecting and locating sound events according to the present invention.

FIG. 2 is a diagram of a neural network framework according to the present invention.

Fig. 3 is a detailed schematic diagram of the ResNet layer in the neural network of the present invention.

Detailed Description

For a better understanding of the technical solution of the present invention, the following detailed description is given with reference to the accompanying drawings:

referring to the flow of the method in fig. 1, the invention provides a sound event detection and positioning method based on deep learning that uses two-step training: first, the SED (sound event detection) part is performed, detecting the onset and offset of sound events and, to keep complexity low, associating a text label with each detected event; then the DOA training part is performed, computing the error of the localized sound source position. The method further reduces the SED error rate and improves the DOA estimation accuracy. The specific implementation steps are as follows:

step S1: segmenting the data set; dividing a data set into a training set, a verification set and a test set, and dividing the data set according to a certain proportion, wherein the method specifically comprises the following steps:

the data set consists of four cross validation splits, part 1, 2, 3, 4 respectively. The first group is training set using parts 3 and 4, validation set using part 2, and test set using part 1; the second group uses parts 4 and 1 in the training set, uses part 3 in the verification set, and uses part 2 in the test set; the third group uses parts 1 and 2 in the training set, uses part 4 in the verification set, and uses part 3 in the test set; the fourth group is that the training set uses parts 2 and 3, the validation set uses part 1, and the test set uses part 4. Overfitting in the training process can be reduced through cross validation, and effective information as much as possible can be obtained in limited data.

Step S2: preprocessing. Feature extraction is performed on the data set of sound signals to obtain the Log-Mel spectrogram, suitable for SED training, and GCC-PHAT (generalized cross-correlation with phase transform), which is fast to compute and offers some robustness to reverberation. The method is as follows:

firstly, an STFT (short-time Fourier transform) is applied to obtain the spectral information of each group of signals; the Log-Mel spectrogram of the current channel signal is then obtained by applying a Mel filter bank and taking the logarithm. The Mel spectrogram maps the physical spectrum onto the frequency scale perceived by the human ear, using the conversion formula:

Mel(f) = 2595 log10(1 + f/700)    (1)

where f is the frequency.
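Equation (1), with the conventional base-10 logarithm, is a one-line conversion; a minimal sketch:

```python
import math

def hz_to_mel(f):
    """Eq. (1): Mel(f) = 2595 * log10(1 + f / 700), mapping frequency in Hz to the Mel scale."""
    return 2595.0 * math.log10(1.0 + f / 700.0)
```

By construction the scale is roughly linear below 1000 Hz (1000 Hz maps to about 1000 Mel) and compresses higher frequencies, mirroring human pitch perception.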

The GCC-PHAT is obtained by computing the cross-power spectrum of two signals, multiplying by the PHAT weighting function, and applying the inverse Fourier transform:

GCC-PHATij(τ, t) = IFFT[ Xi(f,t)[Xj(f,t)]* / |Xi(f,t)[Xj(f,t)]*| ]    (2)

where IFFT denotes the inverse Fourier transform, which takes the signal from the frequency domain back to the time domain; Xi(f,t) is the short-time Fourier transform of the i-th microphone signal; and [Xj(f,t)]* is the complex conjugate of Xj(f,t).
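The GCC-PHAT computation described above (cross-power spectrum, PHAT whitening, inverse FFT) can be sketched with NumPy; the frame length and the small epsilon are illustrative choices, not values from the original method:

```python
import numpy as np

def gcc_phat(x_i, x_j, n_fft=512):
    """Cross-power spectrum of two frames, whitened by its magnitude (the PHAT
    weighting), then inverse-FFT'd back to the lag domain."""
    X_i = np.fft.rfft(x_i, n=n_fft)
    X_j = np.fft.rfft(x_j, n=n_fft)
    cross = X_i * np.conj(X_j)          # X_i(f) [X_j(f)]*
    cross /= np.abs(cross) + 1e-12      # PHAT weighting: keep only the phase
    return np.fft.irfft(cross, n=n_fft)

# A pure (circular) delay should put the correlation peak at the delay lag.
sig = np.random.default_rng(0).standard_normal(512)
cc = gcc_phat(sig, np.roll(sig, 5))     # peak appears at lag index 512 - 5
```

Because the magnitude is discarded, the peak location depends only on the inter-channel phase, which is what gives GCC-PHAT its robustness to reverberation.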

Step S3: constructing the deep learning model. Drawing on the ResNet framework, widely used in computer vision to counter the accuracy degradation caused by adding layers, a network architecture combining ResNet and an RNN is constructed, with pooling, regularization and normalization modules interposed between layers to optimize feature extraction and increase nonlinearity. The specific steps are as follows:

in this step, the initial learning rate of the network is set to 0.001 and used for the first 30 iterations; thereafter the learning rate is reduced by 10% per iteration. The Adam optimizer is adopted. Following the order shown in FIG. 2, the concrete parameters of the training model are:

1) convolution layer 1: 64 convolution kernels of size 3 × 3, stride 2, SAME padding, ReLU activation, local response normalization, and no bias unit;

2) the ResNet network: 32 convolutional layers in total, with a direct shortcut connection between the input and output of every two convolutional layers;

3) dimension reduction: the output of step 2) is reduced in dimensionality and fed into step 4);

4) Bi-GRU (bidirectional gated recurrent unit): for the SED branch, one Bi-GRU layer maps the input to 256 dimensions; since the first dimension of the input and output is batch_size (the size of one input batch), batch_first is set to True, bidirectional to True, num_layers to 1, and hidden_size to 256. For the DOA branch the number of stacked Bi-GRU layers is set to 2; the other settings are the same as for the SED branch;

5) fully connected layer: maps to 512 dimensions, with a bias unit; the SED branch output is N-dimensional, and the DOA branch output is 2N-dimensional, corresponding to azimuth and elevation respectively;

6) activation: a sigmoid activation function is used for the SED branch, and a linear activation function is used for the DOA branch;

7) upsampling: the final multi-channel output is upsampled using the default mode, nearest.
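Several of the settings above map directly onto PyTorch. The sketch below is illustrative only: the input channel count and the GRU input size are assumptions (the text does not fix them), padding=1 stands in for SAME padding with a 3 × 3 kernel, local response normalization is omitted, and the learning-rate function encodes one plausible reading of the schedule:

```python
import torch
import torch.nn as nn

def learning_rate(step, base_lr=1e-3, hold=30, decay=0.9):
    """One reading of the schedule: hold 0.001 for 30 iterations, then -10% per iteration."""
    return base_lr if step < hold else base_lr * decay ** (step - hold + 1)

# 1) convolution layer 1: 64 kernels of 3x3, stride 2, no bias, ReLU.
conv1 = nn.Sequential(
    nn.Conv2d(10, 64, kernel_size=3, stride=2, padding=1, bias=False),
    nn.ReLU(inplace=True),
)

# 4) Bi-GRU: SED branch uses 1 layer, DOA branch 2 layers; hidden_size=256,
# batch_first=True, bidirectional=True (so the output feature size is 2*256 = 512).
sed_gru = nn.GRU(input_size=512, hidden_size=256, num_layers=1,
                 batch_first=True, bidirectional=True)
doa_gru = nn.GRU(input_size=512, hidden_size=256, num_layers=2,
                 batch_first=True, bidirectional=True)

x = torch.randn(1, 10, 64, 128)   # (batch, channels, time, frequency), sizes illustrative
y = conv1(x)                      # stride 2 halves both spatial dimensions
seq = torch.randn(1, 100, 512)    # (batch_size, time_steps, features)
out, _ = sed_gru(seq)
```

The bidirectional GRU doubles its hidden size at the output, which is why the following fully connected layer sees 512-dimensional features.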

Further, following the parameters shown in fig. 3, the specific network models within the ResNet network mentioned in 2) are as follows:

(1) convolution layer 1: 3 groups of convolution layers, each group with 64 channels, kernel size 3 × 3, stride 1, SAME padding;

(2) convolution layer 2: 4 groups of convolution layers, each group with 128 channels, kernel size 3 × 3, stride 1, SAME padding, ReLU activation, local response normalization, no bias unit;

(3) convolution layer 3: 6 groups of convolution layers, each group with 256 channels, kernel size 3 × 3, stride 1, SAME padding, ReLU activation, local response normalization, no bias unit;

(4) convolution layer 4: 3 groups of convolution layers, each group with 512 channels, kernel size 3 × 3, stride 1, SAME padding, ReLU activation, local response normalization, no bias unit.
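The shortcut-every-two-convolutions pattern from 2) corresponds to the classic residual block; a minimal sketch (normalization omitted for brevity, class and variable names illustrative):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut from input to output,
    matching the 'shortcut every two convolutional layers' description."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False)
        self.conv2 = nn.Conv2d(channels, channels, 3, stride=1, padding=1, bias=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)   # identity shortcut

block = BasicBlock(64)              # e.g. one block of the 64-channel stage
x = torch.randn(1, 64, 32, 32)
y = block(x)
```

Because the shortcut carries the input unchanged, each pair of convolutions only has to learn a residual correction, which is what lets the 32-layer stack train without the accuracy collapse described in the background.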

Step S4: two-step training. First, the SED task is trained to obtain the best model, and its training result is fed as a feature into the training of the DOA task; the DOA task is then trained to obtain the best training model, which is finally evaluated on the test set.
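The masking idea behind the two-step training (the SED output gating the DOA output, as described in the background) can be sketched as follows; the shapes, the 0.5 threshold and the function name are illustrative assumptions:

```python
import numpy as np

def mask_doa_with_sed(sed_probs, doa_pred, threshold=0.5):
    """Keep DOA outputs only where SED detects an active event.
    sed_probs: (time, N) event probabilities; doa_pred: (time, 2N) angles."""
    active = (sed_probs > threshold).astype(doa_pred.dtype)
    # Repeat the event mask over the azimuth and elevation halves of the DOA output.
    return doa_pred * np.concatenate([active, active], axis=-1)

sed = np.array([[0.9, 0.1], [0.2, 0.8]])   # two frames, two event classes
doa = np.ones((2, 4))                      # dummy azimuth/elevation predictions
masked = mask_doa_with_sed(sed, doa)
```

Training SED first and gating DOA with its output keeps the two loss values from interfering with each other, which is the stated motivation for splitting the training into two steps.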
