Sound source direction determining method, device, equipment and medium based on deep learning

Document No.: 434844    Publication date: 2021-12-24

Reading note: this technique, "Sound source direction determining method, device, equipment and medium based on deep learning", was designed and created by 陈文明, 陈新磊, 张洁 and 张世明 on 2021-09-01. Its main content includes: the invention relates to the technical field of deep learning and discloses a sound source direction determining method, device, equipment and medium based on deep learning. The method comprises: obtaining phase spectrum information according to a target mixed sound source signal; generating corresponding characteristic dimension information according to the phase spectrum information and preset-length frame sequence information; predicting the characteristic dimension information according to a preset convolutional recurrent neural network to obtain an arrival vector information set; and determining the direction information of the target mixed sound source according to the arrival vector information set. The invention generates the characteristic dimension information from the phase spectrum information and the preset-length frame sequence information, predicts the characteristic dimension information with the preset convolutional recurrent neural network, and determines the direction information of the target mixed sound source based on the predicted arrival vector information set, so as to determine the direction of the target mixed sound source; compared with the prior art, which estimates the sound source direction through a conventional DOA algorithm, this can effectively improve the accuracy of determining the sound source direction.

1. A sound source direction determining method based on deep learning, characterized by comprising the steps of:

acquiring a target mixed sound source signal, and acquiring corresponding phase spectrum information according to the target mixed sound source signal;

generating corresponding characteristic dimension information according to the phase spectrum information and the frame sequence information with preset length;

predicting the characteristic dimension information according to a preset convolutional recurrent neural network to obtain an arrival vector information set;

and determining the direction information of the target mixed sound source according to the arrival vector information set so as to determine the direction of the target mixed sound source.

2. The deep learning-based sound source direction determining method according to claim 1, wherein the obtaining a target mixed sound source signal from which corresponding phase spectrum information is obtained comprises:

acquiring a target mixed sound source signal, and framing the target mixed sound source signal;

carrying out Fourier transform on the framed target mixed sound source signal to obtain corresponding frequency spectrum information;

extracting real part information and imaginary part information in the frequency spectrum information;

and calculating the real part information and the imaginary part information through a first calculation formula to obtain corresponding phase spectrum information.

3. The method for determining the direction of a sound source based on deep learning according to claim 1, wherein the generating corresponding feature dimension information according to the phase spectrum information and the preset length frame sequence information comprises:

acquiring a sound source signal acquisition equipment set;

traversing and combining the sound source signal acquisition equipment set to obtain corresponding sound source signal acquisition equipment combination information;

calculating the phase spectrum information and the combined information of the sound source signal acquisition equipment by a second calculation formula to obtain IPD characteristic information;

and generating corresponding characteristic dimension information according to the IPD characteristic information and the preset length frame sequence information.

4. The method for determining the direction of a sound source based on deep learning according to claim 1, wherein the predicting the characteristic dimension information according to a preset convolutional recurrent neural network to obtain a set of arrival vector information comprises:

extracting convolutional neural network information, recurrent neural network information and fully-connected network information in a preset convolutional recurrent neural network;

convolving the characteristic dimension information according to the convolutional neural network information;

predicting the characteristic dimension information after convolution according to the recurrent neural network information to obtain corresponding arrival vector information;

and mapping the arrival vector information in sequence according to the full-connection network information to obtain an arrival vector information set.

5. The method for determining the sound source direction based on deep learning according to claim 4, wherein the predicting the convolved feature dimension information according to the recurrent neural network information to obtain corresponding arrival vector information comprises:

extracting bidirectional long-short term memory recurrent neural network information in the recurrent neural network information;

determining a corresponding characteristic dimension time sequence according to the convolved characteristic dimension information;

and predicting the characteristic dimension time sequence according to the bidirectional long-short term memory recurrent neural network information to obtain corresponding arrival vector information.

6. The deep learning based sound source direction determining method according to any one of claims 1 to 5, wherein the determining direction information of a target mixed sound source from the set of arrival vector information comprises:

acquiring regional information and preset angle information of a target mixed sound source signal;

dividing the region information according to the preset angle information to obtain region information of a target number;

and determining the direction information of the target mixed sound source according to the regional information and the arrival vector information sets of the target number.

7. The deep learning-based sound source direction determination method according to claim 6, wherein the determining direction information of a target mixed sound source from the set of the region information and the arrival vector information of the target number comprises:

obtaining corresponding directional probability information according to the arrival vector information set;

sorting the probability values corresponding to the directional probability information according to a preset sorting rule;

obtaining target direction probability information according to the sorted probability value;

and determining the direction information of the target mixed sound source according to the subscript values of the target number of the area information and the target direction probability information.

8. A deep learning-based sound source direction determination apparatus, characterized by comprising:

the acquisition module is used for acquiring a target mixed sound source signal and acquiring corresponding phase spectrum information according to the target mixed sound source signal;

the generating module is used for generating corresponding characteristic dimension information according to the phase spectrum information and the frame sequence information with the preset length;

the prediction module is used for predicting the characteristic dimension information according to a preset convolution recurrent neural network to obtain a wave arrival vector information set;

and the determining module is used for determining the direction information of the target mixed sound source according to the arrival vector information set so as to determine the direction of the target mixed sound source.

9. A deep learning based sound source direction determining apparatus, characterized in that the deep learning based sound source direction determining apparatus comprises: a memory, a processor, and a deep learning based sound source direction determination program stored on the memory and executable on the processor, the deep learning based sound source direction determination program being configured to implement the deep learning based sound source direction determination method according to any one of claims 1 to 7.

10. A storage medium having stored thereon a deep learning based sound source direction determination program which, when executed by a processor, implements a deep learning based sound source direction determination method according to any one of claims 1 to 7.

Technical Field

The present invention relates to the field of deep learning technologies, and in particular, to a method, an apparatus, a device, and a medium for determining a sound source direction based on deep learning.

Background

The sound source direction is also called the Direction of Arrival (DOA). With the recording device as the reference system, DOA aims to determine the direction from which the speaker's sound source emits, and it is usually used as a preprocessing step of a speech system. The determination of the emitting direction of the speaker's sound source has many applications: for example, a beam forming algorithm needs to acquire the spatial information of the sound source in advance, and the sound source direction also needs to be determined in sound source localization and sound source tracking tasks. The currently common technical solution for determining the sound source direction is to infer the direction information of the sound source step by step through the mathematical operations of a conventional DOA algorithm. However, such solutions rely on restrictive assumptions and have high requirements: for example, the multiple signal classification algorithm assumes that different sound sources are independent and uncorrelated and that the number of sound sources is smaller than the number of microphones, while the generalized cross-correlation phase transform algorithm requires a certain spacing between the different microphones in the array and also imposes certain limiting conditions on the distance of the sound source. Most sound sources in a real environment, however, are mixed sound sources, that is, sound sources comprising reverberant sound and noise, so the accuracy of the sound source direction determined by the above technical solutions is low.

The above is only for the purpose of assisting understanding of the technical aspects of the present invention, and does not represent an admission that the above is prior art.

Disclosure of Invention

The invention mainly aims to provide a sound source direction determining method, a sound source direction determining device, sound source direction determining equipment and a sound source direction determining medium based on deep learning, and aims to solve the technical problem that the accuracy of determining the sound source direction cannot be effectively improved in the prior art.

In order to achieve the above object, the present invention provides a sound source direction determining method based on deep learning, comprising the steps of:

acquiring a target mixed sound source signal, and acquiring corresponding phase spectrum information according to the target mixed sound source signal;

generating corresponding characteristic dimension information according to the phase spectrum information and the frame sequence information with preset length;

predicting the characteristic dimension information according to a preset convolutional recurrent neural network to obtain an arrival vector information set;

and determining the direction information of the target mixed sound source according to the arrival vector information set so as to determine the direction of the target mixed sound source.

Optionally, the obtaining a target mixed sound source signal and obtaining corresponding phase spectrum information according to the target mixed sound source signal includes:

acquiring a target mixed sound source signal, and framing the target mixed sound source signal;

carrying out Fourier transform on the framed target mixed sound source signal to obtain corresponding frequency spectrum information;

extracting real part information and imaginary part information in the frequency spectrum information;

and calculating the real part information and the imaginary part information through a first calculation formula to obtain corresponding phase spectrum information.

Optionally, the generating corresponding feature dimension information according to the phase spectrum information and the preset length frame sequence information includes:

acquiring a sound source signal acquisition equipment set;

traversing and combining the sound source signal acquisition equipment set to obtain corresponding sound source signal acquisition equipment combination information;

calculating the phase spectrum information and the combined information of the sound source signal acquisition equipment by a second calculation formula to obtain IPD characteristic information;

and generating corresponding characteristic dimension information according to the IPD characteristic information and the preset length frame sequence information.

Optionally, the predicting the feature dimension information according to a preset convolutional recurrent neural network to obtain a wave arrival vector information set includes:

extracting convolutional neural network information, recurrent neural network information and fully-connected network information in a preset convolutional recurrent neural network;

convolving the characteristic dimension information according to the convolutional neural network information;

predicting the characteristic dimension information after convolution according to the recurrent neural network information to obtain corresponding arrival vector information;

and mapping the arrival vector information in sequence according to the full-connection network information to obtain an arrival vector information set.

Optionally, the predicting the feature dimension information after convolution according to the recurrent neural network information to obtain corresponding arrival vector information includes:

extracting bidirectional long-short term memory recurrent neural network information in the recurrent neural network information;

determining a corresponding characteristic dimension time sequence according to the convolved characteristic dimension information;

and predicting the characteristic dimension time sequence according to the bidirectional long-short term memory recurrent neural network information to obtain corresponding arrival vector information.

Optionally, the determining, according to the arrival vector information set, direction information of a target mixed sound source includes:

acquiring regional information and preset angle information of a target mixed sound source signal;

dividing the region information according to the preset angle information to obtain region information of a target number;

and determining the direction information of the target mixed sound source according to the regional information and the arrival vector information sets of the target number.

Optionally, the determining the direction information of the target mixed sound source according to the set of the region information and the arrival vector information of the target number includes:

obtaining corresponding directional probability information according to the arrival vector information set;

sorting the probability values corresponding to the directional probability information according to a preset sorting rule;

obtaining target direction probability information according to the sorted probability value;

and determining the direction information of the target mixed sound source according to the target number of the area information and the subscript value of the target direction probability information.

Further, to achieve the above object, the present invention also proposes a deep learning based sound source direction determination device including:

the acquisition module is used for acquiring a target mixed sound source signal and acquiring corresponding phase spectrum information according to the target mixed sound source signal;

the generating module is used for generating corresponding characteristic dimension information according to the phase spectrum information and the frame sequence information with the preset length;

the prediction module is used for predicting the characteristic dimension information according to a preset convolution recurrent neural network to obtain a wave arrival vector information set;

and the determining module is used for determining the direction information of the target mixed sound source according to the arrival vector information set so as to determine the direction of the target mixed sound source.

Further, to achieve the above object, the present invention also proposes a deep learning-based sound source direction determining apparatus comprising: a memory, a processor and a deep learning based sound source direction determination program stored on the memory and executable on the processor, the deep learning based sound source direction determination program configured to implement the deep learning based sound source direction determination method as described above.

Furthermore, to achieve the above object, the present invention also proposes a storage medium having stored thereon a deep learning based sound source direction determination program which, when executed by a processor, implements the deep learning based sound source direction determination method as described above.

The invention provides a sound source direction determining method based on deep learning, which comprises the steps of obtaining a target mixed sound source signal and obtaining corresponding phase spectrum information according to the target mixed sound source signal; generating corresponding characteristic dimension information according to the phase spectrum information and the frame sequence information with preset length; predicting the characteristic dimension information according to a preset convolution recurrent neural network to obtain a wave arrival vector information set; determining direction information of a target mixed sound source according to the arrival vector information set so as to determine the direction of the target mixed sound source; the method generates the characteristic dimension information through the phase spectrum information and the frame sequence information with the preset length, predicts the characteristic dimension information according to the preset convolution recurrent neural network, and determines the direction information of the target mixed sound source based on the information set of the wave arrival vectors obtained through prediction so as to determine the direction of the target mixed sound source.

Drawings

Fig. 1 is a schematic structural diagram of a deep learning-based sound source direction determination device of a hardware operating environment according to an embodiment of the present invention;

FIG. 2 is a schematic flow chart of a sound source direction determining method based on deep learning according to a first embodiment of the present invention;

FIG. 3 is a schematic diagram of region division according to an embodiment of the deep learning-based sound source direction determination method of the present invention;

FIG. 4 is a flowchart illustrating a sound source direction determining method based on deep learning according to a second embodiment of the present invention;

FIG. 5 is a flowchart illustrating a sound source direction determining method based on deep learning according to a third embodiment of the present invention;

fig. 6 is a functional block diagram of a sound source direction determining apparatus based on deep learning according to a first embodiment of the present invention.

The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.

Referring to fig. 1, fig. 1 is a schematic structural diagram of a deep learning-based sound source direction determining device in a hardware operating environment according to an embodiment of the present invention.

As shown in fig. 1, the deep learning-based sound source direction determining apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connective communication between these components. The user interface 1003 may include a Display screen (Display) and an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The Memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.

Those skilled in the art will appreciate that the configuration shown in fig. 1 does not constitute a limitation of the deep learning based sound source direction determining apparatus, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.

As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a network communication module, a user interface module, and a deep learning-based sound source direction determination program.

In the deep learning-based sound source direction determining apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with the network; the user interface 1003 is mainly used for data interaction with a user; and the processor 1001 and the memory 1005 of the present invention may be provided in the deep learning-based sound source direction determining device, which calls the deep learning-based sound source direction determination program stored in the memory 1005 through the processor 1001 and executes the deep learning-based sound source direction determination method provided by the embodiments of the present invention.

Based on the hardware structure, the embodiment of the sound source direction determining method based on deep learning is provided.

Referring to fig. 2, fig. 2 is a schematic flowchart of a sound source direction determining method based on deep learning according to a first embodiment of the present invention.

In a first embodiment, the method for determining a sound source direction based on deep learning includes the steps of:

and step S10, acquiring a target mixed sound source signal, and obtaining corresponding phase spectrum information according to the target mixed sound source signal.

It should be noted that the execution subject of the present embodiment is a sound source direction determining device based on deep learning, and may also be other devices that can implement the same or similar functions, such as a sound source direction determining program.

It should be understood that the target mixed sound source signal refers to all sound source signals collected by the sound source collecting device, including noise signals, human voice signals, and other sound signals, and the sound signals are mixed to obtain the target mixed sound source signal, where the sound source collecting device may be a microphone or other sound source collecting devices, which is not limited in this embodiment and is described by taking a microphone as an example, where the target mixed sound source signal is obtained by calculating according to formula one, and specifically is:

Sm = X1 + X2 + … + XI + N; (formula one)

where Sm is the mixed sound source signal collected by the mth microphone, Xi is the voice signal of the ith speaker, and N refers to noise.
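As an illustration only (the function name and array layout below are assumptions, not part of the claims), formula one can be sketched in Python as follows:

import numpy as np

def mix_sources(speaker_signals, noise):
    # Formula one as described above: the mixed signal at one microphone is the
    # sum of all speaker voice signals Xi plus the noise N.
    return np.sum(speaker_signals, axis=0) + noise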

It can be understood that the phase spectrum information refers to information presented by the characteristics of the spatial information of each sound source obtained from the arrival delay and the sampling offset, and after the target mixed sound source signal is obtained, the target mixed sound source signal is processed through a calculation formula, so that the corresponding phase spectrum information is obtained.

Further, step S10 includes: acquiring a target mixed sound source signal, and framing the target mixed sound source signal; carrying out Fourier transform on the framed target mixed sound source signal to obtain corresponding frequency spectrum information; extracting real part information and imaginary part information in the frequency spectrum information; and calculating the real part information and the imaginary part information through a first calculation formula to obtain corresponding phase spectrum information.

It can be understood that, after the target mixed sound source signal is obtained, the target mixed sound source signal is framed, where framing refers to dividing the sound source frame corresponding to the target mixed sound source signal into unit frames, and then Fourier transform is performed on the framed target mixed sound source signal, where the Fourier transform refers to converting the form of the target mixed sound source signal and includes the continuous Fourier transform and the discrete Fourier transform; the real part information and the imaginary part information are both components of the frequency spectrum information, and the frequency spectrum information is obtained by calculation using formula two, which specifically is:

Fm = STFT(Sm); (formula two)

Wherein, Sm is a mixed sound source signal collected by the mth microphone, and Fm is corresponding frequency spectrum information.

It should be understood that, after the real part information and the imaginary part information of the spectrum information are extracted, the real part information and the imaginary part information of the spectrum are calculated according to a first calculation formula to obtain corresponding phase spectrum information, where the first calculation formula specifically is:

∠Pm = arctan(imag(Fm) / real(Fm)); (first calculation formula)

where ∠Pm denotes the phase spectrum of the mixed sound source signal collected by the mth microphone, real(Fm) is the real part information of the frequency spectrum, and imag(Fm) is the imaginary part information of the frequency spectrum.
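A minimal Python sketch of the framing, Fourier transform and first calculation formula described above; the frame length, hop size and the use of simple rectangular framing are assumptions for illustration:

import numpy as np

def phase_spectrum(signal, frame_len=512, hop=256):
    # Frame the signal, take the FFT of each frame (formula two, Fm = STFT(Sm)),
    # then compute the phase spectrum from the real and imaginary parts.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] for i in range(n_frames)])
    spectrum = np.fft.rfft(frames, axis=-1)          # complex spectrum Fm
    real, imag = spectrum.real, spectrum.imag        # real / imaginary part information
    return np.arctan2(imag, real)                    # phase spectrum, angle Pm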

In a specific implementation, the sound source direction determining program obtains a target mixed sound source signal, and obtains corresponding phase spectrum information according to the target mixed sound source signal.

And step S20, generating corresponding characteristic dimension information according to the phase spectrum information and the frame sequence information with preset length.

It should be understood that the preset-length frame sequence information refers to continuous frame sequence length information in a mixed sound source signal after framing, and because correlation exists between continuous frames of a sound source in the mixed sound source signal, before the characteristic dimension information is input to the preset convolutional recurrent neural network, corresponding frame sequences in the preset-length frame sequence information are also continuous, and after the phase spectrum information and the preset-length frame sequence information are obtained, the corresponding characteristic dimension information is generated according to the phase spectrum information and the preset-length frame sequence information.

In a specific implementation, the sound source direction determining program generates corresponding characteristic dimension information according to the phase spectrum information and the preset length frame sequence information.

And step S30, predicting the characteristic dimension information according to a preset convolution recurrent neural network to obtain a wave arrival vector information set.

It should be understood that after the characteristic dimension information is obtained, the preset convolutional recurrent neural network needs to be optimized through an objective function, and the criterion for the preset convolutional recurrent neural network to reach the optimum is whether the objective function is minimized; the objective function may be the Binary Cross Entropy (BCE) loss, and when the BCE loss function converges, the preset convolutional recurrent neural network reaches the optimum, at which point the arrival vector information set predicted by the preset convolutional recurrent neural network is valid and reliable.
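A hedged PyTorch sketch of such an objective function; the 12-dimensional multi-hot target layout is an assumption used only to illustrate how the BCE loss would be evaluated:

import torch
import torch.nn as nn

# Illustrative only: a 12-dimensional prediction (one probability per divided region)
# compared against a multi-hot target marking the regions that contain a source.
criterion = nn.BCELoss()
prediction = torch.tensor([[0.01, 0.4, 0.01, 0.03, 0.02, 0.3,
                            0.02, 0.1, 0.01, 0.03, 0.04, 0.03]])
target = torch.tensor([[0., 1., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]])
loss = criterion(prediction, target)   # the network is trained until this converges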

It can be understood that the preset convolutional recurrent neural network is a neural network model composed of a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN) and a fully connected network; after the characteristic dimension information is obtained, the characteristic dimension information is input into the preset convolutional recurrent neural network model, so that the preset convolutional recurrent neural network model predicts the characteristic dimension information to obtain a corresponding arrival vector information set, for example, the arrival vector information set predicted by the preset convolutional recurrent neural network model is (0.01, 0.4, 0.01, 0.03, 0.02, 0.3, 0.02, 0.1, 0.01, 0.03, 0.04, 0.03).

In specific implementation, the sound source direction determining program predicts the characteristic dimension information according to a preset convolution recurrent neural network to obtain a wave arrival vector information set.

And step S40, determining the direction information of the target mixed sound source according to the information set of the arrival vectors so as to determine the direction of the target mixed sound source.

It should be understood that, after obtaining the information set of the arrival vectors, the direction of the target mixed sound source is determined within a preset time according to the information set of the arrival vectors, where the preset time may be 160ms, and may also be other times, which is not limited in this embodiment, and is described by taking 160ms as an example.

Further, step S40 includes: acquiring regional information and preset angle information of a target mixed sound source signal; dividing the region information according to the preset angle information to obtain region information of a target number; and determining the direction information of the target mixed sound source according to the regional information and the arrival vector information sets of the target number.

It can be understood that the Area information of the target mixed sound source signal refers to circular Area information surrounded by a microphone array, the preset angle information refers to angle information for dividing a circular Area, for example, the preset angle information is 30 degrees, the number of the divided Area information is 12, referring to fig. 3, fig. 3 is a schematic diagram of Area division according to an embodiment of a sound source direction determining method based on deep learning, and the Area division schematic diagram is divided into Area [0] -Area [11] according to a counterclockwise direction, and a wave arrival vector information set at this time can be represented by a formula three, specifically:

(p0, p1, …, p11); (formula three)

where pi is the information of the ith divided region.

Further, determining the direction information of the target mixed sound source according to the set of the region information and the arrival vector information of the target number comprises: obtaining corresponding directional probability information according to the arrival vector information set; sorting the probability values corresponding to the directional probability information according to a preset sorting rule; obtaining target direction probability information according to the sorted probability value; and determining the direction information of the target mixed sound source according to the subscript values of the target number of the area information and the target direction probability information.

It should be understood that, after the arrival vector information set is obtained, the directional probability information corresponding to each piece of arrival vector information in the set is determined, the direction probability information is sorted from small to large, the target direction probability information with the maximum probability value is selected from the sorted direction probability information, and the direction information of the target mixed sound source is obtained according to the subscript value of the target direction probability information. For example, if the arrival vector information set is (0.01, 0.4, 0.01, 0.03, 0.02, 0.3, 0.02, 0.1, 0.01, 0.03, 0.04, 0.03), the target direction probability information is 0.4, and the subscript value corresponding to 0.4 is 1, so the direction information of the target mixed sound source at this time is Area[1]; if the arrival vector information set is (0.01, 0.4, 0.01, 0.003, 0.0001, 0.4, 0.1, 0.03, 0.04, 0.002, 0.004, 0.0009), the direction information of the target mixed sound source at this time is Area[1] and Area[5].
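A small Python sketch of this selection step; the rule of keeping every region whose probability equals the maximum is an assumption inferred from the two examples above:

import numpy as np

def source_areas(doa_vector, angle_step=30):
    # The circular area is divided into 360/angle_step regions; return the Area
    # indices whose probability equals the maximum probability value.
    probs = np.asarray(doa_vector)
    assert len(probs) == 360 // angle_step
    best = probs.max()                        # target direction probability information
    return ["Area[%d]" % i for i, p in enumerate(probs) if p == best]

print(source_areas([0.01, 0.4, 0.01, 0.03, 0.02, 0.3,
                    0.02, 0.1, 0.01, 0.03, 0.04, 0.03]))      # ['Area[1]']
print(source_areas([0.01, 0.4, 0.01, 0.003, 0.0001, 0.4,
                    0.1, 0.03, 0.04, 0.002, 0.004, 0.0009]))  # ['Area[1]', 'Area[5]']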

In a specific implementation, the sound source direction determining program determines the direction information of the target mixed sound source according to the arrival vector information set so as to determine the direction of the target mixed sound source.

In the embodiment, by acquiring a target mixed sound source signal, corresponding phase spectrum information is obtained according to the target mixed sound source signal; generating corresponding characteristic dimension information according to the phase spectrum information and the frame sequence information with preset length; predicting the characteristic dimension information according to a preset convolution recurrent neural network to obtain a wave arrival vector information set; determining direction information of a target mixed sound source according to the arrival vector information set so as to determine the direction of the target mixed sound source; the characteristic dimension information is generated through the phase spectrum information and the frame sequence information with the preset length, the characteristic dimension information is predicted according to the preset convolution recurrent neural network, the direction information of the target mixed sound source is determined based on the information set of the wave arrival vectors obtained through prediction, and therefore the direction of the target mixed sound source is determined.

In an embodiment, as shown in fig. 4, the second embodiment of the sound source direction determining method based on deep learning according to the present invention is proposed based on the first embodiment, and the step S20 includes:

step S201, a sound source signal acquisition device set is acquired.

It should be understood that the sound source signal acquisition device set refers to a set composed of sound source signal acquisition devices; the sound source signal acquisition devices are arranged in a circle to collect the target mixed sound source signals in each direction, and the number of sound source signal acquisition devices in the set may be 4 or 8, which is not limited in this embodiment and is described by taking 4 as an example.

In a specific implementation, a sound source direction determining program obtains a set of sound source signal collection devices.

Step S202, traversing and combining the sound source signal acquisition equipment set to obtain corresponding sound source signal acquisition equipment combination information.

It can be understood that, after the sound source signal collecting device set is obtained, each sound source signal collecting device in the sound source signal collecting device set is subjected to traversing combination, that is, two sound source signal collecting devices are freely combined to obtain corresponding sound source signal collecting device combination information, and the combination of the sound source signal collecting devices at this time is 6 types, specifically:

ui ∈ Ω, Ω = {(1,2), (1,3), (1,4), (2,3), (2,4), (3,4)};

where ui is one of the sound source signal acquisition device combinations, and Ω is the set of all sound source signal acquisition device combinations.
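For illustration, the traversal combination can be reproduced with Python's standard library (the variable names are assumptions):

from itertools import combinations

# Traversal combination of a 4-microphone set into the 6 pairs of Ω above.
mic_indices = [1, 2, 3, 4]
omega = list(combinations(mic_indices, 2))
print(omega)   # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]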

In specific implementation, the sound source direction determining program performs traversal combination on the sound source signal acquisition device set to obtain corresponding sound source signal acquisition device combination information.

And step S203, calculating the phase spectrum information and the sound source signal acquisition equipment combination information through a second calculation formula to obtain IPD characteristic information.

It should be understood that the IPD (inter-channel phase difference) characteristic information refers to the integrated characteristic information in the target mixed sound source signal; after the phase spectrum information and the sound source signal acquisition equipment combination information are obtained, they are calculated by a second calculation formula to obtain the IPD characteristic information, where the second calculation formula specifically is:

IPD(ui) = ∠Pm1 − ∠Pm2, ui = (m1, m2) ∈ Ω; (second calculation formula)

where IPD(ui) is the IPD characteristic information of the acquisition device pair ui, ∠Pm1 is the phase spectrum of the first sound source signal acquisition device in the pair, ∠Pm2 is that of the second, and M is the number of signal acquisition devices.
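A hedged Python sketch of this step; the patent's exact formula is not reproduced here, so the pairwise phase-difference form and the wrapping step are assumptions:

import numpy as np

def ipd_features(phase_spectra, pairs):
    # Assumed reading of the second calculation formula: the IPD of a microphone pair
    # (m1, m2) is the difference of their phase spectra, wrapped to (-pi, pi].
    # phase_spectra: dict keyed by microphone index, each entry shaped (frames, bins).
    feats = []
    for m1, m2 in pairs:
        diff = phase_spectra[m1] - phase_spectra[m2]
        feats.append(np.angle(np.exp(1j * diff)))   # wrap the phase difference
    return np.stack(feats)                          # shape: (pairs, frames, bins)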

In specific implementation, the sound source direction determining program calculates the phase spectrum information and the combination information of the sound source signal acquisition equipment through a second calculation formula to obtain IPD characteristic information.

And step S204, generating corresponding characteristic dimension information according to the IPD characteristic information and the preset length frame sequence information.

It can be understood that after the IPD characteristic information and the preset length frame sequence information are obtained, the corresponding characteristic dimension information is generated according to the IPD characteristic information and the preset length frame sequence information. For example, the sequence length corresponding to the preset length frame sequence information is 10, the value obtained by stacking along the frequency axis is 514, and there are 6 combination pairings of the signal acquisition devices; specifically, a 4-microphone array, that is, an array composed of 4 signal acquisition devices, is used, the combination formula is 4 × 3 / 2 = 6, and the finally generated characteristic dimension information is (6, 10, 514).
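The resulting shape can be illustrated with the following Python lines; the cos/sin split that yields 514 from two 257-bin terms is an assumption, not stated in the text:

import numpy as np

pairs, frames, bins = 6, 10, 257
ipd = np.random.randn(pairs, frames, bins)                     # stand-in IPD features
features = np.concatenate([np.cos(ipd), np.sin(ipd)], axis=-1) # stack along frequency axis
print(features.shape)                                          # (6, 10, 514)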

In specific implementation, the sound source direction determining program generates corresponding characteristic dimension information according to the IPD characteristic information and preset length frame sequence information.

The embodiment acquires a sound source signal acquisition equipment set; traversing and combining the sound source signal acquisition equipment set to obtain corresponding sound source signal acquisition equipment combination information; calculating the phase spectrum information and the combined information of the sound source signal acquisition equipment by a second calculation formula to obtain IPD characteristic information; generating corresponding characteristic dimension information according to the IPD characteristic information and the preset length frame sequence information; the method comprises the steps of obtaining combination information of sound source signal acquisition equipment by traversing and combining a set of the sound source signal acquisition equipment, calculating phase spectrum information and the combination information of the sound source signal acquisition equipment according to a second calculation formula, and generating corresponding characteristic dimension information based on preset length frame sequence information and IPD characteristic information obtained through calculation, so that the accuracy rate of obtaining the characteristic dimension information is effectively improved.

In an embodiment, as shown in fig. 5, the third embodiment of the sound source direction determining method based on deep learning according to the present invention is proposed based on the first embodiment, and the step S30 includes:

step S301, extracting convolutional neural network information, recursive neural network information and full-connection network information in a preset convolutional recursive neural network.

It is understood that the convolutional neural network information includes 6 convolution blocks and 6 maximum pooling layers (max-pooling); each block has 2 convolutional layers, the convolutional layers all use 2-dimensional convolution, the convolution kernel sizes are all 3x3, the number of convolution channels is 64, zero padding is used after each convolution to maintain the feature at the size specified on the right side of the figure, a linear rectification function (Rectified Linear Unit, ReLU) is used after each convolution, and the kernel sizes of the maximum pooling layers are 1x4, 1x4, 1x2, 1x2, 1x2 and 1x2, respectively, from the input.

It should be understood that the recurrent neural network information is a 2-layer bidirectional Long Short-Term Memory recurrent neural network (BLSTM RNN) composed of 128 units in each layer, the activation function of the 128 units is the hyperbolic tangent function (Tanh), and the fully-connected network information is composed of fully-connected layers (FC), which are mainly the network information used for mapping the output result.
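A minimal PyTorch sketch of a network with the stated layer counts (6 double-convolution blocks with the listed pooling kernels, a 2-layer bidirectional LSTM with 128 units, and a fully connected output layer); the flattening order, the last-time-step readout and the sigmoid output are assumptions:

import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, in_pairs=6, n_regions=12):
        super().__init__()
        pools = [(1, 4), (1, 4), (1, 2), (1, 2), (1, 2), (1, 2)]
        blocks, ch = [], in_pairs
        for k in pools:
            # each block: two 3x3 convolutions with 64 channels, ReLU, then max-pooling
            blocks += [nn.Conv2d(ch, 64, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                       nn.MaxPool2d(k)]
            ch = 64
        self.cnn = nn.Sequential(*blocks)
        self.rnn = nn.LSTM(input_size=64 * 2, hidden_size=128, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * 128, n_regions)

    def forward(self, x):                         # x: (batch, 6, 10, 514)
        x = self.cnn(x)                           # (batch, 64, 10, 2)
        x = x.permute(0, 2, 1, 3).flatten(2)      # (batch, 10, 128)
        x, _ = self.rnn(x)                        # (batch, 10, 256)
        return torch.sigmoid(self.fc(x[:, -1]))   # probabilities per region

model = CRNN()
out = model(torch.randn(2, 6, 10, 514))
print(out.shape)   # torch.Size([2, 12])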

In a specific implementation, the sound source direction determining program extracts convolutional neural network information, recursive neural network information and fully-connected network information in a preset convolutional recursive neural network.

And S302, performing convolution on the characteristic dimension information according to the convolutional neural network information.

It can be understood that after the characteristic dimension information is obtained, the characteristic dimension information is convolved by the convolution layers in the convolutional neural network information, so that the characteristic dimension information becomes unit characteristic dimension information, that is, more characteristic information appears in the characteristic dimension information.

In a specific implementation, the sound source direction determining program convolves the feature dimension information according to the convolutional neural network information.

Step S303, predicting the feature dimension information after convolution according to the recurrent neural network information to obtain corresponding arrival vector information.

It should be understood that four different gate control units exist in the BLSTM RNN of the recurrent neural network information, so that, compared with an LSTM RNN, its prediction of the convolved feature dimension information is more accurate and efficient; and since the preceding sequence in the feature dimension time sequence corresponding to the convolved feature dimension information affects the prediction result of the subsequent sequence, the BLSTM RNN only needs to predict the final time sequence when training on the convolved feature dimension information, and after the prediction is completed, the corresponding arrival vector information is obtained.

Further, step S303 includes: extracting bidirectional long-short term memory recurrent neural network information in the recurrent neural network information; determining a corresponding characteristic dimension time sequence according to the convolved characteristic dimension information; and predicting the characteristic dimension time sequence according to the bidirectional long-short term memory recurrent neural network information to obtain corresponding wave arrival vector information.

It can be understood that after the convolved feature dimension information is obtained, a corresponding feature dimension time sequence is determined according to the convolved feature dimension information, and the corresponding arrival vector information can be obtained by predicting the last time sequence in the convolved feature dimension information according to the gate control units of the BLSTM RNN, of which there are four.

In specific implementation, the sound source direction determining program predicts the feature dimension information after convolution according to the recurrent neural network information to obtain corresponding arrival vector information.

And step S304, mapping the arrival vector information in sequence according to the full-connection network information to obtain an arrival vector information set.

It should be understood that, after the information of the vector of arrival predicted by the recurrent neural network information is obtained, the information of the fully-connected network maps the information of the vector of arrival in the predicted order, and after the mapping is completed, a set of information of the vector of arrival consisting of the information of the vector of arrival is obtained.

In a specific implementation, the sound source direction determining program sequentially maps the arrival vector information according to the full-connection network information to obtain an arrival vector information set.

The embodiment extracts convolutional neural network information, recurrent neural network information and fully-connected network information in a preset convolutional recurrent neural network; convolves the characteristic dimension information according to the convolutional neural network information; predicts the feature dimension information after convolution according to the recurrent neural network information to obtain corresponding arrival vector information; and maps the arrival vector information in sequence according to the fully-connected network information to obtain an arrival vector information set. The characteristic dimension information is convolved through the convolutional neural network information, the convolved characteristic dimension information is predicted according to the recurrent neural network information, and the predicted arrival vector information is mapped in sequence based on the fully-connected network information to obtain the arrival vector information set, so that the accuracy of the obtained arrival vector information set is effectively improved.

Furthermore, an embodiment of the present invention also proposes a storage medium having a deep learning based sound source direction determination program stored thereon, which when executed by a processor implements the steps of the deep learning based sound source direction determination method as described above.

Since the storage medium adopts all technical solutions of all the embodiments, at least all the beneficial effects brought by the technical solutions of the embodiments are achieved, and no further description is given here.

Further, referring to fig. 6, an embodiment of the present invention further proposes a deep learning based sound source direction determination apparatus, including:

the acquiring module 10 is configured to acquire a target mixed sound source signal and obtain corresponding phase spectrum information according to the target mixed sound source signal.

It should be understood that the target mixed sound source signal refers to all sound source signals collected by the sound source collecting device, including noise signals, human voice signals, and other sound signals, and the sound signals are mixed to obtain the target mixed sound source signal, where the sound source collecting device may be a microphone or other sound source collecting devices, which is not limited in this embodiment and is described by taking a microphone as an example, where the target mixed sound source signal is obtained by calculating according to formula one, and specifically is:

Sm = X1 + X2 + … + XI + N; (formula one)

where Sm is the mixed sound source signal collected by the mth microphone, Xi is the voice signal of the ith speaker, and N refers to noise.

It can be understood that the phase spectrum information refers to information presented by the characteristics of the spatial information of each sound source obtained from the arrival delay and the sampling offset, and after the target mixed sound source signal is obtained, the target mixed sound source signal is processed through a calculation formula, so that the corresponding phase spectrum information is obtained.

Further, the obtaining module 10 is further configured to obtain a target mixed sound source signal, and perform framing on the target mixed sound source signal; carrying out Fourier transform on the framed target mixed sound source signal to obtain corresponding frequency spectrum information; extracting real part information and imaginary part information in the frequency spectrum information; and calculating the real part information and the imaginary part information through a first calculation formula to obtain corresponding phase spectrum information.

It can be understood that, after the target mixed sound source signal is obtained, the target mixed sound source signal is framed, where framing refers to dividing the sound source frame corresponding to the target mixed sound source signal into unit frames, and then Fourier transform is performed on the framed target mixed sound source signal, where the Fourier transform refers to converting the form of the target mixed sound source signal and includes the continuous Fourier transform and the discrete Fourier transform; the real part information and the imaginary part information are both components of the frequency spectrum information, and the frequency spectrum information is obtained by calculation using formula two, which specifically is:

Fm = STFT(Sm); (formula two)

Wherein, Sm is a mixed sound source signal collected by the mth microphone, and Fm is corresponding frequency spectrum information.

It should be understood that, after the real part information and the imaginary part information of the spectrum information are extracted, the real part information and the imaginary part information of the spectrum are calculated according to a first calculation formula to obtain corresponding phase spectrum information, where the first calculation formula specifically is:

∠Pm = arctan(imag(Fm) / real(Fm)); (first calculation formula)

where ∠Pm denotes the phase spectrum of the mixed sound source signal collected by the mth microphone, real(Fm) is the real part information of the frequency spectrum, and imag(Fm) is the imaginary part information of the frequency spectrum.

In a specific implementation, the sound source direction determining program obtains a target mixed sound source signal, and obtains corresponding phase spectrum information according to the target mixed sound source signal.

And a generating module 20, configured to generate corresponding feature dimension information according to the phase spectrum information and the preset length frame sequence information.

It should be understood that the preset-length frame sequence information refers to continuous frame sequence length information in a mixed sound source signal after framing, and because correlation exists between continuous frames of a sound source in the mixed sound source signal, before the characteristic dimension information is input to the preset convolutional recurrent neural network, corresponding frame sequences in the preset-length frame sequence information are also continuous, and after the phase spectrum information and the preset-length frame sequence information are obtained, the corresponding characteristic dimension information is generated according to the phase spectrum information and the preset-length frame sequence information.

In a specific implementation, the sound source direction determining program generates corresponding characteristic dimension information according to the phase spectrum information and the preset length frame sequence information.

And the prediction module 30 is configured to predict the characteristic dimension information according to a preset convolutional recurrent neural network, so as to obtain a wave arrival vector information set.

It should be understood that after the characteristic dimension information is obtained, the preset convolutional recurrent neural network needs to be optimized through an objective function, and the criterion for the preset convolutional recurrent neural network to reach the optimum is whether the objective function is minimized; the objective function may be the Binary Cross Entropy (BCE) loss, and when the BCE loss function converges, the preset convolutional recurrent neural network reaches the optimum, at which point the arrival vector information set predicted by the preset convolutional recurrent neural network is valid and reliable.

It can be understood that the preset convolutional recurrent neural network is a neural network model composed of a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN) and a fully connected network; after the characteristic dimension information is obtained, the characteristic dimension information is input into the preset convolutional recurrent neural network model, so that the preset convolutional recurrent neural network model predicts the characteristic dimension information to obtain a corresponding arrival vector information set, for example, the arrival vector information set predicted by the preset convolutional recurrent neural network model is (0.01, 0.4, 0.01, 0.03, 0.02, 0.3, 0.02, 0.1, 0.01, 0.03, 0.04, 0.03).

In specific implementation, the sound source direction determining program predicts the characteristic dimension information according to a preset convolution recurrent neural network to obtain a wave arrival vector information set.

And the determining module 40 is configured to determine direction information of the target mixed sound source according to the arrival vector information set, so as to determine a direction of the target mixed sound source.

It should be understood that, after obtaining the information set of the arrival vectors, the direction of the target mixed sound source is determined within a preset time according to the information set of the arrival vectors, where the preset time may be 160ms, and may also be other times, which is not limited in this embodiment, and is described by taking 160ms as an example.

Further, the determining module 40 is further configured to obtain region information and preset angle information of the target mixed sound source signal; dividing the region information according to the preset angle information to obtain region information of a target number; and determining the direction information of the target mixed sound source according to the regional information and the arrival vector information sets of the target number.

It can be understood that the area information of the target mixed sound source signal refers to the circular area information formed around the microphone array, and the preset angle information refers to the angle information used to divide the circular area. For example, when the preset angle information is 30 degrees, the number of divided area information is 12. Referring to fig. 3, fig. 3 is a schematic diagram of area division according to an embodiment of the sound source direction determining method based on deep learning of the present invention, and the circular area is divided into Area[0]-Area[11] in the counterclockwise direction. The arrival vector information set at this time can be represented by formula three, specifically:

P = (p0, p1, ..., p11)

wherein pi is the information of the ith divided region.
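As a small illustrative sketch of this division step (assuming a uniform counterclockwise division and that a direction angle in degrees maps to the region containing it; both assumptions are for illustration only):

```python
def divide_regions(preset_angle=30):
    """Divide the full circle around the microphone array into equal regions."""
    target_number = 360 // preset_angle          # e.g. 12 regions for 30 degrees
    return [f"Area[{i}]" for i in range(target_number)]

def angle_to_region(angle_deg, preset_angle=30):
    """Map a direction angle (counted counterclockwise) to its region index."""
    return int((angle_deg % 360) // preset_angle)
```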

Further, determining the direction information of the target mixed sound source according to the set of the region information and the arrival vector information of the target number comprises: obtaining corresponding directional probability information according to the arrival vector information set; sorting the probability values corresponding to the directional probability information according to a preset sorting rule; obtaining target direction probability information according to the sorted probability value; and determining the direction information of the target mixed sound source according to the subscript values of the target number of the area information and the target direction probability information.

It should be understood that, after the arrival vector information set is obtained, the direction probability information corresponding to each piece of arrival vector information in the set is determined, the direction probability information is sorted in ascending order, the target direction probability information with the maximum probability value is selected from the sorted direction probability information, and the direction information of the target mixed sound source is obtained according to the subscript value of the target direction probability information. For example, if the arrival vector information set is (0.01, 0.4, 0.01, 0.03, 0.02, 0.3, 0.02, 0.1, 0.01, 0.03, 0.04, 0.03), the target direction probability information is 0.4 and the subscript value corresponding to 0.4 is 1, so the direction information of the target mixed sound source is Area[1]; if the arrival vector information set is (0.01, 0.4, 0.01, 0.003, 0.0001, 0.4, 0.1, 0.03, 0.04, 0.002, 0.004, 0.0009), the direction information of the target mixed sound source is Area[1] and Area[5].
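A minimal sketch of this selection step is shown below; keeping every region that reaches the maximum value is an illustrative way to cover the two-source example above, not a rule fixed by this embodiment.

```python
import numpy as np

def select_directions(arrival_vector, area_labels=None):
    """Pick the area(s) whose direction probability equals the maximum value.

    arrival_vector: per-region probabilities predicted by the network.
    area_labels: optional names for each region; defaults to Area[0]..Area[n-1].
    """
    probs = np.asarray(arrival_vector, dtype=float)
    if area_labels is None:
        area_labels = [f"Area[{i}]" for i in range(len(probs))]
    max_prob = probs.max()
    # Keep every region that reaches the maximum, so two equally strong
    # sources (e.g. 0.4 at indices 1 and 5) both appear in the result.
    return [area_labels[i] for i in np.flatnonzero(probs == max_prob)]

print(select_directions((0.01, 0.4, 0.01, 0.03, 0.02, 0.3, 0.02, 0.1, 0.01, 0.03, 0.04, 0.03)))
# -> ['Area[1]']
```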

In a specific implementation, the sound source direction determining program determines the direction information of the target mixed sound source according to the arrival vector information set so as to determine the direction of the target mixed sound source.

In the embodiment, by acquiring a target mixed sound source signal, corresponding phase spectrum information is obtained according to the target mixed sound source signal; generating corresponding characteristic dimension information according to the phase spectrum information and the frame sequence information with preset length; predicting the characteristic dimension information according to a preset convolution recurrent neural network to obtain a wave arrival vector information set; determining direction information of a target mixed sound source according to the arrival vector information set so as to determine the direction of the target mixed sound source; the characteristic dimension information is generated through the phase spectrum information and the frame sequence information with the preset length, the characteristic dimension information is predicted according to the preset convolution recurrent neural network, the direction information of the target mixed sound source is determined based on the information set of the wave arrival vectors obtained through prediction, and therefore the direction of the target mixed sound source is determined.

It should be noted that the above-described work flows are only exemplary, and do not limit the scope of the present invention, and in practical applications, a person skilled in the art may select some or all of them to achieve the purpose of the solution of the embodiment according to actual needs, and the present invention is not limited herein.

In addition, the technical details that are not described in detail in this embodiment may be referred to a sound source direction determining method based on deep learning provided in any embodiment of the present invention, and are not described herein again.

In an embodiment, the obtaining module 10 is further configured to obtain a target mixed sound source signal, and perform framing on the target mixed sound source signal; carrying out Fourier transform on the framed target mixed sound source signal to obtain corresponding frequency spectrum information; extracting real part information and imaginary part information in the frequency spectrum information; and calculating the real part information and the imaginary part information through a first calculation formula to obtain corresponding phase spectrum information.
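The following is an illustrative sketch of the framing, Fourier transform, and phase extraction described above; the frame length, hop size, and window are assumptions, and the first calculation formula is taken here to be the arctangent of the imaginary part over the real part.

```python
import numpy as np

def phase_spectrum(signal, frame_len=512, hop=256):
    """Frame the signal, apply an FFT per frame, and return the phase spectrum."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    phases = []
    for t in range(n_frames):
        frame = signal[t * hop: t * hop + frame_len] * window
        spectrum = np.fft.rfft(frame)              # frequency spectrum information
        real, imag = spectrum.real, spectrum.imag  # real part and imaginary part
        phases.append(np.arctan2(imag, real))      # phase from the two parts
    return np.stack(phases)                        # (frames, freq_bins)
```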

In an embodiment, the generating module 20 is further configured to obtain a set of sound source signal collecting devices; traversing and combining the sound source signal acquisition equipment set to obtain corresponding sound source signal acquisition equipment combination information; calculating the phase spectrum information and the combined information of the sound source signal acquisition equipment by a second calculation formula to obtain IPD characteristic information; and generating corresponding characteristic dimension information according to the IPD characteristic information and the preset length frame sequence information.
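A minimal sketch of the IPD feature step, assuming the second calculation formula is the wrapped phase difference between every pair of acquisition devices; the pairing and wrapping details below are illustrative.

```python
import numpy as np
from itertools import combinations

def ipd_features(phase_specs):
    """phase_specs: (num_mics, frames, freq_bins) phase spectra, one per acquisition device.

    Traverses all device pairs and stacks their inter-channel phase differences.
    """
    num_mics = phase_specs.shape[0]
    feats = []
    for i, j in combinations(range(num_mics), 2):    # device combination information
        diff = phase_specs[i] - phase_specs[j]
        diff = np.angle(np.exp(1j * diff))           # wrap the difference to (-pi, pi]
        feats.append(diff)
    return np.concatenate(feats, axis=-1)            # (frames, pairs * freq_bins)
```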

In an embodiment, the prediction module 30 is further configured to extract convolutional neural network information, recurrent neural network information, and fully connected network information in a preset convolutional recurrent neural network; convolving the characteristic dimension information according to the convolutional neural network information; predicting the convolved characteristic dimension information according to the recurrent neural network information to obtain corresponding arrival vector information; and mapping the arrival vector information in sequence according to the fully connected network information to obtain an arrival vector information set.

In one embodiment, the prediction module 30 is further configured to extract bidirectional long-short term memory recurrent neural network information from the recurrent neural network information; determining a corresponding characteristic dimension time sequence according to the convolved characteristic dimension information; and predicting the characteristic dimension time sequence according to the bidirectional long-short term memory recurrent neural network information to obtain corresponding wave arrival vector information.
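A minimal PyTorch sketch of such a convolutional recurrent network with a bidirectional long short-term memory layer is given below; the channel counts, kernel sizes, and hidden sizes are illustrative assumptions, not values fixed by this embodiment.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """CNN front end -> bidirectional LSTM over the frame sequence -> fully connected mapping."""

    def __init__(self, feature_dim, num_regions=12, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                       # convolve the characteristic dimension information
            nn.Conv1d(feature_dim, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.rnn = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Sequential(                        # map to the arrival vector information set
            nn.Linear(2 * hidden, num_regions),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, frames, feature_dim); Conv1d expects (batch, feature_dim, frames)
        h = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.rnn(h)                              # model the feature dimension time sequence
        return self.fc(h[:, -1, :])                     # one probability per divided region
```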

In an embodiment, the determining module 40 is further configured to obtain area information and preset angle information of the target mixed sound source signal; dividing the region information according to the preset angle information to obtain region information of a target number; and determining the direction information of the target mixed sound source according to the regional information and the arrival vector information sets of the target number.

In an embodiment, the determining module 40 is further configured to obtain corresponding directional probability information according to the arrival vector information set; sorting the probability values corresponding to the directional probability information according to a preset sorting rule; obtaining target direction probability information according to the sorted probability value; and determining the direction information of the target mixed sound source according to the subscript values of the target number of the area information and the target direction probability information.

Other embodiments or implementations of the apparatus for determining a direction of a sound source based on deep learning according to the present invention can refer to the embodiments of the above methods, which are not exhaustive herein.

Further, it is to be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.

The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention or portions thereof that contribute to the prior art may be embodied in the form of a software product, where the computer software product is stored in a storage medium (e.g. Read Only Memory (ROM)/RAM, magnetic disk, optical disk), and includes several instructions for enabling a terminal device (e.g. a mobile phone, a computer, a sound source direction determining program, or a network device) to execute the method according to the embodiments of the present invention.

The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
