Audio processing method, device, equipment and storage medium

Document No.: 170873    Publication date: 2021-10-29

Abstract: This technology, "Audio processing method, device, equipment and storage medium", was created by He Dan, Liang Si, Fang Yuanzhou and Wang Zheng on 2021-07-08. The invention discloses an audio processing method, apparatus, device and storage medium, relating to the technical field of audio processing. The method comprises: acquiring first audio data comprising audio signals of at least two different timbres; performing feature extraction on the first audio data to obtain an audio feature vector; and obtaining second audio data according to the audio feature vector, the sound sample feature of a target audio signal, and a generative adversarial network, wherein the second audio data does not include the target audio signal, the generative adversarial network is used to generate a pseudo signal of the target audio signal and the second audio data is obtained from the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold. The invention solves the problem in the prior art that eliminating the sound of a designated musical instrument from music audio easily causes loss of the original sound, and achieves a more natural and more complete output audio.

1. A method of audio processing, the method comprising:

acquiring first audio data, wherein the first audio data comprises audio signals with at least two different timbres, and the audio signals with the at least two different timbres comprise a target audio signal;

performing feature extraction on the first audio data to obtain an audio feature vector;

and obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and a generative adversarial network, wherein the second audio data does not comprise the target audio signal, the generative adversarial network is used for generating a pseudo signal of the target audio signal and obtaining the second audio data according to the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

2. The audio processing method of claim 1, wherein the generative adversarial network comprises a generator and a classifier;

the step of obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and the generative adversarial network specifically includes:

training the classifier according to the audio feature vector and the sound sample features to obtain first training data and second training data, wherein the first training data are training data including the target audio signal, and the second training data are training data not including the target audio signal;

judging whether a difference value between the first training data and the sound sample feature is smaller than a preset difference value;

if the difference value between the first training data and the sound sample characteristic is not smaller than a preset difference value, training the generator according to the first training data and the second training data to generate the pseudo signal;

and inputting the pseudo signal and the second training data into the trained classifier, and looping until the difference value between the newly obtained first training data and the sound sample feature is smaller than the preset difference value, so as to obtain the second audio data.

3. The audio processing method according to claim 1, wherein the step of performing feature extraction on the first audio data to obtain an audio feature vector specifically comprises:

and extracting features according to the distribution conditions of the first audio data at different frequencies to obtain audio feature vectors.

4. The audio processing method according to claim 3, wherein the step of performing feature extraction according to the distribution of the first audio data at different frequencies to obtain an audio feature vector specifically comprises:

preprocessing the first audio data to obtain time domain audio data;

performing fast Fourier transform on the time domain audio data to obtain frequency domain audio data;

performing triangular filtering processing on the frequency domain audio data through a triangular filter to obtain filtered frequency domain audio data, wherein the coverage range of the triangular filter is the frequency range audible to the human ear;

and performing discrete cosine transform on the filtered frequency domain audio data to remove the correlation between audio signals of different frequencies and obtain Mel-frequency cepstral coefficients, thereby obtaining the audio feature vector.

5. The audio processing method of claim 1, wherein before the step of obtaining second audio data based on the audio feature vector, the sound sample feature of the target audio signal, and the generative adversarial network, the method further comprises:

performing dimensionality reduction on the audio feature vector to obtain a dimensionality-reduced audio feature vector;

the step of obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and the generative adversarial network includes:

and obtaining second audio data according to the dimension-reduced audio feature vector, the sound sample feature of the target audio signal and the generative adversarial network.

6. The audio processing method according to claim 5, wherein the step of performing dimension reduction processing on the audio feature vector to obtain a dimension-reduced audio feature vector specifically includes:

acquiring the neighboring points of each feature point in the audio feature vector;

obtaining a local reconstruction weight matrix of each feature point according to each feature point and the corresponding adjacent point;

and obtaining the dimension-reduced audio feature vector according to the eigenvalues of the local reconstruction weight matrix and the eigenvector corresponding to each eigenvalue.

7. An audio processing apparatus, characterized in that the apparatus comprises:

the audio acquisition module is used for acquiring first audio data, wherein the first audio data comprises audio signals with at least two different timbres, and the audio signals with the at least two different timbres comprise a target audio signal;

the feature extraction module is used for extracting features of the first audio data to obtain audio feature vectors;

and the audio processing module is used for obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and a generative adversarial network, wherein the second audio data does not include the target audio signal, the generative adversarial network is used for generating a pseudo signal of the target audio signal and obtaining the second audio data according to the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

8. An audio processing device, characterized in that the device comprises a processor and a memory, the memory having stored therein an audio processing program which, when executed by the processor, implements the audio processing method according to any one of claims 1 to 6.

9. A computer program product, characterized in that the computer program product comprises an audio processing program stored on a non-transitory computer-readable storage medium, the audio processing program comprising program instructions which, when executed by a computer, cause the computer to carry out the audio processing method according to any one of claims 1 to 6.

10. A storage medium having stored thereon an audio processing program for execution by one or more processors to implement the audio processing method of any one of claims 1 to 6.

Technical Field

The present invention relates to the field of audio processing technologies, and in particular, to an audio processing method, apparatus, device, and storage medium.

Background

For music audio containing the sounds of multiple musical instruments, such as symphonies and pure instrumental music, the instruments are generally recorded on the same track. Unlike song audio recorded on separate tracks, the sound of a single instrument therefore cannot be eliminated or extracted by track separation.

Existing methods for eliminating or extracting a specified instrument sound from music audio containing multiple instrument sounds recorded on the same track tend to cause loss of the original sound.

Disclosure of Invention

The main purpose of the present invention is to provide an audio processing method, apparatus, device and storage medium, so as to solve the technical problem that the prior art easily causes loss of the original sound when eliminating the sound of a designated musical instrument from music audio.

To achieve this purpose, the present invention adopts the following technical solutions:

in a first aspect, the present invention provides an audio processing method, comprising the steps of:

acquiring first audio data, wherein the first audio data comprises audio signals with at least two different timbres, and the audio signals with the at least two different timbres comprise a target audio signal;

performing feature extraction on the first audio data to obtain an audio feature vector;

and obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and a generative adversarial network, wherein the second audio data does not comprise the target audio signal, the generative adversarial network is used for generating a pseudo signal of the target audio signal and obtaining the second audio data according to the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

Optionally, in the audio processing method, the generative adversarial network includes a generator and a classifier;

the step of obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and the generative adversarial network specifically includes:

training the classifier according to the audio feature vector and the sound sample features to obtain first training data and second training data, wherein the first training data are training data including the target audio signal, and the second training data are training data not including the target audio signal;

judging whether a difference value between the first training data and the sound sample feature is smaller than a preset difference value;

if the difference value between the first training data and the sound sample characteristic is not smaller than a preset difference value, training the generator according to the first training data and the second training data to generate the pseudo signal;

and inputting the pseudo signal and the second training data into the trained classifier, and looping until the difference value between the newly obtained first training data and the sound sample feature is smaller than the preset difference value, so as to obtain the second audio data.

Optionally, in the audio processing method, the step of performing feature extraction on the first audio data to obtain an audio feature vector specifically includes:

and extracting features according to the distribution conditions of the first audio data at different frequencies to obtain audio feature vectors.

Optionally, in the audio processing method, the step of performing feature extraction according to distribution conditions of the first audio data at different frequencies to obtain an audio feature vector specifically includes:

preprocessing the first audio data to obtain time domain audio data;

performing fast Fourier transform on the time domain audio data to obtain frequency domain audio data;

performing triangular filtering processing on the frequency domain audio data through a triangular filter to obtain filtered frequency domain audio data, wherein the coverage range of the triangular filter is the frequency range audible to the human ear;

and performing discrete cosine transform on the filtered frequency domain audio data to remove the correlation between audio signals of different frequencies and obtain Mel-frequency cepstral coefficients, thereby obtaining the audio feature vector.

Optionally, in the audio processing method, before the step of obtaining the second audio data according to the audio feature vector, the sound sample feature of the target audio signal, and the generative adversarial network, the method further includes:

performing dimensionality reduction on the audio feature vector to obtain a dimensionality-reduced audio feature vector;

the step of obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and the generative adversarial network includes:

and obtaining second audio data according to the dimension-reduced audio feature vector, the sound sample feature of the target audio signal and the generative adversarial network.

Optionally, in the audio processing method, the step of performing dimension reduction processing on the audio feature vector to obtain a dimension-reduced audio feature vector specifically includes:

acquiring the neighboring points of each feature point in the audio feature vector;

obtaining a local reconstruction weight matrix of each feature point according to each feature point and the corresponding adjacent point;

and obtaining the dimension-reduced audio feature vector according to the eigenvalues of the local reconstruction weight matrix and the eigenvector corresponding to each eigenvalue.

In a second aspect, the present invention provides an audio processing apparatus, the apparatus comprising:

the audio acquisition module is used for acquiring first audio data, wherein the first audio data comprises audio signals with at least two different timbres, and the audio signals with the at least two different timbres comprise a target audio signal;

the feature extraction module is used for extracting features of the first audio data to obtain audio feature vectors;

and the audio processing module is used for obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and a generative adversarial network, wherein the second audio data does not include the target audio signal, the generative adversarial network is used for generating a pseudo signal of the target audio signal and obtaining the second audio data according to the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

In a third aspect, the present invention provides an audio processing device comprising a processor and a memory, the memory having stored therein an audio processing program, the audio processing program, when executed by the processor, implementing an audio processing method as described above.

In a fourth aspect, the invention provides a computer program product comprising an audio processing program stored on a non-transitory computer readable storage medium, the audio processing program comprising program instructions which, when executed by a computer, cause the computer to perform the audio processing method as described above.

In a fifth aspect, the present invention provides a storage medium having stored thereon an audio processing program executable by one or more processors to implement an audio processing method as described above.

One or more technical solutions provided by the present invention may have the following advantages or at least achieve the following technical effects:

According to the audio processing method, apparatus, device and storage medium of the present invention, first audio data comprising at least two audio signals is acquired, feature extraction is performed on the first audio data to obtain an audio feature vector, and second audio data not comprising the target audio signal is then obtained according to the audio feature vector, the sound sample feature of the target audio signal and a generative adversarial network, thereby eliminating the target audio signal from the first audio data. The method further generates a pseudo signal of the target audio signal via the generative adversarial network; because such a network is continuously optimized, it can generate the pseudo signal closest to the real target audio signal, so the target audio signal is removed more cleanly when the pseudo signal is used to obtain the second audio data. Moreover, through the loop processing of the generative adversarial network, the missing timbre in the second audio data can be smoothly supplemented, so that the subsequently output second audio data is more natural and complete.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.

FIG. 1 is a flow chart of an audio processing method according to the present invention;

FIG. 2 is a diagram of a hardware structure of an audio processing device according to the present invention;

FIG. 3 is a schematic flow chart of an audio processing method according to the present invention;

FIG. 4 is a block flow diagram detailing the flow diagram of FIG. 3;

FIG. 5 is a functional block diagram of an audio processing apparatus according to the present invention.

The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

It should be noted that, in the present invention, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises that element.

In the present invention, unless expressly stated or limited otherwise, the terms "connected," "secured," and the like are to be construed broadly, and for example, "connected" may be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium; either internally or in interactive relation. In addition, if there is a description of "first", "second", etc. in an embodiment of the present invention, the description of "first", "second", etc. is for descriptive purposes only and is not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature.

In the following description, suffixes such as "module", "component", or "unit" used to denote elements are used only to facilitate the explanation of the present invention and have no specific meaning in themselves; thus "module", "component" and "unit" may be used interchangeably. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation. In addition, the technical solutions of the respective embodiments may be combined with each other, provided that the combination can be realized by those skilled in the art; when technical solutions are contradictory or a combination cannot be realized, that combination should be considered not to exist and not to fall within the protection scope of the present invention.

Interpretation of terms:

MFCC: Mel-Frequency Cepstral Coefficients;

LLE: Locally Linear Embedding;

GAN: Generative Adversarial Network, a deep learning network;

FFT: Fast Fourier Transform, used for time-frequency domain transform analysis;

DCT: Discrete Cosine Transform, used for compression of data or images.

Analysis of the prior art shows that recorded songs are generally divided into two tracks, an accompaniment track and a vocal track. During mixing, the vocal track regularly occupies the middle pitch range, so it can easily be extracted or eliminated, leaving only the instrument sounds of the accompaniment track to serve as accompaniment resources for singing. For songs recorded on separate tracks in this way, extracting or eliminating the vocals is easy. However, for music audio containing the sounds of multiple musical instruments, such as symphonies, orchestral music and pure instrumental music, the instruments are generally recorded on the same track; unlike song audio recorded on separate tracks, a given instrument sound cannot be eliminated or extracted by track separation. Moreover, the timbre of a musical instrument is not a single pure tone but a group of partials consisting of multiple mutually interfering tones, with intensity variations that even differ between the left and right sound fields, so eliminating the sound of one instrument from music containing multiple instruments is difficult.

Currently, there are two general audio processing methods for eliminating or extracting a specified instrument sound from music audio containing multiple instrument sounds recorded on the same track. One starts from the source: all the instruments in the music audio to be processed are re-recorded on separate tracks, for example as MIDI files, so that different instruments lie on different tracks and the sound of one instrument can be eliminated or extracted by track separation; however, this method suffers from high cost, poor integrity and a poor sound field relationship. The other treats the target as noise: the specified instrument to be removed reproduces, as closely as possible, the timbre and technique heard in the music audio to be processed, is played and recorded again, and the recording is used as a noise sample for removing the corresponding content from the original music audio.

In view of the technical problem that the prior art easily causes loss of the original sound when eliminating the sound of a designated instrument from music audio, the present invention provides an audio processing method with the following general idea:

acquiring first audio data, wherein the first audio data comprises audio signals of at least two different timbres, and the audio signals of the at least two different timbres comprise a target audio signal; performing feature extraction on the first audio data to obtain an audio feature vector; and obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and a generative adversarial network, wherein the second audio data does not comprise the target audio signal, the generative adversarial network is used for generating a pseudo signal of the target audio signal and obtaining the second audio data according to the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

According to this technical solution, first audio data comprising at least two audio signals is acquired, feature extraction is performed on the first audio data to obtain an audio feature vector, and second audio data not comprising the target audio signal is then obtained according to the audio feature vector, the sound sample feature of the target audio signal and the generative adversarial network, thereby eliminating the target audio signal from the first audio data. The solution further generates a pseudo signal of the target audio signal via the generative adversarial network; because such a network is continuously optimized, it can generate the pseudo signal closest to the real target audio signal, so the target audio signal is removed more cleanly when the pseudo signal is used to obtain the second audio data. Moreover, through the loop processing of the generative adversarial network, the missing timbre in the second audio data can be smoothly supplemented, so that the subsequently output second audio data is more natural and complete.

Embodiment One

Referring to fig. 1, a flowchart of a first embodiment of an audio processing method according to the invention is shown. The embodiment provides an audio processing method applicable to an audio processing device, and the method comprises the following steps:

acquiring first audio data, wherein the first audio data comprises audio signals with at least two different timbres, and the audio signals with the at least two different timbres comprise a target audio signal;

performing feature extraction on the first audio data to obtain an audio feature vector;

and obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and a generative adversarial network, wherein the second audio data does not comprise the target audio signal, the generative adversarial network is used for generating a pseudo signal of the target audio signal and obtaining the second audio data according to the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

Specifically, the audio processing device refers to a terminal device or a network device capable of implementing network connection, and the audio processing device may be a terminal device such as a mobile phone, a computer, a tablet computer, and a portable computer, or may be a network device such as a server and a cloud platform.

Fig. 2 is a schematic diagram of a hardware structure of an audio processing device according to the present invention. The apparatus may include: a processor 1001, such as a CPU (Central processing unit), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005.

Those skilled in the art will appreciate that the hardware configuration shown in fig. 2 does not constitute a limitation of the audio processing device of the present invention, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.

Specifically, the communication bus 1002 is used for realizing connection communication among these components;

the user interface 1003 is used for connecting a client and performing data communication with the client, the user interface 1003 may include a display screen and an input unit such as a keyboard, and optionally, the user interface 1003 may further include a standard wired interface and a standard wireless interface;

the network interface 1004 is used for connecting to the backend server and performing data communication with the backend server, and the network interface 1004 may include a standard wired interface and a wireless interface, such as a Wi-Fi interface;

the memory 1005 is used for storing various types of data, which may include, for example, instructions of any application program or method in the device and application program-related data, and the memory 1005 may be a high-speed RAM memory, or a stable memory such as a disk memory, and optionally, the memory 1005 may be a storage device independent of the processor 1001;

specifically, with continued reference to fig. 2, the memory 1005 may include an operating system, a network communication module, a user interface module, and an audio processing program, wherein the network communication module is mainly used for connecting to a server and performing data communication with the server;

the processor 1001 is configured to call an audio processing program stored in the memory 1005, and perform the following operations:

acquiring first audio data, wherein the first audio data comprises audio signals with at least two different timbres, and the audio signals with the at least two different timbres comprise a target audio signal;

performing feature extraction on the first audio data to obtain an audio feature vector;

and obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and a generative adversarial network, wherein the second audio data does not comprise the target audio signal, the generative adversarial network is used for generating a pseudo signal of the target audio signal and obtaining the second audio data according to the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

Based on the above audio processing device, the following proposes a first embodiment of the audio processing method of the present invention with reference to the flowchart shown in fig. 1, where the method may include the following steps:

step S20: acquiring first audio data, wherein the first audio data comprises audio signals with at least two different timbres, and the audio signals with the at least two different timbres comprise a target audio signal.

Specifically, the first audio data may be audio data containing multiple different timbres, such as speech uttered by several people, or music audio containing the sounds of multiple musical instruments, such as a symphony recorded on the same track or concert audio recorded live. The target audio signal may be a designated sound to be removed from the audio data, for example the voice of a particular person in speech, or the sound of a particular instrument in a symphony.

In this embodiment, the audio processing method is implemented by an audio processing device; a server is taken as an example of the device for description. When the server receives an audio processing request, it obtains, according to the request, the music audio from which the user wants to remove one target instrument sound, for example a piece of orchestral music from which the user wants to remove the violin sound.

Step S40: and performing feature extraction on the first audio data to obtain an audio feature vector.

Specifically, feature extraction may be performed on the first audio data using the MFCC extraction method to obtain the audio feature vector. The main role of MFCC is to extract features from the various audio signals in the first audio data, i.e. the distribution of the audio signal energy over different frequency ranges. The MFCC coefficients, and thus the audio feature vector of the first audio data, are obtained by preprocessing, fast Fourier transform, triangular filtering and discrete cosine transform of the first audio data.
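
A minimal sketch of this extraction step, assuming the librosa library; the file name, sampling rate and coefficient count are illustrative only:

```python
# MFCC extraction sketch (assumes librosa; file name and parameter
# values are illustrative, not the patent's concrete settings).
import librosa

# Load the mixed-instrument recording (the first audio data).
y, sr = librosa.load("symphony.wav", sr=22050, mono=True)

# 20 Mel-frequency cepstral coefficients per frame -> shape (20, n_frames).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
```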

In the specific implementation process, the audio feature vector can be subjected to dimension reduction processing, so that the calculation complexity is reduced, and the calculation power is saved. In this embodiment, the LLE algorithm may be used to perform dimension reduction processing on the audio feature vector, so that the original manifold structure of the data after dimension reduction is better maintained.
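
A minimal sketch of this optional reduction, assuming the scikit-learn LLE implementation; the neighbor count and target dimension are illustrative values:

```python
# LLE dimensionality-reduction sketch (assumes scikit-learn;
# parameter values are illustrative).
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Placeholder per-frame feature vectors: (n_frames, n_coefficients).
features = np.random.randn(500, 20)

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=8)
reduced = lle.fit_transform(features)  # shape (500, 8)
```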

Step S60: and obtaining second audio data according to the audio feature vector, the sound sample feature of the target audio signal and a generation countermeasure network, wherein the second audio data does not comprise the target audio signal, the generation countermeasure network is used for generating a pseudo signal of the target audio signal, and the second audio data is obtained according to the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

Specifically, the extracted audio feature vector and the sound sample feature of the target audio signal are input into the GAN together for machine learning. The GAN comprises a generator and a classifier. The audio feature vector and the sound sample feature enter the classifier for training, which yields first training data and second training data, where the first training data is training data comprising the target audio signal and the second training data is training data not comprising the target audio signal. The discrimination function of the classifier then evaluates the difference value between the first training data and the sound sample feature, and whether this difference value is smaller than the preset difference value is judged by checking whether the iterative convergence condition is met. If not, the first training data and the second training data enter the generator, a pseudo signal of the first training data is obtained through training, and the pseudo signal and the second training data enter the classifier again. When the loop training satisfies the iterative convergence condition, the second training data obtained at that point, i.e. the audio data without the target audio signal, is output as the second audio data.

In a specific implementation, the generator and the classifier may each be a fully connected neural network, a deconvolution network, or the like; a loss function such as cross entropy may be used in the classifier to compute the difference value between the first training data and the sound sample feature. When training the GAN, the classifier is trained first and then the generator, and the training loops until the classifier judges that the obtained first training data and second training data satisfy the iterative convergence condition; the second training data output at that point is the final output audio, i.e. the audio data not comprising the target audio signal.
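
The alternating train-classifier-then-generator loop described above can be sketched as follows, assuming PyTorch; the architectures, feature dimension, batch size and convergence test are illustrative assumptions rather than the patent's concrete design:

```python
# Adversarial training sketch (assumes PyTorch; architectures, feature
# dimension, batch size and the convergence test are illustrative).
import torch
import torch.nn as nn

FEAT = 8  # assumed dimension of the (reduced) audio feature vector

# Classifier: outputs ~1 for frames matching the target-instrument
# sound sample, ~0 otherwise.
classifier = nn.Sequential(nn.Linear(FEAT, 64), nn.ReLU(),
                           nn.Linear(64, 1), nn.Sigmoid())
# Generator: maps mixed-audio features to a pseudo signal that
# imitates the target instrument.
generator = nn.Sequential(nn.Linear(FEAT, 64), nn.ReLU(),
                          nn.Linear(64, FEAT))

bce = nn.BCELoss()  # binary cross entropy as the difference value
opt_c = torch.optim.Adam(classifier.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

sample_feats = torch.randn(128, FEAT)  # placeholder sound sample features
mixed_feats = torch.randn(128, FEAT)   # placeholder mixture feature vectors
ones, zeros = torch.ones(128, 1), torch.zeros(128, 1)

for step in range(1000):
    # 1) Train the classifier first: real target samples -> 1, others -> 0.
    opt_c.zero_grad()
    pseudo = generator(mixed_feats).detach()
    loss_c = bce(classifier(sample_feats), ones) + bce(classifier(pseudo), zeros)
    loss_c.backward()
    opt_c.step()

    # 2) Then train the generator so its pseudo signal fools the classifier.
    opt_g.zero_grad()
    loss_g = bce(classifier(generator(mixed_feats)), ones)
    loss_g.backward()
    opt_g.step()

    if loss_g.item() < 0.7:  # stand-in for "difference value < threshold"
        break
```

In the method itself, the convergence test of step S603 compares the classifier's measured difference value against the preset difference value; the loss threshold above merely stands in for that check.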

In this embodiment, the music audio to be processed is uniquely characterized by the audio feature vector, and the constructed generative adversarial network is then trained together with a sound sample of the target instrument. By training a generative adversarial network comprising a generator and a classifier, the generator can produce the pseudo signal closest to the real target audio signal, so the classifier removes the target audio signal more cleanly when using the pseudo signal to obtain the audio data not comprising the target audio signal; moreover, through this training, the missing timbre can be smoothly supplemented, so that the output audio data not comprising the target audio signal is more natural and complete. Training the generator ensures that a pseudo signal closest to the real target instrument is generated, and training the classifier ensures that the target instrument sound is removed from the music audio more accurately, preventing the loss of original sound caused by damaging other sounds.

Referring to fig. 3 and 4, fig. 3 is another schematic flow chart of the present embodiment, and fig. 4 is a detailed flow chart based on fig. 3. Based on the above steps, the audio processing method provided in this embodiment is described in detail with reference to the flowchart shown in fig. 3 and the flowchart shown in fig. 4, where the method specifically includes the following steps:

step S200: acquiring first audio data, wherein the first audio signal comprises audio signals of at least two different timbres, and the audio signals of the at least two different timbres comprise a target audio signal.

This embodiment is described in detail taking music audio containing the sounds of multiple musical instruments as the first audio data, where the sound of each instrument is treated as one audio signal and the timbres of different instrument sounds differ significantly, and taking as an example the removal of a specified instrument sound, such as the violin, from the music audio.

Step S400: and performing feature extraction on the first audio data to obtain an audio feature vector.

Specifically, feature extraction is performed on the first audio data according to its distribution over different frequencies to obtain the audio feature vector. In particular, the Mel-frequency cepstral coefficient extraction method may be used. The Mel frequency is based on the auditory characteristics of the human ear and has a nonlinear correspondence with frequency in hertz (Hz); MFCCs are spectral features computed by exploiting this relationship.

In a specific implementation, for each audio signal, feature vectors are computed in a specific environment to obtain training vectors, and the feature centroid of the audio signal is obtained by vector quantization; each audio signal can be uniquely characterized by this set of feature vectors, which yields the audio feature vector.

Further, the step S400 may include:

step S401: and preprocessing the first audio data to obtain time domain audio data.

Specifically, the preprocessing includes filtering, framing and windowing, and the filtering, framing and windowing are performed on the first audio data in sequence to obtain time-domain audio data.

In a specific embodiment, the noise signal in the first audio data is removed by filtering to obtain denoised first audio data; optionally, the high-frequency components of the denoised first audio data are enhanced through A/D conversion and pre-emphasis; the first audio data is divided into multiple frames of audio data by framing; and short-time interception and stationarity processing are applied to the framed audio data by windowing, multiplying each frame by a window function to increase the continuity between the left and right ends of the frame and reduce the influence of the Gibbs effect, finally yielding the time-domain audio data.
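
A compact sketch of the pre-emphasis, framing and windowing just described, assuming NumPy; the frame length, hop size and pre-emphasis coefficient are illustrative values:

```python
import numpy as np

def preprocess(signal, frame_len=1024, hop=512, alpha=0.97):
    """Pre-emphasis, framing and Hamming windowing of a 1-D signal."""
    # Pre-emphasis boosts high frequencies: y[n] = x[n] - alpha * x[n-1].
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    # Slice into overlapping frames.
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    frames = np.stack([emphasized[i * hop:i * hop + frame_len]
                       for i in range(n_frames)])
    # Hamming window reduces spectral leakage (Gibbs effect) at frame edges.
    return frames * np.hamming(frame_len)

frames = preprocess(np.random.randn(44100))  # placeholder 1-second signal
```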

Step S402: and carrying out fast Fourier transform on the time domain audio data to obtain frequency domain audio data.

Specifically, the time domain audio data is converted into frequency domain audio data through FFT, that is, each frame obtains a corresponding frequency spectrum; optionally, the spectra are concatenated according to the time domain, and a spectral envelope is generated by inverse fourier transform to represent the timbre characteristics.

Step S403: and carrying out triangular filtering processing on the frequency domain audio data through a triangular filter to obtain the filtered frequency domain audio data, wherein the coverage range of the triangular filter is the frequency range of sound which can be heard by human ears.

Specifically, to simulate the masking effect of the human ear, the frequency-domain audio data is filtered through triangular filtering; in particular, a set of triangular filters linearly distributed on the Mel frequency scale is used to smooth the spectrum and eliminate harmonics, yielding the filtered frequency-domain audio data. The log energy of each filter's output can then be computed by taking the logarithm (ln), giving a result similar to a homomorphic transform.

Step S404: and performing discrete cosine transform on the filtered frequency domain audio data, removing correlation among audio signals with different frequencies, and obtaining a Mel frequency cepstrum coefficient to obtain an audio characteristic vector.

Specifically, the DCT is applied to the filtered frequency-domain audio data obtained in step S403 to remove the correlation between the signal dimensions and map the signal into a low-dimensional space, yielding the MFCC coefficients, i.e. the audio feature vector of the audio data.
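
Continuing the sketch above, steps S402 to S404 (FFT, triangular Mel filtering and DCT) can be expressed as follows, assuming NumPy/SciPy; the filterbank construction and coefficient count are illustrative assumptions:

```python
import numpy as np
from scipy.fftpack import dct

def mel_filterbank(n_filters=26, n_fft=1024, sr=22050):
    """Triangular filters spaced linearly on the Mel scale (assumed design)."""
    low, high = 0.0, 2595 * np.log10(1 + (sr / 2) / 700)   # Hz -> Mel
    mels = np.linspace(low, high, n_filters + 2)
    hz = 700 * (10 ** (mels / 2595) - 1)                    # Mel -> Hz
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        fb[i - 1, bins[i - 1]:bins[i]] = np.linspace(0, 1, bins[i] - bins[i - 1])
        fb[i - 1, bins[i]:bins[i + 1]] = np.linspace(1, 0, bins[i + 1] - bins[i])
    return fb

frames = np.random.randn(100, 1024)  # placeholder windowed frames (cf. sketch above)

# Step S402: FFT of each windowed frame -> power spectrum.
power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
# Step S403: triangular (Mel) filtering + log energy.
log_mel = np.log(power @ mel_filterbank().T + 1e-10)
# Step S404: DCT decorrelates the filter outputs -> first 20 MFCCs per frame.
mfcc = dct(log_mel, type=2, axis=1, norm="ortho")[:, :20]
```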

Optionally, data normalization, such as spectral weighting, cepstrum mean subtraction, and difference processing, may also be performed on the audio feature vector obtained in step S404.

Specifically, because the low-order cepstral parameters are susceptible to channel characteristics and the like while the high-order parameters have low resolving power, spectral weighting may be applied to the audio feature vector to suppress its low-order and high-order parameters, and cepstral mean subtraction (CMS) may be applied to effectively reduce the influence of the channel on the feature parameters; differential parameters representing the dynamic characteristics of the audio may also be added to the audio feature vector.

Performing data normalization on the audio feature vector keeps its values within a certain range and can improve its performance.
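
A short sketch of this optional normalization, assuming NumPy; the first-difference delta computation is a simplification of the differential parameters mentioned above:

```python
import numpy as np

mfcc = np.random.randn(100, 20)           # placeholder MFCC frames

# Cepstral mean subtraction: remove the per-coefficient mean over time.
mfcc_cms = mfcc - mfcc.mean(axis=0, keepdims=True)

# First-order difference as a simple dynamic (delta) feature.
delta = np.diff(mfcc_cms, axis=0, prepend=mfcc_cms[:1])
features = np.concatenate([mfcc_cms, delta], axis=1)  # shape (100, 40)
```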

In this embodiment, through the above steps, the feature extraction is performed on the music audio obtained in step S200, and an audio feature vector of the source music is obtained.

Step S500: and carrying out dimension reduction processing on the audio characteristic vector to obtain the audio characteristic vector after dimension reduction.

Specifically, the LLE algorithm is used to perform dimension reduction processing on the audio feature vector to obtain the audio feature vector after dimension reduction. The LLE algorithm is a nonlinear dimension reduction algorithm, and compared with a traditional dimension reduction method which focuses on sample variance, the LLE algorithm can keep the local linear characteristics of the samples during dimension reduction.

And reducing the dimension of the audio feature vector by using an LLE algorithm, so that the audio feature vector after dimension reduction can better keep the original manifold structure.

Further, the step S500 may include:

step S501: and acquiring the neighbor point of each feature point in the audio feature vector.

Specifically, k neighboring points of each feature point in the audio feature vector are obtained.

In this embodiment, the n-dimensional audio feature vector of each frame of audio data, D = {x_1, x_2, ..., x_n}, is taken as input, and preset given values are set, namely the neighbor count k and the target dimension d after reduction, where d < n. First, the k nearest neighbors of each feature point in the audio feature vector are computed; for example, the k feature points closest to a feature point x_i (by the common Euclidean distance) are taken as the k nearest neighbors of x_i, denoted (x_i1, x_i2, ..., x_ik).

Step S502: and obtaining a local reconstruction weight matrix of each characteristic point according to each characteristic point and the corresponding adjacent points.

Specifically, a local reconstruction weight matrix of each feature point is calculated from k neighboring points of the feature point.

In this embodiment, to compute the local reconstruction weight matrix M for a feature point x_i, the local covariance matrix Z_i is first found:

Z_i = (x_i - x_j)(x_i - x_j)^T,

where x_j denotes a sample in the neighborhood of the feature point x_i and T denotes the matrix transpose;

the corresponding weight coefficient vector W_i is then found:

W_i = (Z_i^{-1} 1_k) / (1_k^T Z_i^{-1} 1_k),

where 1_k is a k-dimensional all-ones vector and the exponent -1 denotes matrix inversion;

the weight coefficient vectors W_i then form the weight coefficient matrix W, from which the local reconstruction weight matrix M is computed:

M = (I - W)(I - W)^T,

where I denotes the identity matrix.

step S503: and obtaining the audio characteristic vector after the dimension reduction according to the characteristic values of the local reconstruction weight matrix and the characteristic vector corresponding to each characteristic value.

Specifically, the output value of the feature point is calculated from the local reconstruction weight matrix of the feature point and its neighboring points.

In this embodiment, the first d+1 eigenvalues of the local reconstruction weight matrix M are computed, together with the eigenvectors {y_1, y_2, ..., y_(d+1)} corresponding to those eigenvalues; the matrix formed by the second through (d+1)-th eigenvectors is the output, i.e. the d-dimensional audio feature vector D' = {y_2, y_3, ..., y_(d+1)}, which is the audio feature vector after dimensionality reduction.
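
Steps S501 to S503 can be sketched directly, assuming NumPy and the standard LLE row-weight convention (so M = (I - W)^T (I - W), equivalent to the form above under the transposed convention); the regularization of Z_i is an added stability assumption:

```python
import numpy as np

def lle(X, k=12, d=8):
    """Minimal locally linear embedding: X is (n_samples, n_dims)."""
    n = len(X)
    # S501: k nearest neighbors of each feature point (Euclidean distance).
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    neighbors = np.argsort(dist, axis=1)[:, 1:k + 1]

    # S502: local reconstruction weights solving Z_i w = 1, then normalize.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[neighbors[i]] - X[i]            # neighbor offsets, (k, n_dims)
        G = Z @ Z.T                           # local covariance Z_i, (k, k)
        G += 1e-3 * np.trace(G) * np.eye(k)   # regularize for stability (assumption)
        w = np.linalg.solve(G, np.ones(k))
        W[i, neighbors[i]] = w / w.sum()

    # S503: bottom eigenvectors of M; drop the first (constant) eigenvector,
    # keep the next d as the embedding.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]

reduced = lle(np.random.randn(200, 20))  # placeholder feature vectors
```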

Performing dimensionality reduction on the audio feature vector reduces the computational complexity and saves device computing power, and using the LLE algorithm for the reduction lets the reduced data better preserve the original manifold structure.

In this embodiment, through the above steps, the audio feature vector obtained in step S400 is subjected to dimension reduction, so as to obtain the audio feature vector after dimension reduction.

Step S600: obtaining second audio data according to the audio feature vector after the dimension reduction, the sound sample feature of the target audio signal and the generation countermeasure network; the second audio data does not include the target audio signal, wherein the generation countermeasure network is used for generating a pseudo signal of the target audio signal, and obtaining the second audio data according to the pseudo signal, and a difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

Further, the step S600 may include:

step S601: a generative confrontation network is constructed, which includes a generator and a classifier.

Specifically, the GAN includes a generator (generative model) and a classifier (discriminative model), and the adversarial game between the two can produce better output. By supplementing missing information through the GAN, cleaner and more complete music audio can be obtained after the target instrument is removed. This step is optional: in a specific implementation, the following steps may be performed directly on a preset initial generative adversarial network, or after such a network is constructed on the spot.

Step S602: and training the classifier according to the audio feature vector and the sound sample features to obtain first training data and second training data, wherein the first training data are training data including the target audio signal, and the second training data are training data not including the target audio signal.

Specifically, the audio feature vector and the sound sample feature of the target audio signal are input to a classifier of the GAN, and the classifier is trained to obtain training data including the target audio signal, that is, first training data, and training data not including the target audio signal, that is, second training data.

In a specific implementation, m samples {x_1, x_2, ..., x_m} are drawn from the sound sample features of the real target audio signal as the real sample distribution p_i = {x_1, x_2, ..., x_m}, and m samples {z_1, z_2, ..., z_m} are drawn from the audio feature vector as the noise sample distribution. Both are input into the classifier, which yields the first training data, i.e. m samples forming the classified sample distribution q_i, and the second training data, i.e. m samples forming the output sample distribution.

In this embodiment, for example, a violin sound sample and the extracted audio feature vector of the source music are input to a classifier of GAN, and the classifier is trained to obtain violin audio and audio that does not include violin sound.

Step S603: and judging whether the difference value between the first training data and the sound sample characteristic is smaller than a preset difference value.

Specifically, the classifier has a discrimination function and can determine whether the training satisfies the convergence condition according to the difference value between the obtained first training data and the sound sample feature, that is, it judges whether the difference value between the training data including the target audio signal and the sound sample feature of the target audio signal is smaller than the preset difference value.

In a specific implementation, the cross entropy H(p_i, q_i) is used to measure the difference value between the real sample distribution p_i and the classified sample distribution q_i; the cross entropy is calculated as follows:

H(p_i, q_i) = -Σ_x p_i(x) log q_i(x).

In the present case the classifier solves a binary classification problem, so the basic cross entropy can be expanded more concretely to obtain the difference value; the binary cross entropy is calculated as follows:

H((x_1, y_1), D) = -y_1 log D(x_1) - (1 - y_1) log(1 - D(x_1)),

where y_1 is the discrimination result for each frame: if the difference value is smaller than the preset difference value, the discrimination result is true and y_1 = 1; if the difference value is not smaller than the preset difference value, the discrimination result is false and y_1 = 0.
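
As a numerical illustration of the binary cross entropy above (the input values are arbitrary):

```python
import numpy as np

def binary_cross_entropy(d_x, y):
    """H((x1, y1), D) from the formula above; d_x = D(x1) in (0, 1)."""
    return -y * np.log(d_x) - (1 - y) * np.log(1 - d_x)

print(binary_cross_entropy(0.9, 1))  # ~0.105: confident and correct -> low loss
print(binary_cross_entropy(0.9, 0))  # ~2.303: confident but wrong  -> high loss
```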

In this embodiment, the discrimination result is obtained by determining the difference value between the obtained violin audio and the violin sound sample, so as to determine whether the iterative training needs to be continued.

Step S604: and if the difference value between the first training data and the sound sample characteristic is not smaller than a preset difference value, training the generator according to the first training data and the second training data to generate the pseudo signal.

Specifically, if it is determined from the difference value between the obtained first training data and the sound sample feature that the convergence condition is not satisfied, i.e. the difference value between the training data including the target audio signal and the sound sample feature of the target audio signal is greater than or equal to the preset difference value, then the training data including the target audio signal and the audio training data not including the target audio signal are input into the generator together, and the generator is trained to generate the pseudo signal of the target audio signal's training data.

In this embodiment, if the difference value between the violin audio and the violin sound sample is greater than or equal to the preset difference value, the violin audio and the audio not including the violin sound are input to the generator, and the generator is trained to generate the pseudo signal of the violin audio.

Step S605: and inputting the pseudo signal and the second training data into the trained classifier, and circulating until the difference value between the obtained first training data and the sound sample characteristic is smaller than a preset difference value to obtain second audio data.

Specifically, the pseudo signal and the obtained audio training data not including the target audio signal are input into the trained classifier, and the process loops until the difference value between the obtained training data including the target audio signal and the sound sample feature is smaller than the preset difference value, yielding the audio data not including the target audio signal, i.e. the second audio data. That is, the pseudo signal and the second training data are input into the classifier again, first and second training data are obtained anew, the flow returns to step S603 to judge again whether the classifier training satisfies the convergence condition, and this repeats until the convergence condition is met: when the difference value between the newly obtained first training data and the sound sample feature is smaller than the preset difference value, the audio data without the target audio signal at that point is output as the final second audio data.

In a specific embodiment, the generator tries to generate pseudo signals as close as possible to the real target audio signal training data, so that the classifier reaches an ideal state in which no timbre difference can be detected between the input pseudo signal and the sound sample feature; at the same time, the classifier tries as hard as possible to distinguish the pseudo signal from the sound sample feature. When the two reach a balanced state in which they can no longer be told apart, the audio training data not including the target audio signal obtained by the classifier at that point is output as the final output audio data.

In this embodiment, the pseudo signal of the violin audio and the audio not including the violin sound are input into the classifier again, the violin audio and the audio not including the violin sound are obtained anew, and training loops until the iteration condition is satisfied; the audio not including the violin sound at that point is output as the finally output music audio.

Compared with the prior art, in the output music audio finally obtained by this embodiment, the target instrument is eliminated more cleanly and the retained part of the music audio is more natural and complete.

According to the audio processing method provided by this embodiment, first audio data comprising at least two audio signals is acquired, feature extraction is performed on the first audio data to obtain an audio feature vector, and second audio data not comprising the target audio signal is then obtained according to the audio feature vector, the sound sample feature of the target audio signal and the generative adversarial network, thereby eliminating the target audio signal from the first audio data. The method further generates a pseudo signal of the target audio signal via the generative adversarial network; because such a network is continuously optimized, it can generate the pseudo signal closest to the real target audio signal, so the target audio signal is removed more cleanly when the pseudo signal is used to obtain the second audio data. Moreover, through the loop processing of the generative adversarial network, the missing timbre in the second audio data can be smoothly supplemented, so that the subsequently output second audio data is more natural and complete.

Example Two

Based on the same inventive concept, referring to fig. 5, which is a block diagram of an audio processing apparatus according to the present invention, the present embodiment provides an audio processing apparatus, which may be a virtual apparatus.

The following describes in detail the audio processing apparatus provided in this embodiment with reference to fig. 5, where the apparatus may include:

the audio acquisition module is used for acquiring first audio data, wherein the first audio data comprises audio signals with at least two different timbres, and the audio signals with the at least two different timbres comprise a target audio signal;

the feature extraction module is used for extracting features of the first audio data to obtain audio feature vectors;

and the audio processing module is used for obtaining second audio data according to the audio feature vector, the sound sample features of the target audio signal, and a generative adversarial network, wherein the second audio data does not include the target audio signal, the generative adversarial network is used for generating a pseudo signal of the target audio signal and obtaining the second audio data according to the pseudo signal, and the difference value between the pseudo signal and the target audio signal is smaller than a threshold value.

Further, the audio processing module may include:

a network construction unit, configured to construct a generative adversarial network comprising a generator and a classifier;

a first training unit, configured to train the classifier according to the audio feature vector and the sound sample features to obtain first training data and second training data, where the first training data is training data including the target audio signal, and the second training data is training data not including the target audio signal;

a judging unit, configured to judge whether the difference value between the first training data and the sound sample features is smaller than a preset difference value;

a second training unit, configured to train the generator according to the first training data and the second training data to generate the pseudo signal if the difference value between the first training data and the sound sample features is not smaller than the preset difference value;

and a loop training unit, configured to input the pseudo signal and the second training data into the trained classifier, looping until the difference value between the newly obtained first training data and the sound sample features is smaller than the preset difference value, so as to obtain the second audio data.

Further, the feature extraction module is specifically configured to perform feature extraction on the first audio data according to the distribution of the first audio data over different frequencies, so as to obtain the audio feature vector.

Still further, the feature extraction module may include:

the preprocessing unit is used for preprocessing the first audio data to obtain time domain audio data;

the frequency domain transformation unit is used for performing a fast Fourier transform on the time domain audio data to obtain frequency domain audio data;

the triangular filtering unit is used for performing triangular filtering on the frequency domain audio data through a triangular filter to obtain filtered frequency domain audio data, the coverage range of the triangular filter being the frequency range of sound audible to the human ear;

and the coefficient acquisition unit is used for performing a discrete cosine transform on the filtered frequency domain audio data to remove the correlation among audio signals of different frequencies and obtain Mel-frequency cepstral coefficients, so as to obtain the audio feature vector; a sketch of this pipeline is given below.
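The four units above describe the classical MFCC front end: pre-emphasis and framing, FFT, mel-scale triangular filtering, then a DCT that decorrelates the filter-bank energies. A minimal NumPy sketch follows; the frame length, hop size, filter count, and the 0.97 pre-emphasis factor are common defaults assumed here, not values specified in this disclosure.

```python
import numpy as np
from scipy.fftpack import dct

def mfcc_features(signal, sr=16000, frame_len=400, hop=160,
                  n_mels=26, n_coeffs=13):
    """Toy MFCC pipeline mirroring the units above: preprocessing (framing),
    FFT, triangular mel filtering, then a DCT yielding the coefficients."""
    # Preprocessing: pre-emphasis, then split into overlapping windowed frames.
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    frames = np.stack([emphasized[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames *= np.hamming(frame_len)
    # FFT: power spectrum of each frame.
    n_fft = 512
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filter bank spanning the audible range (here up to sr/2).
    mel_max = 2595 * np.log10(1 + (sr / 2) / 700)
    hz_points = 700 * (10 ** (np.linspace(0, mel_max, n_mels + 2) / 2595) - 1)
    bins = np.floor((n_fft + 1) * hz_points / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    filtered = np.log(power @ fbank.T + 1e-10)
    # DCT decorrelates the log filter-bank energies, giving the MFCC vector.
    return dct(filtered, type=2, axis=1, norm='ortho')[:, :n_coeffs]
```

In practice a library routine such as `librosa.feature.mfcc` computes this same chain in a single call.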

Further, the apparatus may further include:

the dimensionality reduction module is used for performing dimensionality reduction on the audio feature vector to obtain a reduced-dimension audio feature vector;

the audio processing module is further configured to obtain the second audio data according to the reduced-dimension audio feature vector, the sound sample features of the target audio signal, and the generative adversarial network.

Still further, the dimension reduction module may include:

a neighboring point obtaining unit, configured to obtain the neighboring points of each feature point in the audio feature vector;

a matrix obtaining unit, configured to obtain a local reconstruction weight matrix of each feature point according to that feature point and its corresponding neighboring points;

and a dimension reduction output unit, configured to obtain the reduced-dimension audio feature vector according to the eigenvalues of the local reconstruction weight matrix and the eigenvector corresponding to each eigenvalue; a sketch of this procedure is given after this list.
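These three units correspond to the classical locally linear embedding (LLE) algorithm: find each point's nearest neighbours, solve for the weights that best reconstruct the point from those neighbours, then take the bottom eigenvectors of the resulting weight matrix as the low-dimensional coordinates. A brief sketch using scikit-learn's implementation, with the neighbour count, output dimension, and feature shapes chosen arbitrarily for illustration:

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# feats: one MFCC feature vector per frame (shapes are illustrative only).
rng = np.random.default_rng(0)
feats = rng.standard_normal((500, 13))

# LLE performs the three steps above internally: neighbour search, local
# reconstruction weights, then the eigendecomposition that yields the
# reduced-dimension coordinates.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=3)
reduced = lle.fit_transform(feats)  # shape (500, 3)
```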

It should be noted that, for the functions that each module of the audio processing apparatus provided in this embodiment can realize and the corresponding technical effects, reference may be made to the description of the specific implementations in the embodiments of the audio processing method of the present invention; for brevity, they are not described again here.

Example Three

Based on the same inventive concept, and referring to fig. 2, which is a schematic diagram of the hardware structure of an audio processing device according to embodiments of the present invention, this embodiment provides an audio processing device. The device may include a processor and a memory, the memory storing an audio processing program which, when executed by the processor, implements all or part of the steps of the embodiments of the audio processing method of the present invention.

Specifically, the audio processing device is a terminal device or a network device capable of establishing a network connection; it may be a terminal device such as a mobile phone, a computer, a tablet computer, or a portable computer, or a network device such as a server or a cloud platform.

It will be appreciated that the device may also include a communications bus, a user interface and a network interface.

The communication bus is used for realizing connection and communication among these components;

the user interface is used for connecting the client and performing data communication with the client, and may include a display screen and an input unit such as a keyboard, and optionally, the user interface may further include a standard wired interface and a standard wireless interface;

the network interface is used for connecting the background server and carrying out data communication with the background server, and the network interface can comprise a standard wired interface and a standard wireless interface, such as a Wi-Fi interface;

the Memory is used for storing various types of data, which may include, for example, instructions of any application program or method in the device and application program related data, and may be implemented by any type of volatile or non-volatile Memory device or combination thereof, such as Static Random Access Memory (SRAM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic Memory, flash Memory, magnetic disk or optical disk, and optionally, the Memory may also be a storage device independent from the processor;

the Processor is configured to call the audio Processing program stored in the memory, and perform all or part of the steps of the embodiments of the audio Processing method, and the Processor may be an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components.

Example Four

Based on the same inventive concept, this embodiment provides a computer program product comprising an audio processing program stored on a non-transitory computer-readable storage medium, the audio processing program comprising program instructions that, when executed by a computer, cause the computer to perform all or part of the steps of the embodiments of the audio processing method of the present invention.

Example Five

Based on the same inventive concept, this embodiment provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, or an App store. The storage medium stores an audio processing program executable by one or more processors; when executed by the processors, the audio processing program may implement all or part of the steps of the embodiments of the audio processing method of the present invention.

From the above description of the specific embodiments, those skilled in the art will clearly understand that the methods of the foregoing embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware alone, although in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention may be embodied in the form of a computer software product stored in a storage medium (such as a ROM, a RAM, a magnetic disk, or an optical disk) and including instructions for enabling a terminal device (such as a mobile phone, a computer, a server, or a network device) to execute the methods of the embodiments of the present invention.

It should be noted that the above-mentioned serial numbers of the embodiments of the present invention are merely for description, and do not represent the merits of the embodiments.

The above description is only an alternative embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
