Method and system for recognizing gender of disguised voice speaker

Document No.: 1639662 | Publication date: 2019-12-20

Reading note: this technology, "Method and system for recognizing gender of disguised voice speaker" (一种伪装语音说话人性别识别的方法及系统), was designed and created by 张晓�, 施正昱, 蔡立明 and 董可欣 on 2019-10-10. Its main content: the invention discloses a method for recognizing the gender of a disguised-voice speaker, which collects and cleans the formant parameters of electronically disguised speech and then, using a constructed fully connected neural network model with the formant parameters as the input matrix, determines the speaker's gender through computation over stacked layers of fully connected nonlinear transformations. The scheme recognizes the gender of the disguised-voice speaker based on a fully connected neural network with an accuracy above 95%, effectively overcoming the problems of the prior art.

1. A method for recognizing the gender of a disguised-voice speaker, comprising:

collecting and cleaning formant parameters of the electronically disguised voice; and

determining the gender classification of the disguised-voice speaker by using a constructed fully connected neural network model, taking the formant parameters as the input matrix and computing through stacked layers of fully connected nonlinear transformations.

2. The method for recognizing the gender of a disguised-voice speaker of claim 1, wherein collecting and cleaning the formant parameters of the electronically disguised voice comprises:

(1) extracting the formant parameters of the final part of each word in the electronically disguised voice by the LPC method; and

(2) performing on the extracted formant parameter data, in sequence, the data cleaning operations of spurious-peak removal, formant merging optimization and formant order adjustment.

3. The method for recognizing the gender of a disguised-voice speaker of claim 2, wherein in step (1) the input disguised voice signal is first deconvolved by the linear prediction method: the excitation component is attributed to the prediction residual, the parameters of the vocal tract response component are estimated, and the spectral peaks of the vocal tract response component are then located, thereby obtaining the formant parameters.

4. The method for recognizing the gender of a disguised-voice speaker of claim 2, wherein the fully connected neural network model consists of an input layer, at least one hidden layer and an output layer, the hidden layers lying between the input layer and the output layer; every neuron in each layer of the model is connected to all neurons in the next layer; and the model is further provided with a grid-style parameter list that supplies a parameter pool for adaptive selection of the model parameters.

5. The method for recognizing the gender of a disguised-voice speaker of claim 4, wherein the fully connected neural network model uses activation functions to introduce nonlinearity for hierarchical nonlinear mapping learning.

6. The method for recognizing the gender of a disguised-voice speaker of claim 4, wherein the output layer of the fully connected neural network model uses a Softmax function to perform discretized classification of the computed data.

7. The method for recognizing the gender of a disguised-voice speaker of claim 4, wherein an L-BFGS algorithm is employed in the fully connected neural network model to solve for the corresponding parameters.

Technical Field

The invention relates to speech processing and recognition technology, and in particular to a technique for recognizing the gender of a disguised-voice speaker.

Background

Speech recognition is an important area of forensic authentication. With the spread and development of voice conversion technology, electronically disguised speech can have very serious consequences once it is exploited by criminals. Speaker recognition of electronically disguised speech has therefore become a key issue in current speech recognition.

At present, in disguised-voice recognition, and especially for judging whether a segment of speech is disguised, the Gaussian mixture model (GMM) and the support vector machine (SVM) are widely applied.

The GMM is the fastest of the mixture-model learning algorithms. It is a probabilistic model that assumes all data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. The algorithm simply maximizes the likelihood, without biasing the means toward zero or the cluster sizes toward any particular structure. However, the GMM is demanding on data volume: when a mixture component has too few data points, estimating its covariance becomes difficult, and the algorithm diverges toward solutions with infinite likelihood unless the covariances are artificially regularized. In actual forensic authentication the amount of speech sample data available as evidence is uncertain, so accurate classification with a GMM is hard to achieve when samples are scarce.

The SVM uses a hinge loss function to compute the empirical risk and adds a regularization term to the optimization problem to control the structural risk, giving a classifier that is sparse and robust. The SVM also generalizes well: a trained classifier achieves small error both when reclassifying the training samples and when classifying unknown samples. However, solving the classification problem requires quadratic programming, which demands a large amount of storage, and as the amount of data grows the required space and time overhead increases dramatically.

Furthermore, the two algorithms above are currently applied only to recognizing whether a voice is disguised, and the speech feature they both adopt is the Mel-scale frequency cepstral coefficients (MFCCs). In addition, when these algorithms are used to decide whether a voice is disguised, the corresponding experiments require samples of large data volume, which greatly limits the practicability of such schemes.

Although existing automatic speaker verification (ASV) systems can already cope with changes in the communication scene and channel and in the speaker's emotion and age, they remain deficient at speaker recognition of disguised speech, for which the equal error rates (EERs) exceed 40%.

Studies by Zhuang Ling and Zhao Xiao (acoustic studies on electronically disguised voices) verified that voice conversion by disguise can be achieved for both men and women, and that after conversion it is difficult to verify the speaker's gender by ear; they also investigated, by regression analysis, the relationship between the disguised voices of men and women and the original voices, but proposed no method for recognizing the speaker's gender.

Zhang Gui Qing et al., in a study of the sound change of electronically disguised voices, compared the differences in the acoustic characteristics (pitch, intensity and formants) of male and female speakers after their voices were changed by a telephone voice changer and an earphone voice changer. However, research on the gender of disguised-voice speakers is still largely limited to traditional statistical methods (such as computing means and standard deviations), and effective gender discrimination cannot be achieved by simply comparing such statistics.

An efficient method for recognizing the gender of a disguised-voice speaker is therefore an urgent need in the field.

Disclosure of Invention

Aiming at the inability of existing disguised-voice speaker recognition technology to efficiently identify the speaker's gender, a new disguised-voice speaker recognition technique is needed.

The object of the invention is therefore to provide a method for recognizing the gender of a disguised-voice speaker, so as to realize gender recognition of the speaker of a disguised voice.

To achieve the above object, the invention provides a method for recognizing the gender of a disguised-voice speaker, comprising:

collecting and cleaning formant parameters of the electronically disguised voice; and

determining the gender classification of the disguised-voice speaker by using a constructed fully connected neural network model, taking the formant parameters as the input matrix and computing through stacked layers of fully connected nonlinear transformations.

Further, collecting and cleaning the formant parameters of the electronically disguised voice comprises the following steps:

(1) extracting the formant parameters of the final part of each word in the electronically disguised voice by the LPC method; and

(2) performing on the extracted formant parameter data, in sequence, the data cleaning operations of spurious-peak removal, formant merging optimization and formant order adjustment.

Further, in step (1), the input disguised voice signal is first deconvolved by the linear prediction method: the excitation component is attributed to the prediction residual, the parameters of the vocal tract response component are estimated, and the spectral peaks of the vocal tract response component are then located, thereby obtaining the individual formant parameters.

Further, the fully connected neural network model consists of an input layer, at least one hidden layer and an output layer, the hidden layers lying between the input layer and the output layer; every neuron in each layer of the model is connected to all neurons in the next layer; and the model is further provided with a grid-style parameter list that supplies a parameter pool for adaptive selection of the model parameters.

Further, activation functions are used in the fully connected neural network model to introduce nonlinearity for hierarchical nonlinear mapping learning.

Further, the output layer of the fully connected neural network model uses a Softmax function to perform discretized classification of the computed data.

Further, an L-BFGS algorithm is adopted in the fully connected neural network model to solve for the corresponding parameters.

The scheme provided by the invention recognizes the gender of the disguised-voice speaker based on a fully connected neural network, achieves a recognition accuracy above 95%, and effectively overcomes the problems of the prior art.

Moreover, the scheme is applicable to a variety of voice-disguise means, realizes speaker gender recognition of disguised speech on small samples, reduces the dependence of the recognition technology on data volume, and has good practicability.

Drawings

The invention is further described below in conjunction with the appended drawings and the detailed description.

FIG. 1 is a flow chart of speaker gender recognition of disguised speech in an embodiment of the present invention;

FIG. 2 is a schematic diagram of a fully-connected neural network according to an embodiment of the present invention.

Detailed Description

To make the technical means, creative features, objects and effects of the invention easy to understand, the invention is further explained below with reference to the specific drawings.

Aiming at the problem of recognizing the gender of the speaker of electronically disguised speech, this example determines the gender of the disguised-voice speaker by constructing a fully connected neural network model and, taking the formant parameters of the electronically disguised speech as the input matrix, computing through stacked layers of fully connected nonlinear transformations.

Referring to FIG. 1, this example shows a gender recognition process for disguised-voice speakers based on the above principle.

As can be seen from the figure, the speaker gender recognition process for disguised speech mainly comprises the following steps:

1. Data collection and cleaning.

(1.1) Extract the formant parameters of the final part of each character in the disguised speech by the LPC method. The formant parameters here include center frequency, bandwidth and intensity. The input speech signal is first deconvolved by the linear prediction method: the excitation component is attributed to the prediction residual, the parameters of the vocal tract response component are estimated, and the spectral peaks of the vocal tract response component are then located, i.e., the formant parameters are obtained.
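The LPC extraction just described can be sketched in a self-contained way. The synthetic two-resonance signal, sampling rate and prediction order below are illustrative assumptions, not the patent's actual settings:

```python
import numpy as np

def lpc_formants(signal, order, fs):
    """Estimate formant center frequencies and bandwidths (Hz) via LPC.

    Linear prediction deconvolves the signal: the excitation is attributed
    to the prediction residual, while the LPC polynomial models the vocal
    tract response, whose spectral peaks are the formants.
    """
    # Autocorrelation method: solve the Yule-Walker equations for the
    # prediction coefficients a[1..order].
    x = signal * np.hamming(len(signal))          # taper to reduce edge effects
    r = np.correlate(x, x, mode="full")[len(x) - 1:len(x) + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])        # predictor coefficients
    poly = np.concatenate(([1.0], -a))            # A(z) = 1 - sum a_k z^-k

    # Poles of 1/A(z) in the upper half plane correspond to formants:
    # the pole angle gives the center frequency, the radius the bandwidth.
    roots = [z for z in np.roots(poly) if z.imag > 0]
    freqs = np.array([np.angle(z) * fs / (2 * np.pi) for z in roots])
    bws = np.array([-fs / np.pi * np.log(abs(z)) for z in roots])
    idx = np.argsort(freqs)
    return freqs[idx], bws[idx]

# Synthetic "vowel": two damped resonances at 700 Hz and 1200 Hz.
fs = 8000
t = np.arange(2048) / fs
sig = (np.exp(-60 * t) * np.sin(2 * np.pi * 700 * t)
       + 0.5 * np.exp(-80 * t) * np.sin(2 * np.pi * 1200 * t))
freqs, bws = lpc_formants(sig, order=8, fs=fs)  # peaks near 700 and 1200 Hz
```

On real disguised speech, the same routine would be applied per analysis frame of the final part of each word.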

(1.2) Extraction of formant parameter values by the LPC method produces spurious peaks and merged formants. This example therefore performs on the disguised-voice formant parameter data extracted by the LPC method, in sequence, the data cleaning operations of spurious-peak removal, formant merging optimization and formant order adjustment, to reduce the interference caused by spurious peaks and formant merging.
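The three cleaning operations can be sketched as follows; the bandwidth threshold, merging distance and number of retained formants are hypothetical values chosen for illustration, not the patent's settings:

```python
import numpy as np

def clean_formants(freqs, bws, max_bw=400.0, min_sep=150.0, n_keep=4):
    """Sketch of the three cleaning steps:
    1) spurious-peak removal: drop candidates with implausibly wide bandwidth;
    2) formant merging optimization: fuse candidates closer than min_sep Hz;
    3) formant order adjustment: sort ascending and keep the first n_keep.
    """
    freqs, bws = np.asarray(freqs, float), np.asarray(bws, float)

    # 1) Spurious peaks from LPC root-finding tend to have very large bandwidth.
    keep = bws < max_bw
    freqs, bws = freqs[keep], bws[keep]

    # 2) Merge neighbouring candidates that model a single formant.
    merged_f, merged_b = [], []
    for f, b in sorted(zip(freqs, bws)):
        if merged_f and f - merged_f[-1] < min_sep:
            # weight the merged centre toward the narrower (stronger) peak
            w = merged_b[-1] / (merged_b[-1] + b)
            merged_f[-1] = w * f + (1 - w) * merged_f[-1]
            merged_b[-1] = min(merged_b[-1], b)
        else:
            merged_f.append(f)
            merged_b.append(b)

    # 3) Ascending order, fixed-length output (e.g. F1..F4).
    return merged_f[:n_keep], merged_b[:n_keep]

# A 3000 Hz-wide candidate is dropped; 250 Hz and 320 Hz are merged.
f, b = clean_formants([250, 320, 900, 2600, 3100], [80, 120, 3000, 90, 110])
```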

2. Gender recognition of the disguised-voice speaker.

In this example, a grid-style parameter list is first designed to provide a parameter pool for adapting the model parameters. The parameter pool comprises parameters such as the number of hidden layers, the hidden layer structure, the regularization term, the activation function class and the iteration rate. A fully connected neural network model is then constructed from the data. The network consists of an input layer, hidden layers and an output layer; the hidden layers lie between the input and output layers and may number more than one. The layers of the model are fully connected, that is, every neuron in one layer is connected to all neurons in the next layer (as shown in FIG. 2).
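A grid-style parameter pool of this kind can be sketched with scikit-learn's `GridSearchCV` and `MLPClassifier`, which are assumed here as stand-ins (the patent names no library); the feature matrix and labels are synthetic:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for cleaned formant features (e.g. F1-F4 center
# frequency, bandwidth and intensity -> 12 columns); 0 = male, 1 = female.
X = rng.normal(size=(200, 12))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Grid-style parameter list: the "parameter pool" the model adapts from.
param_pool = {
    "hidden_layer_sizes": [(32,), (32, 16)],     # hidden layer count/structure
    "activation": ["logistic", "tanh", "relu"],  # activation function class
    "alpha": [1e-4, 1e-2],                       # regularization term
}

search = GridSearchCV(
    MLPClassifier(solver="lbfgs", max_iter=500, random_state=0),
    param_pool, cv=3,
)
search.fit(X, y)
best = search.best_estimator_   # the adapted model configuration
```

In scikit-learn, "logistic" is the Sigmoid activation mentioned later in this description.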

Specifically, the input layer of the fully connected neural network model receives the formant feature data after the cleaning operations of spurious-peak removal, formant merging optimization and formant order adjustment;

the hidden layers of the fully connected neural network model deepen the network, improving its capacity to fit the data;

the output layer of the fully connected neural network model performs discretized classification of the computed result and finally outputs the gender of the disguised-voice speaker, i.e., male/female.

Furthermore, each layer of the fully connected neural network model performs a "linear combination followed by activation-function processing" step. The input layer performs this step after receiving the input data and passes the result to the adjacent hidden layer; each hidden layer performs this step on the output of the previous layer and passes the result to the next hidden layer (or to the output layer if there is no further hidden layer); and the output layer performs this step on the output of the last hidden layer, applies a Softmax function to discretize the result into classes, and outputs the final speaker gender classification.

Furthermore, the layers of the fully connected neural network model are fully connected: every node of a fully connected layer is connected to all nodes of the previous layer, so as to integrate the extracted features. This realizes a linear transformation of the disguised-voice formant feature data from one feature space to another.

Fundamentally, gender recognition of the disguised-voice speaker divides the high-dimensional formant feature data into two regions of the feature space: one corresponding to male speakers, the other to female speakers. In general, however, it is not easy to classify formant feature data directly in high dimension. The fully connected structure of the model therefore transforms the feature space so that the transformed high-dimensional data becomes easier to separate, which aids speaker gender recognition of the disguised voice.

On this basis, to let the neural network better handle the speech problem, activation functions are used to introduce nonlinearity into the recognition model, giving it hierarchical nonlinear mapping learning capability. The activation functions employed here may be Sigmoid, tanh or ReLU.

Introducing a nonlinear function into the neural network as the excitation function effectively improves the expressive power of the deep network: the network is no longer a linear combination of its inputs and can approximate arbitrary functions.

Without activation functions, the input of each node is a linear function of the previous layer's output. In that case, no matter how many layers the network has, its output is a linear combination of the input data, and its approximation capability is quite limited.
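The collapse of a purely linear network into a single affine map can be checked directly; the layer sizes and random weights below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(42)

# Three "layers" with no activation: x -> W3(W2(W1 x + b1) + b2) + b3
W1, W2, W3 = (rng.normal(size=(5, 12)), rng.normal(size=(4, 5)),
              rng.normal(size=(2, 4)))
b1, b2, b3 = rng.normal(size=5), rng.normal(size=4), rng.normal(size=2)

def deep_linear(x):
    return W3 @ (W2 @ (W1 @ x + b1) + b2) + b3

# The composition is itself one affine map W x + b, however many layers.
W = W3 @ W2 @ W1
b = W3 @ (W2 @ b1 + b2) + b3

x = rng.normal(size=12)
assert np.allclose(deep_linear(x), W @ x + b)

# Inserting a nonlinearity (here ReLU) between layers breaks this
# equivalence, which is what gives the deep network its expressive power.
def deep_relu(x):
    return W3 @ np.maximum(0, W2 @ np.maximum(0, W1 @ x + b1) + b2) + b3
```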

Furthermore, the output layer of the fully connected neural network model uses a Softmax function to perform discretized classification of the computed data.

The Softmax function maps an arbitrary N-dimensional real vector to an N-dimensional vector whose elements all lie in (0, 1), realizing vector normalization. It addresses the probability-based multi-class decision in disguised-voice speaker recognition: the gender judgment finally output by the model is produced by the Softmax function.
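A minimal Softmax sketch of this mapping; the two "male/female" scores are illustrative:

```python
import numpy as np

def softmax(z):
    """Map an arbitrary real N-vector to an N-vector of values in (0, 1)
    summing to 1, i.e. a probability distribution over the classes."""
    z = np.asarray(z, float)
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

# Two output units (male / female): raw scores -> class probabilities;
# the predicted gender is the class with the larger probability.
scores = np.array([2.0, 0.5])
probs = softmax(scores)
```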

In addition, in a specific implementation, the L-BFGS algorithm is preferably adopted as the parameter optimization algorithm for solving the fully connected neural network. The algorithm is computationally efficient and suited to prediction on small samples, and it yields the connection weights and bias coefficients of each layer of the model.

In this embodiment, after the originally acquired formant feature data has been cleaned, the training data are fed into the fully connected neural network model together with the true gender label of the speaker corresponding to each formant feature vector. The training data and labels pass through the input layer, hidden layers and output layer for fitting and supervised classification. That is, the feature-space linear transformations produced by the fully connected layers are not random but are purposefully shaped by the speakers' true gender labels. Because solving for the parameters of these feature-space transformations is complex, this embodiment uses the L-BFGS algorithm. Once the fully connected neural network model has been trained, it can be used for gender recognition of disguised-voice speakers: feeding in other disguised-voice formant feature test data yields the speaker gender recognition output (male/female).
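The training just described can be sketched end-to-end with scikit-learn's `MLPClassifier`, whose `solver="lbfgs"` option selects the L-BFGS optimization named here; the feature data, labels and hyperparameters below are illustrative assumptions, not the patent's settings:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Hypothetical stand-in for cleaned disguised-voice formant features:
# 12 columns (center frequency, bandwidth, intensity for F1-F4), small sample.
n = 120
X = rng.normal(size=(n, 12))
y = np.where(X[:, 0] - 0.8 * X[:, 2] > 0, "male", "female")  # synthetic labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# L-BFGS is a quasi-Newton batch optimizer, well suited to small samples.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    solver="lbfgs", alpha=1e-3, max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)          # supervised fitting against true gender labels
pred = clf.predict(X_te)     # gender output for unseen formant features
```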

Thus, based on the fully connected neural network model, the collected and cleaned formant parameters of the electronically disguised speech serve as the input matrix, and computation through the stacked layers of fully connected nonlinear transformations outputs the gender recognition result for the disguised-voice speaker, i.e., a judgment of male or female.

The fully connected neural network model formed by this scheme has nonlinear mapping capability and universality. For nonlinear mapping, activation functions introduce nonlinearity and improve the model's classification performance. For universality, experiments show that the model adapts well to 53 different electronically disguised voices produced by the 3 disguise modes of tempo, rate and pitch, with high speaker gender recognition accuracy for disguised speech.

For example, the following describes a process for performing gender recognition of a disguised voice speaker based on the fully connected neural network model.

The whole process mainly comprises the following steps:

(1) Data preprocessing (i.e., data cleaning):

perform spurious-peak removal, formant merging optimization and formant order adjustment on the acquired raw formant feature data, eliminating outliers and filling missing values in the process, in preparation for the model fitting of the next step.

(2) Model training:

Input: the cleaned disguised-voice formant feature data and the true gender labels of the corresponding speakers;

Training: with the speakers' true gender labels as the supervision item (i.e., the training target), fit the linear feature-space mappings and their parameters by the L-BFGS algorithm;

Output: the speaker gender recognition result. The number of network layers, number of neurons, number of iterations, activation function, etc. are tuned by comparing this result with the input true gender labels.

(3) Speaker gender recognition:

using the model trained in step (2) (all required parameters having been obtained through the training process), speaker gender recognition of the disguised voice can be realized;

Input: other disguised-voice formant feature test data;

Recognition calculation: fitting calculation with the model trained in step (2) (with all required parameter values);

Output: the gender recognition result for the disguised-voice speaker.

The present example further validated the performance of the above scheme through a series of experiments.

To construct the verification experiments, this example electronically disguised the natural voices of a man and a woman using the three basic transformation functions of SoundTouch: pitch, tempo and rate. Verification was carried out from three angles: the number of neural network layers, the type of activation function, and universality across different voice-disguise means.

From the experimental results it can be determined that:

1. For neural networks, the network structure can significantly affect the results. Too deep a structure not only incurs large time overhead but also easily leads to overfitting. With few layers (2-4 hidden layers), the scheme of this embodiment achieves a gender recognition accuracy of 97.89% on the test set.

2. The overall gender recognition performance of the 3 activation functions Sigmoid, tanh and ReLU was compared in the experiment. With two hidden layers, the best recognition results of Sigmoid, tanh and ReLU on the test set were 96.96%, 93.03% and 96.73%, respectively.

3. The embodiment is stable across different voice-disguise means and applicable to speaker gender recognition under various electronic disguises, with good recognition and classification results. The scheme is most sensitive to the tempo disguise, with test-set accuracy as high as 0.9937, essentially error-free; next comes pitch; lowest is rate, for which test-set accuracy still reaches 0.9330.

In summary, the neural-network-based disguised-voice speaker recognition scheme of this embodiment recognizes the speaker's gender from formant parameters such as center frequency, bandwidth and intensity. The model uses a neural network as its framework, obtains the recognition result through fully connected nonlinear stacked computation, and solves the optimization parameters with L-BFGS during training. Experimental results show that the scheme efficiently recognizes the speaker of electronically disguised speech, with test-set gender recognition accuracy of up to 97.89%. The scheme also generalizes well across disguise means: with disguised speech produced by pitch, rate and tempo, test-set gender recognition accuracy ranged from 93.30% at the lowest to 99.37% at the highest.

Finally, it should be noted that the method of the invention, or a specific system unit or some of its units, is a pure software structure that can be distributed via program code on a physical medium such as a hard disk, an optical disk, or any electronic device (e.g., a smartphone or a computer-readable storage medium); when the program code is loaded and executed by a machine (e.g., a smartphone), the machine becomes an apparatus for implementing the invention. The methods and apparatus of the invention may also be embodied as program code transmitted over a transmission medium such as electrical cable or optical fiber, or by any other form of transmission: when the program code is received, loaded and executed by a machine such as a smartphone, the machine becomes an apparatus for practicing the invention.

The foregoing shows and describes the general principles, essential features and advantages of the invention. Those skilled in the art will understand that the invention is not limited to the embodiments described above, which are set out in the specification only to illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.
