Language identification method and device and readable storage medium


Abstract: The invention discloses a language identification method, device, and readable storage medium, designed by 邹学强, 包秀国, 袁庆升, 韩纪庆, 郑贵滨, and 郑铁然 on 2019-10-28. The method comprises the following steps: acquiring training voice data and constructing a recognition model from it; acquiring voice data to be detected and extracting feature information of the voice data to be detected; recognizing the feature information through the recognition model; and, when a given number of consecutive pieces of feature information are all recognized as the same language, determining the feature information of the consecutive segment to be speech of the recognized language. Because the constructed recognition model is used to recognize the feature information, the method solves the problem that existing recognition results are influenced by the speaker's own timbre, and that models such as the universal background model do not consider channel effects, both of which make recognition inaccurate.

1. A language identification method, comprising the steps of:

acquiring training voice data, and constructing a recognition model according to the training voice data;

acquiring voice data to be detected, and extracting feature information of the voice data to be detected;

recognizing the feature information through the recognition model;

and, when a given number of consecutive pieces of feature information are all recognized as the same language, determining the feature information of the consecutive segment to be speech of the recognized language.

2. The method of claim 1, wherein obtaining training speech data and constructing a recognition model from the training speech data comprises:

acquiring training voice data of an existing language;

extracting acoustic features of the training voice data;

and performing parameter training on the extracted acoustic features through a neural network to obtain a trained recognition model.

3. The method of claim 2, wherein acquiring the voice data to be detected and extracting the feature information of the voice data to be detected comprises:

acquiring voice data to be detected, and preprocessing the voice data to be detected;

and performing framing processing on the preprocessed voice data to obtain a feature vector sequence of the voice data to be detected.

4. The method of claim 3, wherein framing the pre-processed speech data comprises:

weighting the preprocessed voice data with a moving window;

and calculating perceptual linear prediction (PLP) coefficients for the weighted voice data to obtain a feature vector sequence of the voice data to be detected.

5. The method of claim 3, wherein after framing the pre-processed speech data, the method further comprises:

appending a specified number of frames before and after each frame to the current frame to obtain a frame segment.

6. The method of claim 5, wherein recognizing the feature information through the recognition model comprises: recognizing the feature vector sequence of the frame segment through the recognition model.

7. The method according to claim 6, wherein, in a case where a given number of consecutive pieces of feature information are all recognized as the same language, determining the feature information of the consecutive segment to be speech of the recognized language comprises:

after language identification starts, continuously recognizing the feature vector sequences of the frame segments of the voice to be detected;

when the feature vector sequence of the current frame segment is judged not to belong to the target language, recording the number of frame segments that have so far been consecutively recognized as belonging to the target language;

and when the number of consecutively recognized frame segments belonging to the target language is greater than 50% of the total number of frame segments, determining the whole segment to be voice information of the target language.

8. A language identification device, comprising:

a voice data acquisition module, configured to acquire training voice data and voice data to be detected;

a feature extraction module, configured to extract feature information of the voice data to be detected;

and a recognition module, configured to recognize the feature information through the recognition model and, when a given number of consecutive pieces of feature information are all recognized as the same language, to determine the feature information of the consecutive segment to be speech of the recognized language.

9. A computer-readable storage medium having stored thereon a program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.

Technical Field

The present invention relates to the field of audio recognition technologies, and in particular, to a language identification method, a language identification device, and a readable storage medium.

Background

Language identification is a key component of automatic speech recognition and refers to the process of automatically determining the language of a given speech segment. Its purpose is to let a computer decide autonomously which language a test utterance is spoken in. As an important part of speech signal processing, language identification has gradually become both a front-end technology for content-oriented speech recognition and a back-end supporting technology for artificial intelligence, including natural language processing.

In recent years, although language identification technology has advanced considerably, many problems remain unsolved. Most current research builds on the traditional universal background model, the identity vector (i-vector) method, or acoustic-model-based methods, largely borrowed from speaker recognition; as a result, the recognition result is inaccurate because it is influenced by the speaker's own timbre during recognition.

Disclosure of Invention

The embodiments of the present invention provide a language identification method, a language identification device, and a readable storage medium, to solve the prior-art problem that recognition results are inaccurate because they are influenced by the speaker's own timbre during recognition.

In a first aspect, an embodiment of the present invention provides a language identification method, where the method includes the following steps:

acquiring training voice data, and constructing a recognition model according to the training voice data;

acquiring voice data to be detected, and extracting feature information of the voice data to be detected;

recognizing the feature information through the recognition model;

and, when a given number of consecutive pieces of feature information are all recognized as the same language, determining the feature information of the consecutive segment to be speech of the recognized language.

Optionally, obtaining training speech data, and constructing a recognition model according to the training speech data, includes:

acquiring training voice data of an existing language;

extracting acoustic features of the training voice data;

and performing parameter training on the extracted acoustic features through a neural network to obtain a trained recognition model.

Optionally, acquiring the voice data to be detected and extracting the feature information of the voice data to be detected includes:

acquiring voice data to be detected, and preprocessing the voice data to be detected;

and performing framing processing on the preprocessed voice data to obtain a feature vector sequence of the voice data to be detected.

Optionally, performing framing processing on the preprocessed voice data includes:

weighting the preprocessed voice data with a moving window;

and calculating perceptual linear prediction (PLP) coefficients for the weighted voice data to obtain a feature vector sequence of the voice data to be detected.

Optionally, after performing framing processing on the preprocessed voice data, the method further includes:

appending a specified number of frames before and after each frame to the current frame to obtain a frame segment.

Optionally, recognizing the feature information through the recognition model includes: recognizing the feature vector sequence of the frame segment through the recognition model.

Optionally, when a given number of consecutive pieces of feature information are all recognized as the same language, determining the feature information of the consecutive segment to be speech of the recognized language includes:

after language identification starts, continuously recognizing the feature vector sequences of the frame segments of the voice to be detected;

when the feature vector sequence of the current frame segment is judged not to belong to the target language, recording the number of frame segments that have so far been consecutively recognized as belonging to the target language;

and when the number of consecutively recognized frame segments belonging to the target language is greater than 50% of the total number of frame segments, determining the whole segment to be voice information of the target language.

In a second aspect, an embodiment of the present invention provides a language identification apparatus, including:

a voice data acquisition module, configured to acquire training voice data and voice data to be detected;

a feature extraction module, configured to extract feature information of the voice data to be detected;

and a recognition module, configured to recognize the feature information through the recognition model and, when a given number of consecutive pieces of feature information are all recognized as the same language, to determine the feature information of the consecutive segment to be speech of the recognized language.

In a third aspect, an embodiment of the present invention provides a computer-readable storage medium on which a program is stored; when executed by a processor, the program implements the steps of the foregoing method.

According to the embodiments of the invention, a recognition model is constructed from the training voice data, and the constructed model is used to recognize the feature information. This solves the problem that existing recognition results are influenced by the speaker's own timbre, and that models such as the universal background model do not consider channel effects, both of which make recognition inaccurate; a positive technical effect is thereby achieved.

The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.

Drawings

Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:

FIG. 1 is a flow chart of a first embodiment of the present invention;

FIG. 2 is a flowchart of the Uyghur recognition process according to the second embodiment of the present invention;

FIG. 3 is a graph comparing the performance of the second embodiment of the present invention with that of the conventional model.

Detailed Description

Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.

A first embodiment of the present invention provides a language identification method, as shown in fig. 1, the method includes the following steps:

acquiring training voice data, and constructing a recognition model according to the training voice data;

acquiring voice data to be detected, and extracting feature information of the voice data to be detected;

recognizing the feature information through the recognition model;

and, when a given number of consecutive pieces of feature information are all recognized as the same language, determining the feature information of the consecutive segment to be speech of the recognized language.

According to the embodiment of the invention, a recognition model is constructed from the training voice data, and the constructed model is used to recognize the feature information; this solves the problem that existing recognition results are influenced by the speaker's own timbre and that models such as the universal background model do not consider channel effects, which makes recognition inaccurate.

Optionally, obtaining training speech data, and constructing a recognition model according to the training speech data, includes:

acquiring training voice data of an existing language;

extracting acoustic features of the training voice data;

and performing parameter training on the extracted acoustic features through a neural network to obtain a trained recognition model.

Specifically, constructing the recognition model includes: using an existing language data set, extracting features from the training-set audio according to specified acoustic features, such as perceptual linear prediction features, and training the network parameters with a deep neural network, which in this embodiment may be a multi-layer perceptron, to obtain a trained recognition network model.

In this embodiment, feature extraction on the existing language data set can be completed as follows. The speech signal s(n) of an existing language is sampled, quantized, and pre-emphasized. The signal is assumed to be short-time stationary, so it can be divided into frames; specifically, framing is implemented by weighting with a movable finite-length window. Perceptual linear prediction (PLP) coefficients are then computed on the weighted speech signal sw(n), yielding a feature vector sequence X = {x1, x2, …, xm}.
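For illustration only, this front end can be sketched in Python/NumPy as follows; the frame length, frame shift, pre-emphasis coefficient, and Hamming window are typical values assumed here, since the embodiment does not fix them:

```python
import numpy as np

def preprocess_and_frame(s, frame_len=400, frame_shift=160, pre_emph=0.97):
    """Pre-emphasize s(n), divide it into short-time frames, and weight each
    frame with a movable finite-length (Hamming) window, as described above.
    frame_len/frame_shift correspond to 25 ms / 10 ms at 16 kHz (assumed);
    s is assumed to be at least frame_len samples long."""
    s = np.append(s[0], s[1:] - pre_emph * s[:-1])   # pre-emphasis
    n_frames = 1 + (len(s) - frame_len) // frame_shift
    window = np.hamming(frame_len)                   # the moving window
    return np.stack([s[i * frame_shift:i * frame_shift + frame_len] * window
                     for i in range(n_frames)])      # (n_frames, frame_len)
```

The PLP coefficients are then computed per weighted frame, as detailed in the second embodiment below.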

Optionally, acquiring the voice data to be detected and extracting the feature information of the voice data to be detected includes:

acquiring voice data to be detected, and preprocessing the voice data to be detected;

and performing framing processing on the preprocessed voice data to obtain a feature vector sequence of the voice data to be detected.

Specifically, for example, let the audio signal of the speech segment to be detected be s(n); feature extraction on the input audio signal yields a feature vector sequence X = {x1, x2, …, xs}, where s is a natural number.

Optionally, performing framing processing on the preprocessed voice data includes:

weighting the preprocessed voice data with a moving window;

and calculating perceptual linear prediction (PLP) coefficients for the weighted voice data to obtain a feature vector sequence of the voice data to be detected.

The above can be performed in the same way as feature extraction on the existing language data set: the speech signal s(n) to be detected is sampled, quantized, and pre-emphasized; assuming short-time stationarity, the signal is divided into frames by weighting with a movable finite-length window, and perceptual linear prediction (PLP) coefficients are computed on the weighted speech signal sw(n) to obtain the feature vector sequence X = {x1, x2, …, xs}.

Optionally, in an embodiment of the present invention, after performing framing processing on the preprocessed voice data, the method further includes:

appending a specified number of frames before and after each frame to the current frame to obtain a frame segment.

Specifically, on the basis of framing, the T frames before and the T frames after each frame of the audio are taken together with that frame as one segment, where T is a positive integer; the frames are thus expanded with context so that recognition is more accurate.
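A sketch of this frame-segment expansion under the same assumptions; edge frames are padded by repeating the first and last frame, a detail the text leaves open:

```python
import numpy as np

def expand_context(features, T=10):
    """Stack the T preceding and T following frames onto each frame, giving
    one frame segment of (2T+1)*D dimensions per frame (e.g. 39*21 = 819)."""
    n = len(features)
    padded = np.concatenate([np.repeat(features[:1], T, axis=0),
                             features,
                             np.repeat(features[-1:], T, axis=0)])
    return np.stack([padded[i:i + 2 * T + 1].reshape(-1) for i in range(n)])
```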

Optionally, recognizing the feature information through the recognition model includes: recognizing the feature vector sequence of the frame segment through the recognition model.

Optionally, when a given number of consecutive pieces of feature information are all recognized as the same language, determining the feature information of the consecutive segment to be speech of the recognized language includes:

after language identification starts, continuously recognizing the feature vector sequences of the frame segments of the voice to be detected;

when the feature vector sequence of the current frame segment is judged not to belong to the target language, recording the number of frame segments that have so far been consecutively recognized as belonging to the target language;

and when the number of consecutively recognized frame segments belonging to the target language is greater than 50% of the total number of frame segments, determining the whole segment to be voice information of the target language.

Specifically, in this embodiment the parameter M may be set in advance, according to the total number of frame segments, before recognition starts; if M (or more) consecutive segments are recognized as the target language, the time spans corresponding to those consecutive segments are detected as time spans of the target language.
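As an illustrative sketch of this rule, with the per-segment language labels assumed to come from the recognition model and M the preset parameter:

```python
def detect_target_runs(labels, target, M):
    """Return (start, end) index pairs of runs of at least M consecutive
    segments recognized as `target`; these runs are the detected time spans."""
    runs, start = [], None
    for i, label in enumerate(labels):
        if label == target:
            if start is None:
                start = i                 # a run of target-language segments begins
        elif start is not None:
            if i - start >= M:
                runs.append((start, i))   # run long enough: keep it
            start = None
    if start is not None and len(labels) - start >= M:
        runs.append((start, len(labels)))
    return runs
```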

A second embodiment of the present invention provides a specific example of the language identification method, taking the recognition of Uyghur as an example. The method includes the following steps:

Step 1: using an existing language data set, extract features from the training-set audio according to the specified acoustic features, such as perceptual linear prediction features, and train the network parameters with a multi-layer perceptron network. The input layer takes the perceptual linear prediction features of the input frame together with the preceding and following T frames (T = 10), for a total of 39 × 21 = 819 dimensions; the output layer is a softmax layer representing the probability that the input is or is not Uyghur; there are 5 hidden layers with 1280 nodes each; the loss function is cross-entropy, the optimization method is asynchronous stochastic gradient descent, and L2 regularization is used to prevent overfitting. This yields a trained recognition network model.
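For illustration only, the network described here might be sketched as follows. PyTorch is our choice rather than the embodiment's; asynchronous stochastic gradient descent is a distributed-training detail omitted from this single-machine sketch, and L2 regularization is expressed as weight decay:

```python
import torch
import torch.nn as nn

class LanguageIDNet(nn.Module):
    """MLP: 819-dim input (39-dim PLP x 21 frames), 5 hidden layers of
    1280 nodes each, and a 2-way output (target language or not)."""
    def __init__(self, in_dim=39 * 21, hidden=1280, n_classes=2):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(5):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, n_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # CrossEntropyLoss below applies log-softmax internally,
        # so the softmax output layer is implicit here.
        return self.net(x)

model = LanguageIDNet()
criterion = nn.CrossEntropyLoss()                    # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            weight_decay=1e-4)       # L2 regularization
```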

Step 2: take the audio signal of the speech segment to be detected as s(n) and perform feature extraction on the input audio signal to obtain a feature vector sequence X = {x1, x2, …, xs}, where s is a natural number;

Step 3: on the basis of framing, take the T frames before and after each frame of the audio together with it as one segment, where T is a positive integer, typically 10;

Step 4: use the trained deep neural network (the multi-layer perceptron) to recognize each segment;

Step 5: if, for the preset parameter M, M (or more) consecutive segments are recognized as Uyghur, detect the time spans corresponding to those consecutive segments as Uyghur time spans.

In this embodiment, the feature extraction in steps 1 and 2 proceeds as follows: the speech signal s(n) is sampled, quantized, and pre-emphasized; since the signal is assumed short-time stationary, it is divided into frames, specifically by weighting with a movable finite-length window, and perceptual linear prediction (PLP) coefficients are computed on the weighted speech signal sw(n), yielding the feature vector sequence X = {x1, x2, …, xs}.

In this embodiment, the extraction process of the PLP parameter is as follows:

(1) The input audio signal is framed and windowed, and a discrete Fourier transform is then applied to obtain the spectral distribution information.

Let the DFT of the audio signal be

X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πnk/N}, k = 0, 1, …, N−1

where x(n) is the input audio signal and N is the number of points of the Fourier transform.

(2) Spectrum calculation: after front-end processing and the discrete Fourier transform of the speech signal, the short-time power spectrum is obtained as the sum of the squares of the real and imaginary parts of the short-time spectrum:

P(ω) = Re[X(ω)]² + Im[X(ω)]²
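Steps (1) and (2) amount to the following sketch, assuming the windowed frames produced by the front end described earlier:

```python
import numpy as np

def power_spectrum(frames, n_fft=512):
    """Apply the DFT to each windowed frame and take Re^2 + Im^2 to obtain
    the short-time power spectrum P(w) of step (2)."""
    X = np.fft.rfft(frames, n=n_fft)        # discrete Fourier transform
    return X.real ** 2 + X.imag ** 2        # == |X(w)|^2
```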

(3) Critical band analysis: the frequency axis ω of the spectrum P(ω) is mapped to the Bark frequency Ω:

Ω(ω) = 6 ln{ω/1200π + [(ω/1200π)² + 1]^{1/2}}

Ω is then transformed according to the critical-band curve:

ψ(Ω) = 0 for Ω < −1.3; 10^{2.5(Ω+0.5)} for −1.3 ≤ Ω ≤ −0.5; 1 for −0.5 < Ω < 0.5; 10^{−(Ω−0.5)} for 0.5 ≤ Ω ≤ 2.5; 0 for Ω > 2.5

The discrete convolution of ψ(Ω) with P(ω) yields the critical-band power spectrum:

θ(Ω_i) = Σ_ω ψ(Ω_i − Ω(ω)) P(ω)

θ(Ω) is generally sampled at intervals of about one Bark; with a suitable sampling interval, an integer number of samples can completely cover the whole analysis band. In this embodiment an interval of 0.994 Bark is used, covering a bandwidth of 0–16.9 Bark (0–5 kHz) with 18 spectral samples of θ(Ω).
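A sketch of step (3) follows; the piecewise ψ(Ω) curve is the standard PLP critical-band form reconstructed above, so it is an assumption rather than a transcription of the original figures:

```python
import numpy as np

def bark(omega):
    """Omega(w) = 6 ln{w/1200pi + [(w/1200pi)^2 + 1]^(1/2)}, w in rad/s."""
    x = omega / (1200 * np.pi)
    return 6 * np.log(x + np.sqrt(x ** 2 + 1))

def critical_band_curve(d):
    """Standard PLP critical-band masking curve psi(Omega) (assumed form)."""
    psi = np.zeros_like(d)
    lo = (d >= -1.3) & (d <= -0.5)
    flat = (d > -0.5) & (d < 0.5)
    hi = (d >= 0.5) & (d <= 2.5)
    psi[lo] = 10.0 ** (2.5 * (d[lo] + 0.5))
    psi[flat] = 1.0
    psi[hi] = 10.0 ** (-(d[hi] - 0.5))
    return psi

def critical_band_spectrum(P, sr=10000, n_bands=18):
    """theta(Omega_i) = sum_w psi(Omega_i - Omega(w)) P(w), sampled at
    roughly one-Bark intervals; 18 bands cover 0-5 kHz as in the text."""
    n_bins = P.shape[-1]
    omega = 2 * np.pi * np.linspace(0, sr / 2, n_bins)   # rad/s per DFT bin
    z = bark(omega)
    centers = np.linspace(z[1], z[-1], n_bands)
    weights = np.stack([critical_band_curve(z - c) for c in centers])
    return P @ weights.T                                 # (n_frames, n_bands)
```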

(4) Equal-loudness pre-emphasis

θ[Ω(ω)] is pre-emphasized with a simulated equal-loudness curve:

Ψ[Ω(ω)] = E(ω) θ[Ω(ω)]

The function E(ω) is given by

E(ω) = [(ω² + 56.8×10⁶) ω⁴] / [(ω² + 6.3×10⁶)² (ω² + 0.38×10⁹)]

and approximately reflects the differing sensitivity of the human ear to high and low frequencies.

(5) Intensity-loudness conversion

The loudness amplitude is compressed in order to approximate the nonlinear relation between the loudness perceived by the human ear and the intensity of the sound itself:

Φ(Ω) = Ψ(Ω)^0.33
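Steps (4) and (5) in the same sketch style; the closed form of E(ω) is the standard PLP equal-loudness approximation assumed above:

```python
import numpy as np

def equal_loudness(omega):
    """E(w) = [(w^2 + 56.8e6) w^4] / [(w^2 + 6.3e6)^2 (w^2 + 0.38e9)]
    (assumed standard form), approximating the ear's frequency sensitivity."""
    w2 = omega ** 2
    return (w2 + 56.8e6) * w2 ** 2 / ((w2 + 6.3e6) ** 2 * (w2 + 0.38e9))

def intensity_to_loudness(psi):
    """Step (5): cube-root amplitude compression, Phi = Psi ** 0.33."""
    return psi ** 0.33
```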

(6) Solving the linear prediction coefficients with an all-pole model

Before this step an inverse Fourier transform is applied; the linear prediction coefficients are then solved with the Levinson-Durbin recursion, and the final result is the PLP feature parameters. The algorithm proceeds as follows:

Compute the autocorrelation coefficients R_n(j), j = 0, 1, …, p.

1. Initialize E^{(0)} = R_n(0), i = 1.

2. Apply the recurrence formulas:

k_i = [R_n(i) − Σ_{j=1}^{i−1} a_j^{(i−1)} R_n(i−j)] / E^{(i−1)}

a_i^{(i)} = k_i

a_j^{(i)} = a_j^{(i−1)} − k_i a_{i−j}^{(i−1)}, j = 1, 2, …, i−1

E^{(i)} = (1 − k_i²) E^{(i−1)}

3. Set i = i + 1. If i > p, the algorithm stops; if i ≤ p, return to step 2 and continue with the recurrence formulas.

In this algorithm the superscript denotes the order of the predictor: a_j^{(i)} is the j-th prediction coefficient of the i-th order predictor, and E^{(i)} is the prediction residual energy of the i-th order predictor. Running the recurrence yields the solutions of all predictors up to order p; the final solution is the result at order p:

a_j = a_j^{(p)}, j = 1, 2, …, p

Since the prediction residual energy E^{(i)} of the predictor of every order is non-negative, it follows from the formulas above that the parameters k_i must satisfy

|k_i| ≤ 1, i = 1, 2, …, p

from which it can also be deduced that E^{(i)} decreases as the predictor order increases. The parameters k_i are also called reflection coefficients, i.e., PARCOR coefficients. This completes the PLP feature extraction in this embodiment.
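The recursion transcribes directly into code; the following sketch assumes the autocorrelation coefficients R_n(j) are given (e.g. from the inverse DFT of Φ(Ω)):

```python
import numpy as np

def levinson_durbin(R, p):
    """Solve the all-pole model by the Levinson-Durbin recursion.
    R[0..p] are autocorrelation coefficients; returns the p-th order
    prediction coefficients a, residual energies E, and reflection
    (PARCOR) coefficients k."""
    a = np.zeros(p + 1)
    E = np.zeros(p + 1)
    k = np.zeros(p + 1)
    E[0] = R[0]
    for i in range(1, p + 1):
        # k_i = [R(i) - sum_{j=1}^{i-1} a_j R(i-j)] / E^(i-1)
        k[i] = (R[i] - np.dot(a[1:i], R[i - 1:0:-1])) / E[i - 1]
        prev = a.copy()
        a[i] = k[i]
        for j in range(1, i):
            a[j] = prev[j] - k[i] * prev[i - j]
        E[i] = (1 - k[i] ** 2) * E[i - 1]   # residual energy never increases
    return a[1:], E, k[1:]
```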

In this embodiment, as shown in FIG. 2 and taking Uyghur recognition as an example, step 5 specifically includes the following steps:

Step 51: clear a counter n, where n is a natural number;

Step 52: take the speech to be tested and divide it into frame segments by the method of step 3, giving N segments in total;

Step 53: judge, by the method of step 4, whether the current segment is Uyghur; if it is, add 1 to the counter and repeat step 53 for the next segment; otherwise go to step 54;

Step 54: judge whether the counter value is greater than N/2; if so, the speech segment is Uyghur speech; otherwise it is not. Output the result and stop.
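Steps 51–54 amount to a majority vote over the per-segment decisions; a sketch, with the segment labels assumed to come from the trained network of step 4:

```python
def is_uyghur_speech(segment_is_uyghur):
    """Steps 51-54: count the segments classified as Uyghur and declare the
    utterance Uyghur speech if the count exceeds half of the N segments."""
    n = 0                                    # step 51: clear the counter
    for flag in segment_is_uyghur:           # steps 52-53: judge each segment
        if flag:
            n += 1
    return n > len(segment_is_uyghur) / 2    # step 54: majority decision
```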

Experiments show that, as illustrated in FIG. 3, the method performs better than the traditional method based on the Gaussian mixture model-universal background model (GMM-UBM).

In conclusion, compared with traditional models, deep neural networks have strong capabilities for deep information extraction and nonlinear modeling, which is of great help when extracting features from and classifying large-scale voice data. They have already succeeded in speech-related fields including speech recognition, speech synthesis, and speaker recognition, so a new language identification model built on them achieves a better effect.

In a third aspect, an embodiment of the present invention provides a language identification device, including:

a voice data acquisition module, configured to acquire training voice data and voice data to be detected;

a feature extraction module, configured to extract feature information of the voice data to be detected;

and a recognition module, configured to recognize the feature information through the recognition model and, when a given number of consecutive pieces of feature information are all recognized as the same language, to determine the feature information of the consecutive segment to be speech of the recognized language.

In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a program is stored; when executed by a processor, the program implements the steps of the method of the first or second embodiment.

It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.

Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.

While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
