Training method of voice classification model, voice classification method and related device

Document No.: 139043 | Publication date: 2021-10-22

Reading note: This technology, "Training method of voice classification model, voice classification method and related device" (语音分类模型的训练方法、语音分类方法及相关装置), was created by 张军伟 and 李�诚 on 2021-07-06. Its main content is as follows: The application discloses a training method of a speech classification model, a speech classification method, and a related apparatus, device, and storage medium. The training method includes: acquiring voice data of at least one category, where voice data of the same category form a voice data set; extracting the voice features of each piece of voice data in the voice data set; and training the sub-classification models in the voice classification model using the voice features in the voice data set. The voice classification model includes at least one sub-classification model, and the sub-classification models correspond one to one with the voice data sets. The voice data are classified by category to form corresponding voice data sets, and the corresponding sub-classification models are trained using the voice features, yielding a voice classification model that recognizes the required categories of voice data. Training with only the voice data of a new category suffices for the voice classification model to classify that new category.

1. A training method of a speech classification model is characterized by comprising the following steps:

acquiring at least one type of voice data, wherein the voice data of the same type form a voice data set;

extracting voice features of each voice data in the voice data set;

training sub-classification models in the voice classification model by utilizing voice features in the voice data set; the speech classification model comprises at least one sub-classification model, and the sub-classification models correspond to the speech data sets one to one.

2. The training method of claim 1, further comprising:

determining the class features of the voice data set based on at least a portion of the voice data in the voice data set;

processing the voice features of each voice data in the voice data set by using the class features of the voice data set;

the training of the sub-classification models in the speech classification model by using the speech features in the speech data set includes:

and training the sub-classification models in the voice classification model by utilizing the voice features processed in the voice data set.

3. The training method of claim 2, wherein the class features of the speech data set comprise an audio loudness feature and a pitch change feature of the speech data set.

4. The training method of claim 3, wherein the determining the class features of the speech data set based on at least a portion of the speech data in the speech data set comprises:

calculating a root mean square of the speech energy of at least a portion of the speech data in the speech data set to obtain the audio loudness feature;

calculating a zero-crossing feature of at least a portion of the speech data in the speech data set to obtain the pitch change feature.

5. The training method according to claim 4, wherein the processing the speech feature of each speech data in the speech data set by using the class features of the speech data set comprises:

dividing the speech feature by the audio loudness feature and adding the pitch change feature.

6. The training method according to any one of claims 1-5, wherein the extracting the speech feature of each speech data in the speech data set comprises:

and extracting the voice feature of each voice data in the voice data set, and performing dimension reduction processing on the voice feature.

7. Training method according to any of claims 1-5, characterized in that the training method comprises:

presenting an entry indication, the entry indication corresponding to entry of voice data of one category;

the acquiring of the at least one category of voice data includes: acquiring the voice data according to the entry indication.

8. A method of speech classification, the method comprising:

acquiring voices to be classified;

extracting the voice features to be classified of the voice to be classified;

inputting the speech features to be classified into a speech classification model, and determining the class of the speech to be classified, wherein the speech classification model is obtained by training according to the training method of any one of claims 1-7.

9. The speech classification method according to claim 8, further comprising:

determining a loudness feature of the speech to be classified and a pitch feature of the speech to be classified;

processing the speech features to be classified by using the loudness feature to be classified and the pitch feature to be classified;

the inputting the voice features to be classified into a voice classification model comprises the following steps:

and inputting the processed voice features to be classified into the voice classification model.

10. The method according to claim 8 or 9, wherein the extracting of the speech features to be classified of the speech to be classified comprises:

and extracting the voice features to be classified of the voice to be classified, and performing dimension reduction processing on the voice features to be classified.

11. The method according to claim 8 or 9, wherein the obtaining the speech to be classified comprises:

acquiring a control voice for a fan as the speech to be classified;

the determining the category of the speech to be classified comprises:

and determining the category of the speech to be classified as one of start, stop, accelerate, decelerate, turn left, and turn right.

12. A terminal device, characterized in that the terminal device comprises a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the method of any one of claims 1 to 11.

13. A computer-readable storage medium on which program data are stored, wherein the program data, when executed by a processor, implement the method of any one of claims 1 to 11.

Technical Field

The present application relates to the field of speech recognition, and in particular, to a method for training a speech classification model, a speech classification method, and related apparatus, device, and storage medium.

Background

Speech recognition technology aims to enable intelligent devices to understand human speech. It is a science that intersects many disciplines, such as digital signal processing, artificial intelligence, linguistics, mathematical statistics, acoustics, affective science, and psychology. In recent years, with the rise of artificial intelligence, speech recognition technology has made breakthroughs in both theory and application, has moved from the laboratory to the market, and has gradually entered our daily lives.

Speech recognition is a relatively large application area of artificial intelligence and can be divided into speech meaning recognition and speech category recognition. For speech category recognition, current artificial intelligence products that support speech recognition generally integrate a pre-trained speech classification model; when recognition of a new category needs to be added, existing schemes cannot accommodate it.

Disclosure of Invention

The application provides a training method of a voice classification model, a voice classification method, a related device, equipment and a storage medium.

A first aspect of the present application provides a training method for a speech classification model, where the training method includes: acquiring voice data of at least one category, where voice data of the same category form a voice data set; extracting the voice features of each voice data in the voice data set; and training the sub-classification models in the voice classification model using the voice features in the voice data set. The voice classification model comprises at least one sub-classification model, and the sub-classification models correspond one to one with the voice data sets.

Therefore, the proposed speech classification model comprises sub-classification models, each corresponding to one category's speech data set. When the speech classification model is trained, the speech data of each category are acquired and form one speech data set, and the sub-classification models in the speech classification model are trained with these speech data sets, so that the speech classification model can classify speech. Based on this training method, the speech classification model can add classification of new speech categories at any time.

Wherein the training method further comprises: determining the class features of the voice data set based on at least a portion of the voice data in the voice data set; and processing the voice features of each voice data in the voice data set using the class features of the voice data set. The training of the sub-classification models in the voice classification model using the voice features in the voice data set then comprises: training the sub-classification models with the processed voice features in the voice data set.

Therefore, the class features of the speech data set can be obtained from at least part of the speech data in the set; that is, the class features highlight the category of the speech data set. Processing the speech features with the class features improves the training effect and makes it easier for the sub-classification model to recognize the category.

Wherein the class features of the speech data set include an audio loudness feature and a pitch change feature of the speech data set.

Thus, the class features of a speech data set are mainly reflected in the loudness and the pitch variation of the speech.

Wherein determining the class features of the speech data set based on at least part of the speech data in the speech data set comprises: calculating the root mean square of the speech energy of at least part of the speech data in the set to obtain the audio loudness feature; and calculating the zero-crossing feature of at least part of the speech data in the set to obtain the pitch change feature.

Therefore, to capture the difference in basic audio loudness between classes, the root mean square of the energy of each piece of speech data can be computed, yielding the audio loudness feature among the class features. Likewise, to capture the different pitch variation of each category, the audio zero-crossing feature of each piece of speech data can be computed, yielding the pitch change feature among the class features.

Wherein the processing the voice feature of each voice data in the voice data set by using the class features of the voice data set includes: dividing the speech feature by the audio loudness feature and adding the pitch change feature.

Therefore, the processed voice features can be obtained based on the class features of different voice data, so that the difference of different classes is further strengthened, and the subsequent training of the voice classification model is facilitated.

Wherein, the voice feature of every voice data in the extraction voice data set includes: and extracting the voice characteristics of each voice data in the voice data set, and performing dimensionality reduction on the voice characteristics.

Therefore, performing dimension reduction on the speech features reduces the amount of computation in subsequent training, making it possible to train the classification model on a terminal device.

The training method comprises: presenting an entry indication, where the entry indication corresponds to the entry of voice data of one category. The obtaining of at least one category of speech data then comprises: acquiring the voice data according to the entry indication.

Thus, it is convenient to guide the user to enter voice data.

A second aspect of the present application provides a speech classification method, including: acquiring speech to be classified; extracting the speech features to be classified of the speech to be classified; and inputting the speech features to be classified into a speech classification model and determining the category of the speech to be classified, where the speech classification model is obtained by training with the training method described above.

Therefore, the speech to be classified can be recognized and classified efficiently and accurately, and the categories of speech that can be recognized and classified can be trained in advance.

The speech classification method further comprises: determining a loudness feature of the speech to be classified and a pitch feature of the speech to be classified; and processing the speech features to be classified using the loudness feature and the pitch feature. Inputting the speech features to be classified into the speech classification model then comprises: inputting the processed speech features to be classified into the speech classification model.

Therefore, since different users' speech differs in loudness and pitch, the voices of different users can be distinguished by the loudness feature and the pitch feature of the speech to be classified, so that the speech features to be classified can be extracted and refined. Refining the speech features with the loudness and pitch features as classification dimensions realizes accurate classification of different users.

Wherein the extracting of the speech features to be classified of the speech to be classified comprises: extracting the speech features to be classified of the speech to be classified, and performing dimension reduction on the speech features to be classified.

Therefore, the dimension reduction processing of the speech features to be classified can be realized, and the operation amount is reduced.

Wherein the acquiring of the speech to be classified comprises: acquiring a control voice for a fan as the speech to be classified. Determining the category of the speech to be classified then comprises: determining the category of the speech to be classified as one of start, stop, accelerate, decelerate, turn left, and turn right.

Thus, voice control of the fan can be achieved.

A third aspect of the present application provides a terminal device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the training method in the first aspect and the speech classification method in the second aspect.

A fourth aspect of the present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the training method in the first aspect described above and the speech classification method in the second aspect described above.

According to the above scheme, the voice data are classified by category to form corresponding voice data sets, the voice features of the different categories of voice data are extracted and refined, and the corresponding sub-classification models are trained with those voice features, yielding a voice classification model that identifies the required categories of voice data. The voice classification model comprises at least one sub-classification model, and the sub-classification models correspond one to one with the voice data sets. Each category's voice data set therefore independently trains its own sub-classification model: when the number of categories needs to grow, the whole voice classification model does not need to be retrained; only one new sub-classification model needs to be trained to add a recognizable voice category. This reduces the amount of training, improves training efficiency, and realizes a general speech recognition scheme. Furthermore, the training method has a low computation cost, can complete the voice classification training task on robots with limited computing power, and is suitable for use as an artificial intelligence teaching aid in robot applications.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort, wherein:

FIG. 1 is a flow chart illustrating an embodiment of a method for training a speech classification model according to the present application;

FIG. 2 is a schematic flow chart illustrating optimization of speech features in an embodiment of the speech classification model training method of the present application;

FIG. 3 is a flow chart illustrating an embodiment of a speech classification method of the present application;

FIG. 4 is a schematic flowchart illustrating optimization of speech features to be classified in an embodiment of the speech classification method according to the present application;

FIG. 5 is a block diagram of an embodiment of a training apparatus for a speech classification model according to the present application;

FIG. 6 is a block diagram of an embodiment of the speech classification apparatus of the present application;

FIG. 7 is a block diagram of an embodiment of a terminal device according to the present application;

FIG. 8 is a block diagram of an embodiment of a computer-readable storage medium of the present application.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.

Referring to fig. 1 and fig. 2, fig. 1 is a schematic flowchart of an embodiment of a method for training a speech classification model according to the present application; fig. 2 is a flowchart of optimizing speech features in an embodiment of the method. The training method in the embodiments of the present application is executed by an electronic device such as a smart device or a terminal device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like; the smart device may include a smart education robot, a smart mobile robot, or the like. The method may be implemented by a processor of the electronic device calling computer-readable instructions stored in a memory.

The application provides a method for training a speech classification model, which specifically comprises the following steps:

step S11: at least one category of voice data is obtained, and the voice data of the same category forms a voice data set.

The categories may be based on general domain classifications, including gender classification, numeric classification, and directional classification, and/or on user classification. Specifically, the gender classification includes gender classification categories of male and female; the numerical classifications include a numerical classification category from 0 to 9; the direction classification comprises classification categories in the front, back, left and right directions; the user classification includes user classification categories based on the individuals of different users.

In some embodiments, when obtaining voice data of each category, the user may be guided to record voice data for multiple times according to the instruction, and the voice data may be clustered to form a voice data set.

Specifically, the acquiring of voice data of at least one category may include: presenting an entry indication, where each entry indication corresponds to the entry of voice data of one category. The device may present entry indications that guide the user in recording voice data, by means of on-screen display and/or voice broadcast.

It should be noted that the specific content of the entry indication may be adjusted according to the application scenario and the recording requirements.

For example, the application scenario is voice control of the fan, the recording requirement is to control the fan to start, stop, accelerate, decelerate, turn left, turn right, and the like, and the recording instruction may be to guide the user to repeat voices such as "start the fan", "stop the fan", "increase the fan speed", "decrease the fan speed", "turn left the fan", "turn right the fan", and the like in a screen display and/or voice broadcast manner, so as to obtain voice data of a corresponding category.

For example, the application scenario is voice control of a walking trolley, the recording requirement is to control the walking trolley to move forward, backward, rightward, leftward and the like, and the recording indication may be in a form of screen display and/or voice broadcast to guide the user to repeat voice of direction categories such as "forward walking", "backward walking", "leftward walking", "rightward walking", and the like, voice of number categories such as "1", "2", and voice of length unit categories such as "meter", and other required voices. Thereby acquiring voice data of a corresponding category.

Further, the acquiring of voice data of at least one category includes acquiring voice data according to the entry indication. Optionally, the duration of a single entry of voice data is 3-10 s, for example 3 s, 5 s, 8 s, or 10 s. Within this range, the duration of the voice data facilitates extraction of the speech features while keeping the amount of computation small, which speeds up subsequent data processing and further improves training efficiency.

How speech data of at least one category is obtained is illustrated below for different categories:

when voice data of a category based on user classification is acquired, the user can be guided to record voice data of 'hello' and the like for a plurality of times according to the input instruction, and a voice data set of the user classification category with the user ID is formed.

When acquiring voice data of a category classified based on a direction, a user may be guided to record voice data of "walk forward", "walk right", and the like for a plurality of times in accordance with an entry instruction, and constitute a voice data set of a direction classification category of a corresponding direction. Voice data in four different directions of front, back, left and right are recorded, and a voice data set of four direction classification categories of front, back, left and right can be formed.

When acquiring voice data of a category based on numeric classification, the user may be guided to record numeric voice data such as "0" and "1" several times according to the entry instruction, forming a voice data set of the numeric classification category for the corresponding number. Usually, voice data for the ten different digits "0-9" are recorded, forming voice data sets of ten numeric classification categories.

When the voice data of the gender classification-based category is obtained, the user can be guided to record the phrase-type voice data for multiple times according to the instruction, and the gender of the user can be classified by combining some auxiliary means such as face recognition and the like, and a voice data set of the gender classification category corresponding to the gender is formed.

To reduce the amount of training computation, each entry indication corresponds to the entry of voice data of one category. Of course, where the technology allows, the user may be guided to record voice data such as "walk forward 1 meter" several times according to the entry instruction; according to its different sound segments, this voice data can form both the voice data set of the "forward" direction classification category and the voice data set of the "1" numeric classification category, reducing the amount of voice data the user must record and improving the user experience.

It should be noted that, if the speech classification model only trains recognition in the general field, speech data of the user classification category does not need to be acquired; speech data of all required general-field categories can be obtained as needed, and the general-field speech classification model trained. If the speech classification model needs to train speech recognition per user, voice data of the user categories are first acquired to form voice data sets of user classification categories tagged with user IDs; voice data of the other general-field categories required by each user are then acquired to form the other voice data sets.

In some embodiments, obtaining the voice data is generally achieved by recording the user's voice. Robot products generally carry a sound card, which is configured to provide a normal recording function. In some cases the voice recorded by the robot is very quiet even though the user is very close to it; the microphone can then be configured with voice enhancement, slightly boosting the signal so that the user can record voice data conveniently. The specific enhancement parameters are adjusted according to the situation when the robot records the user's voice and are not limited here.

In some embodiments, voice data may also be obtained by communicating with other devices, such as by downloading from a cloud server or obtaining from other mobile devices.

Step S12: speech features of each of the speech data in the speech data set are extracted.

Most prior-art schemes for speech recognition use a neural network model to perform speech training and recognition through word embedding classification. The training computation is heavy, such expensive computation cannot be completed on robot hardware with limited computing power, the training process takes a long time, and training efficiency is low.

The embodiment of the present application achieves better speech recognition by optimizing the speech classification model based on the speech features of different categories of speech data. Extracting the speech features of the speech data may be implemented with Mel-frequency cepstral coefficient (MFCC) speech features. MFCC speech features are briefly described as follows:

Mel-frequency cepstral coefficients (MFCCs) are the coefficients that make up the mel-frequency cepstrum. The cepstrum differs from the mel-frequency cepstrum in that the band division of the mel-frequency cepstrum is equally spaced on the mel scale, which approximates the human auditory system more closely than the linearly spaced bands used in the normal log cepstrum.

Since the energy spectrum still contains much unwanted information, and in particular the human ear cannot distinguish high-frequency variations, passing the spectrum through a mel filter bank solves this problem. The mel filter bank is a group of a preset number of nonlinearly distributed triangular band-pass filters, and the log energy output by each filter can be obtained. The preset number may be 20 or so. Note that these triangular band-pass filters are evenly distributed on the "mel scale" frequency axis. The mel frequency reflects the typical human ear's sensitivity to frequency, from which it can be seen that the ear's sensitivity to a frequency f varies logarithmically.

Specifically, a general flow of extracting MFCCs speech features for each speech data in a speech data set includes the following methods:

Pre-emphasis

Usually, high-frequency energy is smaller than low-frequency energy. The pre-emphasis filter amplifies high frequencies, counteracts the vocal-cord and lip effects of the production process, compensates the high-frequency components of the speech signal suppressed by the vocal system, and highlights the high-frequency formants. This can be implemented with a high-pass filter.

Framing

Speech signals are only stationary over short intervals, so feature extraction is typically performed within short time-frame windows. To avoid overly large differences between consecutive frames, two adjacent extracted frames have an overlapping portion.

Windowing

After framing, each frame is typically multiplied by a window function, such as a Hamming window, to smooth the signal. The purpose is to increase the continuity at both ends of the frame and to reduce spectral leakage in subsequent operations.

Frequency domain conversion

The frequency-domain transform is a Fourier transform, here a Short-Time Fourier Transform (STFT), which converts the signal from the time domain to the frequency domain.

Power spectrum

Taking the squared magnitude of the spectrum of the speech signal gives the spectral line energy of the speech signal.

Extracting mel scale

A mel filter bank is computed, and the power spectrum is passed through this set of triangular filters on the mel scale (typically 40 filters, nfilt = 40) to extract the frequency bands.

The purpose of the mel scale is to simulate the human ear's nonlinear perception of sound, which is more discriminative at lower frequencies and less discriminative at higher frequencies.

The calculation method is as follows: the amplitude spectrum obtained by the Fast Fourier Transform (FFT) is multiplied by each filter and accumulated; the resulting value is the energy of the frame's data in the frequency band of that filter.

Obtaining MFCCs

The filter bank coefficients computed in the above steps are highly correlated, and a Discrete Cosine Transform (DCT) may be applied to decorrelate them and produce a compressed representation of the filter bank. Substituting the log energies obtained in the previous step into the discrete cosine transform formula gives the MFCCs:

$$C(n) = \sum_{m=0}^{M-1} \log s(m) \cdot \cos\left(\frac{\pi n\,(m + 0.5)}{M}\right), \quad n = 1, 2, \ldots, L$$

where s(m) is the filter energy obtained in the mel-scale extraction step; L is the order of the MFCC coefficients, usually 12-16; M is the number of triangular filters; and N is the frame size from the framing step: a predetermined number of sample points are grouped into an observation unit called a frame, where the predetermined number is usually 256 or 512, i.e., N is usually 256 or 512.

By the above method, the MFCCs speech features of each speech data in the speech data set can be extracted.
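For illustration, the following is a minimal python sketch of the extraction flow described above, assuming the librosa library as one possible realization; the file name and parameter values (for example n_mfcc = 16 and n_fft = 512) are assumptions of the example, not requirements of the present application:

```python
# Minimal sketch of the MFCC extraction described above, using librosa.
# The file path and parameter values are illustrative assumptions.
import librosa
import numpy as np

def extract_mfcc(path: str, n_mfcc: int = 16) -> np.ndarray:
    # Load the audio; sr=None keeps the file's native sampling rate.
    signal, sr = librosa.load(path, sr=None)
    # Pre-emphasis, as in the first step of the flow above.
    signal = librosa.effects.preemphasis(signal)
    # librosa internally performs framing, windowing, STFT, power spectrum,
    # mel filtering, log, and DCT, returning a matrix [n_mfcc, n_frames].
    return librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc, n_fft=512)

features = extract_mfcc("start_fan_01.wav")  # hypothetical recording
print(features.shape)  # (16, n_frames); n_frames depends on the audio duration
```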

In some embodiments, extracting the speech features of each piece of speech data in the voice data set comprises: extracting the speech features of each piece of speech data in the set, and performing dimension reduction on the speech features. Since the raw extracted MFCC features may have different dimensions because the audio durations differ, while the classification model requires the speech features of all speech data in the set to have the same feature dimension, the speech features must undergo dimension reduction to be suitable for training the classification model.

Specifically, before the dimension reduction processing is performed on the voice features, all the voice data in the voice data set, which are shorter than a preset time length, are removed. For example, the preset time period is 0.5s or the like. Therefore, invalid voice data which are too short are removed, the calculated amount is reduced, and the training precision and the training efficiency are improved.

Further, performing dimension reduction on the speech features comprises: the dimensionality of the extracted MFCC feature is determined by two parts, the feature vector dimension and the number of frames, denoted [n_mfcc, n_frames]. The feature vector dimension n_mfcc can be set to 16 based on empirical parameters; the number of frames n_frames depends on the audio duration, so the minimum frame count across the data set can be taken. The two-dimensional feature is then flattened into a one-dimensional feature, achieving dimension reduction of the speech features and lowering the amount of computation.
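For illustration, a minimal python sketch of this truncate-and-flatten dimension reduction follows, assuming the MFCC matrices produced by the previous sketch; the helper name is illustrative:

```python
# Minimal sketch: truncate every MFCC matrix to the shortest frame count in
# the data set, then flatten each to a fixed-length one-dimensional vector.
import numpy as np

def reduce_dimensions(mfcc_list: list) -> np.ndarray:
    # n_frames varies with audio duration; take the minimum across the set.
    min_frames = min(m.shape[1] for m in mfcc_list)
    # Truncate to [n_mfcc, min_frames] and flatten to 1-D so that every
    # sample has the same feature dimension, as the classifier requires.
    return np.stack([m[:, :min_frames].flatten() for m in mfcc_list])

# X = reduce_dimensions([extract_mfcc(p) for p in wav_paths])
# X has shape [num_samples, n_mfcc * min_frames]
```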

For speech data in voice data sets of general-field classification categories, the speech features used for training the classification model can be extracted by the method described above. In addition, because each user's voice differs in factors such as basic loudness, the speech data in voice data sets of different user categories have different class features. Therefore, when processing speech data in a voice data set of a user classification category, in addition to extracting the speech features as described above, the speech features in the data set need to be further refined, specifically as follows:

step S121: based on at least a portion of the speech data in the speech data set, a class characteristic of the speech data set is determined.

The class characteristics of the voice data set can be obtained by utilizing at least part of voice data in the voice data set, namely, the class of the voice data set is highlighted through the class characteristics, and the voice characteristics are processed by utilizing the class characteristics, so that the training effect is better, and the class can be identified by a sub-classification model more conveniently.

In some embodiments, the class features of a voice data set composed of speech data of the same user category include: the audio loudness feature and the pitch change feature of the voice data set. The voices of different users can be distinguished by the audio loudness feature and the pitch change feature, realizing feature extraction and refinement for the voice data set.

Specifically, determining the class characteristics of the voice data sets based on at least part of the voice data in each voice data set comprises:

The root mean square of the speech energy of at least part of the speech data in the voice data set is calculated to obtain the audio loudness feature: because the basic audio loudness differs between categories, the root mean square of the energy of each piece of speech data yields the audio loudness feature among the class features.

The zero-crossing feature of at least part of the speech data in the voice data set is calculated to obtain the pitch change feature: because the pitch variation differs between categories, the audio zero-crossing feature of each piece of speech data yields the pitch change feature among the class features.
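For illustration, a minimal python sketch of these two calculations follows; using librosa's zero-crossing-rate helper is an assumption about one possible realization:

```python
# Minimal sketch: class features of one voice data set from its raw signals.
import librosa
import numpy as np

def class_features(signals: list) -> tuple:
    # Audio loudness feature: root mean square of the speech energy,
    # averaged over (at least part of) the speech data in the set.
    loudness = float(np.mean([np.sqrt(np.mean(s ** 2)) for s in signals]))
    # Pitch change feature: mean zero-crossing rate over the set.
    pitch_change = float(np.mean(
        [librosa.feature.zero_crossing_rate(s).mean() for s in signals]))
    return loudness, pitch_change
```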

In the above embodiment, the speech features are refined using the audio loudness feature and the pitch change feature among the class features as classification dimensions, enabling accurate classification of different users' voice data sets. In other embodiments, classification of different user categories may also be based on other class features as classification dimensions.

Step S122: the speech features of each speech data in the speech data set are processed using the class features of the speech data set.

The speech features of each piece of speech data in the voice data set are processed using the determined class features of the set, i.e., the audio loudness feature and the pitch change feature obtained in step S121.

Specifically, processing the speech features of each piece of speech data in the voice data set with the class features of the set comprises: dividing the speech features of each user category by the corresponding audio loudness feature and adding the corresponding pitch change feature, obtaining the processed speech features of that user category's voice data set.
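For illustration, a one-function python sketch of this processing step follows, assuming the flattened feature matrix X and the scalar class features from the previous sketches:

```python
# Minimal sketch: refine the features of one user category's voice data set.
# X is [num_samples, feature_dim]; loudness and pitch_change are the scalars
# returned by class_features() above.
def process_features(X, loudness, pitch_change):
    # Divide by the audio loudness feature, then add the pitch change feature.
    return X / loudness + pitch_change
```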

The voice feature extraction and optimization scheme adopted by the embodiment of the application can obtain more generalized voice features and is suitable for more voice classification models.

Step S13: and training sub-classification models in the voice classification model by using voice features in the voice data set, wherein the voice classification model comprises at least one sub-classification model, and the sub-classification models are in one-to-one correspondence with the voice data set.

The embodiment of the present application achieves better speech recognition by optimizing the speech classification model based on the speech features of different categories of speech data. Extraction of the speech features of the speech data can be realized through the above steps. The speech classification model of the embodiment comprises at least one sub-classification model, and the sub-classification models correspond one to one with the voice data sets. Each category's voice data set therefore independently trains its own sub-classification model: when the number of categories needs to grow, the whole speech classification model does not need to be retrained; only one new sub-classification model needs to be trained to add a recognizable speech category. This reduces the amount of training, improves training efficiency, and realizes a general speech recognition scheme.

In the embodiment of the present application, a Gaussian Mixture Model (GMM) may be used as the speech classification model. A Gaussian mixture model can be regarded as a model composed of K Gaussian sub-models, where the K individual models correspond to the hidden variables of the mixture. In training the speech classification model, the number of categories of speech data to be classified is K, and each sub-classification model is a Gaussian sub-model. For example, for the four direction classification categories "front, back, left, right", the GMM scheme trains 4 Gaussian sub-models; for the ten numeric classification categories "0-9", it trains 10 Gaussian sub-models.

Different models may have different parameters. The model parameters can be determined using the Expectation-Maximization (EM) algorithm, an iterative algorithm for maximum likelihood estimation of the parameters of probabilistic models containing hidden variables.

Each iteration comprises two steps:

E-step: compute the expectations

$$E(\gamma_{jk} \mid X, \theta) \quad \text{for all } j = 1, 2, \ldots, N$$

M-step: maximize, computing the model parameters for the new iteration.

Here θ is the model parameter of each sub-classification model; X is the speech feature; γ_jk is the expected output; N is the total number of speech data in each voice data set; and j is the index of each piece of speech data.
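For reference, a standard form of the E-step responsibility and the M-step mean update of a Gaussian mixture is given below; the original does not spell these formulas out, so the following is a conventional reconstruction rather than the exact notation of the present application:

$$\gamma_{jk} = \frac{\pi_k\,\mathcal{N}(x_j \mid \mu_k, \sigma_k^2)}{\sum_{k'} \pi_{k'}\,\mathcal{N}(x_j \mid \mu_{k'}, \sigma_{k'}^2)}, \qquad \mu_k^{\text{new}} = \frac{\sum_{j=1}^{N} \gamma_{jk}\,x_j}{\sum_{j=1}^{N} \gamma_{jk}}$$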

And training the mean and variance parameters of each sub-classification model through an EM algorithm to obtain a sub-classification model for identifying the corresponding class of voice data.
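For illustration, a minimal python sketch of per-category training follows, assuming scikit-learn's GaussianMixture as the EM implementation; the present application does not prescribe a particular library:

```python
# Minimal sketch: train one Gaussian sub-model per category with EM.
from sklearn.mixture import GaussianMixture

def train_sub_models(datasets: dict) -> dict:
    """datasets maps a category name to its [num_samples, feature_dim] matrix."""
    sub_models = {}
    for category, X in datasets.items():
        # fit() runs EM, estimating the mean and variance parameters.
        sub_models[category] = GaussianMixture(
            n_components=1, covariance_type="diag").fit(X)
    return sub_models

# Adding a new category later only requires fitting one new sub-model;
# the existing sub-models are untouched:
# sub_models["turn_left"] = GaussianMixture(n_components=1).fit(X_new)
```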

It should be noted that, if the speech classification model is trained only for recognition in the general field, speech data of the user classification category does not need to be acquired; the speech features in each voice data set are used directly to train the corresponding sub-classification models in the speech classification model.

If the speech classification model is trained for recognition of both user classification categories and general-field classification categories, the corresponding sub-classification model is first trained with the processed speech features of one user category's voice data set; then each corresponding sub-classification model is trained with the speech features in each of that user's general-field voice data sets. The speech classification models of the other user categories are then trained in turn by the same method.

With this training method, each user has corresponding sub-classification models, and the trained speech classification model can recognize different users' voices in a targeted manner, improving the accuracy of the speech classification model.

In the embodiment of the present application, the voice data are classified by category to form corresponding voice data sets, the voice features of the different categories of voice data are extracted and refined, and the corresponding sub-classification models are trained with those voice features, yielding a voice classification model that identifies the required categories of voice data. The voice classification model of the embodiment comprises sub-classification models, one per category's voice data set; when training the model, the voice data of each category are acquired, form one voice data set, and are used to train the corresponding sub-classification model, so that the model can classify speech. Based on this training method, the speech classification model of the embodiment can add new speech categories at any time. This reduces the amount of training, improves training efficiency, and realizes a general speech recognition scheme. The training method has a low computation cost, can complete the voice classification training task on robots with limited computing power, and is suitable for use as an artificial intelligence teaching aid in robot applications. The entire speech recognition flow of the training method of the embodiment can be implemented through python programming.

Referring to fig. 3 and fig. 4, fig. 3 is a schematic flowchart of an embodiment of a speech classification method according to the present application; fig. 4 is a schematic flowchart of optimizing the speech features to be classified in an embodiment of the method. The speech classification method in the embodiments of the present application is executed by an electronic device such as a smart device or a terminal device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like; the smart device may include a smart education robot, a smart mobile robot, or the like. The method may be implemented by a processor of the electronic device calling computer-readable instructions stored in a memory.

Another embodiment of the present application provides a speech classification method, including:

step S21: and acquiring the voice to be classified.

In some embodiments, the acquired speech to be classified may include a wake-up voice and an instruction voice. The wake-up voice is used to wake the device, and the speech classification model can identify the corresponding user from it; the instruction voice is used to control the device.

Taking an example of a scheme of performing fan control by using voice, acquiring the voice to be classified includes:

A control voice for the fan is acquired as the speech to be classified. It should be noted that the categories of speech that the fan can recognize may be preset or obtained by the user directly training the fan, and may specifically include start, stop, accelerate, decelerate, turn left, turn right, and the like. These are only some common instruction voices; other instruction voices with similar meanings can replace them. For example, decelerate may also be "turn down" and accelerate may be "turn up"; start may be "open" and stop may be "close". This is not limited here.

Step S22: and extracting the voice features to be classified of the voice to be classified.

Extracting the speech features to be classified of the speech to be classified may be implemented based on MFCC speech features. MFCC speech features are briefly described as follows:

Mel-frequency cepstral coefficients (MFCCs) are the coefficients that make up the mel-frequency cepstrum. The cepstrum differs from the mel-frequency cepstrum in that the band division of the mel-frequency cepstrum is equally spaced on the mel scale, which approximates the human auditory system more closely than the linearly spaced bands used in the normal log cepstrum.

Since the energy spectrum still contains much unwanted information, and in particular the human ear cannot distinguish high-frequency variations, passing the spectrum through a mel filter bank solves this problem. The mel filter bank is a group of a preset number of nonlinearly distributed triangular band-pass filters, and the log energy output by each filter can be obtained. The preset number may be 20 or so. Note that these triangular band-pass filters are evenly distributed on the "mel scale" frequency axis. The mel frequency reflects the typical human ear's sensitivity to frequency, from which it can be seen that the ear's sensitivity to a frequency f varies logarithmically.

Specifically, the general flow of extracting the MFCC speech features of the speech to be classified includes pre-emphasis, framing, windowing, frequency-domain conversion, power spectrum, mel-scale extraction, and obtaining the MFCCs; through this flow, the MFCC speech features of the speech to be classified can be extracted. The specific steps are similar to the corresponding steps of the above embodiments and are not repeated here.

In some embodiments, extracting the feature of the speech to be classified comprises: and extracting the voice features to be classified of the voice to be classified, and performing dimension reduction processing on the voice features to be classified, so that the operation amount is reduced, and the recognition efficiency is improved.

Specifically, the method includes removing speech to be classified that is shorter than a preset duration before performing dimension reduction. For example, the preset duration is 0.5 s or the like. Removing overly short, invalid speech to be classified avoids recognition errors.

Further, the dimension reduction of the speech features to be classified comprises: the dimensionality of the extracted MFCC feature is determined by two parts, the feature vector dimension and the number of frames, denoted [n_mfcc, n_frames]. The feature vector dimension n_mfcc can be set to 16 based on empirical parameters; the number of frames n_frames depends on the audio duration, so the minimum frame count can be taken. The two-dimensional feature is then flattened into a one-dimensional feature, achieving dimension reduction of the speech features to be classified and lowering the amount of computation.

For speech to be classified in general-field classification categories, the speech features to be classified can be extracted by the method described above. In addition, for speech to be classified in user classification categories, factors such as each user's basic voice loudness differ, so the class features of different users' speech to be classified differ. Therefore, when processing the speech to be classified, in addition to extracting the speech features as described above, the speech features to be classified need to be further refined, specifically as follows:

step S221: and determining loudness characteristics of the speech to be classified and tone characteristics of the speech to be classified.

Different users' speech to be classified differs in its loudness feature and pitch feature. The voices of different users can therefore be distinguished by the loudness feature and the pitch feature of the speech to be classified, so that the speech features to be classified can be extracted and refined.

Specifically, determining the loudness characteristic of the speech to be classified and the pitch characteristic of the speech to be classified includes:

The root mean square of the speech energy of the speech to be classified is calculated to obtain its loudness feature: since the basic audio loudness of each speech to be classified differs, the root mean square of its energy yields the loudness feature.

The zero-crossing feature of the speech to be classified is calculated to obtain its pitch feature: since the pitch variation of each speech to be classified differs, its audio zero-crossing feature yields the pitch feature.

In the above embodiment, the loudness feature and the pitch feature of the speech to be classified are used as classification dimensions to refine the speech features, realizing accurate classification of different users. In other embodiments, classification of different users may also be based on other features as classification dimensions.

Step S222: and processing the voice features to be classified by utilizing the loudness features to be classified and the tone features to be classified.

And processing the voice features to be classified by using the determined loudness features to be classified and tone features to be classified of the voice to be classified, namely the loudness features to be classified and the tone features to be classified obtained in the step S221.

Specifically, processing the speech features to be classified with the loudness feature and the pitch feature comprises: dividing each speech feature to be classified by the corresponding loudness feature and adding the corresponding pitch feature, obtaining the processed speech features to be classified for each user.

This feature extraction and refinement scheme yields more generalized speech features for classification and is suitable for more speech classification models.

Step S23: and inputting the characteristics of the voice to be classified into the voice classification model, and determining the category of the voice to be classified.

The speech classification model of this embodiment is obtained by training using the training method in any of the above embodiments.

The speech classification model of the embodiment of the present application comprises at least one sub-classification model, and each sub-classification model recognizes the speech features to be classified of one category. The embodiments may employ a Gaussian Mixture Model (GMM) as the speech classification model. A Gaussian mixture model can be regarded as a model composed of K Gaussian sub-models, where the K individual models correspond to the hidden variables of the mixture. In the GMM speech classification model, the number of categories of speech data to be classified is K, and each sub-classification model is a Gaussian sub-model. For example, for the four direction classification categories "front, back, left, right", the GMM scheme trains 4 Gaussian sub-models; for the ten numeric classification categories "0-9", it trains 10 Gaussian sub-models.

It should be noted that, if the speech classification model is only used for recognition in the general field, the speech to be classified is directly input into the speech classification model to obtain the classification result.

Optionally, all the sub-classification models in the speech classification model are called, the probability that the speech to be classified belongs to each sub-classification model is calculated and stored, and the category corresponding to the sub-classification model with the highest probability is selected as the classification result.
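For illustration, a minimal python sketch of this scoring step follows, reusing the sub_models dictionary from the training sketch; treating score_samples as the per-model likelihood is an assumption of the example:

```python
# Minimal sketch: score the speech to be classified against every sub-model
# and select the category whose sub-model gives the highest log-likelihood.
import numpy as np

def classify(sub_models: dict, x: np.ndarray) -> str:
    # x is one processed feature vector of shape [1, feature_dim].
    scores = {category: float(model.score_samples(x)[0])
              for category, model in sub_models.items()}
    # The category of the best-scoring sub-model is the classification result.
    return max(scores, key=scores.get)
```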

If the speech classification model performs recognition based on both user classification categories and general-field classification categories, the user category of the speech to be classified must be identified first. Inputting the speech features to be classified into the speech classification model then comprises: inputting the processed speech features to be classified into the model to obtain a user classification result, after which the other sub-classification models associated with that user identify the general-field classification result. Optionally, all sub-classification models that identify user categories are called, the probability that the speech belongs to each is calculated and stored, and the user category of the sub-model with the highest probability is selected as the user classification result. The other sub-classification models associated with that user are then called, the probability for each is calculated and stored, and the category of the sub-model with the highest probability is selected as the classification result.

By identifying the user category first and using it as a login-like entry point, the corresponding sub-classification models can then be used to further recognize that user's voice to be classified. The user's voice is thus recognized in a targeted manner, which improves recognition efficiency and accuracy. For users with dialects or accents in particular, this can effectively improve recognition accuracy and the user experience. The voice classification method of this embodiment can recognize and classify the voice to be classified efficiently and accurately, and the recognizable voice categories can be trained in advance, so a general-purpose speech recognition and classification scheme can be realized.

Taking the voice-controlled fan as an example again, the fan either carries a pre-trained voice classification model or is trained directly by the user to obtain one. Determining the category of the voice to be classified with the voice classification model then includes: determining the category of the voice to be classified as one of start, stop, accelerate, decelerate, turn left, and turn right.
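An illustrative dispatch of the predicted category to a device action follows, reusing the classify helper above; the Fan interface is hypothetical and only shows how the classification result might drive the fan.

```python
# Hypothetical fan actions keyed by the six trained categories.
FAN_ACTIONS = {
    "start":      lambda fan: fan.start(),
    "stop":       lambda fan: fan.stop(),
    "accelerate": lambda fan: fan.speed_up(),
    "decelerate": lambda fan: fan.slow_down(),
    "turn left":  lambda fan: fan.turn_left(),
    "turn right": lambda fan: fan.turn_right(),
}

def control_fan(fan, sub_models, feature):
    category = classify(sub_models, feature)
    FAN_ACTIONS[category](fan)
```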

It should be noted that these instruction voices are only some common examples; other instructions with similar meanings can also be used to train the fan's voice classification model and for recognition. For example, "decelerate" may instead be "turn down" and "accelerate" may be "turn up"; "start" may be "turn on" and "stop" may be "turn off". This is not limited here.

Besides the fan, the voice classification method can also be applied to other kinds of educational robots, such as lighting devices and walking trolleys.

Referring to fig. 5, fig. 5 is a block diagram illustrating an embodiment of a training apparatus for a speech classification model according to the present application.

Another embodiment of the present application provides a training apparatus 300 for a speech classification model, including: a voice acquisition module 31, a feature extraction module 32, and an operation module 33. The voice acquisition module 31 is configured to acquire at least one category of voice data, where voice data of the same category form a voice data set. The feature extraction module 32 is configured to extract the voice features of each voice data in the voice data set. The operation module 33 is configured to train the sub-classification models in the speech classification model using the voice features in the voice data set; the speech classification model comprises at least one sub-classification model, and the sub-classification models correspond to the voice data sets one-to-one.

The training apparatus 300 of this embodiment classifies voice data by category to form corresponding voice data sets, extracts and optimizes the voice features of each category, and trains the corresponding sub-classification model with those features, thereby obtaining a speech classification model that recognizes the required categories of voice data. Because each category's voice data set corresponds to an independently trained sub-classification model, adding a new category does not require retraining the whole speech classification model; only one new sub-classification model needs to be trained to extend the recognizable voice categories. This reduces the amount of training, improves training efficiency, and realizes a general-purpose speech recognition scheme. The training method has a low computational cost, so the voice classification training task can be completed on robots with limited computing power, which makes it suitable as an artificial intelligence teaching aid in robotics applications. The training apparatus 300 of this embodiment can implement the whole speech recognition process in Python.
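As a rough Python sketch of how these three modules might fit together, and of how adding a category trains only one new sub-model; the class layout and the feature extractor are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class SpeechClassifierTrainer:
    def __init__(self, feature_extractor):
        self.extract = feature_extractor   # feature extraction module 32
        self.sub_models = {}               # one sub-model per category

    def add_category(self, category, waveforms):
        # Voice acquisition module 31: same-category recordings form one set.
        features = np.stack([self.extract(w) for w in waveforms])
        # Operation module 33: train only the new sub-model; existing
        # sub-models (and already-learned categories) are left untouched.
        gmm = GaussianMixture(n_components=1, covariance_type="diag")
        self.sub_models[category] = gmm.fit(features)
```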

Referring to fig. 6, fig. 6 is a schematic diagram of a frame of an embodiment of a speech classification apparatus according to the present application.

Another embodiment of the present application provides a speech classification apparatus 400, including: a voice acquisition module 41, a feature extraction module 42, and a classification module 43. The voice acquisition module 41 is configured to acquire the voice to be classified. The feature extraction module 42 is configured to extract the voice features of the voice to be classified. The classification module 43 is configured to input the voice features to be classified into a speech classification model and determine the category of the voice to be classified; the speech classification model of this embodiment is obtained by training with the training apparatus of the above embodiment. The speech classification apparatus 400 recognizes the voice to be classified efficiently and accurately, and the recognizable voice categories can be trained in advance, so general-purpose speech recognition and classification can be realized.

Referring to fig. 7, fig. 7 is a schematic diagram of a framework of an embodiment of a terminal device according to the present application.

A further embodiment of the present application provides a terminal device 700, which includes a memory 701 and a processor 702 coupled to each other. The processor 702 is configured to execute program instructions stored in the memory 701 to implement the training method of any of the above embodiments and/or the speech classification method of any of the above embodiments. In one specific implementation scenario, the terminal device 700 may include, but is not limited to: a microcomputer, a server, and mobile devices such as laptops and tablet computers. The terminal device 700 may also be a fan, a lighting device, a walking trolley, and the like.

Specifically, the processor 702 is configured to control itself and the memory 701 to implement the steps of any of the above training method embodiments or any of the above speech classification method embodiments. The processor 702 may also be referred to as a CPU (Central Processing Unit). The processor 702 may be an integrated circuit chip with signal processing capabilities. The processor 702 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 702 may be implemented jointly by a plurality of integrated circuit chips.

Through the scheme, the voice classification can be accurately and efficiently realized.

Referring to fig. 8, fig. 8 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application.

Yet another embodiment of the present application provides a computer-readable storage medium 800 having stored thereon program instructions 801 which, when executed by a processor, implement any of the training methods and any of the speech classification methods described above. Through this scheme, voice classification can be realized accurately and efficiently.

Embodiments of the present application also provide a computer program product, which includes computer readable code or a non-volatile computer readable storage medium carrying computer readable code, when the computer readable code runs in a processor of an electronic device, the processor in the electronic device executes the above method.

In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.

Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium 800. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or make a contribution to the prior art, or all or part of the technical solutions may be implemented in the form of a software product stored in a storage medium 800, and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the methods of the embodiments of the present application. And the aforementioned storage medium 800 includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

The above description is only an example of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.
