Method and device for outputting mixed deformation value, storage medium and electronic device

Document No.: 192768    Publication date: 2021-11-02

Note: This technique, "Method and device for outputting a mixed deformation value, storage medium and electronic device", was designed and created by 司马华鹏, 廖铮 and 唐翠翠 on 2021-08-06. Its main content is as follows: an embodiment of the application provides a method and a device for outputting a mixed deformation value, a storage medium and an electronic device, wherein the method comprises: performing feature extraction on the acquired target audio data to obtain a target audio feature vector; inputting the target audio feature vector and a target identifier into an audio-driven animation model; inputting the target audio feature vector into an audio coding layer comprising a plurality of convolutional layers, determining the input feature vector of the next layer at time (2t-n)/2 according to the input feature vectors of the previous layer between time t-n and time t, determining feature vectors that have a causal relationship with the input feature vectors of the previous layer as effective feature vectors, outputting the target audio coding features layer by layer according to the effective feature vectors of each layer, and inputting the target identifier into a one-hot encoding layer for binary vector coding to obtain a target identifier coding feature; and outputting a mixed deformation value corresponding to the target audio data through the audio-driven animation model according to the target audio coding features and the target identifier coding features.

1. A method for outputting a mixed deformation value, comprising:

performing feature extraction on the acquired target audio data to obtain a target audio feature vector;

inputting the target audio feature vector and a target identifier into an audio-driven animation model, wherein the target identifier is an identifier selected from preset identifiers, the preset identifiers are used for indicating a preset speaking style, and the audio-driven animation model comprises: an audio encoding layer and a one-hot encoding layer;

inputting the target audio feature vector into the audio coding layer, which comprises a plurality of convolutional layers, determining the input feature vector of the next layer at time (2t-n)/2 according to the input feature vectors of the previous layer between time t-n and time t, determining feature vectors that have a causal relationship with the input feature vectors of the previous layer as effective feature vectors, outputting the target audio coding features layer by layer according to the effective feature vectors of each layer, and inputting the target identifier into the one-hot coding layer for binary vector coding to obtain a target identifier coding feature, wherein n is less than t;

and outputting a mixed deformation value corresponding to the target audio data through the audio-driven animation model according to the target audio coding features and the target identification coding features, wherein the mixed deformation value is used for indicating the mouth shape animation and facial expression of the virtual object, and the mixed deformation value corresponds to the target identification.

2. The method of claim 1, wherein, prior to inputting the target audio feature vector and the target identification into the audio-driven animation model, the method further comprises:

training the audio-driven animation model by using sample data, wherein the sample data comprises collected audio data of a speaking object, facial data of the speaking object, which is collected synchronously with the audio data, and a mixed deformation sample value corresponding to the facial data, and the facial data comprises a mouth shape and a facial expression.

3. The method of claim 2, wherein the training the audio-driven animation model using sample data comprises:

extracting local feature vectors of the audio data through an automatic speech recognition model;

inputting the local feature vectors into the audio coding layer, which comprises a plurality of convolutional layers, determining the input feature vector of the next layer at time (2t-n)/2 according to the input feature vectors of the previous layer between time t-n and time t, determining the feature vectors that have a causal relationship with the input feature vectors of the previous layer as effective local feature vectors, and outputting the audio coding features corresponding to the audio data layer by layer according to the effective local feature vectors of each layer;

inputting an identification corresponding to the face data of the speaking object into a one-hot coding layer to obtain an identification coding characteristic corresponding to the identification, wherein different face data correspond to different speaking styles, and the identification is used for indicating the speaking styles;

splicing the audio coding features and the identification coding features, encoding and decoding the spliced features, and outputting a mixed deformation predicted value corresponding to the audio data, wherein the mixed deformation predicted value corresponds to the identification;

and training the model parameters of the audio-driven animation model by using a loss function according to the error between the mixed deformation sample value and the mixed deformation predicted value.

4. The method of claim 3, wherein training the model parameters of the audio-driven animation model by using a loss function according to the error between the mixed deformation sample value and the mixed deformation predicted value comprises:

acquiring a reconstruction error, a velocity error and an acceleration error between the mixed deformation sample value and the mixed deformation predicted value by using an L2 loss function;

and training the model parameters of the audio-driven animation model according to the reconstruction error, the velocity error and the acceleration error.

5. The method of claim 3, wherein splicing the audio coding features and the identification coding features and then encoding and decoding them comprises:

splicing the audio coding features and the identification coding features and inputting the spliced features into a coding layer to obtain a spliced feature code, wherein the coding layer comprises three fully connected network layers;

inputting the spliced feature code into a decoding layer, and outputting, through the decoding layer, a mixed deformation predicted value corresponding to the identification, wherein the decoding layer comprises three fully connected network layers.

6. The method of claim 1, wherein outputting, by the audio-driven animation model, a mixed deformation value corresponding to the target audio data according to the target audio coding feature and the target identification coding feature comprises:

splicing the target audio coding features and the target identification coding features, encoding and decoding the spliced features, and outputting a mixed deformation value corresponding to the target audio data.

7. The method of claim 1, wherein, after outputting the mixed deformation value corresponding to the target audio data, the method further comprises:

and displaying a video picture corresponding to the mixed deformation value on a display screen according to the mixed deformation value corresponding to the target audio data and the three-dimensional scene corresponding to the target identification.

8. An output device for mixed deformation values, comprising:

the feature extraction module is configured to perform feature extraction on the acquired target audio data to obtain a target audio feature vector;

an input module configured to input the target audio feature vector and a target identifier into an audio-driven animation model, wherein the target identifier is an identifier selected from preset identifiers, the preset identifiers are used for indicating a preset speaking style, and the audio-driven animation model includes: an audio encoding layer and a one-hot encoding layer;

the encoding module is configured to input the target audio feature vector into the audio encoding layer, which comprises a plurality of convolutional layers, determine the input feature vector of the next layer at time (2t-n)/2 according to the input feature vectors of the previous layer between time t-n and time t, determine feature vectors that have a causal relationship with the input feature vectors of the previous layer as effective feature vectors, output the target audio encoding features layer by layer according to the effective feature vectors of each layer, and input the target identifier into the one-hot encoding layer for binary vector encoding to obtain target identifier encoding features, wherein n is less than t;

and the output module is configured to output a mixed deformation value corresponding to the target audio data through the audio-driven animation model according to the target audio coding features and the target identification coding features, wherein the mixed deformation value is used for indicating the mouth shape animation and facial expression of the virtual object, and the mixed deformation value corresponds to the target identification.

9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is arranged to carry out the method of any one of claims 1 to 7 when executed.

10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 7.

Technical Field

The present application relates to the field of data processing technologies, and in particular to a method and an apparatus for outputting a mixed deformation value, a storage medium, and an electronic apparatus.

Background

Voice-driven three-dimensional face animation is an important research topic in the field of natural human-computer interaction. Voice-driven three-dimensional face animation synthesis preprocesses voice, either recorded by a real person or synthesized by Text-To-Speech (TTS) technology, so as to drive a virtual three-dimensional face avatar to produce the lip animation and facial expressions corresponding to the voice.

In the related art, research on voice-driven three-dimensional face animation mainly focuses on synthesizing synchronized and accurate mouth shape animation and on classifying facial expressions through voice analysis; at present there is no good method for driving the mouth shape animation and the facial expressions of a virtual human from voice simultaneously. The lack of facial expressions makes the expression of a voice-driven virtual human stiff and dull, provides no richer information feedback, and reduces the intelligibility and cognition of human-computer interaction.

For the technical problem in the related art that voice cannot effectively drive the mouth shape animation and the facial expression of a virtual object at the same time, no effective solution has yet been proposed.

Disclosure of Invention

The embodiments of the application provide a method and a device for outputting a mixed deformation value, a storage medium, and an electronic device, so as to at least solve the technical problem in the related art that voice cannot simultaneously drive the mouth shape animation and the facial expression of a virtual object.

In one embodiment of the present application, there is provided a method of outputting a mixed deformation value, including: performing feature extraction on the acquired target audio data to obtain a target audio feature vector; inputting the target audio feature vector and a target identifier into an audio-driven animation model, wherein the target identifier is an identifier selected from preset identifiers, the preset identifiers are used for indicating a preset speaking style, and the audio-driven animation model comprises: an audio encoding layer and a one-hot encoding layer; inputting the target audio feature vector into the audio encoding layer, which comprises a plurality of convolutional layers, determining the input feature vector of the next layer at time (2t-n)/2 according to the input feature vectors of the previous layer between time t-n and time t, determining feature vectors that have a causal relationship with the input feature vectors of the previous layer as effective feature vectors, outputting the target audio coding features layer by layer according to the effective feature vectors of each layer, and inputting the target identifier into the one-hot encoding layer for binary vector coding to obtain a target identifier coding feature, wherein n is less than t; and outputting a mixed deformation value corresponding to the target audio data through the audio-driven animation model according to the target audio coding features and the target identifier coding features, wherein the mixed deformation value is used for indicating the mouth shape animation and facial expression of the virtual object, and the mixed deformation value corresponds to the target identifier.

In an embodiment of the present application, an output device of a mixed deformation value is further provided, including: a feature extraction module configured to perform feature extraction on the acquired target audio data to obtain a target audio feature vector; an input module configured to input the target audio feature vector and a target identifier into an audio-driven animation model, wherein the target identifier is an identifier selected from preset identifiers, the preset identifiers are used for indicating a preset speaking style, and the audio-driven animation model includes: an audio encoding layer and a one-hot encoding layer; an encoding module configured to input the target audio feature vector into the audio encoding layer, which comprises a plurality of convolutional layers, determine the input feature vector of the next layer at time (2t-n)/2 according to the input feature vectors of the previous layer between time t-n and time t, determine feature vectors that have a causal relationship with the input feature vectors of the previous layer as effective feature vectors, output the target audio encoding features layer by layer according to the effective feature vectors of each layer, and input the target identifier into the one-hot encoding layer for binary vector encoding to obtain target identifier encoding features, wherein n is less than t; and an output module configured to output a mixed deformation value corresponding to the target audio data through the audio-driven animation model according to the target audio encoding features and the target identifier encoding features, wherein the mixed deformation value is used for indicating the mouth shape animation and facial expression of the virtual object, and the mixed deformation value corresponds to the target identifier.

In an embodiment of the present application, a computer-readable storage medium is also proposed, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-described method embodiments when executed.

In an embodiment of the present application, there is further proposed an electronic device comprising a memory and a processor, wherein the memory stores a computer program, and the processor is configured to execute the computer program to perform the steps of any of the above method embodiments.

According to the embodiments of the application, feature extraction is performed on the acquired target audio data to obtain a target audio feature vector; the target audio feature vector and a target identifier are input into an audio-driven animation model, wherein the target identifier is selected from preset identifiers used for indicating preset speaking styles, and the audio-driven animation model comprises an audio encoding layer and a one-hot encoding layer; the target audio feature vector is input into the audio encoding layer, which comprises a plurality of convolutional layers, the input feature vector of the next layer at time (2t-n)/2 is determined according to the input feature vectors of the previous layer between time t-n and time t, feature vectors that have a causal relationship with the input feature vectors of the previous layer are determined as effective feature vectors, and the target audio coding features are output layer by layer according to the effective feature vectors of each layer; the target identifier is input into the one-hot encoding layer for binary vector coding to obtain a target identifier coding feature, wherein n is less than t; and a mixed deformation value corresponding to the target audio data is output through the audio-driven animation model according to the target audio coding features and the target identifier coding features, wherein the mixed deformation value is used for indicating the mouth shape animation and facial expression of the virtual object and corresponds to the target identifier. Compared with the conventional convolutional-neural-network encoding scheme, the encoding scheme used here computes quickly and consumes few resources, which greatly speeds up animation generation and allows speaking animation to be generated from audio in real time; combined with the target identifier coding, speaking animation with a specified character style can be generated, making the method suitable for a wide range of application scenarios.

Drawings

The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:

FIG. 1 is a flow chart of an alternative method for outputting a mixed deformation value according to an embodiment of the present application;

FIG. 2 is a schematic diagram of an alternative audio feature encoding scheme according to an embodiment of the present application;

FIG. 3 is a schematic diagram of an alternative training data preprocessing flow according to an embodiment of the present application;

FIG. 4 is a schematic diagram of an alternative process for training an audio-driven animated model according to an embodiment of the present application;

FIG. 5 is a block diagram of an alternative output device for mixed deformation values according to an embodiment of the present application;

FIG. 6 is a schematic structural diagram of an alternative electronic device according to an embodiment of the present application.

Detailed Description

The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.

It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.

As shown in fig. 1, an embodiment of the present application provides a method for outputting a mixed deformation value, including:

step S102, performing feature extraction on the acquired target audio data to obtain a target audio feature vector;

step S104, inputting the target audio feature vector and a target identification into an audio-driven animation model, wherein the target identification is an identification selected from preset identifications, the preset identifications are used for indicating a preset speaking style, and the audio-driven animation model comprises: an audio encoding layer and a one-hot encoding layer;

step S106, inputting the target audio feature vector into the audio coding layer, which comprises a plurality of convolutional layers, determining the input feature vector of the next layer at time (2t-n)/2 according to the input feature vectors of the previous layer between time t-n and time t, determining feature vectors that have a causal relationship with the input feature vectors of the previous layer as effective feature vectors, outputting the target audio coding features layer by layer according to the effective feature vectors of each layer, and inputting the target identifier into the one-hot coding layer for binary vector coding to obtain a target identifier coding feature, wherein n is less than t;

step S108, outputting a mixed deformation value corresponding to the target audio data through the audio-driven animation model according to the target audio coding feature and the target identification coding feature, wherein the mixed deformation value is used for indicating the mouth shape animation and facial expression of the virtual object, and the mixed deformation value corresponds to the target identification.

It should be noted that the network architecture involved in the technical solution of the embodiments of the application includes an audio receiving device, an animation display device and an artificial intelligence server, and that the above output method of the mixed deformation value is implemented on the artificial intelligence server. In the embodiments of the application, the audio receiving device and the animation display device are not limited to independent devices; they may also be integrated into other hardware devices that have both a sound pickup function and an animation display function, such as a large LED screen or a mobile phone terminal with a voice recognition function. The embodiments of the application do not limit this.

The preset identifications involved in the embodiments of the application are used for indicating preset speaking styles, which can be understood as indicating the facial expressions, mouth shapes and the like of speakers with different styles when speaking.

It should be noted that, in the embodiment of the present application, the process of encoding the target audio features through the audio coding layer is shown in FIG. 2. The audio coding layer may include an input layer, a plurality of convolutional layers and an output layer; to illustrate the encoding process intuitively, FIG. 2 takes two convolutional layers as an example. The input feature vector of the next layer at time (2t-n)/2 is determined according to the input feature vectors of the previous layer between time t-n and time t. Taking n = 2 as an example, the input feature vectors of the first convolutional layer at times t-1 and t-2 are determined according to the input feature vectors of the input layer at times t, t-1 and t-2 and are determined as effective feature vectors; the effective feature vectors of the second convolutional layer are then determined according to the effective feature vectors of the first convolutional layer, and the target audio coding features are finally output through the output layer. As can be seen from FIG. 2, compared with a conventional recurrent neural network structure, the audio coding layer of the embodiment of the present application can capture the sequential information of the sequence while computing quickly with low consumption; moreover, the higher the layer, the more idle convolution channels there are in the convolutional layer, and the larger the convolution window formed by these idle channels.

In the actual computation of the audio coding layer, for a given convolutional layer, its existing convolution window can be used at the current time step to learn the outputs of the previous convolutional layer at earlier time steps (one or more earlier time steps, configurable as required; for example, the outputs of the previous convolutional layer at the three preceding time steps are learned in each computation). That is, the convolutional layer at the current time step integrates the inputs of the earlier time steps to compute the current output. Therefore, in the computation of the audio coding layer in this embodiment, no additional convolutional layers need to be provided to compute the outputs of the previous convolutional layer at earlier time steps; the effect is achieved through the convolution window of the convolutional layer itself.

For example, consider an audio coding layer consisting of three convolutional layers, where the last convolutional layer needs to learn the outputs of the three preceding time steps during computation. Following the conventional approach in the related art, the last two convolutional layers of the audio coding layer would each need to be extended with three convolutional layers so that the outputs of the three preceding time steps can be learned separately, and the extended audio coding layer would contain nine convolutional layers, which obviously increases the network size. In contrast, the audio coding layer in this embodiment needs no additional convolutional layers; the effect is achieved through the convolution windows of the last two of its three convolutional layers. Therefore, by learning causal relationships, the audio coding layer in this embodiment can significantly control the model size while improving the model effect.
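The behaviour described above corresponds to a stack of causal, dilated one-dimensional convolutions: each layer uses only the current and earlier time steps of the layer below, and deeper layers cover a wider past window without any extra layers being added. The following is a minimal PyTorch sketch under that reading; the feature dimensions, kernel size and dilation factors are illustrative assumptions rather than values taken from this application.

import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    # 1-D convolution whose output at time t depends only on inputs at times <= t
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # amount of left padding
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))    # pad on the left only
        return self.conv(x)

class AudioEncoder(nn.Module):
    # stack of causal convolutions; higher layers see a wider past window
    def __init__(self, in_dim=29, hidden=64, out_dim=128):
        super().__init__()
        self.layers = nn.Sequential(
            CausalConv1d(in_dim, hidden, kernel_size=3, dilation=1), nn.ReLU(),
            CausalConv1d(hidden, hidden, kernel_size=3, dilation=2), nn.ReLU(),
            CausalConv1d(hidden, out_dim, kernel_size=3, dilation=4),
        )

    def forward(self, feats):                      # feats: (batch, time, in_dim)
        x = feats.transpose(1, 2)                  # -> (batch, in_dim, time)
        return self.layers(x).transpose(1, 2)      # -> (batch, time, out_dim)

With kernel size 3 and dilations 1, 2 and 4, the receptive field of the top layer spans the last 15 frames, which mirrors the observation that the higher the layer, the larger the effective convolution window.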

In an embodiment, before inputting the target audio feature vector and the target identifier into the audio-driven animation model, the method further comprises:

training an audio-driven animation model by using sample data, wherein the sample data comprises collected audio data of a speaking object, facial data of the speaking object, which is collected synchronously with the audio data, and a mixed deformation sample value corresponding to the facial data, and the facial data comprises a mouth shape and a facial expression.

It should be noted that, before the audio-driven animation model is trained with sample data, the embodiments of the application further include a process of preprocessing the audio data. The preprocessing process includes three parts: data acquisition, data screening and data optimization; the overall flow is shown in FIG. 3.

Taking an iPhone as the data acquisition device as an example, the iPhone can achieve real-time face capture at relatively low cost thanks to its structured-light sensor and the built-in ARKit technology. During actual data collection, 40 fixed sentences may be used, and 20 actors speak them with specific expressions towards the iPhone in the same environment. The iPhone records the actors' speech audio together with the 52 blendshape (bs) change values carried by each frame of ARKit data. The data is then screened: data of better quality is selected manually, and takes spoiled by environmental factors or actor mistakes are discarded. Finally, for accuracy, an animator manually optimizes the data, correcting the inaccurate parts by hand-crafted animation. The optimized data is the training data used later.
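For reference, one way to organize a captured and optimized clip is a record that keeps the synchronized audio features and the 52 per-frame ARKit blendshape values together; the field names and shapes below are illustrative assumptions rather than the application's actual data format.

from dataclasses import dataclass
import numpy as np

NUM_BLENDSHAPES = 52   # ARKit exports 52 blendshape coefficients per frame

@dataclass
class TrainingClip:
    speaker_id: int              # index of the actor / speaking style
    audio_features: np.ndarray   # (num_frames, feature_dim) ASR features
    blendshapes: np.ndarray      # (num_frames, NUM_BLENDSHAPES) captured values

    def __post_init__(self):
        # audio and face capture are synchronized frame by frame
        assert self.audio_features.shape[0] == self.blendshapes.shape[0]
        assert self.blendshapes.shape[1] == NUM_BLENDSHAPES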

In an embodiment, training the audio-driven animation model using sample data comprises:

extracting local feature vectors of the audio data through an automatic speech recognition model;

inputting the local feature vectors into the audio coding layer, which comprises a plurality of convolutional layers, determining the input feature vector of the next layer at time (2t-n)/2 according to the input feature vectors of the previous layer between time t-n and time t, determining the feature vectors that have a causal relationship with the input feature vectors of the previous layer as effective local feature vectors, and outputting the audio coding features corresponding to the audio data layer by layer according to the effective local feature vectors of each layer;

inputting an identification corresponding to face data of a speaking object into the one-hot coding layer to obtain identification coding characteristics corresponding to the identification, wherein different face data correspond to different speaking styles, and the identification is used for indicating the speaking styles;

splicing the audio coding features and the identification coding features, encoding and decoding the spliced features, and outputting a mixed deformation predicted value corresponding to the audio data, wherein the mixed deformation predicted value corresponds to the identification;

and training the model parameters of the audio-driven animation model by using a loss function according to the error between the mixed deformation sample values and the mixed deformation predicted values.

It should be noted that, in practical use, considering the variety of sound receiving devices and sound sources, a generalized audio feature extraction method is required. Therefore, an automatic speech recognition model, such as a MASR or DeepSpeech model, is selected to perform feature extraction on the speech, and the audio features are obtained from an intermediate layer of the model. The advantage of doing so is that the speech recognition model has been trained on a large amount of corpus data, so the resulting audio features generalize well to different languages, different receiving devices and different speakers.

The training module adopts a deep learning network; its inputs are the audio features and the user id used when recording the data (equivalent to the identification), and its output is the blendshape value at the corresponding time. To preserve the facial expression while speaking, the speaking styles of different recorders are encoded, using one-hot encoding directly. The audio coding thus contains both general pronunciation information and a small amount of personalized speaking-style information.
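A sketch of the one-hot encoding of the recorder id follows; the number of speakers is an assumption based on the 20 actors in the data-collection example above.

import torch
import torch.nn.functional as F

NUM_SPEAKERS = 20                      # e.g. one id per recorded actor

def encode_speaker(speaker_id: int) -> torch.Tensor:
    # binary (one-hot) vector marking the chosen speaking style
    return F.one_hot(torch.tensor(speaker_id), num_classes=NUM_SPEAKERS).float()

style = encode_speaker(3)              # tensor of shape (20,) with 1.0 at index 3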

In one embodiment, training the model parameters of the audio-driven animation model by using a loss function according to the error between the mixed deformation sample values and the mixed deformation predicted values comprises:

acquiring the reconstruction error, velocity error and acceleration error between the mixed deformation sample values and the mixed deformation predicted values by using an L2 loss function;

and training the model parameters of the audio-driven animation model according to the reconstruction error, the velocity error and the acceleration error.

It should be noted that an L2 loss function may be adopted during training. To reproduce the captured blendshape coefficients, the L2 error between the real blendshape coefficients and the predicted blendshape coefficients is calculated; to make the prediction more accurate and more stable, errors of orders 1 to 3 may be calculated, whose physical meanings correspond to the reconstruction error, the velocity error and the acceleration error, respectively.
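Read this way, the loss is an L2 term on the blendshape values themselves plus L2 terms on their first and second temporal differences (velocity and acceleration). A sketch follows; equal weighting of the three terms is an assumption.

import torch

def blendshape_loss(pred, target):
    # pred, target: (batch, time, num_blendshapes) blendshape sequences
    recon = torch.mean((pred - target) ** 2)                 # reconstruction (order 1)

    d_pred = pred[:, 1:] - pred[:, :-1]                      # first temporal difference
    d_target = target[:, 1:] - target[:, :-1]
    velocity = torch.mean((d_pred - d_target) ** 2)          # velocity error (order 2)

    dd_pred = d_pred[:, 1:] - d_pred[:, :-1]                 # second temporal difference
    dd_target = d_target[:, 1:] - d_target[:, :-1]
    acceleration = torch.mean((dd_pred - dd_target) ** 2)    # acceleration error (order 3)

    return recon + velocity + acceleration                   # equal weights assumed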

In one embodiment, encoding and decoding after splicing the audio coding features and the identification coding features comprises:

splicing the audio coding features and the identification coding features and inputting the spliced features into a coding layer to obtain a spliced feature code, wherein the coding layer comprises three fully connected network layers;

and inputting the spliced feature code into a decoding layer, and outputting, through the decoding layer, a mixed deformation predicted value corresponding to the identification, wherein the decoding layer comprises three fully connected network layers.

In one embodiment, outputting a mixed deformation value corresponding to target audio data through an audio-driven animation model according to the target audio coding feature and the target identification coding feature includes:

splicing the target audio coding features and the target identification coding features, encoding and decoding the spliced features, and outputting a mixed deformation value corresponding to the target audio data.

As shown in FIG. 4, the training process of the audio-driven animation model includes feature encoding, feature splicing, and output of the mixed deformation values. To achieve more realistic three-dimensional face animation, the user code (equivalent to the target identification code) is spliced with the audio code, adding personalized character information while keeping the pronunciation information sufficiently generalized, so that mouth shape animation and facial expressions are reproduced better. The spliced features are fed into an encoder-decoder network; the output of the decoder module is the final blendshape coefficients, and the encoder and the decoder may each consist of three fully connected layers.
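A minimal sketch of this splicing and encoder-decoder stage is given below; the hidden width and the way the speaker code is broadcast over time are assumptions, while the three fully connected layers per module follow the description above.

import torch
import torch.nn as nn

class BlendshapeRegressor(nn.Module):
    # encoder and decoder of three fully connected layers each, mapping the
    # spliced audio + speaker codes to per-frame blendshape values
    def __init__(self, audio_dim=128, num_speakers=20, num_blendshapes=52, hidden=256):
        super().__init__()
        in_dim = audio_dim + num_speakers
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_blendshapes),
        )

    def forward(self, audio_code, speaker_onehot):
        # audio_code: (batch, time, audio_dim); speaker_onehot: (batch, num_speakers)
        style = speaker_onehot.unsqueeze(1).expand(-1, audio_code.size(1), -1)
        fused = torch.cat([audio_code, style], dim=-1)        # feature splicing
        return self.decoder(self.encoder(fused))              # (batch, time, num_blendshapes)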

In an embodiment, after outputting the mixed deformation value corresponding to the target audio data, the method further includes:

and displaying a video picture corresponding to the mixed deformation value on a display screen according to the mixed deformation value corresponding to the target audio data and the three-dimensional scene corresponding to the target identification.

In actual driving, audio is first obtained through the audio receiving device, and an audio preprocessing module then performs feature extraction on the audio; the user id is preset to the id of the desired speaking style; the two are input together into the pre-trained audio-driven three-dimensional face animation model, which outputs the bs values of the corresponding frames; the bs values are transmitted to UE4 (Unreal Engine 4), where the various scenes and the required blendshapes have been built, and are rendered on various terminal devices through UE4.
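The runtime flow can be sketched as: extract audio features, pick the id of the desired speaking style, run the model, and hand the per-frame bs values to the rendering side. The sketch below reuses the modules from the earlier examples; the JSON serialization merely stands in for the actual transport to UE4, which is deployment-specific and not specified here.

import json
import torch
import torch.nn.functional as F

@torch.no_grad()
def drive_avatar(audio_features, speaker_id, audio_encoder, regressor, num_speakers=20):
    # audio_features: (time, feature_dim) tensor produced by the ASR front end
    audio_code = audio_encoder(audio_features.unsqueeze(0))            # (1, time, audio_dim)
    style = F.one_hot(torch.tensor(speaker_id), num_speakers).float().unsqueeze(0)
    bs_values = regressor(audio_code, style).squeeze(0)                # (time, 52)
    # one message per frame; how the frames reach the engine is not shown here
    return [json.dumps({"frame": i, "blendshapes": frame.tolist()})
            for i, frame in enumerate(bs_values)]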

An embodiment of the present application further provides an output device for a mixed deformation value, as shown in fig. 5, including:

the feature extraction module 502 is configured to perform feature extraction on the acquired target audio data to obtain a target audio feature vector;

an input module 504 configured to input the target audio feature vector and a target identifier into an audio-driven animation model, where the target identifier is an identifier selected from preset identifiers, and the preset identifiers are used to indicate a preset speaking style, and the audio-driven animation model includes: an audio encoding layer and a one-hot encoding layer;

the encoding module 506 is configured to input the target audio feature vector into the audio encoding layer, which comprises a plurality of convolutional layers, determine the input feature vector of the next layer at time (2t-n)/2 according to the input feature vectors of the previous layer between time t-n and time t, determine feature vectors that have a causal relationship with the input feature vectors of the previous layer as effective feature vectors, output the target audio encoding features layer by layer according to the effective feature vectors of each layer, and input the target identifier into the one-hot encoding layer for binary vector encoding to obtain a target identifier encoding feature, wherein n is less than t;

and the output module 508 is configured to output a mixed deformation value corresponding to the target audio data through the audio-driven animation model according to the target audio coding feature and the target identification coding feature, where the mixed deformation value is used for indicating the mouth shape animation and the facial expression of the virtual object, and the mixed deformation value corresponds to the target identification.

In an exemplary embodiment, taking a user speaking into a mobile phone terminal as an example, the user's voice is picked up through a mobile phone program and the audio device obtains the audio; an audio preprocessing module then performs feature extraction on the audio; the user id is preset to the id of the desired speaking style; the two are input together into the pre-trained audio-driven three-dimensional face animation model, which outputs the bs values of the corresponding frames; the bs values are transmitted to UE4 (Unreal Engine 4), where the various scenes and the required blendshapes have been built, and are rendered on the mobile phone through UE4.

In another exemplary embodiment, taking a large screen playing an advertisement as an example, audio is obtained through built-in recording or an internal Text-To-Speech (TTS) system; an audio preprocessing module then performs feature extraction on the audio; the user id is preset to the id of the desired speaking style; the two are input together into the pre-trained audio-driven three-dimensional face animation model, which outputs the bs values of the corresponding frames; the bs values are transmitted to UE4 (Unreal Engine 4), where the various scenes and the required blendshapes have been built, and are rendered on the large screen through UE4.

The audio-driven animation model disclosed in the embodiments of the application replaces the RNN conventionally used in this field with the audio coding scheme described above, which greatly speeds up animation generation and allows speaking animation to be generated from audio in real time. Combined with speaker coding, this audio coding scheme can reproduce mouth shape animation and facial expressions well at the same time. By encoding the speaker, the audio-driven animation model of the embodiments of the application can generate speaking animation with a specified character style. It can also accept speech audio in different languages, from different sound receiving devices and from different speakers, supports TTS, and is therefore suitable for a wide range of application scenarios.

According to still another aspect of the embodiments of the present application, an electronic device for implementing the above method for outputting a mixed deformation value is also provided, which may be, but is not limited to being, applied in a server. As shown in FIG. 6, the electronic device comprises a memory 602 and a processor 604, wherein the memory 602 stores a computer program, and the processor 604 is configured to execute the steps of any of the above method embodiments through the computer program.

Optionally, in this embodiment, the electronic apparatus may be located in at least one network device of a plurality of network devices of a computer network.

Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:

s1, extracting the characteristics of the obtained target audio data to obtain a target audio characteristic vector;

s2, inputting the target audio characteristic vector and the target identification into the audio driving animation model, wherein the target identification is an identification selected from preset identifications, the preset identifications are used for indicating a preset speaking style, and the audio driving animation model comprises: an audio encoding layer and a one-hot encoding layer;

s3, inputting the target audio feature vector into an audio coding layer containing a plurality of convolutional layers, determining the input feature vector of the next layer (2t-n)/2 time according to the input feature vector between the t time and the t-n time of the previous layer, determining the feature vector which has causal relation with the input feature vector of the previous layer as an effective feature vector, sequentially outputting the target audio coding feature according to the effective feature vector of each layer, inputting the target identifier into a one-hot coding layer for binary vector coding, and obtaining the target identifier coding feature, wherein n is less than t;

S4, outputting a mixed deformation value corresponding to the target audio data through the audio-driven animation model according to the target audio coding feature and the target identification coding feature, wherein the mixed deformation value is used for indicating the mouth shape animation and facial expression of the virtual object, and the mixed deformation value corresponds to the target identification.

Alternatively, it can be understood by those skilled in the art that the structure shown in FIG. 6 is only illustrative and does not limit the structure of the electronic device; the electronic device may also be a terminal device such as a smartphone (e.g. an Android phone or an iOS phone), a tablet computer, a palmtop computer, a Mobile Internet Device (MID), a PAD, and the like. For example, the electronic device may include more or fewer components (such as network interfaces) than shown in FIG. 6, or have a configuration different from that shown in FIG. 6.

The memory 602 may be used to store software programs and modules, such as the program instructions/modules corresponding to the method and apparatus for outputting a mixed deformation value in the embodiments of the present application; the processor 604 executes the software programs and modules stored in the memory 602 to perform various functional applications and data processing, thereby implementing the above output method of the mixed deformation value. The memory 602 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 602 may further include memory located remotely from the processor 604, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof. The memory 602 may specifically, but not exclusively, be used to store the program steps of the above output method of the mixed deformation value.

Optionally, the transmission device 606 is used for receiving or sending data via a network. Specific examples of the network may include wired networks and wireless networks. In one example, the transmission device 606 includes a network adapter (NIC), which can be connected to routers and other network devices via a network cable so as to communicate with the Internet or a local area network. In another example, the transmission device 606 is a Radio Frequency (RF) module, which is used to communicate with the Internet wirelessly.

In addition, the electronic device further includes: a display 608 for displaying the training process; and a connection bus 610 for connecting the respective module parts in the above-described electronic apparatus.

Embodiments of the present application further provide a computer-readable storage medium having a computer program stored therein, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.

Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:

s1, extracting the characteristics of the obtained target audio data to obtain a target audio characteristic vector;

s2, inputting the target audio characteristic vector and the target identification into the audio driving animation model, wherein the target identification is an identification selected from preset identifications, the preset identifications are used for indicating a preset speaking style, and the audio driving animation model comprises: an audio encoding layer and a one-hot encoding layer;

s3, inputting the target audio feature vector into an audio coding layer containing a plurality of convolutional layers, determining the input feature vector of the next layer (2t-n)/2 time according to the input feature vector between the t time and the t-n time of the previous layer, determining the feature vector which has causal relation with the input feature vector of the previous layer as an effective feature vector, sequentially outputting the target audio coding feature according to the effective feature vector of each layer, inputting the target identifier into a one-hot coding layer for binary vector coding, and obtaining the target identifier coding feature, wherein n is less than t;

S4, outputting a mixed deformation value corresponding to the target audio data through the audio-driven animation model according to the target audio coding feature and the target identification coding feature, wherein the mixed deformation value is used for indicating the mouth shape animation and facial expression of the virtual object, and the mixed deformation value corresponds to the target identification.

Optionally, the storage medium is further configured to store a computer program for executing the steps included in the method in the foregoing embodiment, which is not described in detail in this embodiment.

Alternatively, in this embodiment, a person skilled in the art may understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.

The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.

The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or a part of or all or part of the technical solution contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the method described in the embodiments of the present application.

In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.

In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.
