Speech classification using audiovisual data

Document No.: 538887 | Publication date: 2021-06-01

Description: This technology, "Speech classification using audiovisual data", was designed and created by S. Chaudhuri, O. Klejch, and J. E. Roth on 2019-10-03. Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a prediction of whether a target person is speaking during a portion of a video. In one aspect, a method includes: obtaining one or more images, each depicting the mouth of a given person at a respective point in time; processing the images using an image embedding neural network to generate latent representations of the images; processing audio data corresponding to the images using an audio embedding neural network to generate a latent representation of the audio data; and processing the latent representations of the images and the latent representation of the audio data using a recurrent neural network to generate a prediction of whether the given person is speaking.

1. A method performed by one or more data processing apparatus, the method comprising:

obtaining one or more images, each depicting a mouth of a given person at a respective point in time, wherein each of the respective points in time is different;

processing the one or more images using an image embedding neural network to generate latent representations of the one or more images;

obtaining audio data corresponding to the one or more images;

processing a representation of the audio data using an audio embedding neural network to generate a latent representation of the audio data; and

processing the latent representations of the one or more images and the latent representation of the audio data using a recurrent neural network to generate an output defining a prediction of whether the given person is speaking at one or more of the respective points in time;

wherein the image embedding neural network, the audio embedding neural network, and the recurrent neural network are trained by an end-to-end optimization process.

2. The method of claim 1, wherein obtaining one or more images each depicting a mouth of a given person at a respective point in time comprises:

obtaining one or more video frames from a video, wherein each of the video frames depicts the given person;

determining a respective location of the given person in each of the one or more video frames; and

for each of the one or more video frames, cropping a corresponding portion of the video frame that depicts the mouth of the given person, based on the location of the given person in the video frame.

3. The method of claim 2, wherein obtaining audio data corresponding to the one or more images comprises:

obtaining audio data corresponding to the one or more video frames of the video.

4. The method of any of claims 1-3, wherein each of the one or more images depicts a face or body of the given person in addition to the mouth of the given person.

5. The method of any of claims 1 to 4, wherein the representation of the audio data comprises Mel-frequency cepstral coefficients of the audio data.

6. The method of any of claims 1 to 5, wherein processing the latent representations of the one or more images and the latent representation of the audio data using a recurrent neural network to generate an output defining a prediction of whether the given person is speaking at one or more of the respective points in time comprises:

processing the latent representations of the one or more images and the latent representation of the audio data to update a current internal state of the recurrent neural network to generate a new internal state of the recurrent neural network; and

processing the new internal state of the recurrent neural network to generate an output defining a prediction of whether the given person is speaking at one or more of the respective points in time.

7. The method of any of claims 1 to 6, wherein the image embedding neural network and the audio embedding neural network each comprise one or more convolutional neural network layers.

8. The method of any of claims 1 to 7, wherein the recurrent neural network comprises a plurality of Gated Recurrent Units (GRUs).

9. A system, comprising:

a data processing apparatus;

a memory in data communication with the data processing apparatus and storing instructions that cause the data processing apparatus to perform operations comprising:

obtaining one or more images, each depicting a mouth of a given person at a respective point in time, wherein each of the respective points in time is different;

processing the one or more images using an image embedding neural network to generate latent representations of the one or more images;

obtaining audio data corresponding to the one or more images;

processing a representation of the audio data using an audio embedding neural network to generate a latent representation of the audio data; and

processing the latent representations of the one or more images and the latent representation of the audio data using a recurrent neural network to generate an output defining a prediction of whether the given person is speaking at one or more of the respective points in time;

wherein the image embedding neural network, the audio embedding neural network, and the recurrent neural network are trained by an end-to-end optimization process.

10. The system of claim 9, wherein obtaining one or more images each depicting a mouth of a given person at a respective point in time comprises:

obtaining one or more video frames from a video, wherein each of the video frames depicts the given person;

determining a respective location of the given person in each of the one or more video frames; and

for each of the one or more video frames, cropping a corresponding portion of the video frame that depicts the mouth of the given person, based on the location of the given person in the video frame.

11. The system of claim 10, wherein obtaining audio data corresponding to the one or more images comprises:

obtaining audio data corresponding to the one or more video frames of the video.

12. The system of any of claims 9 to 11, wherein each of the one or more images depicts a face or body of the given person in addition to the mouth of the given person.

13. The system of any of claims 9 to 12, wherein the representation of the audio data comprises mel-frequency cepstral coefficients of the audio data.

14. The system of any of claims 9 to 13, wherein processing the latent representations of the one or more images and the latent representation of the audio data using a recurrent neural network to generate an output defining a prediction of whether the given person is speaking at one or more of the respective points in time comprises:

processing the latent representations of the one or more images and the latent representation of the audio data to update a current internal state of the recurrent neural network to generate a new internal state of the recurrent neural network; and

processing the new internal state of the recurrent neural network to generate an output defining a prediction of whether the given person is speaking at one or more of the respective points in time.

15. The system of any of claims 9 to 14, wherein the image embedding neural network and the audio embedding neural network each comprise one or more convolutional neural network layers.

16. The system of any of claims 9 to 15, wherein the recurrent neural network comprises a plurality of Gated Recurrent Units (GRUs).

17. One or more non-transitory computer storage media storing instructions that, when executed by one or more computers, cause the one or more computers to perform operations comprising:

obtaining one or more images, each depicting a mouth of a given person at a respective point in time, wherein each of the respective points in time is different;

processing the one or more images using an image embedding neural network to generate latent representations of the one or more images;

obtaining audio data corresponding to the one or more images;

processing a representation of the audio data using an audio embedding neural network to generate a latent representation of the audio data; and

processing the latent representations of the one or more images and the latent representation of the audio data using a recurrent neural network to generate an output defining a prediction of whether the given person is speaking at one or more of the respective points in time;

wherein the image embedding neural network, the audio embedding neural network, and the recurrent neural network are trained by an end-to-end optimization process.

18. The non-transitory computer storage medium of claim 17, wherein obtaining one or more images each depicting a mouth of a given person at a respective point in time comprises:

obtaining one or more video frames from a video, wherein each of the video frames depicts the given person;

determining a respective location of the given person in each of the one or more video frames; and

for each of the one or more video frames, cropping a corresponding portion of the video frame that depicts the mouth of the given person, based on the location of the given person in the video frame.

19. The non-transitory computer storage medium of claim 18, wherein obtaining audio data corresponding to the one or more images comprises:

obtaining audio data corresponding to the one or more video frames of the video.

20. The non-transitory computer storage medium of any of claims 17 to 19, wherein each of the one or more images depicts a face or body of the given person in addition to the mouth of the given person.

Background

This specification relates to processing data using machine learning models.

Machine learning models receive input and generate output, such as predicted output, based on the received input. Some machine learning models are parametric models that generate output based on the received input and on the values of the model parameters.

Some machine learning models are deep models that employ multiple layers of the model to generate output for a received input. For example, a deep neural network is a deep machine learning model that includes an output layer and one or more hidden layers, each of which applies a nonlinear transformation to a received input to generate an output.

Disclosure of Invention

This specification describes a system, implemented as a computer program on one or more computers at one or more locations, that generates a prediction as to whether a target person is speaking during a portion of a video.

According to a first aspect, there is provided a method comprising: one or more images are obtained, each image depicting a mouth of a given person at a respective point in time, wherein each of the respective points in time is different. The one or more images are processed using an image embedding neural network to generate latent representations of the one or more images. Audio data corresponding to the one or more images is obtained. A representation of the audio data is processed using an audio embedding neural network to generate a latent representation of the audio data. The latent representations of the one or more images and the latent representation of the audio data are processed using a recurrent neural network to generate an output defining a prediction of whether the given person is speaking at one or more of the respective points in time. The image embedding neural network, the audio embedding neural network, and the recurrent neural network are trained through an end-to-end optimization process.

In some implementations, obtaining one or more images, each of which depicts a mouth of a given person at a respective point in time, comprises: one or more video frames are obtained from the video, where each of the video frames depicts the given person. A respective location of the given person in each of the one or more video frames is determined. For each of the one or more video frames, based on the location of the given person in the video frame, a corresponding portion of the video frame depicting the mouth of the given person is cropped.

In some implementations, the audio data corresponds to one or more video frames of the video.

In some implementations, each of the one or more images depicts a face or body of the given person in addition to the mouth of the given person.

In some implementations, the representation of the audio data includes mel-frequency cepstral coefficients (MFCCs) of the audio data.

In some implementations, the latent representations of the one or more images and the latent representation of the audio data are processed to update a current internal state of the recurrent neural network to generate a new internal state of the recurrent neural network. The new internal state of the recurrent neural network is processed to generate an output that defines a prediction of whether the given person is speaking at one or more of the respective points in time.

In some embodiments, the image embedding neural network and the audio embedding neural network each comprise one or more convolutional neural network layers.

In some embodiments, the recurrent neural network includes a plurality of gated recurrent units (GRUs).

According to a second aspect, there is provided a system comprising: (i) a data processing apparatus, and (ii) a memory in data communication with the data processing apparatus and storing instructions that cause the data processing apparatus to perform the operations of the method as described above.

According to a third aspect, one or more non-transitory computer storage media are provided that store instructions that, when executed by one or more computers, cause the one or more computers to perform the operations of the above-described methods.

Particular embodiments of the subject matter described in this specification can be implemented to realize one or more of the following advantages.

The speech classification system described in this specification integrates audio and visual data to determine whether a target person is speaking during portions of a video. By integrating both audio data and video data, the system learns to distinguish complex correlations between the position and movement of the target person's mouth (and, optionally, the target person's entire face or entire body) and the corresponding audio data, thereby accurately predicting whether the target person is speaking. The system described in this specification can achieve higher speech classification accuracy than systems that perform speech classification using only visual data. Furthermore, by integrating audio and visual data, the system described in this specification can reduce computational resource (e.g., memory and computing power) consumption compared to systems that use only visual data. In a specific example, by processing 1 video frame and a constant amount of audio data, the system described in this specification can outperform a system that processes 3 video frames (and no audio data). Since processing 1 video frame and a constant amount of audio data consumes fewer computational resources than processing 3 video frames, in this example the system described in this specification reduces computational resource consumption by processing both audio data and visual data.

The speech classification system described in this specification can process images that depict not only the mouth of the target person but potentially the entire face or even the entire body of the target person. In this way, the system can learn to distinguish correlations between the positions and movements of the target person's face and body and the audio data in order to accurately predict whether the target person is speaking. By processing images that depict more than the mouth of the target person, the system described in this specification can learn to recognize signals from the face or body of the target person that can be used to generate a speech classification prediction, and thus can achieve higher prediction accuracy. Example signals from the target person's face or body include changes in expression, eye movements, arm gestures, and the like. Recognizing signals from the target person's face or body can particularly improve the accuracy of the speech classification prediction when the target person's mouth is occluded (e.g., because the target person is looking away from the camera). Furthermore, the system described in this specification directly processes the image of the target person without the need (as in some conventional systems) to pre-process it, for example, to identify facial landmarks, thus reducing the consumption of computing resources. This is a technical improvement in the field of visual and audio processing.

The system described in this specification can use a recurrent neural network to generate a sequence of speech classification predictions indicating whether a target person is speaking during respective portions of a video. By using a recurrent neural network, the system can use its "memory" of previously processed video portions to generate a more accurate speech classification prediction for the video portion currently being processed. In particular, the system can avoid generating "noisy" speech classification predictions that rapidly transition between "speaking" and "not speaking" for the target person (e.g., within a time period of 0.1 seconds or less). This is yet another technical improvement in the field of visual and audio processing.

The system described in this specification is trained through an end-to-end optimization process. More specifically, the neural networks included in the system are jointly trained by back-propagating gradients of the loss function through the fusion neural network into the audio embedding neural network and the image embedding neural network. Jointly training the neural networks included in the system through an end-to-end optimization process may enable the system to generate more accurate speech classification predictions than if the neural networks were trained separately. In particular, joint training enables the neural networks to learn correlations between the audio data and the image data that cannot be learned by training each neural network separately, thereby improving the accuracy of the system. This is yet another technical improvement in the field of visual and audio processing.

The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.

Drawings

FIG. 1 illustrates an example speech classification system.

FIG. 2 illustrates an example data flow in which the speech classification system sequentially processes video frames and audio data from various portions of a video.

FIG. 3 is a flow chart of an example process for generating speech classification data.

FIG. 4 is a flow diagram of an example process for jointly training an audio embedding neural network, an image embedding neural network, and a fusion neural network through an end-to-end optimization process.

Like reference numbers and designations in the various drawings indicate like elements.

Detailed Description

This specification describes a speech classification system that processes one or more video frames and a segment of audio data from a portion of a video to generate a prediction as to whether a target person (i.e., a person depicted in the video frames) is speaking during that portion of the video. The system processes the audio data using an audio embedding neural network to generate a latent representation of the audio data. The system may also crop an image depicting (at least) the mouth of the target person from each video frame and process the cropped image using an image embedding neural network to generate a latent representation of the cropped image. The system processes the latent representation of the audio data and the latent representation of the cropped image depicting the mouth of the target person using a fusion neural network to generate a prediction of whether the target person is speaking during the portion of the video. These and other features are described in more detail below.

FIG. 1 illustrates an example speech classification system 100. The speech classification system 100 is an example of a system implemented as computer programs on one or more computers in one or more locations in which the systems, components, and techniques described below are implemented.

The speech classification system 100 is configured to process one or more video frames 102 and corresponding audio data 104 from a video 106. Typically, the video frames 102 and the corresponding audio data 104 represent only a small portion of the video 106. For example, the video 106 may include thousands of video frames with corresponding audio data, while the system 100 may be configured to process only 3 video frames 102 at a time, along with the audio data 104 corresponding to those 3 video frames. The audio data 104 may correspond to exactly the same portion of the video as the video frames 102, but in some cases may correspond to a larger or smaller portion of the video 106 than the video frames 102. The audio data 104 from the video 106 may be a recording (e.g., captured by a microphone) made at the same time and place that the video frames 102 were captured. In particular, the audio data 104 may be a recording of words spoken by one or more people depicted in the video frames 102 when the video frames 102 were captured.

The system 100 processes the input audio data 104, the video frames 102, and person identification data 108 specifying a person depicted in the video frames 102 (referred to herein as the "target person") to generate speech classification data 110. The speech classification data 110 defines a prediction (e.g., a probability) of whether the target person is speaking during the portion of the video 106 characterized by the video frames 102 and the corresponding audio data 104.

As described with reference to FIG. 2, the system 100 may be used to sequentially process audio data 104 and video frames 102 that represent (possibly overlapping) portions of the video 106 to generate a sequence of speech classification data 110 outputs. The sequence of speech classification data 110 outputs may define predictions of whether the target person is speaking during a plurality of respective portions of the video 106.

The system 100 includes an audio embedding neural network 116, an image embedding neural network 124, and a fusion neural network 114. As known to those skilled in the art, an embedding neural network is a type of artificial neural network configured to map inputs (e.g., feature vectors) to continuous-valued outputs (e.g., vectors or matrices). In contrast to other types of neural networks with "one-hot" outputs, the continuous-valued output of an embedding neural network has the property that similar inputs are mapped to outputs that are close to each other in a multidimensional space. The output of an embedding neural network can thus be described as a latent representation of the data input to the embedding neural network. The audio embedding neural network 116 and the image embedding neural network 124 are embedding neural networks configured to process audio inputs and image inputs, respectively, as described in greater detail below.

The fusion neural network 114 is configured to combine the output of the audio embedding neural network 116 and the output of the image embedding neural network 124. The inputs to the fusion neural network 114 are the output of the audio embedding neural network 116 and the output of the image embedding neural network 124. The output of the fusion neural network 114 defines a prediction as to whether the target person is speaking during the portion of the video 106. The term "prediction" refers to a determination made by the fusion neural network 114 as to whether the target person is speaking during a portion of the video 106. The prediction may be expressed as a probability of whether the target person is speaking, which may take the form of a floating point value between 0.0 and 1.0. Alternatively or additionally, the prediction may be represented as a binary value (i.e., "true" or "false") that indicates whether the target person has been determined to be speaking.

The fusion neural network 114 may be implemented using a recurrent neural network, as described in greater detail below. As known to those skilled in the art, a recurrent neural network is a type of artificial neural network that has an internal state (or memory), such that the output of the recurrent neural network is a function of both its input and its internal state. The internal state of the recurrent neural network is iteratively updated to generate a new internal state as a function of the current internal state and the current input. The current internal state of the recurrent neural network is, in turn, a function of the previous inputs and the previous internal states. The recurrent neural network allows accurate prediction of whether a target person is speaking in a particular frame of the video 106. This is because the probability that the target person is speaking in any given frame is influenced by whether the person is speaking in the previous frame or frames. Thus, the internal state of the recurrent neural network allows the prediction for a previous frame to be taken into account when determining the prediction for the current frame, thereby improving the overall accuracy of the prediction.

The audio data 104 and video frames 102 processed by the speech classification system 100 may be represented in any suitable digital format. For example, the audio data 104 may be represented as an audio waveform embodied as a vector of numerical amplitude values. As another example, each of the video frames 102 may be represented as a multi-dimensional matrix of numerical values. In a particular example, each of the video frames may be represented as a respective red-green-blue (RGB) image embodied as a three-dimensional (3D) matrix of numerical values.

To generate the speech classification data 110, the system 100 generates a latent representation of the audio data 104 and a latent representation of the target person images 112, which are images cropped from the video frames 102 that depict the mouth of the target person. The system 100 then provides the latent representations of the audio data 104 and of the target person images 112 to the fusion neural network 114. The fusion neural network 114 is configured to process these latent representations to generate the speech classification data 110, as described in greater detail below.

In this specification, a latent representation of a set of data (e.g., the audio data 104 or the target person images 112) refers to a numerical representation of the data (e.g., as a vector or matrix) that is generated internally by the system 100. For example, the system 100 may generate a latent representation of a set of data by processing the data using a neural network and taking the output of the neural network as the latent representation of the data.

To generate a latent representation of the audio data 104, the system 100 processes the audio data 104 using the audio embedding neural network 116 in accordance with the current values of the audio embedding neural network parameters. The system 100 takes the output of the final layer of the audio embedding neural network 116 as the latent representation of the audio data 104. Optionally, the system 100 may process the audio data 104 using an audio processing engine 118 to generate an alternative representation 120 of the audio data before providing it to the audio embedding neural network 116. For example, the audio processing engine 118 may process a representation of the audio data 104 as a one-dimensional audio waveform to generate an alternative representation 120 of the audio data as a two-dimensional array of mel-frequency cepstral coefficients (MFCCs) or as a mel-frequency spectrogram. After generating the alternative representation 120 of the audio data 104, the system 100 may process the alternative representation 120 using the audio embedding neural network 116, thereby generating the latent representation of the audio data 104.

In general, the audio embedding neural network 116 may be implemented with any suitable neural network architecture. For example, if the audio embedding neural network 116 is configured to directly process a representation of the audio data 104 as a one-dimensional audio waveform, the architecture of the audio embedding neural network may include one or more one-dimensional convolutional layers. A one-dimensional convolutional layer refers to a convolutional layer defined by one-dimensional convolutional filters. As another example, if the audio embedding neural network 116 is configured to process a representation of the audio data 104 as a two-dimensional array of mel-frequency cepstral coefficients, the architecture of the audio embedding neural network 116 may include one or more two-dimensional convolutional layers. A two-dimensional convolutional layer refers to a convolutional layer defined by two-dimensional convolutional filters. In some cases, the final layer of the audio embedding neural network 116 is a fully-connected layer, and the system 100 may take the latent representation of the audio data 104 to be the one-dimensional vector output by the fully-connected layer.
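
As an illustration only, a minimal PyTorch sketch of an audio embedding network of this kind might look as follows. The layer sizes, the 32-coefficient MFCC input, and the 128-dimensional embedding are assumptions made for the example, not values taken from this specification.

```python
import torch
import torch.nn as nn

class AudioEmbeddingNetwork(nn.Module):
    """Maps a 2-D array of MFCC features to a fixed-size latent vector."""

    def __init__(self, n_mfcc: int = 32, embedding_dim: int = 128):
        super().__init__()
        # Two-dimensional convolutions over the (time, coefficient) plane.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        # Final fully-connected layer whose output is taken as the latent representation.
        self.fc = nn.Linear(32 * 4 * 4, embedding_dim)

    def forward(self, mfcc: torch.Tensor) -> torch.Tensor:
        # mfcc: [batch, 1, time_steps, n_mfcc]
        features = self.conv(mfcc)
        return self.fc(features.flatten(start_dim=1))
```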

The system 100 processes each of the video frames 102 and the person identification data 108 using a cropping engine 122 to generate the target person images 112. The cropping engine 122 is configured to determine the location of the target person identified by the person identification data 108 in each video frame 102 and to crop the portion of each video frame 102 that depicts the mouth of the target person to generate the corresponding target person image 112. In some cases, the cropping engine 122 is configured to crop a portion of each video frame 102 that depicts more than the mouth of the target person. For example, the cropping engine 122 may be configured to crop a portion of each video frame 102 that depicts an area of the target person's face that includes the mouth, the entire face of the target person, or even the entire body of the target person.

The person identification data 108 identifying the target person may be represented in any suitable format. For example, the person identification data 108 may be a latent representation of the target person's face generated using a face embedding neural network (e.g., a FaceNet neural network). As another example, the person identification data 108 may be data that indicates the location of the target person's face in a previous video frame (e.g., as a bounding box surrounding the target person's face in the previous video frame). In this example, to determine the location of the target person in a video frame 102, the cropping engine 122 may use a face detection neural network to detect the location of each face (e.g., represented by a bounding box) in the video frame 102. The cropping engine 122 may then determine the position of the target person in the video frame 102 to be the detected face that is "closest" to the known position of the target person's face in the previous video frame (e.g., as measured by bounding box overlap). An example process by which the cropping engine 122 determines the location of the target person in each video frame 102 using the person identification data 108 is described with reference to FIG. 3.
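
One simple way to implement the "closest detected face" rule is to score each detected bounding box by its intersection-over-union (IoU) with the target person's box from the previous frame. The sketch below is only illustrative; the (x_min, y_min, x_max, y_max) box format is an assumption, not something specified in this document.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def match_target_face(detected_boxes, previous_target_box):
    """Return the detected box that overlaps most with the previous target box."""
    return max(detected_boxes, key=lambda box: iou(box, previous_target_box))
```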

The system 100 may generate a latent representation of the target person images 112 by concatenating the target person images 112 and processing the concatenated images using the image embedding neural network 124, in accordance with the current values of the image embedding neural network parameters. The system may take the output of the final layer of the image embedding neural network 124 as the latent representation of the target person images 112.

In general, the image embedding neural network 124 may be implemented with any suitable neural network architecture. For example, the architecture of the image embedding neural network may include one or more two-dimensional or three-dimensional convolutional layers (i.e., convolutional layers defined by two-dimensional or three-dimensional convolutional filters). In some cases, the final layer of the image embedding neural network 124 is a fully-connected layer, and the system 100 may take the latent representation of the target person images 112 to be the one-dimensional vector output by the fully-connected layer.
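
For concreteness, a minimal image embedding network along these lines could be sketched as below. The channel counts and the 128-dimensional output are illustrative assumptions rather than values from the specification.

```python
import torch
import torch.nn as nn

class ImageEmbeddingNetwork(nn.Module):
    """Maps a cropped target person image (RGB) to a fixed-size latent vector."""

    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((2, 2)),
        )
        # Fully-connected final layer; its output is taken as the latent representation.
        self.fc = nn.Linear(128 * 2 * 2, embedding_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: [batch, 3, height, width], e.g. a crop around the mouth region.
        return self.fc(self.conv(image).flatten(start_dim=1))
```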

The system 100 concatenates the latent representation of the audio data 104 with the latent representation of the target person images 112 and provides the concatenated latent representation to the fusion neural network 114. The fusion neural network 114 processes the concatenated latent representation in accordance with the current values of the fusion neural network parameters to generate the corresponding speech classification data 110. The speech classification data 110 defines a prediction (e.g., a numerical probability value between 0 and 1) as to whether the target person is speaking during the portion of the video characterized by the video frames 102 and the corresponding audio data 104. Intuitively, the fusion neural network 114 can be understood as learning to distinguish complex correlations between the different positions and movements of the target person's mouth and changes in the corresponding audio data 104. When the target person images 112 depict more than the mouth of the target person (e.g., they depict the face or the entire body), the fusion neural network 114 can further learn to distinguish complex correlations involving the positions and movements of the target person's face and body.

In general, the fusion neural network 114 may be implemented with any suitable neural network architecture. For example, the fusion neural network 114 may include one or more convolutional neural network layers, one or more fully-connected neural network layers, or both.

In some embodiments, the fusion neural network 114 is implemented as a recurrent neural network. For example, the fusion neural network 114 may be implemented as a gated recurrent unit (GRU) or a stack of multiple GRUs. In these embodiments, the fusion neural network 114 maintains an internal state that can be understood as summarizing the audio data and target person images from previous portions of the video 106 that have already been processed by the system 100. The fusion neural network 114 uses the internal state it maintains to generate the speech classification data 110 for the audio data 104 and target person images 112 that the system is currently processing. Thus, when the fusion neural network 114 is implemented as a recurrent neural network, it can use its "memory" of video frames and audio data from previously processed portions of the video 106 to generate more accurate speech classification data 110. In this manner, the fusion neural network 114 may generate a sequence of speech classification data 110 outputs that define continuous (i.e., uninterrupted) durations of the video 106 during which the target person is predicted to be "speaking" or "not speaking".
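
The following sketch shows one plausible way to realize a GRU-based fusion network that consumes the concatenated latent representations and carries its internal state across successive portions of the video. The hidden size and single-GRU-cell design are assumptions made for the example.

```python
import torch
import torch.nn as nn

class FusionNetwork(nn.Module):
    """GRU-based fusion network over concatenated audio and image latents."""

    def __init__(self, audio_dim: int = 128, image_dim: int = 128, hidden_dim: int = 128):
        super().__init__()
        self.gru = nn.GRUCell(audio_dim + image_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, 1)

    def forward(self, audio_latent, image_latent, state=None):
        # Concatenate the two latent representations for the current video portion.
        fused_input = torch.cat([audio_latent, image_latent], dim=-1)
        new_state = self.gru(fused_input, state)                     # update internal state
        speaking_prob = torch.sigmoid(self.classifier(new_state))    # prediction in [0, 1]
        return speaking_prob, new_state
```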

Conversely, if the fusion neural network 114 were not implemented as a recurrent neural network and instead independently processed the audio data 104 and video frames 102 from each portion of the video, the sequence of speech classification data 110 outputs generated by the system 100 could be noisy. That is, the sequence of speech classification data 110 outputs generated by the system 100 could include very rapid transitions (e.g., within a time period of 0.1 seconds or less) between predictions of "speaking" and "not speaking" for the target person. Such rapid transitions are not realistic and may degrade the performance of downstream systems that use the speech classification data 110 output by the system 100.

The system 100 includes a training engine 126 that is configured to jointly train the neural networks included in the system 100 (i.e., the audio embedding neural network 116, the image embedding neural network 124, and the fusion neural network 114) through an end-to-end optimization process. That is, the neural networks included in the system 100 are jointly trained by back-propagating gradients of a loss function through the fusion neural network 114 into the audio embedding neural network 116 and the image embedding neural network 124. By jointly training the neural networks included in the system 100 using an end-to-end optimization process, the training engine 126 can determine trained neural network parameter values that enable the system 100 to generate more accurate speech classification data 110 than if the neural networks were trained separately.

The training engine 126 trains the neural networks included in the system 100 based on a set of training data 128. The training data 128 includes a plurality of training examples 130, where each training example includes: (i) training audio data and training target person images, and (ii) a label indicating target speech classification data for the training audio data and the training target person images. The training engine 126 iteratively updates the parameter values of the neural networks included in the system 100 so that, by processing the training audio data and the training target person images, the neural networks generate speech classification data 110 that matches the labels included in the training examples. An example process for training the neural networks included in the system 100 is described with reference to FIG. 4.
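
A single end-to-end training step over one training example might look like the following sketch, which assumes the three modules sketched earlier, a binary "speaking" label, and the Adam optimizer; it is an illustration under those assumptions, not the training engine's actual implementation.

```python
import torch
import torch.nn as nn

audio_net, image_net, fusion_net = AudioEmbeddingNetwork(), ImageEmbeddingNetwork(), FusionNetwork()
params = list(audio_net.parameters()) + list(image_net.parameters()) + list(fusion_net.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
bce = nn.BCELoss()

def training_step(mfcc, person_image, label, state=None):
    """One joint update of all three networks on a single training example."""
    audio_latent = audio_net(mfcc)            # latent representation of the training audio data
    image_latent = image_net(person_image)    # latent representation of the training target person image
    speaking_prob, state = fusion_net(audio_latent, image_latent, state)
    loss = bce(speaking_prob.squeeze(-1), label)
    optimizer.zero_grad()
    loss.backward()    # gradients flow through the fusion net into both embedding nets
    optimizer.step()
    return loss.item(), state.detach()
```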

FIG. 2 illustrates an example data flow in which the speech classification system 100 sequentially processes video frames and audio data from various portions of the video 106 to generate a corresponding sequence of speech classification data 202 outputs. The sequence of speech classification data 202 outputs defines corresponding predictions of whether a target person, specified by person identification data 204, is speaking during the portions of the video 106.

In the example shown in FIG. 2, the system 100 processes: (i) video frames 206-A and 206-B, (ii) audio data 208-A and 208-B corresponding to the video frames 206-A and 206-B, and (iii) the person identification data 204 to generate an output of speech classification data 210. The speech classification data 210 defines a "yes" prediction, i.e., a prediction that the target person specified by the person identification data 204 is speaking during the portion of the video 106 characterized by the video frames 206-A and 206-B and the corresponding audio data 208-A and 208-B. Similarly, the system processes: (i) video frames 206-C and 206-D, (ii) audio data 208-C and 208-D, and (iii) the person identification data 204 to generate speech classification data 212. The speech classification data 212 defines a "yes" prediction. The system also processes: (i) video frames 206-E and 206-F, (ii) audio data 208-E and 208-F, and (iii) the person identification data 204 to generate speech classification data 214. The speech classification data 214 defines a "no" prediction, i.e., a prediction that the target person specified by the person identification data 204 is not speaking during the portion of the video 106 characterized by the video frames 206-E and 206-F and the corresponding audio data 208-E and 208-F.

For clarity, the example shown in FIG. 2 depicts the video frames and corresponding audio data processed by the system 100 to generate the sequence of speech classification data 202 outputs as disjoint. For example, the video frames 206-A and 206-B and the audio data 208-A and 208-B processed by the system 100 to generate the speech classification data 210 do not overlap the video frames 206-C and 206-D and the audio data 208-C and 208-D processed by the system 100 to generate the speech classification data 212. In general, however, the video frames and corresponding audio data processed by the system 100 may characterize overlapping portions of the video 106.

The following examples illustrate how different video processing systems can use the speech classification data generated by the speech classification system 100. These examples are intended to be illustrative and should not be construed as limiting the possible applications of the speech classification data generated by the speech classification system 100.

In one example, the speech classification data generated by the speech classification system 100 can be provided to a video conferencing system 216. In this example, video may be generated by a camera and microphone of the video conferencing system 216 in a conference room where multiple people are participating in a video conference. The video may be processed in real time by the speech classification system 100 to generate real-time speech classification data outputs. That is, the video frames and audio data generated by the video conferencing system 216 may be provided to the speech classification system 100 in real time as they are generated. The speech classification system 100 may process the provided video to generate corresponding speech classification outputs that define a prediction of whether each person depicted in the video is currently speaking. The speech classification data may then be processed by the video conferencing system 216 to generate a processed video 218 that is sent to the other participants in the video conference. In a particular example, the video conferencing system 216 may annotate the video with a bounding box surrounding the face of the current speaker to generate the processed video 218. In another particular example, the video conferencing system 216 may zoom in on the face of the current speaker in the video to generate the processed video 218.

In another example, the speech classification data generated by the speech classification system 100 can be provided to an automatic translation system 220. The automatic translation system 220 may be configured to process the video to generate a translated video 222. In the translated video 222, the speech spoken in a natural language (e.g., English) by each speaker depicted in the video is replaced by corresponding speech spoken in a different natural language (e.g., French) that is a translation of the speaker's words. The speech classification data generated by the speech classification system 100 for the video may define the portions of the video during which each person depicted in the video is speaking. Audio corresponding to these portions of the video may be transcribed (e.g., by a speech recognition system), translated into a different language (e.g., by a machine translation system), and verbalized in the different language (e.g., by a speech synthesis system). The automatic translation system 220 may replace the original audio of the video with the translated audio generated in this manner to generate the translated video 222.

FIG. 3 is a flow diagram of an example process 300 for generating speech classification data. For convenience, the process 300 will be described as being performed by a system of one or more computers located at one or more locations. For example, a speech classification system suitably programmed in accordance with this specification, such as the speech classification system 100 of FIG. 1, can perform the process 300.

The system obtains one or more video frames from a video (302). Typically, the video frames represent only a small portion of the video. For example, the video may include thousands of video frames, while the system may be configured to process only 3 video frames (and the audio data corresponding to the 3 video frames) at a time. The video frames may be obtained from a video stored in a data store (e.g., a logical data store or a physical data storage device) or may be obtained in real time from a video capture device (e.g., a digital video camera).

The system generates target person images from the video frames (304). To generate the target person images from the video frames, the system receives person identification data that specifies a target person depicted in the video frames. The system determines the location of the target person in the video frames and crops the portion of each video frame that depicts the mouth of the target person to generate the target person images. In some cases, the system crops more of each video frame than the portion depicting the mouth of the target person. For example, the system may crop a region of each video frame that depicts an area of the target person's face that includes the mouth, the entire face of the target person, or even the entire body of the target person.

The person identification data specifying the target person may be represented in any suitable format. For example, the person identification data may be a latent representation of the target person's face generated using a face embedding neural network (e.g., a FaceNet neural network). In this example, to determine the location of the target person in a video frame, the system may use a face detection neural network to detect the location of each face in the video frame. The system may then use the face embedding neural network to generate a respective latent representation of each detected face. The system may determine the detected face whose latent representation is most similar to the latent representation specified by the person identification data to be the target person. The similarity between latent representations may be determined according to any suitable similarity metric, such as a Euclidean similarity metric or a cosine similarity metric.
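
As an illustration, matching the detected faces against the target person's reference embedding under a cosine similarity metric could be done as in the sketch below; the embedding shapes are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def find_target_face(face_embeddings: torch.Tensor, target_embedding: torch.Tensor) -> int:
    """Return the index of the detected face most similar to the target embedding.

    face_embeddings: [num_faces, dim] latent representations of the detected faces.
    target_embedding: [dim] latent representation from the person identification data.
    """
    similarities = F.cosine_similarity(face_embeddings, target_embedding.unsqueeze(0), dim=-1)
    return int(similarities.argmax().item())
```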

The system generates a latent representation of the target person images (306). To generate the latent representation of the target person images, the system may concatenate the target person images and process the concatenated target person images using the image embedding neural network, in accordance with the current values of the image embedding neural network parameters. Alternatively, the system may process each of the target person images separately using the image embedding neural network and then determine the latent representation of the target person images as a concatenation of the respective outputs of the image embedding neural network for each of the target person images. The system may take the output of the final layer of the image embedding neural network as the latent representation of the target person images.

The system obtains audio data from the video corresponding to the obtained video frames (308). The audio data may correspond to the exact same portion of the video as the obtained video frames, but may also correspond to a larger or smaller portion of the video than the obtained video frames in some cases.

The system generates a latent representation of the obtained audio data (310). To generate the latent representation of the audio data, the system processes the audio data using the audio embedding neural network in accordance with the current values of the audio embedding neural network parameters. The system takes the output of the final layer of the audio embedding neural network as the latent representation of the audio data. Optionally, the system may process the audio data to generate an alternative representation of the audio data before providing it to the audio embedding neural network. For example, the system may process a representation of the audio data as a one-dimensional audio waveform to generate an alternative representation of the audio data as a two-dimensional array of mel-frequency cepstral coefficients (MFCCs). After generating the alternative representation of the audio data, the system may process the alternative representation of the audio data using the audio embedding neural network, thereby generating the latent representation of the audio data.
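
As a hedged example, the alternative MFCC representation could be computed with a standard audio library such as librosa (which is not named in this specification); the sample rate, coefficient count, and hop length below are illustrative assumptions.

```python
import librosa
import numpy as np

def waveform_to_mfcc(waveform: np.ndarray, sample_rate: int = 16000, n_mfcc: int = 32) -> np.ndarray:
    """Convert a 1-D audio waveform into a 2-D array of MFCCs (time x coefficients)."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sample_rate, n_mfcc=n_mfcc, hop_length=160)
    return mfcc.T  # shape: [time_steps, n_mfcc]
```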

It should be understood that FIG. 3 does not imply that operations 306 and 310 must be performed in any particular order. That is, generating the latent representation of the target person images (306) may be performed before, after, or substantially simultaneously with generating the latent representation of the audio data (310).

The system processes the latent representation of the target person images and the latent representation of the audio data using a fusion neural network to generate speech classification data (312). The speech classification data defines a prediction (e.g., a numerical probability value between 0 and 1) as to whether the target person is speaking during the portion of the video characterized by the obtained video frames and the corresponding audio data. If the fusion neural network is a recurrent neural network (e.g., a GRU), the system may process the latent representation of the target person images and the latent representation of the audio data to update the current internal state of the recurrent neural network. The system then processes the new internal state of the recurrent neural network to generate the speech classification data.

After generating the speech classification data for the portion of the video characterized by the obtained video frames and audio data, the system may return to step 302 and repeat the preceding steps for subsequent portions of the video. Alternatively, if the portion of the video characterized by the obtained video frames and audio data is the last portion of the video, the system can provide the speech classification data generated for the different portions of the video for use by another system. For example, as described with reference to FIG. 2, the system may provide the generated speech classification data for use by a video conferencing system or an automatic translation system.
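
Putting the steps of process 300 together, a streaming loop over successive portions of a video might look like the sketch below, carrying the recurrent internal state from one portion to the next. The `crop_target_person` and `compute_mfcc` callables are hypothetical stand-ins for steps 304-310 and must be supplied by the caller.

```python
def classify_speech_over_video(video_portions, crop_target_person, compute_mfcc,
                               audio_net, image_net, fusion_net):
    """Yield a speaking prediction for each successive portion of a video.

    video_portions: iterable of (video_frames, audio_data) pairs.
    crop_target_person / compute_mfcc: caller-supplied callables standing in for
    steps 304 and 308-310 of the process (hypothetical helpers, not from the spec).
    """
    state = None  # internal state of the recurrent fusion network
    for frames, audio in video_portions:
        image_latent = image_net(crop_target_person(frames))          # steps 304-306
        audio_latent = audio_net(compute_mfcc(audio))                  # steps 308-310
        speaking_prob, state = fusion_net(audio_latent, image_latent, state)  # step 312
        yield speaking_prob
```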

FIG. 4 is a flow diagram of an example process 400 for jointly training an audio embedding neural network, an image embedding neural network, and a fusion neural network through an end-to-end optimization process. For convenience, the process 400 will be described as being performed by a system of one or more computers located at one or more locations. For example, a speech classification system suitably programmed in accordance with this specification, such as the speech classification system 100 of FIG. 1, can perform the process 400.

The system obtains one or more training examples from a set of training data comprising a plurality of training examples (402). Each training example includes: (i) training audio data and training target person images, and (ii) a label indicating target speech classification data for the training audio data and the training target person images. The labels may have been determined manually by human evaluators. The system may randomly sample from the set of training data to obtain the training examples.

For each training example, the system processes the training audio data and training target person images included in the training example to generate corresponding speech classification data for the training example (404). To generate the speech classification data for a training example, the system processes the training audio data from the training example using the audio embedding neural network, in accordance with the current values of the audio embedding neural network parameters, to generate a latent representation of the training audio data. The system processes the training target person images from the training example using the image embedding neural network, in accordance with the current values of the image embedding neural network parameters, to generate a latent representation of the training target person images. The system then processes the latent representations of the training audio data and the training target person images using the fusion neural network, in accordance with the current values of the fusion neural network parameters, to generate the speech classification data for the training example.

The system determines a gradient of a loss function with respect to the current parameter values of the audio embedding neural network, the image embedding neural network, and the fusion neural network (406). In general, the loss function compares the speech classification data generated for each training example to the label indicating the target speech classification data. For example, the loss function may be a binary cross-entropy loss function. Optionally, the loss function may include a regularization term (e.g., an L2 penalty on the neural network weights). The system may determine the gradient using a backpropagation procedure.

In some cases, the system processes the training examples to generate an auxiliary speech classification data output that is generated based only on the training target person images (i.e., without relying on the training audio data). In these cases, the loss function may include an additional term that compares the speech classification data generated for each training example based only on the training target person images with the target speech classification data.
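
One way to express the optional auxiliary term is sketched below: the total loss adds a binary cross-entropy term for a visual-only prediction to the main audiovisual term. The `aux_weight` coefficient and the existence of a separate visual-only classifier head are assumptions made for illustration only.

```python
import torch
import torch.nn.functional as F

def total_loss(audiovisual_prob, visual_only_prob, label, aux_weight: float = 0.5):
    """Binary cross-entropy on the main prediction plus an auxiliary visual-only term."""
    main_term = F.binary_cross_entropy(audiovisual_prob, label)
    aux_term = F.binary_cross_entropy(visual_only_prob, label)  # uses images only
    return main_term + aux_weight * aux_term
```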

The system adjusts the current parameter values of the audio embedding neural network, the image embedding neural network, and the fusion neural network using the determined gradients (408). In general, the system may use the gradients of the loss function to adjust the current parameter values of the neural networks according to the update rule of any suitable gradient descent optimization algorithm (e.g., Adam, RMSprop, Adagrad, Adadelta, AdaMax, and the like).

The term "configured" is used herein in connection with system and computer program components. For a system of one or more computers to be configured to perform a particular operation or action, it is meant that the system has installed thereon software, firmware, hardware, or a combination thereof that in operation causes the system to perform the operation or action. For one or more computer programs to be configured to perform particular operations or actions, it is meant that the one or more programs include instructions which, when executed by a data processing apparatus, cause the apparatus to perform the operations or actions.

Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware (including the structures disclosed in this specification and their structural equivalents), or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible, non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by data processing apparatus.

The term "data processing apparatus" refers to data processing hardware and includes all kinds of apparatus, devices and machines for processing data, including for example a programmable processor, a computer or multiple processors or computers. The apparatus may also be or further include special purpose logic circuitry, such as a Field Programmable Gate Array (FPGA) or an Application-Specific Integrated Circuit (ASIC). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for the computer program, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

A computer program (which may also be referred to or described as a program, software application, app, module, software module, script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages; and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.

In this specification, the term "engine" is used broadly to refer to a software-based system, subsystem, or process that is programmed to perform one or more particular functions. Typically, the engine will be implemented as one or more software modules or components installed on one or more computers in one or more locations. In some cases, one or more computers will be dedicated to a particular engine; in other cases, multiple engines may be installed and run on the same computer or on multiple computers.

The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, or in combination with, special purpose logic circuitry, e.g., an FPGA or an ASIC.

Computers suitable for the execution of a computer program may be based on general or special purpose microprocessors or both, or on any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices. Furthermore, the computer may be embedded in another device, such as a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, such as a Universal Serial Bus (USB) flash drive, to name a few.

Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and storage devices, including by way of example: semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.

To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having: a Display device (e.g., a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) monitor) for displaying information to a user; and a keyboard and a pointing device, such as a mouse or a trackball, by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. Additionally, the computer may interact with the user by: sending and receiving documents to and from a device used by a user; for example, by sending a web page to a web browser on the user's device in response to a request received from the web browser. Also, the computer may interact with the user by sending a text message or other form of message to a personal device (e.g., a smartphone running a messaging application), and in return receiving a response message from the user.

The data processing apparatus for implementing the machine learning model may also comprise, for example, a dedicated hardware accelerator unit for processing common and compute-intensive parts of machine learning training or production (i.e., inference) workloads.

The machine learning model may be implemented and deployed using a machine learning framework (e.g., a TensorFlow framework, a Microsoft Cognitive Toolkit framework, an Apache Singa framework, or an Apache MXNet framework).

Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface, a Web browser, or an app, or any combination of one or more such back-end, middleware, or front-end components, through which a user can interact with an implementation of the subject matter described in this specification. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a Local Area Network (LAN) and a Wide Area Network (WAN), such as the internet.

The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, the server sends data (e.g., HTML pages) to the user device, for example, for the purpose of displaying data to and receiving user input from a user interacting with the device as a client. Data generated at the user device, e.g., a result of the user interaction, may be received at the server from the device.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings and are recited in the claims in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated in a single software product or packaged into multiple software products.

Specific embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
