Video description method based on bilinear adaptive feature interaction and target perception


Note: This technology, "Video description method based on bilinear adaptive feature interaction and target perception", was designed and created by Ma Miao (马苗), Tian Zhuoyu (田卓钰), Liu Shichang (刘士昌) and Guo Min (郭敏) on 2021-07-27. Its main content is as follows: a video description method based on bilinear adaptive feature interaction and target perception, consisting of constructing a video description network model, training the video description network model, and testing on the test-set videos. The invention adopts a video description method using an encoder-decoder structure. A bilinear adaptive feature interaction module is used: the dynamic features, static features and target features of the video are extracted separately and fused interactively to form complementary multi-modal features that depict the video content at a fine granularity. In the target feature extraction part, a video target perception feature extraction branch is adopted, which suppresses background information while extracting key target information, so that more information is used to express the real targets in the video. The fused features are input into a natural language description model built on a gated recurrent unit to generate accurate text. The invention has the advantages of accurate and detailed video description results, and is suitable for converting videos into text with fusion of arbitrary types of features.

1. A video description method based on bilinear adaptive feature interaction and target perception is characterized by comprising the following steps:

(1) constructing a video description network model

Constructing a video description model based on bilinear adaptive feature interaction and target perception by using an encoder-decoder structure under the PyTorch framework;

the video description model based on bilinear adaptive feature interaction and target perception is composed of an encoder and a decoder connected in series; the encoder is composed of a word embedding feature extraction branch (1), a bilinear adaptive feature interaction module (2), a gated recurrent unit (3), a semantic feature extraction branch (4), a video target perception feature extraction branch (5), a video static feature extraction branch (6) and a video dynamic feature extraction branch (7); the outputs of the video dynamic feature extraction branch (7), the video static feature extraction branch (6), the video target perception feature extraction branch (5) and the word embedding feature extraction branch (1) are connected to the input of the bilinear adaptive feature interaction module (2); the outputs of the semantic feature extraction branch (4) and the bilinear adaptive feature interaction module (2) are connected to the input of the gated recurrent unit (3); the gated recurrent unit (3) constitutes the decoder and outputs the video description text;

(2) training video description network model

(a) Setting hyper-parameters of a network

Taking 1200 videos from the internationally published benchmark dataset MSVD as the training set, 100 videos as the validation set and 670 videos as the test set, wherein each input video frame of the training set has a pixel size of 224 × 224 and the batch size is 64; during training, the video description network model parameters are initialized with the Xavier method, adaptive moment estimation (Adam) is used as the optimizer of the video description network model, the learning rate is set to 0.0002-0.0008, and the video description network model is trained for 45-50 rounds in total;

(b) training video description network model

Inputting all videos in the training set into the video description network model, carrying out forward propagation and calculating a loss function L, wherein the loss function L is the cross-entropy loss:

L(η) = -log P(S | φ(V); η)

wherein log(·) is the logarithm with base e, P(·) is the confidence of the prediction statement S output by the video description network model, φ(V) is the video feature vector corresponding to the video V, and η denotes the video description network model parameters to be trained;

reducing the loss value by back-propagation using the adaptive moment estimation method, repeatedly cycling forward propagation and back-propagation, and updating the weights and biases of the video description network until 45-50 rounds are reached; training is then finished and the trained video description network is obtained;

(3) detecting test set video

Inputting the videos in the test set into the trained video description network, and outputting the video description text.

2. The video description method based on bilinear adaptive feature interaction and target perception according to claim 1, wherein in step (1) of constructing the video description network model, the video target perception feature extraction branch (5) is constructed as follows: for the Center-ness heat map output by an FCOS detection model pre-trained on the MS COCO dataset, the activated connected regions in the Center-ness heat map are detected with an eight-connectivity connected-component detection method, and connected regions smaller than 3 pixels are filtered out as useless noise information, yielding the target perception map Map_object; the target perception map Map_object and the P7-layer feature map Map_7 of the feature pyramid network in the FCOS detection model are used to obtain the single-frame target feature f_k^object according to the following formula:

f_k^object = GAP(Map_object ⊙ Map_7)

wherein k is the value obtained by dividing the frame count of the video V by 20 and rounding, ⊙ is the element-wise multiplication of the feature maps at corresponding positions, and GAP(·) is the global average pooling operation;

the video target perception feature F^object is obtained by aggregating the single-frame target features f_k^object of all sampled frames of the video V.

3. The video description method based on bilinear adaptive feature interaction and target perception according to claim 1, wherein in step (1) of constructing the video description network model, the bilinear adaptive feature interaction module (2) is constructed as follows: the global feature, the video target perception feature and the word embedding feature are taken as input features, the global feature being obtained by concatenating the video dynamic feature and the video static feature; bilinear feature interaction is performed on the input features to obtain three interaction features, wherein Sign(·) is the sign function and ε is a small constant in the range 10^-12 to 10^-8;

the weight of each interaction feature is obtained from the three interaction features by a 1 × 1 convolution Conv_1×1(·) followed by the Sigmoid activation function Sig(·);

the three interaction features and their corresponding weights are combined to obtain the final fused feature, wherein Concat(·) denotes the feature concatenation operation along the channel dimension.

Technical Field

The invention belongs to the interdisciplinary technical field of computer vision and natural language processing, and particularly relates to the generation of language descriptions for videos.

Background

Video description technology uses natural language to convert the visual information of a given video into semantic text. It has broad application prospects and value in fields such as human-computer interaction, video surveillance and visual assistance. However, since the video description task involves interdisciplinary research across computer vision and natural language processing, bridging the gap between low-level visual features and high-level semantic information is complicated, which makes the task very difficult.

Video description research started with template- or rule-based methods, in which early researchers extracted visual information from videos by means of handcrafted features, used recognition and detection techniques to obtain semantic objects such as people, actions and scenes, and filled these semantic objects into the corresponding positions of a predefined template or rule to compose description sentences. Although template- or rule-based video description methods worked well in early research, they mainly focus on detecting predefined entities or events and generating descriptions with fixed templates and rules, which severely limits the effective expression of semantics; the generated sentences are not flexible enough in either syntactic structure or semantic expression to describe all possible events in a video.

In recent years, the great success of deep learning in almost all sub-fields of computer vision has also revolutionized video description methods. Researchers began to use deep convolutional neural networks to encode video features, and recurrent neural networks or their variant, long short-term memory networks, to decode the visual features and generate description sentences. However, existing models lack methods for fusing the different types of features extracted from videos, and the multi-modal fusion methods they do use rely on a single operation, making it difficult to exploit the advantages of the various features efficiently. Existing models that use target features only employ a detector to obtain a fixed number of detection boxes, screened by confidence ranking, as target regions from which target features are extracted. However, the number of targets in a video is not fixed, and it also varies across different frames of the same video; a fixed number of detection boxes cannot fully mine the target information in the video and introduces a large amount of background noise, which severely limits the usefulness of the target features and makes it difficult to generate high-quality video description text accurately.

In the technical field of video description, a technical problem to be urgently solved at present is to provide a technical scheme capable of accurately and quickly converting video images into text.

Disclosure of Invention

The technical problem to be solved by the invention is to overcome the defects of the prior art and provide a video description method based on bilinear adaptive feature interaction and target perception, which can convert a video into text effectively, accurately and quickly.

The technical scheme adopted for solving the technical problems comprises the following steps:

(1) constructing a video description network model

Under the PyTorch framework, a video description model based on bilinear adaptive feature interaction and target perception is constructed using an encoder-decoder structure.

The video description model based on bilinear adaptive feature interaction and target perception is formed by connecting an encoder and a decoder in series. The encoder is formed by a word embedding feature extraction branch, a bilinear adaptive feature interaction module, a gated recurrent unit, a semantic feature extraction branch, a video target perception feature extraction branch, a video static feature extraction branch and a video dynamic feature extraction branch. The outputs of the video dynamic feature extraction branch, the video static feature extraction branch, the video target perception feature extraction branch and the word embedding feature extraction branch are connected to the input of the bilinear adaptive feature interaction module; the outputs of the semantic feature extraction branch and the bilinear adaptive feature interaction module are connected to the input of the gated recurrent unit; the gated recurrent unit forms the decoder and outputs the video description text.

(2) Training video description network model

(a) Setting hyper-parameters of a network

1200 videos from the internationally published benchmark dataset MSVD are taken as the training set, 100 videos as the validation set and 670 videos as the test set. Each input video frame of the training set has a pixel size of 224 × 224 and the batch size is 64. During training, the video description network model parameters are initialized with the Xavier method, adaptive moment estimation (Adam) is used as the optimizer of the video description network model, the learning rate is set to 0.0002-0.0008, and the video description network model is trained for 45-50 rounds.

(b) Training video description network model

All videos in the training set are input into the video description network model, forward propagation is carried out, and a loss function L is calculated, where the loss function L is the cross-entropy loss:

L(η) = -log P(S | φ(V); η)

where log(·) is the logarithm with base e, P(·) is the confidence of the prediction statement S output by the video description network model, φ(V) is the video feature vector corresponding to the video V, and η denotes the video description network model parameters to be trained.

The loss value is reduced by back-propagation using the adaptive moment estimation method; forward propagation and back-propagation are repeated in a loop, updating the weights and biases of the video description network, until 45-50 rounds are reached; training is then finished and the trained video description network is obtained.

(3) Detecting test set video

The videos in the test set are input into the trained video description network, and the video description text is output.

In step (1) of constructing the video description network model, the video target perception feature extraction branch is constructed as follows: for the Center-ness heat map output by the FCOS detection model pre-trained on the MS COCO dataset, the activated connected regions in the Center-ness heat map are detected with an eight-connectivity connected-component detection method, and connected regions smaller than 3 pixels are filtered out as useless noise information, yielding the target perception map Map_object; the target perception map Map_object and the P7-layer feature map Map_7 of the feature pyramid network in the FCOS detection model are used to obtain the single-frame target feature f_k^object according to the following formula:

f_k^object = GAP(Map_object ⊙ Map_7)

where k is the value obtained by dividing the frame count of the video V by 20 and rounding, ⊙ is the element-wise multiplication of the feature maps at corresponding positions, and GAP(·) is the global average pooling operation.

The video target perception feature F^object is obtained by aggregating the single-frame target features f_k^object of all sampled frames of the video V.

In step (1) of constructing the video description network model, the bilinear adaptive feature interaction module is constructed as follows: the global feature, the video target perception feature and the word embedding feature are taken as input features, the global feature being obtained by concatenating the video dynamic feature and the video static feature; bilinear feature interaction is performed on the input features to obtain three interaction features, where Sign(·) is the sign function and ε is a small constant in the range 10^-12 to 10^-8.

The weight of each interaction feature is obtained from the three interaction features by a 1 × 1 convolution Conv_1×1(·) followed by the Sigmoid activation function Sig(·).

The three interaction features and their corresponding weights are combined to obtain the final fused feature, where Concat(·) denotes the feature concatenation operation along the channel dimension.

According to the invention, a bilinear adaptive feature interaction module is adopted to extract the dynamic features, static features and target features of a video separately and fuse them interactively to form complementary multi-modal features, so that the video content is depicted at a fine granularity. In the target feature extraction part, a target perception feature extraction branch is adopted, so that background information is suppressed while key target information is extracted, and more information is used to express the real targets in the video. The fused features are input into a natural language description model built on a gated recurrent unit to generate high-quality description text. The method has the advantages of more accurate and detailed video description results, aims to solve the technical problem of video description, and is suitable for video description tasks with fusion of arbitrary types of features.

Drawings

FIG. 1 is a flowchart of example 1 of the present invention.

Fig. 2 is a diagram of the video description network model of Fig. 1.

Fig. 3 is a frame extracted from a test set video in the MSVD dataset.

Fig. 4 is the video description text output after the video of Fig. 3 is processed by the model.

Detailed Description

The present invention will be described in further detail below with reference to the drawings and examples, but the present invention is not limited to the embodiments described below.

Example 1

Taking the 1970 videos in the internationally published benchmark dataset MSVD as an example, the video description method based on bilinear adaptive feature interaction and target perception of this embodiment comprises the following steps (see Fig. 1):

(1) constructing a video description network model

Under the PyTorch framework, a video description model based on bilinear adaptive feature interaction and target perception is constructed using an encoder-decoder structure.

In Fig. 2, the video description model based on bilinear adaptive feature interaction and target perception of this embodiment is composed of an encoder and a decoder connected in series. The encoder is composed of a word embedding feature extraction branch 1, a bilinear adaptive feature interaction module 2, a gated recurrent unit 3, a semantic feature extraction branch 4, a video target perception feature extraction branch 5, a video static feature extraction branch 6 and a video dynamic feature extraction branch 7. The outputs of the video dynamic feature extraction branch 7, the video static feature extraction branch 6, the video target perception feature extraction branch 5 and the word embedding feature extraction branch 1 are connected to the input of the bilinear adaptive feature interaction module 2; the outputs of the semantic feature extraction branch 4 and the bilinear adaptive feature interaction module 2 are connected to the input of the gated recurrent unit 3. The gated recurrent unit 3 constitutes the decoder of this embodiment and outputs the video description text.
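As a rough illustration of how the gated recurrent unit 3 could serve as the decoder over the fused and semantic features, consider the PyTorch sketch below; the vocabulary size, feature dimensions, zero-initialized hidden state and teacher-forcing scheme are assumptions made for illustration rather than the configuration of the patent.

```python
import torch
import torch.nn as nn

class GRUCaptionDecoder(nn.Module):
    """Minimal decoder sketch: a GRU consumes the fused visual feature and the
    semantic feature together with the previous word and emits one word per step."""

    def __init__(self, vocab_size=10000, embed_dim=300, feat_dim=1024,
                 sem_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # word embedding branch
        self.gru = nn.GRUCell(embed_dim + feat_dim + sem_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, vocab_size)

    def forward(self, fused_feat, sem_feat, captions):
        # fused_feat: (B, feat_dim) output of the bilinear adaptive feature interaction module
        # sem_feat:   (B, sem_dim)  output of the semantic feature extraction branch
        # captions:   (B, T)        word indices of the description (teacher forcing)
        B, T = captions.shape
        h = fused_feat.new_zeros(B, self.gru.hidden_size)
        logits = []
        for t in range(T):
            w = self.embed(captions[:, t])
            h = self.gru(torch.cat([w, fused_feat, sem_feat], dim=1), h)
            logits.append(self.classifier(h))
        return torch.stack(logits, dim=1)   # (B, T, vocab_size) word distributions
```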

The video target perception feature extraction branch 5 of this embodiment is constructed as follows: for the Center-ness heat map output by an FCOS detection model pre-trained on the MS COCO dataset, the activated connected regions in the Center-ness heat map are detected with an eight-connectivity connected-component detection method, and connected regions smaller than 3 pixels are filtered out as useless noise information, yielding the target perception map Map_object; the target perception map Map_object and the P7-layer feature map Map_7 of the feature pyramid network in the FCOS detection model are used to obtain the single-frame target feature f_k^object according to the following formula:

f_k^object = GAP(Map_object ⊙ Map_7)

wherein k is the value obtained by dividing the frame count of the video V by 20 and rounding, ⊙ is the element-wise multiplication of the feature maps at corresponding positions, and GAP(·) is the global average pooling operation;

the video target perception feature F^object is obtained by aggregating the single-frame target features f_k^object of all sampled frames of the video V.
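A minimal NumPy/SciPy sketch of this single-frame computation is given below. The activation threshold for the Center-ness heat map and the tensor shapes are assumptions made for illustration; the text above only specifies the eight-connectivity detection, the 3-pixel noise filter, the element-wise product with the P7 feature map, and the global average pooling.

```python
import numpy as np
from scipy import ndimage

def single_frame_target_feature(centerness, p7_feat, act_thresh=0.5, min_area=3):
    """Sketch of the target-aware feature for one sampled frame.
    centerness: (H, W) Center-ness heat map from a pre-trained FCOS head.
    p7_feat:    (C, H, W) P7-level feature map of the FCOS feature pyramid.
    act_thresh is an assumed activation threshold; the text only speaks of
    'activated' regions without giving a value."""
    activated = centerness > act_thresh
    # eight-connectivity connected-component labelling
    labels, num = ndimage.label(activated, structure=np.ones((3, 3), dtype=int))
    # keep only connected regions of at least 3 pixels (smaller ones are noise)
    mask = np.zeros_like(activated, dtype=p7_feat.dtype)
    for idx in range(1, num + 1):
        region = labels == idx
        if region.sum() >= min_area:
            mask[region] = 1.0
    # element-wise product with the P7 features, then global average pooling
    masked = p7_feat * mask[None, :, :]            # (C, H, W)
    return masked.mean(axis=(1, 2))                # (C,) single-frame target feature
```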

The bilinear adaptive feature interaction module 2 of this embodiment is constructed as follows: the global feature, the video target perception feature and the word embedding feature are taken as input features, the global feature being obtained by concatenating the video dynamic feature and the video static feature; bilinear feature interaction is performed on the input features to obtain three interaction features, wherein Sign(·) is the sign function and ε is in the range 10^-12 to 10^-8; in this embodiment ε is taken to be 10^-10, and it may also be selected arbitrarily within the range 10^-12 to 10^-8.

The weight of each interaction feature is obtained from the three interaction features by a 1 × 1 convolution Conv_1×1(·) followed by the Sigmoid activation function Sig(·).

The three interaction features and their corresponding weights are combined to obtain the final fused feature, where Concat(·) denotes the feature concatenation operation along the channel dimension.
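The following PyTorch sketch shows one plausible realisation of this module. The exact bilinear interaction formula and the choice of feature pairs are not reproduced in the text above, so a pairwise Hadamard-product interaction with signed square-root normalisation (using Sign(·) and ε as described) is assumed, and all three inputs are assumed to have been projected to a common dimension; the 1 × 1 convolution with Sigmoid weighting and the channel-wise concatenation follow the description.

```python
import torch
import torch.nn as nn

class BilinearAdaptiveFeatureInteraction(nn.Module):
    """Sketch of the bilinear adaptive feature interaction module.
    The interaction formula is an assumption; the adaptive weighting
    (1x1 convolution + Sigmoid) and channel-wise concatenation follow
    the description in the text."""

    def __init__(self, dim=1024, eps=1e-10):
        super().__init__()
        self.eps = eps
        # one 1x1 convolution per interaction feature to predict its weight
        self.weight_convs = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=1) for _ in range(3)])

    def interact(self, a, b):
        # assumed pairwise interaction: Hadamard product + signed square root
        z = a * b
        return torch.sign(z) * torch.sqrt(z.abs() + self.eps)

    def forward(self, f_global, f_object, f_word):
        # f_global, f_object, f_word: (B, dim) features of a common dimension
        inters = [self.interact(f_global, f_object),
                  self.interact(f_global, f_word),
                  self.interact(f_object, f_word)]
        weighted = []
        for conv, f in zip(self.weight_convs, inters):
            w = torch.sigmoid(conv(f.unsqueeze(-1))).squeeze(-1)  # adaptive weight in (0, 1)
            weighted.append(w * f)
        return torch.cat(weighted, dim=1)  # fused feature, concatenated along channels
```

A module instance would then be applied to the three per-video features before decoding, e.g. `fused = module(f_global, f_object, f_word)`.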

(2) Training video description network model

(a) Setting hyper-parameters of a network

1200 videos from the internationally published benchmark dataset MSVD are taken as the training set, 100 videos as the validation set and 670 videos as the test set. Each input video frame of the training set has a pixel size of 224 × 224 and the batch size is 64. During training, the video description network model parameters are initialized with the Xavier method, adaptive moment estimation (Adam) is used as the optimizer of the video description network model, and the learning rate is set to 0.0002-0.0008; in this embodiment the learning rate is 0.0004. The video description network model is trained for 45-50 rounds; in this embodiment it is trained for 48 rounds.
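In PyTorch terms, the hyper-parameter setup of this embodiment could be written roughly as follows; the placeholder model and the frame pre-processing pipeline are assumptions for illustration, while the numerical values (224 × 224 frames, batch size 64, Xavier initialization, Adam with learning rate 0.0004, 48 rounds) are taken from the text.

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Values from this embodiment
BATCH_SIZE = 64
LEARNING_RATE = 4e-4        # 0.0004, chosen from the 0.0002-0.0008 range
NUM_EPOCHS = 48             # chosen from the 45-50 round range
SPLIT = {"train": 1200, "val": 100, "test": 670}   # MSVD split

# Each input frame is resized to 224 x 224 pixels
frame_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def xavier_init(module):
    """Xavier initialization of the learnable layers."""
    if isinstance(module, (nn.Linear, nn.Conv1d, nn.Conv2d)):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# `model` stands in for the video description network; a small placeholder is used here.
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10000))
model.apply(xavier_init)

# adaptive moment estimation (Adam) optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
```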

(b) Training video description network model

All videos in the training set are input into the video description network model, forward propagation is carried out, and a loss function is calculated; the loss function is the cross-entropy loss, and the loss function L of this embodiment is:

L(η) = -log P(S | φ(V); η)

where log(·) is the logarithm with base e, P(·) is the confidence of the prediction statement S output by the video description network model, φ(V) is the video feature vector corresponding to the video V, and η denotes the video description network model parameters to be trained. The loss value is reduced by back-propagation using the adaptive moment estimation method; forward propagation and back-propagation are repeated in a loop, updating the weights and biases of the video description network, until 48 rounds are reached; training is then finished and the trained video description network is obtained.
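A minimal sketch of this forward/backward training loop is shown below; `model` stands for the full video description network (for example, the encoder branches followed by the GRU decoder sketched earlier), `optimizer` for the Adam optimizer set up above, and the batch layout (video features, semantic features, padded caption indices) as well as the teacher-forcing shift are illustrative assumptions.

```python
import torch.nn as nn

criterion = nn.CrossEntropyLoss(ignore_index=0)   # index 0 assumed to be caption padding

def train(model, optimizer, train_loader, num_epochs=48):
    model.train()
    for epoch in range(num_epochs):                               # 48 rounds in this embodiment
        for video_feats, sem_feats, captions in train_loader:
            inputs, targets = captions[:, :-1], captions[:, 1:]   # teacher-forcing shift
            logits = model(video_feats, sem_feats, inputs)        # forward propagation
            # cross-entropy between the predicted word distributions and the target words
            loss = criterion(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
            optimizer.zero_grad()
            loss.backward()                                       # backward propagation
            optimizer.step()                                      # Adam update of weights and biases
```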

(3) Detecting test set video

The videos in the test set are input into the trained video description network, and the video description text is output.

This completes the video description method based on bilinear adaptive feature interaction and target perception.

Using the video description method based on bilinear adaptive feature interaction and target perception of this embodiment, a video from the internationally published benchmark dataset MSVD is shown in Fig. 3, and the text obtained by converting this video with the method of this embodiment is shown in Fig. 4. As can be seen from Fig. 4, the method of Embodiment 1 converts the video into the Chinese description of a group of people dancing.

Example 2

Taking the 1970 videos in the internationally published benchmark dataset MSVD as an example, the video description method based on bilinear adaptive feature interaction and target perception of this embodiment comprises the following steps:

(1) constructing a video description network model

This procedure is the same as in example 1.

(2) Training video description network model

(a) Setting hyper-parameters of a network

1200 videos from the internationally published benchmark dataset MSVD are taken as the training set, 100 videos as the validation set and 670 videos as the test set. Each input video frame of the training set has a pixel size of 224 × 224 and the batch size is 64. During training, the video description network model parameters are initialized with the Xavier method, adaptive moment estimation (Adam) is used as the optimizer of the video description network model, and the learning rate is set to 0.0002-0.0008; in this embodiment the learning rate is 0.0002. The video description network model is trained for 45-50 rounds; in this embodiment it is trained for 45 rounds.

(b) Training video description network model

All videos in the training set are input into the video description network model, forward propagation is carried out, and a loss function is calculated; the loss function is the cross-entropy loss, and the loss function L of this embodiment is:

L(η) = -log P(S | φ(V); η)

where log(·) is the logarithm with base e, P(·) is the confidence of the prediction statement S output by the video description network model, φ(V) is the video feature vector corresponding to the video V, and η denotes the video description network model parameters to be trained. The loss value is reduced by back-propagation using the adaptive moment estimation method; forward propagation and back-propagation are repeated in a loop, updating the weights and biases of the video description network, until 45 rounds are reached; training is then finished and the trained video description network is obtained.

The other steps are the same as in Embodiment 1, and the video description text is output.

Example 3

Taking the 1970 videos in the internationally published benchmark dataset MSVD as an example, the video description method based on bilinear adaptive feature interaction and target perception of this embodiment comprises the following steps:

(1) constructing a video description network model

This procedure is the same as in example 1.

(2) Training video description network model

(a) Setting hyper-parameters of a network

1200 videos from the internationally published benchmark dataset MSVD are taken as the training set, 100 videos as the validation set and 670 videos as the test set. Each input video frame of the training set has a pixel size of 224 × 224 and the batch size is 64. During training, the video description network model parameters are initialized with the Xavier method, adaptive moment estimation (Adam) is used as the optimizer of the video description network model, and the learning rate is set to 0.0002-0.0008; in this embodiment the learning rate is 0.0008. The video description network model is trained for 45-50 rounds; in this embodiment it is trained for 50 rounds.

(b) Training video description network model

All videos in the training set are input into the video description network model, forward propagation is carried out, and a loss function is calculated; the loss function is the cross-entropy loss, and the loss function L of this embodiment is:

L(η) = -log P(S | φ(V); η)

where log(·) is the logarithm with base e, P(·) is the confidence of the prediction statement S output by the video description network model, φ(V) is the video feature vector corresponding to the video V, and η denotes the video description network model parameters to be trained. The loss value is reduced by back-propagation using the adaptive moment estimation method; forward propagation and back-propagation are repeated in a loop, updating the weights and biases of the video description network, until 50 rounds are reached; training is then finished and the trained video description network is obtained.

The other steps are the same as in Embodiment 1, and the video description text is output.

In order to verify the beneficial effects of the present invention, the inventors performed comparison experiments using the video description method based on bilinear adaptive feature interaction and target perception of Embodiment 1 of the present invention (abbreviated as Embodiment 1) against "Spatio-temporal dynamics and semantic attribute enriched visual encoding for video captioning" (abbreviated as comparative experiment 1), "SibNet: Sibling convolutional encoder for video captioning" (abbreviated as comparative experiment 2) and "Object relational graph with teacher-recommended learning for video captioning" (abbreviated as comparative experiment 3), and comprehensively evaluated the generated description text by calculating the four evaluation metrics BLEU-4, METEOR, ROUGE-L and CIDEr according to the following formulas:

BLEU = BP · exp( Σ_{n=1}^{N} w_n · log p_n ),  with BP = 1 if l_c > l_r and BP = exp(1 - l_r / l_c) otherwise

where the BLEU value is between 0 and 1, l_r is the length of the target text, l_c is the length of the generated text, w_n is the weight of the n-gram, p_n is the n-gram coverage (precision), and n takes the value 4.

METEOR = F_mean · (1 - p),  with F_mean = P·R / (α·P + (1 - α)·R)

where the METEOR value is between 0 and 1, p is a penalty factor, α is 0.9, P = m/c, R = m/r, m is the number of word combinations that appear in both the generated text and the target text, c is the length of the generated text, and r is the length of the target text.

ROUGE-L = (1 + β²) · R_lcs · P_lcs / (R_lcs + β² · P_lcs),  with R_lcs = LCS(X, Y)/a and P_lcs = LCS(X, Y)/b

where the ROUGE-L value is between 0 and 1, LCS(X, Y) is the length of the longest common subsequence of the generated text X and the target text Y, β is P_lcs/R_lcs, and b and a are the lengths of X and Y respectively.

CIDEr_n(c, S) = (1/M) · Σ_{j=1}^{M} [ g_n(c) · g_n(S_j) ] / ( ||g_n(c)|| · ||g_n(S_j)|| )

where the CIDEr value is between 0 and 5, c is the generated text, S is the set of target texts, n indicates that n-grams are evaluated, M is the number of target texts, and g_n(·) denotes the n-gram-based TF-IDF vector.
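For reference, a minimal single-reference BLEU computation following the formula above might look as follows; it uses uniform n-gram weights (w_n = 1/4, n up to 4), omits the smoothing and multi-reference handling of the official evaluation scripts, and the example sentences are only illustrative.

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Simplified single-reference BLEU with uniform n-gram weights."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    log_precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum((cand & ref).values())          # clipped n-gram matches
        p_n = overlap / max(sum(cand.values()), 1)
        if p_n == 0:
            return 0.0                                # no smoothing in this sketch
        log_precisions.append(math.log(p_n))

    l_c, l_r = len(candidate), len(reference)
    bp = 1.0 if l_c > l_r else math.exp(1 - l_r / max(l_c, 1))   # brevity penalty
    return bp * math.exp(sum(log_precisions) / max_n)

# example: comparing a generated caption with a target caption
print(bleu("a group of people are dancing".split(),
           "a group of people dance together".split()))
```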

The results of the experiments and calculations are shown in table 1.

TABLE 1 Experimental results of the method of example 1 and comparative experiments 1-3

Experimental group BLEU-4(%) METEOR(%) ROUGE-L(%) CIDEr(%)
Comparative experiment 1 47.9 35.0 71.5 78.1
Comparative experiment 2 54.2 34.8 71.7 88.2
Comparative experiment 3 54.3 36.4 73.9 95.2
Example 1 59.8 39.4 76.7 109.5

As can be seen from Table 1, compared with comparative experiments 1 to 3, the scores of Embodiment 1 of the present invention on all evaluation metrics are greatly improved. The BLEU-4, METEOR, ROUGE-L and CIDEr scores of Embodiment 1 are respectively 11.9%, 4.4%, 5.2% and 31.4% higher than those of comparative experiment 1, 5.6%, 4.6%, 5.0% and 21.3% higher than those of comparative experiment 2, and 5.5%, 3.0%, 2.8% and 14.3% higher than those of comparative experiment 3.

The above experiments show that the present method outperforms the comparative experiments on all metrics, with the CIDEr score in particular showing a marked improvement, and that the method can accurately convert videos into text.
