Video compression reference image generation method based on dilated convolutional neural network

Document No.: 410494    Publication date: 2021-12-17

Abstract: This technology, "A video compression reference image generation method based on a dilated convolutional neural network" (一种基于空洞卷积神经网络的视频压缩参考图像生成方法; 空洞卷积 is also translated as hole or atrous convolution), was designed by Gao Pan, Tian Haoyue and Liang Dong on 2021-08-18. The invention discloses a method for generating a video compression reference image based on a dilated convolutional neural network, comprising the following steps: (1) select two consecutive frames of a video sequence and partition them into blocks to obtain data pairs of a current block and its corresponding reference block; the data obtained by partitioning the whole video sequence in this way serve as the training data of a neural network model; (2) design a network structure using dilated convolution, train the network model on the training data from step (1), and use the trained model as a reference image generator; (3) when the VVC encoder performs encoding, replace the reference image in the encoder's original reference list with the image generated in step (2), so that the encoder uses the generated image for inter-frame prediction. The invention improves the generation of the encoder's reference image and obtains a reference image that is more strongly correlated with the frame currently being encoded, thereby improving coding efficiency.

1. A video compression reference image generation method based on a dilated convolutional neural network, characterized by comprising the following steps:

(1) selecting two consecutive frames of a video sequence and partitioning them into blocks to obtain data pairs of a current block and its corresponding reference block, and using the data obtained by partitioning the whole video sequence in this way as the training data of a neural network model;

(2) designing a network structure using dilated convolution, feeding the training data from step (1) into the network model for training, and using the trained model as a reference image generator;

(3) when the VVC encoder performs encoding, replacing the reference image in the encoder's original reference list with the image generated in step (2), so that the encoder uses the generated image for prediction during inter-frame prediction.

2. The method according to claim 1, wherein in step (1), selecting two consecutive frames of the video sequence and partitioning them to obtain data pairs of the current block and the corresponding reference block, with the data obtained by partitioning the entire video sequence in this way used as the training data of the neural network model, specifically comprises: when partitioning, finding the corresponding block position in the previous frame from the current block and calculating the motion vector MV of the current block; since the brightness of the same object changes very little between two consecutive frames, its motion change is also very small and the pixels in a local region share the same motion information, the fractional motion vector from the current block to the corresponding block of the previous frame is obtained by backward calculation using the Lucas-Kanade (LK) optical flow method.

3. The method for generating a video compression reference image based on the dilated convolutional neural network according to claim 1, wherein in step (2), designing a network structure using dilated convolution, feeding the training data from step (1) into the network model for training, and using the trained model as a reference image generator specifically comprises: the input image first passes through two convolutional layers, each followed by a rectified linear unit (ReLU) as the activation function; after this, three dilated Inception modules are added; finally, a convolutional layer at the end of the network generates the final output image.

4. The method for generating the video compression reference image based on the dilated convolutional neural network according to claim 3, wherein for each dilated Inception module in the network, the Inception module is used as its basic structure, dilated convolution is added, and the dilation rate of the convolution kernel is set to adjust the size of the holes;

regarding the whole network as a mapping function F and learning the network parameters θ by minimizing the loss L(θ) between the network-predicted block F(X; θ) and the corresponding ground-truth label Y, using the mean squared error (MSE) as the loss function:

L(θ) = (1 / (M·m·n)) · Σ_{i=1}^{M} ‖F(X_i; θ) − Y_i‖²

where M is the number of training samples, and m and n represent the width and height of a training data block, respectively.

5. The method according to claim 1, wherein in step (3), when the VVC encoder performs encoding, replacing the reference image in the encoder's original reference list with the image generated in step (2) and having the encoder use the generated image for prediction during inter-frame prediction specifically comprises: in the encoding process of the VTM encoder, a mode decision is made for the current coding unit (CU); the VTM checks the various intra-frame and inter-frame prediction modes and checks whether the CU needs to be further partitioned; the distortion of each candidate mode for the current CU is then calculated, and the encoder selects the mode with the minimum distortion as the prediction mode of the current CU; in inter-frame prediction mode, a reference image list is constructed before the current frame is encoded, the list stores the reconstructed images of already encoded frames, the encoder then iteratively searches the candidate images, and finally the block in the reference image with the minimum prediction distortion is selected as the reference for the coding block in the current image.

Technical Field

The invention relates to the technical field of digital video compression, and in particular to a video compression reference image generation method based on a dilated convolutional neural network.

Background

In the classical block-based hybrid video coding framework, inter prediction is the core technique for eliminating temporal redundancy. The basic idea of inter prediction is to exploit the temporal correlation between the successive pictures that make up a video: an already coded picture serves as the reference picture of the picture currently being coded, a motion vector represents the relative displacement of the current coding block within that reference picture, and the index of the reference picture is recorded. In predictive coding, only the residual image and the motion vectors are coded, which removes the temporal correlation between consecutive images and further improves video coding efficiency.

For a video sequence being coded, whenever a frame is coded its reconstructed image must be stored at the encoder until it is no longer needed as a reference picture, after which it is released. This is because, in the low-delay P configuration, all P frames except the I frame require previously encoded frames as reference pictures to construct a reference picture list during encoding. The encoder computes the distortion produced when each reference image in the reference picture list is used for predictive coding and selects the reference image with the minimum distortion for the current frame.

Since the motion of an object has a certain continuity, the motion of the same object between two images does not necessarily occur in integer-pixel units. That is, the matching block may be located at a fractional-pixel position in the reference image. But fractional pixel values do not actually exist and must be interpolated from integer pixel values. Fractional pixel values are typically computed linearly by a fixed filter from the integer pixels of adjacent rows or columns.

In H.264/AVC, the predicted values at half-pixel sample positions are obtained with a one-dimensional 6-tap filter applied in the horizontal or vertical direction, and the predicted values at quarter-pixel sample positions are generated by averaging the samples at full- and half-pixel positions. High Efficiency Video Coding (H.265/HEVC) and Versatile Video Coding (H.266/VVC) include a symmetric 8-tap filter for half-pixel sample interpolation and an asymmetric 7-tap filter for quarter-pixel sample interpolation. However, such fixed interpolation filters may not work well on different kinds of video because natural video is non-stationary.
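To make the fixed-filter interpolation concrete, the following sketch applies an 8-tap half-pixel filter to one row of integer luma samples. The coefficients (-1, 4, -11, 40, 40, -11, 4, -1)/64 are the ones commonly quoted for the HEVC/VVC half-sample luma filter and are used here purely as an illustrative assumption, not as part of the invention.

```python
import numpy as np

# Half-sample luma interpolation with a fixed 8-tap filter.
# The tap values below are the commonly quoted HEVC/VVC half-pel luma
# coefficients; treat them as an illustrative assumption.
HALF_PEL_TAPS = np.array([-1, 4, -11, 40, 40, -11, 4, -1], dtype=np.int32)

def interpolate_half_pel_row(row: np.ndarray) -> np.ndarray:
    """Compute the half-sample positions between the integer pixels of one row."""
    padded = np.pad(row.astype(np.int32), (3, 4), mode="edge")  # 8 taps need 3 left / 4 right
    half = np.empty(len(row), dtype=np.int32)
    for i in range(len(row)):
        window = padded[i:i + 8]          # integer samples i-3 .. i+4
        half[i] = (window * HALF_PEL_TAPS).sum()
    # Normalize (taps sum to 64) and clip to the 8-bit sample range.
    return np.clip((half + 32) >> 6, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    row = np.array([52, 55, 61, 66, 70, 61, 64, 73], dtype=np.uint8)
    print(interpolate_half_pel_row(row))
```

The same fixed taps are applied regardless of content, which is exactly the limitation the learned reference image generator in the following sections is meant to address.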

Disclosure of Invention

The technical problem to be solved by the present invention is to provide a method for generating a video compression reference image based on a dilated convolutional neural network, which improves the generation of the encoder's reference image and obtains a reference image more strongly correlated with the frame currently being encoded, thereby improving coding efficiency.

In order to solve the above technical problem, the invention provides a video compression reference image generation method based on a dilated convolutional neural network, comprising the following steps:

(1) selecting two consecutive frames of a video sequence and partitioning them into blocks to obtain data pairs of a current block and its corresponding reference block, and using the data obtained by partitioning the whole video sequence in this way as the training data of a neural network model;

(2) designing a network structure using dilated convolution, feeding the training data from step (1) into the network model for training, and using the trained model as a reference image generator;

(3) when the VVC encoder performs encoding, replacing the reference image in the encoder's original reference list with the image generated in step (2), so that the encoder uses the generated image for prediction during inter-frame prediction.

Preferably, in step (1), selecting two consecutive frames of the video sequence, partitioning them to obtain data pairs of the current block and the corresponding reference block, and using the data obtained by partitioning the entire video sequence in this way as the training data of the neural network model specifically comprises: when partitioning, finding the corresponding block position in the previous frame from the current block and calculating the motion vector MV of the current block; since the brightness of the same object changes very little between two consecutive frames, its motion change is also very small and the pixels in a local region share the same motion information, the fractional motion vector from the current block to the corresponding block of the previous frame is obtained by backward calculation using the Lucas-Kanade (LK) optical flow method.

Preferably, in step (2), designing a network structure using dilated convolution, feeding the training data from step (1) into the network model for training, and using the trained model as a reference image generator specifically comprises the following: the input image first passes through two convolutional layers, each followed by a rectified linear unit (ReLU) as the activation function; after this, three dilated Inception modules are added; finally, a convolutional layer at the end of the network generates the final output image.

Preferably, for each dilated Inception module in the network, the Inception module is used as its basic structure, dilated convolution is added, and the dilation rate of the convolution kernel is set to adjust the size of the holes, so that the receptive field is enlarged without losing resolution of the feature map;

regarding the whole network as a mapping function F and learning the network parameters θ by minimizing the loss L(θ) between the network-predicted block F(X; θ) and the corresponding ground-truth label Y, using the mean squared error (MSE) as the loss function:

L(θ) = (1 / (M·m·n)) · Σ_{i=1}^{M} ‖F(X_i; θ) − Y_i‖²

where M is the number of training samples, and m and n represent the width and height of a training data block, respectively.

Preferably, in step (3), when the VVC encoder performs encoding, replacing the reference picture in the encoder's original reference list with the picture generated in step (2) and having the encoder use the generated picture for prediction during inter-frame prediction specifically comprises: in the encoding process of the VTM encoder, a mode decision is made for the current coding unit (CU); the VTM checks the various intra-frame and inter-frame prediction modes and checks whether the CU needs to be further partitioned; the distortion of each candidate mode for the current CU is then calculated, and the encoder selects the mode with the minimum distortion as the prediction mode of the current CU; in inter-frame prediction mode, a reference image list is constructed before the current frame is encoded, the list stores the reconstructed images of already encoded frames, the encoder then iteratively searches the candidate images, and finally the block in the reference image with the minimum prediction distortion is selected as the reference for the coding block in the current image.

The invention has the following beneficial effects: starting from the generation of a more relevant reference image, the invention provides a video compression reference image generation method based on a dilated convolutional neural network and, to improve on traditional coding efficiency, proposes building a reference image generator with deep learning and a convolutional neural network structure; in order to interpolate a reference image that more accurately matches the current image, an Inception module and dilated convolution are added to the network model to obtain multi-scale feature map information, so that the reference image generated by the model is closer to the image currently being encoded; the invention further proposes replacing the original reference image in the encoder's reference image list with the reference image generated by the network, making inter-frame prediction more accurate, so that the encoder saves bit rate without losing video quality.

Drawings

FIG. 1 is a schematic diagram of the present invention for generating training data for training a network.

Fig. 2 is a schematic diagram of the network overall framework of the invention.

Fig. 3 is a schematic structural diagram of the dilated Inception module in the network framework of the present invention.

FIG. 4 is a schematic flow chart of the method of the present invention.

Fig. 5(a) is a schematic diagram of the encoding result of the original, unmodified VVC encoder VTM.

Fig. 5(b) is a schematic diagram of the encoding result when the encoder uses the reference image generated by the SRCNN network.

Fig. 5(c) is a schematic diagram of the encoding result when the encoder uses the reference image generated by the VRCNN network.

Fig. 5(d) is a schematic diagram of the encoding result when the encoder uses the reference image generated by the network model and method of the present invention.

Detailed Description

A video compression reference image generation method based on a dilated convolutional neural network comprises the following steps:

(1) generating network model training data;

Since VVC encoding is block-based, the image is also divided into small blocks during network training. First, two consecutive frames are selected as the reference image and the current image, because between two consecutive frames the luminance of the same object changes little and the object moves only slightly. We assume that the pixels of a block share the same motion trajectory and use the LK optical flow method to obtain the fractional motion vector. For block-based training, the training data set must be created in the form of blocks.

As shown in fig. 1, a block of the current picture is marked as the ground-truth label (Y) of the network; the position of the fractional pixel block in its reference picture is then obtained from the fractional motion vector. Because a fractional pixel has no actual pixel value, the position of the corresponding integer pixel block must be found: the fractional pixel block is shifted toward the upper left until the nearest integer pixel is reached, and that integer pixel block is taken as the input (X) of the network, so that (X, Y) forms one training sample of the network model. Applying this procedure over the whole video sequence creates the training data set.
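A minimal sketch of this block-pairing step is given below, assuming grayscale (luma) frames stored as 8-bit arrays, 16 × 16 blocks as in the embodiment, and OpenCV's pyramidal LK tracker (cv2.calcOpticalFlowPyrLK) applied to block centres; boundary handling and the exact rounding rule of the invention may differ.

```python
import cv2
import numpy as np

BLOCK = 16  # block size used for training data (16x16, as in the embodiment)

def make_training_pairs(prev_gray: np.ndarray, cur_gray: np.ndarray):
    """Build (X, Y) pairs: Y is a block of the current frame, X is the
    integer-pixel block of the previous frame found by shifting the
    fractional motion vector estimated with LK optical flow."""
    h, w = cur_gray.shape
    pairs = []
    for y in range(0, h - BLOCK + 1, BLOCK):
        for x in range(0, w - BLOCK + 1, BLOCK):
            # Track the block centre from the current frame back to the previous frame.
            center = np.array([[[x + BLOCK / 2, y + BLOCK / 2]]], dtype=np.float32)
            matched, status, _ = cv2.calcOpticalFlowPyrLK(cur_gray, prev_gray, center, None)
            if not status[0][0]:
                continue
            # Fractional motion vector; floor() shifts the fractional block toward
            # the upper left to the nearest integer-pixel block position.
            mvx, mvy = matched[0, 0] - center[0, 0]
            rx = int(np.floor(x + mvx))
            ry = int(np.floor(y + mvy))
            if 0 <= rx <= w - BLOCK and 0 <= ry <= h - BLOCK:
                X = prev_gray[ry:ry + BLOCK, rx:rx + BLOCK]
                Y = cur_gray[y:y + BLOCK, x:x + BLOCK]
                pairs.append((X.copy(), Y.copy()))
    return pairs
```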

(2) Designing a network structure;

The overall scheme of the network structure is shown in fig. 2: the input image first passes through two convolutional layers, each followed by a rectified linear unit (ReLU) as the activation function. After this, three dilated Inception modules are added. Finally, a convolutional layer at the end of the network generates the final output image.

For each "hole inclusion" module in the network, we use the inclusion module as its basic structure, as shown in fig. 3, and hopefully the module can obtain the multi-scale feature map information from the previous layer. In order to obtain more context information, hole convolution is also added. We add this to the model and set the dilation rate of the convolution kernel to adjust the size of the hole to achieve an expanded field without losing the resolution of the feature map.

We regard the whole network as a mapping function F and learn the network parameters θ by minimizing the loss L(θ) between the network-predicted block F(X; θ) and the corresponding ground-truth label Y. We use the mean squared error (MSE) as the loss function:

L(θ) = (1 / (M·m·n)) · Σ_{i=1}^{M} ‖F(X_i; θ) − Y_i‖²

where M is the number of training samples, and m and n represent the width and height of a training data block, respectively.

(3) Combining the image generated by the network model with an encoder;

As shown in fig. 4, in the VTM encoding process a mode decision is made for the current coding unit (CU). The VTM checks the various intra-prediction and inter-prediction modes and checks whether the CU needs to be further partitioned. The distortion of each candidate is then calculated, and the encoder selects the mode with the least distortion as the prediction mode of the current CU. In inter-frame prediction mode, a reference image list is constructed before the current frame is encoded; the list stores the reconstructed images of already encoded frames. The encoder then performs an iterative search over the candidate images, and finally the block with the minimum prediction distortion is selected as the reference for the coding block in the current image.

The method of the invention uses the image preceding the current coding image as the input of the network model, with the aim of outputting, through the trained network model, a reference image that is closer to the current coding image. We then replace the reference pictures in the original reference list of the VVC encoder with the model-predicted pictures, for example the picture with POC t-1 in the reference picture list of fig. 4.
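A minimal inference sketch under these assumptions is shown below: because the network is fully convolutional, the whole previous reconstructed luma frame can be fed through the trained generator at once, and its output then stands in for the corresponding picture in the reference list. The [0, 1] normalization is an assumption, and the mechanism for handing the frame to the VTM (in-process or via a file) is not shown here.

```python
import numpy as np
import torch

@torch.no_grad()
def generate_reference_frame(generator: torch.nn.Module, prev_recon: np.ndarray) -> np.ndarray:
    """Run the previous reconstructed (luma) frame through the trained
    generator and return the predicted reference frame as 8-bit samples."""
    generator.eval()
    x = torch.from_numpy(prev_recon.astype(np.float32) / 255.0)[None, None]  # 1 x 1 x H x W
    y = generator(x).squeeze(0).squeeze(0).clamp(0.0, 1.0).numpy()
    return np.rint(y * 255.0).astype(np.uint8)
```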

Example (b):

the invention is described in further detail below with reference to a specific embodiment.

The method of the invention uses the BlowingBubbles video from the HEVC test sequences as the training video. The training data are generated by dividing all frames of the video sequence into blocks of size 16 × 16 according to the training data generation method of the invention, creating a data set with more than 160,000 blocks in total. For the parameter settings of the network model, we initially set the learning rate to 10^-4 and adjust it at fixed intervals. In addition, the network uses an Adadelta optimizer with a mini-batch size of 32. After training for nearly 80 epochs, the training loss gradually converges. The encoder is the VVC reference software VTM (version 10.0); the experiments follow the VVC common test conditions and use the default encoding configuration provided by the VTM. We performed compression performance tests with the low-delay P configuration at four quantization parameters (QP): 22, 27, 32 and 37.
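The settings above translate into roughly the following PyTorch training loop. The random tensors stand in for the 16 × 16 (X, Y) block pairs extracted from BlowingBubbles, the tiny nn.Sequential model stands in for the dilated Inception generator sketched earlier, and the step size and decay factor of the learning-rate schedule are assumptions, since the text only states that the rate is adjusted at fixed intervals.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and data; in practice use the dilated Inception generator
# and the (X, Y) block pairs extracted from the training video.
model = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 1, 3, padding=1))
X = torch.rand(1024, 1, 16, 16)
Y = torch.rand(1024, 1, 16, 16)
loader = DataLoader(TensorDataset(X, Y), batch_size=32, shuffle=True)

criterion = nn.MSELoss()
optimizer = torch.optim.Adadelta(model.parameters(), lr=1e-4)
# Decay the learning rate at fixed intervals; step size and factor are assumptions.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(80):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```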

The network model of the invention contains the important dilated Inception modules shown in fig. 3. For each module we use the Inception module as the basic architecture, and in each branch we first add a 1 × 1 convolutional layer, whose main purpose is to reduce the dimensionality and the number of convolution parameters while keeping the spatial resolution unchanged. Standard convolution and dilated convolution are then added in the first three branches. In the first branch, the module uses a standard 3 × 3 convolution. In the second branch, the module uses a standard convolution together with a dilated convolution with dilation rate 3. In the third branch, we use two standard 3 × 3 convolutions and one dilated convolution with dilation rate 5; to reduce the model parameters, the two stacked 3 × 3 convolutions are used in place of one 5 × 5 convolution, to which they are equivalent in receptive field. With this design, the receptive field sizes at the outputs of the three branches are 3, 9 and 15, respectively. We then concatenate the outputs of these three branches in order to combine information from different receptive fields and increase the number of channels of the image features. In the rightmost branch we use only a simple 1 × 1 convolutional layer, so the output of this branch still largely carries the information of the original input feature map. Finally, the left and right feature maps are combined by a weighting operation:

X^{l+1} = k · F_1(X^l) + (1 − k) · F_2(X^l)

where F_1(X^l) is the feature map obtained after concatenating the three branches, F_2(X^l) is the output of the 1 × 1 convolution applied to the previous-layer feature map X^l (the rightmost branch), and k is a scale factor with value range [0, 1] that determines how many of the features learned at this layer are preserved.
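One possible PyTorch rendering of this module, consistent with the branch structure and the weighted fusion above, is sketched below; the reduced branch width, the placement of the ReLU activations, and the treatment of k as a fixed hyper-parameter (rather than a learned one) are assumptions not stated in the text.

```python
import torch
import torch.nn as nn

class DilatedInceptionBlock(nn.Module):
    """Sketch of the dilated Inception module: three left branches with
    receptive fields 3, 9 and 15, a 1x1 shortcut branch on the right, and a
    weighted fusion controlled by the scale factor k."""
    def __init__(self, channels: int = 64, reduced: int = 32, k: float = 0.5):
        super().__init__()
        self.k = k
        # Branch 1: 1x1 reduction, then a standard 3x3 conv (receptive field 3).
        self.b1 = nn.Sequential(
            nn.Conv2d(channels, reduced, 1), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Branch 2: 1x1, standard 3x3, then 3x3 with dilation 3 (receptive field 9).
        self.b2 = nn.Sequential(
            nn.Conv2d(channels, reduced, 1), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, 3, padding=3, dilation=3), nn.ReLU(inplace=True),
        )
        # Branch 3: 1x1, two standard 3x3, then 3x3 with dilation 5 (receptive field 15).
        self.b3 = nn.Sequential(
            nn.Conv2d(channels, reduced, 1), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(reduced, reduced, 3, padding=5, dilation=5), nn.ReLU(inplace=True),
        )
        # Map the concatenated branches back to the block's channel width (F1).
        self.fuse = nn.Conv2d(3 * reduced, channels, 1)
        # Rightmost shortcut branch: plain 1x1 conv of the input (F2).
        self.shortcut = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.fuse(torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1))
        f2 = self.shortcut(x)
        return self.k * f1 + (1.0 - self.k) * f2

if __name__ == "__main__":
    block = DilatedInceptionBlock()
    print(block(torch.rand(1, 64, 16, 16)).shape)  # torch.Size([1, 64, 16, 16])
```

With dilation rates 1, 3 and 5 and the 1 × 1 reductions, the three left branches reproduce the receptive fields of 3, 9 and 15 described above.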

The present invention integrates the proposed method into a VVC encoder and compares the compression performance of the invention with that of the unmodified original encoder. For each video sequence, we replace the original reference pictures in the reference list with the output of the network and then perform inter-prediction encoding.

To verify the effectiveness of the invention, we performed comparative experiments among the original VVC encoder and three network models for reference image generation: besides the network model proposed by the invention, two popular models are used, the Super-Resolution Convolutional Neural Network (SRCNN) and the Variable-filter-size Residue-learning Convolutional Neural Network (VRCNN). All models are trained by the same method, and the reference images they generate replace the reference images in the original VVC buffer. Figs. 5(a)-(d) show the sixth frame of the BQMall video sequence encoded at QP 32 with the original VVC encoder, the SRCNN model, the VRCNN model and the method of the invention, respectively. The BD-rates of the different methods are then calculated to compare the bit rate each scheme saves relative to the original VVC encoder. The experiments show that the dilated convolutional neural network model of the invention achieves the highest coding efficiency.
