HEVC video quality enhancement algorithm framework combined with convolutional neural network

Document No.: 912772    Publication date: 2021-02-26

Reading note: This technology, "一种结合卷积神经网络的HEVC视频质量增强算法框架" (HEVC video quality enhancement algorithm framework combined with convolutional neural network), was designed and created by 何小海, 孙伟恒, 熊淑华, 卡恩·普拉迪普, 苏姗, 卿粼波, and 滕奇志 on 2019-08-21. Its main content is as follows: The invention provides an HEVC video quality enhancement algorithm framework combined with a convolutional neural network. The proposed framework is divided into two parts. First, at the encoding end, a CNN loop filtering network IFN-ND is constructed for I frames to replace the original loop filter of HEVC, thereby improving I-frame quality. Because I frames serve as reference frames for subsequent P frames, this improvement reduces prediction residuals and preliminarily improves P-frame quality. Second, at the decoding end, a CNN post-processing network PQEN-ND is constructed for P frames to further improve the quality of decoded P frames. The framework also extracts HEVC compression noise distribution information from the bit stream and feeds it into the proposed convolutional neural networks to further improve their performance. Experimental results show that the proposed framework can significantly improve compressed video quality. The invention can be widely applied to fields such as digital film shooting and transmission, live broadcasting of cultural and sports events, remote education and training, and object detection.

1. An HEVC video quality enhancement algorithm framework combined with a convolutional neural network, characterized in that:

(1) constructing a compressed video quality enhancement algorithm framework combined with HEVC compression noise characteristics, and constructing the IFN-ND and PQEN-ND networks at the encoder and decoder ends respectively to improve the quality of I frames and P frames;

(2) encoding the original video with H.265/HEVC, extracting the corresponding CU partition information and edge information from the coded bit stream, and combining the CU partition information and the edge information into what is collectively called the noise distribution information;

(3) combining the noise distribution information, proposing an I-frame loop filter network IFN-ND to replace the loop filters SAO and DF in HEVC so as to improve the quality of I frames and obtain the compressed video bit stream, and decoding the bit stream with H.265/HEVC to obtain the decoded video;

(4) extracting noise distribution information from the decoded video, and proposing a P-frame quality enhancement convolutional neural network PQEN-ND combined with the noise distribution information to enhance the quality of P frames.

2. The HEVC video quality enhancement algorithm framework combined with a convolutional neural network of claim 1, wherein convolutional neural networks are constructed at the encoding and decoding ends respectively according to the characteristics of I frames and P frames, and noise distribution information is extracted from the bit stream and added to the proposed algorithm framework, which effectively removes video compression noise, reduces the video transmission bit rate, and enhances the reconstructed video quality.

3. The framework of claim 1, wherein the noise distribution information is divided into CU partition information and edge information, the edge information is obtained with the Canny operator, and the noise distribution information can improve the performance of the video quality enhancement network.

4. The framework of claim 1, wherein a loop filter network IFN-ND is proposed at the encoding end to replace the original loop filters SAO and DF in the HEVC standard, and noise distribution information is combined to improve the quality of I frames while preventing the propagation and diffusion of compression noise from the I frames, so that the quality of subsequent P frames is preliminarily improved and the coding bit rate is reduced.

5. The framework of claim 1, wherein, on the basis of IFN-ND, a P-frame quality improvement network PQEN-ND combined with noise distribution information is proposed at the decoding end, addressing the characteristic that a P frame contains both intra-coded blocks and inter-coded blocks, which can further improve the P-frame quality of HEVC.

6. A framework for performing the HEVC video quality enhancement algorithm combined with a convolutional neural network according to any one of claims 1-5.

Technical Field

The invention relates to the technical problems of video coding and video processing in the field of image communication, and in particular to the construction and optimization of convolutional neural networks.

Background

In recent years, with the rapid development of smartphones and wearable smart devices, video applications have gradually come into wide use in fields such as social media and intelligent surveillance. Limited by transmission bandwidth and storage resources, however, video usually undergoes lossy compression. The current mainstream compression standard, HEVC, was officially published in February 2013 and significantly improves video coding efficiency: compared with the advanced video coding standard H.264/AVC, it can save about 50% of the bit rate while keeping the objective quality basically unchanged. However, during lossy compression, and especially at low bit rates, compression artifacts such as blocking, ringing, and blurring may appear in the video. Besides severely degrading video quality at the decoding end and causing a poor viewing experience, these artifacts also affect the effectiveness and accuracy of video processing applications such as object recognition and classification. It is therefore necessary to study how to effectively improve the quality of decoded video at both the encoding and decoding ends.

Inspired by the success of deep learning in image quality enhancement, many researchers have attempted to introduce deep convolutional neural networks into the quality enhancement of compressed video. Following the idea of compressed image quality enhancement, some researchers perform quality enhancement on video at the decoding end. Wang et al. proposed a deep convolutional neural network called DCAD that automatically removes artifacts and enhances the details of HEVC compressed video at the decoding end. Li et al. adopted a deeper network and a richer training set and proposed FECNN, achieving a bit-rate saving of 5.5% for intra-frame coding. Because of the temporal redundancy of video, adjacent frames tend to be similar; Yang et al. proposed the multi-frame enhancement network MFQE, which first detects high-quality frames in the decoded video and then uses them to enhance the low-quality frames, obtaining very good results. The same authors later proposed MFQE 2.0 on the basis of MFQE, constructing a larger data set while keeping the main idea unchanged and optimizing parts of the algorithm, further improving the effect of multi-frame video quality enhancement.

The encoding process of video differs from image encoding in that it also includes inter-frame coding. After a frame is encoded, the decoder embedded in the encoder reconstructs the coded bit stream into a reconstructed frame that serves as a reference for subsequent P frames; improving the quality of the reconstructed frame therefore improves the quality of the current frame, reduces the prediction error of subsequent frames, and lowers the bit rate. Park and Kim first proposed a method for in-loop filtering with a convolutional neural network, called IFCNN, which replaces SAO in HEVC with a neural network and introduces the idea of residual networks to improve training speed. Dai et al. proposed VRCNN, which completely replaces the standard loop filters DF and SAO of HEVC, based on the concepts of ARCNN and IFCNN.

Disclosure of Invention

For the quality improvement of HEVC compressed video, little existing work has considered exploiting coding information and coding rules, or constructing a quality enhancement algorithm at both the encoding and decoding ends. To address these issues, the invention proposes an HEVC video quality enhancement algorithm framework combined with a convolutional neural network, which improves the quality of compressed video as much as possible within a reasonable range of time complexity.

The basic idea of the invention is to make full use of the noise distribution information in the compressed video bit stream and to construct convolutional neural networks at the encoding and decoding ends respectively, so as to remove the various compression artifacts in the compressed video and improve the quality of HEVC compressed video. First, the loop filters SAO and DF built into the HEVC standard are disabled, the original video is compressed by HEVC, compression noise distribution information is extracted from the bit stream, and this information is combined with the I-frame loop filter network IFN-ND to improve the coding quality of I frames. At the decoding end, noise distribution information is extracted in a similar way and combined with the P-frame quality enhancement network PQEN-ND to improve the quality of the P frames in the video.

The method mainly comprises the following steps:

(1) A method for extracting noise distribution information from the bit-stream information is constructed. Considering that compression noise is mainly distributed on CU block boundaries and along the edges of objects in the video, the CU partition information is extracted from the bit stream, the gradient information of the video frame is computed with the Canny operator and binarized to obtain the corresponding edge information distribution map, and the two maps are combined to obtain the noise distribution map. This process is illustrated in FIG. 1; a minimal sketch of the extraction step is given below.
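The following sketch illustrates this step under stated assumptions: the CU partition information is assumed to have already been parsed from the HEVC bit stream into a binary CU-boundary mask (here `cu_boundary_mask`), and the Canny thresholds are illustrative values rather than values defined by the invention.

```python
# Sketch of the noise-distribution-information extraction in step (1).
# Assumption: `cu_boundary_mask` is a binary map of CU boundaries already
# parsed from the HEVC bit stream by a (hypothetical) decoder wrapper.
import cv2
import numpy as np

def extract_noise_distribution(frame_y: np.ndarray,
                               cu_boundary_mask: np.ndarray,
                               low_thresh: int = 50,
                               high_thresh: int = 150) -> np.ndarray:
    """Combine CU boundary information and edge information into one map."""
    # Edge information: Canny outputs a binarized edge map (0 / 255).
    edges = cv2.Canny(frame_y, low_thresh, high_thresh)
    edge_map = (edges > 0).astype(np.uint8)

    # Noise distribution map: union of CU boundaries and object edges.
    cu_map = (cu_boundary_mask > 0).astype(np.uint8)
    return np.clip(edge_map + cu_map, 0, 1)
```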

(2) An I-frame loop filtering convolutional neural network IFN-ND combined with noise distribution information is constructed. Its structure, shown in FIG. 2, is divided into three parts: feature extraction, feature enhancement, and reconstruction. First, convolutional layers extract preliminary features from the noise distribution map and the input compressed frame, and these features are fused by a Concat operation. Through a differentiation layer, the original feature map is then divided into four sub-maps without losing information, which reduces network complexity while the enlarged receptive field improves the network performance to a certain degree. Features at different levels are subsequently produced by several cascaded IMSRB modules, and a bottleneck layer adaptively extracts useful information from the features of each level. Each IMSRB contains a number of 1 × 1, 3 × 3, and 5 × 5 convolution kernels to detect coded-frame features at different scales, with 1 × 1 convolution kernels also used to reduce the number of network parameters. Extensive local and global residual learning is used throughout the network, making it more efficient. A sketch of one IMSRB module is given below.
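The following is a minimal sketch of one IMSRB module along the lines described above: parallel 1 × 1, 3 × 3, and 5 × 5 convolutions for multi-scale features, a 1 × 1 bottleneck fusing them, and a local residual connection. The channel count and the ReLU activation are assumptions; the exact internal arrangement of the network's IMSRB is not specified here.

```python
# Illustrative IMSRB sketch: multi-scale convolutions, 1x1 bottleneck fusion,
# and local residual learning. Channel counts are assumptions.
import torch
import torch.nn as nn

class IMSRB(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        # 1x1 bottleneck fuses the concatenated multi-scale features and
        # brings the channel count back down to limit parameters.
        self.bottleneck = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat(
            [self.act(self.branch1(x)),
             self.act(self.branch3(x)),
             self.act(self.branch5(x))], dim=1)
        # Local residual learning: add the block input back to its output.
        return x + self.bottleneck(feats)
```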

(3) The original video is coded with H.265/HEVC, in which the I-frame loop filter network IFN-ND combined with noise distribution information is embedded at the encoding end, to obtain the compressed video bit stream; the bit stream is then decoded with H.265/HEVC to obtain the decoded video.

(4) A convolutional neural network PQEN-ND for P-frame quality enhancement is proposed at the decoding end; its structure is shown in FIG. 3. Because the compression artifacts of P frames arise from more complicated causes than those of I frames, which use only intra-frame coding, the distortion introduced by both intra-coded blocks and inter-coded blocks must be suppressed simultaneously. PQEN-ND therefore adopts a two-branch structure on the basis of IFN-ND, so that a better effect can be achieved; an illustrative sketch of such a two-branch arrangement is given below.
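Since the internal design of the two branches is not detailed above, the following is only an illustrative sketch of a generic two-branch arrangement, not the network defined by the invention: both branches are assumed to be short stacks of IMSRB modules (as sketched in step (2)), their outputs fused by a 1 × 1 convolution, with the decoded frame and its noise distribution map concatenated at the input and a global residual connection at the output.

```python
# Illustrative two-branch sketch only; branch contents, channel counts, and
# the fusion scheme are assumptions. Reuses the IMSRB module from step (2).
import torch
import torch.nn as nn

class TwoBranchPQEN(nn.Module):
    def __init__(self, channels: int = 64, blocks_per_branch: int = 3):
        super().__init__()
        # Decoded luma frame + noise distribution map -> 2 input channels.
        self.head = nn.Conv2d(2, channels, kernel_size=3, padding=1)
        self.branch_a = nn.Sequential(
            *[IMSRB(channels) for _ in range(blocks_per_branch)])
        self.branch_b = nn.Sequential(
            *[IMSRB(channels) for _ in range(blocks_per_branch)])
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.tail = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, frame: torch.Tensor, noise_map: torch.Tensor) -> torch.Tensor:
        feats = self.head(torch.cat([frame, noise_map], dim=1))
        fused = self.fuse(torch.cat([self.branch_a(feats),
                                     self.branch_b(feats)], dim=1))
        # Global residual learning: predict only the enhancement residual.
        return frame + self.tail(fused)
```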

(5) The proposed PQEN-ND is embedded at the decoding end to form the complete HEVC compressed video quality enhancement algorithm framework, as shown in FIG. 4. When the original video is compressed by HEVC, the quality of its I frames is first improved by IFN-ND at the encoding end; because higher-quality I frames serve as reference frames for the subsequent P frames, predictive coding becomes more accurate, the coding bit rate is reduced to a certain degree, and P-frame quality is preliminarily improved. The quality of the P frames is then further improved by PQEN-ND at the decoding end; a small sketch of this decoder-side stage is given below.
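The sketch below covers the decoder-side stage only, assuming the decoded frames, their frame types, and their noise distribution maps are supplied by a decoder wrapper; the PQEN-ND model is passed in as a generic callable because its exact interface is not specified here.

```python
# Decoder-side stage of the framework: only P frames are post-processed by
# PQEN-ND (I frames were already filtered in-loop by IFN-ND at the encoder).
# `frames` yields (frame_type, frame, noise_map); `pqen_nd` is any callable
# implementing the enhancement network.
from typing import Callable, Iterable, List, Tuple
import numpy as np

def enhance_decoded_video(
    frames: Iterable[Tuple[str, np.ndarray, np.ndarray]],
    pqen_nd: Callable[[np.ndarray, np.ndarray], np.ndarray],
) -> List[np.ndarray]:
    enhanced = []
    for frame_type, frame, noise_map in frames:
        if frame_type == "P":
            # Apply the P-frame quality enhancement network together with
            # the frame's noise distribution map.
            frame = pqen_nd(frame, noise_map)
        enhanced.append(frame)
    return enhanced
```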

Experimental results show that, compared with HEVC, the proposed compressed video quality enhancement algorithm framework achieves a clear quality improvement.

Drawings

FIG. 1 is a diagram of the compressed noise distribution information extraction process;

FIG. 2 (a) is a block diagram of the I-frame loop filter network (IFN-ND) incorporating noise distribution information, and (b) shows an IMSRB module;

FIG. 3 is a block diagram of the P-frame quality enhancement network (PQEN-ND) incorporating noise distribution information;

FIG. 4 is a block diagram of the proposed HEVC compressed video quality enhancement algorithm framework.

Detailed Description

The present invention is further described in detail below with reference to the following examples, which are intended to be illustrative and should not be construed as limiting the scope of the invention.

The proposed HEVC video quality enhancement algorithm framework combined with a convolutional neural network is compared against the H.265/HEVC standard test model HM16.0 as follows:

1. Configuration file: encoder_lowdelay_P_main.cfg; quantization parameters (QP) of 27, 32, 37, 42, and 47 are selected for both the H.265/HEVC standard and the algorithm of the invention;

2. The coding objects are 18 standard test video sequences from the 5 classes of HEVC standard test sequences, with resolutions including 2560 × 1600, 1920 × 1080, 1280 × 720, 832 × 480, and 416 × 240;

3. The proposed I-frame loop filtering convolutional neural network IFN-ND combined with noise distribution information is embedded at the HEVC encoding end to replace the standard loop filters SAO and DF;

4. The video to be coded is encoded both with the standard HM16.0 method and with the algorithm framework provided by the invention; at the decoding end, the framework further improves the quality of the decoded video through the P-frame quality enhancement convolutional neural network PQEN-ND combined with noise distribution information, and the objective metric PSNR of the decoded video is computed;

5. The experimental results are shown in Table 1; statistics show that the PSNR of the proposed method exceeds that of H.265/HEVC.

TABLE 1. PSNR comparison between the method of the present invention and the H.265/HEVC standard
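For reference, the per-sequence PSNR values compared in Table 1 can be computed as in the following sketch, assuming 8-bit luma (Y) frames aligned between the reference and decoded videos.

```python
# Minimal PSNR sketch for 8-bit frames; per-frame PSNR averaged over a sequence.
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def sequence_psnr(ref_frames, dec_frames) -> float:
    # Average per-frame PSNR over the whole sequence.
    return float(np.mean([psnr(r, d) for r, d in zip(ref_frames, dec_frames)]))
```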
