Video processing method and device

Document No.: 107617    Publication date: 2021-10-15

Reading note: this technique, Video processing method and device, was devised by 郑萧桢, 王苏红, 马思伟 and 王苫社 on 2019-12-31. Abstract: A video processing method and apparatus are provided, the method comprising: acquiring a reference frame list of a current block, wherein the reference frame list of the current block comprises a first reference frame list and a second reference frame list; determining a target reference frame list according to the reference frame list of the current block, wherein the target reference frame list is one of the first reference frame list and the second reference frame list; determining a temporal motion vector of the current block according to the target reference frame list of the current block; determining motion information of a sub-block of the current block according to the temporal motion vector; and performing inter prediction on the current block according to the motion information of the sub-block of the current block. By limiting the number of reference frame lists that need to be scanned during bi-directional prediction, the encoding and decoding operations can be simplified.

1. A video processing method, comprising:

acquiring a reference frame list of a current block, wherein the reference frame list of the current block comprises a first reference frame list and a second reference frame list;

determining a target reference frame list according to the reference frame list of the current block, wherein the target reference frame list is one of the first reference frame list and the second reference frame list;

determining a temporal motion vector of the current block according to the target reference frame list of the current block;

determining motion information of a sub-block of the current block according to the temporal motion vector;

performing inter prediction on the current block according to the motion information of the sub-block of the current block;

wherein the determining the temporal motion vector of the current block according to the target reference frame list of the current block comprises:

selecting a first candidate motion vector from a motion vector candidate list of the current block;

searching the target reference frame list for the reference frame of the first candidate motion vector;

and determining the first candidate motion vector as the temporal motion vector when the reference frame of the first candidate motion vector is the same as the co-located frame of the current block.

2. The method of claim 1, wherein the determining motion information for the sub-block of the current block according to the temporal motion vector comprises:

determining a corresponding block of the current block in a reference frame according to the temporal motion vector;

and determining the motion information of the sub-block of the current block according to the corresponding block of the current block in the reference frame.

3. The method of claim 1 or 2, wherein the determining a target reference frame list according to the reference frame list of the current block comprises:

if the current frame where the current block is located adopts a low-delay coding mode and the co-located frame of the current frame is the first frame in the second reference frame list, determining the second reference frame list as the target reference frame list; and/or

if the current frame where the current block is located does not adopt the low-delay coding mode or the co-located frame of the current frame is not the first frame in the second reference frame list, determining the first reference frame list as the target reference frame list.

4. The method of claim 1, wherein the determining a temporal motion vector of the current block according to the target reference frame list of the current block further comprises:

determining the temporal motion vector as a 0 vector when the reference frame of the first candidate motion vector is different from the co-located frame of the current block.

5. The method of claim 1, wherein the determining a temporal motion vector of the current block according to the target reference frame list of the current block comprises:

determining a motion vector of a spatial neighboring block at a specific position of the current block;

and determining the motion vector of the spatial neighboring block as the temporal motion vector when the reference frame of the motion vector of the spatial neighboring block is the same as the co-located frame of the current block.

6. The method of claim 5, wherein the spatial neighboring block at the specific position of the current block is a left block of the current block, an upper block of the current block, or an upper-left block of the current block.

7. The method of claim 5, wherein the temporal motion vector is determined to be a 0 vector when the reference frame of the spatial neighboring block is different from the co-located frame of the current block.

8. The method of claim 1, wherein inter-predicting the current block comprises:

determining a prediction block for the current block;

and calculating a residual block of the current block according to the original block and the prediction block of the current block.

9. The method of claim 1, wherein inter-predicting the current block comprises:

determining a prediction block and a residual block for the current block;

and calculating a reconstructed block of the current block according to the prediction block and the residual block of the current block.

10. The method of claim 1, wherein inter-predicting the current block according to motion information of sub-blocks of the current block comprises:

performing inter prediction according to the motion information of the sub-blocks of the current block, taking each sub-block of the current block as a unit.

11. A video processing apparatus, comprising:

a memory for storing code;

a processor to execute code stored in the memory to perform the following operations:

acquiring a reference frame list of a current block, wherein the reference frame list of the current block comprises a first reference frame list and a second reference frame list;

determining a target reference frame list according to the reference frame list of the current block, wherein the target reference frame list is one of the first reference frame list and the second reference frame list;

determining a temporal motion vector of the current block according to the target reference frame list of the current block;

determining motion information of a sub-block of the current block according to the temporal motion vector;

performing inter prediction on the current block according to the motion information of the sub-block of the current block;

wherein the determining the temporal motion vector of the current block according to the target reference frame list of the current block comprises:

selecting a first candidate motion vector from a motion vector candidate list of the current block;

searching the target reference frame list for the reference frame of the first candidate motion vector;

and determining the first candidate motion vector as the temporal motion vector when the reference frame of the first candidate motion vector is the same as the co-located frame of the current block.

12. The apparatus of claim 11, wherein the determining motion information for the sub-block of the current block according to the temporal motion vector comprises:

determining a corresponding block of the current block in a reference frame according to the temporal motion vector;

and determining the motion information of the sub-block of the current block according to the corresponding block of the current block in the reference frame.

13. The apparatus of claim 11 or 12, wherein the determining a target reference frame list according to the reference frame list of the current block comprises:

if the current frame where the current block is located adopts a low-delay coding mode and the co-located frame of the current frame is the first frame in the second reference frame list, determining the second reference frame list as the target reference frame list; and/or

and if the current frame where the current block is located does not adopt the low-delay coding mode or the co-located frame of the current frame is not the first frame in the second reference frame list, determining the first reference frame list as the target reference frame list.

14. The apparatus of claim 11, wherein the determining a temporal motion vector of the current block according to the target reference frame list of the current block further comprises:

determining the temporal motion vector as a 0 vector when the reference frame of the first candidate motion vector is different from the co-located frame of the current block.

15. The apparatus of claim 11, wherein the determining a temporal motion vector of the current block according to the target reference frame list of the current block comprises:

determining a motion vector of a spatial neighboring block at a specific position of the current block;

and determining the motion vector of the spatial neighboring block as the temporal motion vector when the reference frame of the motion vector of the spatial neighboring block is the same as the co-located frame of the current block.

16. The apparatus of claim 15, wherein the spatial neighboring block at the specific position of the current block is a left block of the current block, an upper block of the current block, or an upper-left block of the current block.

17. The apparatus of claim 15, wherein the temporal motion vector is determined to be a 0 vector when the reference frame of the spatial neighboring block is different from the co-located frame of the current block.

18. The apparatus of claim 11, wherein the inter-predicting the current block comprises:

determining a prediction block for the current block;

and calculating a residual block of the current block according to the original block and the prediction block of the current block.

19. The apparatus of claim 11, wherein the inter-predicting the current block comprises:

determining a prediction block and a residual block for the current block;

and calculating a reconstructed block of the current block according to the prediction block and the residual block of the current block.

20. The apparatus of claim 11, wherein inter-predicting the current block according to motion information of sub-blocks of the current block comprises:

performing inter prediction according to the motion information of the sub-blocks of the current block, taking each sub-block of the current block as a unit.

Technical Field

The present application relates to the field of video coding and decoding, and more particularly, to a video processing method and apparatus.

Background

The video encoding process includes an inter prediction process. The modes of inter prediction include a merge mode and a non-merge mode. In the merge mode, a motion vector candidate list of the merge mode usually needs to be constructed first, and the motion vector of the current block is selected from that list. The current block may also be referred to as the current Coding Unit (CU).

With the development of coding technology, the Alternative/Advanced Temporal Motion Vector Prediction (ATMVP) technique has been introduced into the inter prediction mode. In the ATMVP technique, the current block is divided into a plurality of sub-blocks, and the motion information of each sub-block is calculated. The ATMVP technique aims to introduce motion vector prediction at the sub-block level so as to improve the overall coding performance of the video.

However, the process of finding the motion information of the sub-blocks of the current block with the ATMVP technique is complicated and contains some redundant operations, so there is still room for improvement.

Disclosure of Invention

The present application provides a video processing method and apparatus that can simplify encoding and decoding operations.

In a first aspect, a video processing method is provided, including: acquiring a reference frame list of a current block, wherein the reference frame list of the current block comprises a first reference frame list and a second reference frame list; determining a target reference frame list according to the reference frame list of the current block, wherein the target reference frame list is one of the first reference frame list and the second reference frame list; determining a temporal motion vector of the current block according to the target reference frame list of the current block; determining motion information of a sub-block of the current block according to the temporal motion vector; and performing inter prediction on the current block according to the motion information of the sub-block of the current block.

In a second aspect, a video processing apparatus is provided, including: a memory for storing code; and a processor configured to execute the code stored in the memory to perform the following operations: acquiring a reference frame list of a current block, wherein the reference frame list of the current block comprises a first reference frame list and a second reference frame list; determining a target reference frame list according to the reference frame list of the current block, wherein the target reference frame list is one of the first reference frame list and the second reference frame list; determining a temporal motion vector of the current block according to the target reference frame list of the current block; determining motion information of a sub-block of the current block according to the temporal motion vector; and performing inter prediction on the current block according to the motion information of the sub-block of the current block.

In a third aspect, a computer-readable storage medium is provided having stored thereon instructions for performing the method of the first aspect.

In a fourth aspect, there is provided a computer program product comprising instructions for performing the method in the first aspect.

By limiting the number of reference frame lists that need to be scanned in the bi-directional prediction process, the encoding and decoding operations can be simplified.

Drawings

Fig. 1 is a flow chart for constructing an affine merge candidate list.

Fig. 2 is a schematic diagram of surrounding blocks of a current block.

Fig. 3 is a flowchart of an implementation of ATMVP.

Fig. 4 is a diagram illustrating an example of how the motion information of the sub-blocks of the current block is obtained.

Fig. 5 is a schematic flowchart of a video processing method according to an embodiment of the present application.

Fig. 6 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application.

Detailed Description

The present application can be applied to a variety of video coding standards, such as H.264, High Efficiency Video Coding (HEVC), Versatile Video Coding (VVC), the Audio Video coding Standard (AVS), AVS+, AVS2, AVS3, and the like.

The video coding process mainly comprises the steps of prediction, transformation, quantization, entropy coding, loop filtering and the like. Prediction is an important component of mainstream video coding techniques. Prediction can be divided into intra prediction and inter prediction. Inter prediction can be achieved by means of motion compensation. The motion compensation process is exemplified below.

For example, a frame of an image may first be divided into one or more coding regions. Such a coding region may also be called a Coding Tree Unit (CTU). The CTU may be, for example, 64 × 64 or 128 × 128 in size (the unit is the pixel; the unit is omitted in similar descriptions below). Each CTU may be divided into square or rectangular image blocks. An image block may also be called a Coding Unit (CU), and the CU currently to be encoded is hereinafter referred to as the current block.

When inter-predicting the current block, a block similar to the current block may be found in a reference frame (which may be a temporally nearby reconstructed frame) and used as the prediction block of the current block. The relative displacement between the current block and the similar block is called a Motion Vector (MV). The process of finding the similar block in the reference frame as the prediction block of the current block is motion compensation.

The inter prediction modes include a merge mode and a non-merge mode. In the merge mode, the Motion Vector (MV) of an image block is its Motion Vector Prediction (MVP); therefore, for the merge mode, the index of the MVP and the index of the reference frame may be transmitted in the bitstream. In contrast, the non-merge mode requires not only the MVP and reference frame indexes but also a Motion Vector Difference (MVD) to be transmitted in the bitstream.

Conventional motion vectors adopt a simple translational model; that is, the motion vector of the current block represents the relative displacement between the current block and the reference block. This type of motion vector has difficulty accurately describing more complex motion in video, such as zooming, rotation, and perspective change. To describe more complex motion, affine models were introduced in the relevant codec standards. An affine model describes the affine motion field of the current block using the motion vectors of two or three Control Points (CPs) of the current block. The two control points may be, for example, the upper-left corner and the upper-right corner of the current block; the three control points may be, for example, the upper-left, upper-right and lower-left corners of the current block.
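
For illustration, the affine motion field just described can be written out as follows. This is a minimal sketch: the function name, the coordinate convention (sample positions relative to the upper-left corner of the current block) and the use of floating-point arithmetic are assumptions of the sketch, not requirements of any coding standard.

```python
def affine_mv(x, y, w, h, v0, v1, v2=None):
    """Motion vector at sample position (x, y) of a w x h block, derived from
    control-point motion vectors v0 (upper-left), v1 (upper-right) and, for
    the 6-parameter model, v2 (lower-left). Each v is an (mvx, mvy) pair."""
    v0x, v0y = v0
    v1x, v1y = v1
    if v2 is None:
        # 4-parameter model (two control points): translation + rotation/zoom.
        mvx = (v1x - v0x) / w * x - (v1y - v0y) / w * y + v0x
        mvy = (v1y - v0y) / w * x + (v1x - v0x) / w * y + v0y
    else:
        # 6-parameter model (three control points).
        v2x, v2y = v2
        mvx = (v1x - v0x) / w * x + (v2x - v0x) / h * y + v0x
        mvy = (v1y - v0y) / w * x + (v2y - v0y) / h * y + v0y
    return mvx, mvy
```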

The affine model can be combined with the merge mode described above to form an affine merge mode. Whereas the motion vector candidate list of the ordinary merge mode (the merge candidate list) records the MVP of an image block, the candidate list of the affine merge mode (the affine merge candidate list) records Control Point Motion Vector Predictions (CPMVPs). Similar to the ordinary merge mode, the affine merge mode does not need to add an MVD to the bitstream; instead, the CPMVP is directly used as the Control Point Motion Vector (CPMV) of the current block.

The construction of an affine merge candidate list for a current block is one of the important processes of the affine merge mode. Fig. 1 shows a possible construction of an affine merge candidate list.

In step S110, the ATMVP is inserted into the affine merge candidate list of the current block.

The ATMVP contains the motion information of the sub-blocks of the current block. In other words, when the ATMVP technique is used, the motion information of the sub-blocks of the current block is inserted into the affine merge candidate list, so that the affine merge mode can perform motion compensation at the sub-block level, which improves the overall coding performance of the video. The implementation of step S110 is described in detail below with reference to fig. 3 and is not detailed here.

The motion information includes a combination of one or more of the following: a motion vector; a motion vector difference; a reference frame index; a reference direction of inter prediction; information on whether the image block is intra-coded or inter-coded; a division mode of the image block.

In step S120, inherited affine candidates are inserted into the affine merge candidate list.

For example, as shown in fig. 2, the surrounding blocks of the current block may be scanned in the order A1 -> B1 -> B0 -> A0 -> B2, and the CPMVs of the surrounding blocks coded in the affine merge mode are inserted into the affine merge candidate list of the current block as the inherited affine candidates of the current block.

In step S130, it is determined whether the number of affine candidates in the affine merge candidate list is less than a preset value.

If the number of affine candidates in the affine merge candidate list has reached the preset value, the flow of fig. 1 ends; if it is still less than the preset value, the flow continues to step S140.

In step S140, constructed affine candidates are inserted into the affine merge candidate list.

For example, the motion information of the surrounding blocks of the current block may be combined to construct new affine candidates, and the resulting affine candidates are inserted into the affine merge candidate list.

In step S150, it is again determined whether the number of affine candidates in the affine merge candidate list is less than the preset value.

If the number of affine candidates in the affine merge candidate list has reached the preset value, the flow of fig. 1 ends; if it is still less than the preset value, the flow proceeds to step S160.

In step S160, 0 vectors are inserted into the affine merge candidate list.

In other words, 0 vectors are used to fill (pad) the affine merge candidate list so that the number of candidates in it reaches the preset value.
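
The flow of fig. 1 (steps S110 to S160) can be summarized by the following sketch. The candidate representation and the parameter names are assumptions made for illustration; the sub-steps that produce the individual candidates are outside the sketch.

```python
def build_affine_merge_list(atmvp, inherited, constructed, preset_size, zero_cand):
    """atmvp: sub-block motion information from step S110 (or None);
    inherited: CPMVs of surrounding affine-coded blocks, scanned A1->B1->B0->A0->B2;
    constructed: candidates combined from the motion information of surrounding blocks;
    zero_cand: the 0-vector candidate used for padding."""
    candidates = []
    if atmvp is not None:                  # S110: insert the ATMVP first
        candidates.append(atmvp)
    candidates.extend(inherited)           # S120: inherited affine candidates
    if len(candidates) < preset_size:      # S130/S140: constructed candidates
        candidates.extend(constructed)
    while len(candidates) < preset_size:   # S150/S160: pad with 0 vectors
        candidates.append(zero_cand)
    return candidates[:preset_size]
```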

The implementation of step S110 in fig. 1 is described in detail below with reference to fig. 3. In some examples, the method described below for inserting the ATMVP into the affine merge candidate list of the current block is not limited to the embodiment shown in fig. 1.

As shown in fig. 3, the implementation of the ATMVP technique, i.e., the way the motion information of the sub-blocks of the current block is obtained, can be roughly divided into two steps: steps S310 and S320.

In step S310, the corresponding block of the current block in a reference frame is determined.

In the current ATMVP technology, the frame from which the current frame (the frame in which the current block is located) acquires motion information is called the co-located picture (co-located frame). The co-located frame of the current frame is set at slice initialization. Taking forward prediction as an example, the first reference frame list may be a forward reference frame list, or a reference frame list containing a first group of reference frames, where the first group of reference frames includes reference frames that temporally precede and follow the current frame. At slice initialization, the first frame in the first reference frame list of the current block is typically set as the co-located frame of the current frame.

The corresponding block of the current block in the reference frame is found by means of a temporal motion vector (temp MV). Therefore, to obtain the corresponding block of the current block in the reference frame, the temporal motion vector needs to be derived first. The derivation of the temporal motion vector is described below, taking forward prediction and bi-directional prediction as examples.

For forward prediction, the number of reference frame lists (also referred to as reference lists or reference picture lists) of the current block is 1, and this reference frame list is called the first reference frame list (reference list 0). In one scenario, the first reference frame list may be a forward reference frame list. The co-located frame of the current frame is typically set to the first frame in the first reference frame list.

In the process of deriving the temporal motion vector, one implementation is as follows: the motion vector candidate list of the current block (which may be constructed based on the motion vectors of image blocks at 4 spatially adjacent positions) is scanned, and the first candidate motion vector in the list is taken as the initial temporal motion vector. Then the first reference frame list of the current block is scanned: if the reference frame of the first candidate motion vector is the same as the co-located frame of the current frame, the first candidate motion vector is taken as the temporal motion vector; if the reference frame of the first candidate motion vector is different from the co-located frame of the current frame, the temporal motion vector may be set to a 0 vector and the scanning stops.
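
A minimal runnable sketch of this first implementation follows; representing a candidate as a (motion vector, reference frame) pair is an assumption made for illustration.

```python
ZERO_MV = (0, 0)

def derive_temp_mv_forward(candidate_list, co_located_frame):
    """Take the first candidate in the motion vector candidate list as the
    initial temporal motion vector; keep it only if its reference frame is
    the co-located frame of the current frame, otherwise use the 0 vector."""
    mv, ref_frame = candidate_list[0]
    if ref_frame == co_located_frame:
        return mv
    return ZERO_MV  # reference frame differs: 0 vector, stop scanning
```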

In this implementation, a motion vector candidate list needs to be constructed just to obtain its first candidate motion vector. Another implementation directly takes the motion vector of a particular spatial neighboring block of the current block as the initial temporal motion vector: if the reference frame of the motion vector of the spatial neighboring block is the same as the co-located frame of the current frame, it can be used as the temporal motion vector; otherwise, the temporal motion vector may be set to a 0 vector and the scanning stops. Here, the spatial neighboring block may be any one of the coded blocks surrounding the current block, for example, fixed to the left block of the current block, or fixed to the upper-left block of the current block, and so on.

For bi-directional prediction, the number of reference frame lists of the current block is 2; that is, the current block has a first reference frame list (reference list 0) and a second reference frame list (reference list 1). In one scenario, the first reference frame list may be a forward reference frame list and the second reference frame list may be a backward reference frame list.

In the process of deriving the temporal motion vector, one implementation is as follows: the current motion vector candidate list is scanned first, and the first candidate motion vector in the list is taken as the initial temporal motion vector. Then, the reference frame list in one reference direction of the current block (which may be the first reference frame list or the second reference frame list) is scanned: if the reference frame of the first candidate motion vector is the same as the co-located frame of the current frame, the first candidate motion vector is taken as the temporal motion vector; if it is different, scanning continues with the reference frame list in the other reference direction of the current block. Similarly, if the reference frame of the first candidate motion vector in the other reference frame list is the same as the co-located frame of the current frame, the first candidate motion vector is taken as the temporal motion vector; if it is different, the temporal motion vector may be set to a 0 vector and the scanning stops. It should be noted that in some other scenarios, the first reference frame list and the second reference frame list may both contain reference frames that temporally precede and follow the current frame, and in that case bi-directional prediction refers to selecting reference frames with different reference directions from the first reference frame list and the second reference frame list.

In this implementation, deriving the temp MV of the ATMVP in bi-directional prediction still requires constructing a motion vector candidate list. Another implementation directly takes the motion vector of a particular spatial neighboring block of the current block as the initial temporal motion vector. For bi-directional prediction, the reference frame list in one reference direction of the current block (which may be the first reference frame list or the second reference frame list) is scanned first: if the reference frame of the motion vector of the spatial neighboring block in that reference direction is the same as the co-located frame of the current frame, the motion vector can be used as the temporal motion vector. Otherwise, scanning continues with the reference frame list of the current block in the other reference direction. Similarly, if the reference frame of the spatial neighboring block in the other reference frame list is the same as the co-located frame of the current frame, the motion vector of the spatial neighboring block can be used as the temporal motion vector; if it is different, the temporal motion vector may be set to a 0 vector and the scanning stops. Here, the spatial neighboring block may be any one of the coded blocks surrounding the current block, for example, fixed to the left block of the current block, or fixed to the upper-left block of the current block.

For bi-directional prediction, the scanning order of the first reference frame list and the second reference frame list may be determined according to the following rule:

when the current frame adopts a low delay coding mode and the co-located frame of the current frame is set to the first frame in the second reference frame list, the second reference frame list is scanned first; otherwise, the first reference frame list is scanned first.

Here, the current frame adopting the low delay coding mode means that all reference frames of the current frame precede the current frame in playback order in the video sequence; the co-located frame of the current frame being set to the first frame in the second reference frame list may mean that the quantization step size of the first slice of the first reference frame list of the current frame is smaller than the quantization step size of the first slice of the second reference frame list.
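
The two-list scan and the scan-order rule above can be sketched as follows; modeling the first candidate as a mapping from each reference direction to a (motion vector, reference frame) pair is an assumption for illustration. This is the scheme that the embodiments below simplify.

```python
def derive_temp_mv_bidir(first_cand, ref_list1, low_delay, co_located_frame):
    """first_cand: {"list0": (mv, ref_frame), "list1": (mv, ref_frame)};
    ref_list1: the second reference frame list of the current frame."""
    # Scan-order rule: low-delay mode and co-located frame equal to the first
    # frame of the second reference frame list -> scan list 1 first.
    if low_delay and co_located_frame == ref_list1[0]:
        order = ("list1", "list0")
    else:
        order = ("list0", "list1")
    for direction in order:               # worst case: both lists are scanned
        mv, ref_frame = first_cand[direction]
        if ref_frame == co_located_frame:
            return mv                     # qualified temporal motion vector
    return (0, 0)                         # no match in either list: 0 vector
```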

After deriving the temporal motion vector, the temporal motion vector may be used to find a corresponding block of the current block in the reference frame.

In step S320, motion information of a sub-block of the current block is obtained according to a corresponding block of the current block.

As shown in fig. 4, the current block may be divided into a plurality of sub-blocks, and the motion information of each sub-block is then determined from the corresponding block. It is worth noting that, for each sub-block, its motion information may be determined from the minimum motion information storage unit in which its corresponding position in the corresponding block is located.

The motion information includes a combination of one or more of the following: a motion vector; a motion vector difference; a reference frame index; a reference direction of inter prediction; information on whether the image block is intra-coded or inter-coded; a division mode of the image block.
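
A sketch of step S320 follows. The 8 × 8 sub-block size, the 8 × 8 minimum motion information storage unit and the grid layout are assumptions made for illustration; the actual sizes depend on the codec configuration.

```python
def fetch_subblock_motion(block_w, block_h, corr_x, corr_y,
                          motion_grid, sub=8, unit=8):
    """Split the current block into sub x sub sub-blocks; each sub-block takes
    the motion information stored in the unit x unit minimum motion information
    storage unit covering the centre of its corresponding position.
    motion_grid is a 2D array of per-unit motion information of the co-located
    frame; (corr_x, corr_y) is the top-left sample of the corresponding block."""
    motion = {}
    for y in range(0, block_h, sub):
        for x in range(0, block_w, sub):
            col = (corr_x + x + sub // 2) // unit  # grid column of the centre
            row = (corr_y + y + sub // 2) // unit  # grid row of the centre
            motion[(x, y)] = motion_grid[row][col]
    return motion
```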

As can be seen from the implementation of ATMVP described in fig. 3, the worst case for bi-directional prediction is that both reference frame lists are scanned during the derivation of the temporal motion vector and a qualified temporal motion vector still cannot be derived; in this case, the scanning of the two reference frame lists is redundant.

In addition, in bi-directional prediction, if the coding mode of the current frame is low delay (low delay B) or random access, the reference frames in the first reference frame list and the second reference frame list overlap to some extent, so during the derivation of the temporal motion vector there are redundant operations in the scanning of the two reference frame lists.

Therefore, the temporal motion vector derivation scheme provided by the related art for bi-directional prediction is complex, and there is room for improvement.

The following describes embodiments of the present application in detail with reference to fig. 5.

Fig. 5 is a schematic flow chart of a video processing method provided by an embodiment of the present application. The method of fig. 5 is applicable to both the encoding side and the decoding side.

In step S510, a reference frame list of the current block is obtained, where the reference frame list of the current block includes a first reference frame list and a second reference frame list.

The current block may also be referred to as the current CU. That the reference frame list of the current block includes the first reference frame list and the second reference frame list indicates that the current block is to be inter-predicted bi-directionally.

Optionally, the first reference frame list may be a forward reference frame list, or may be a reference frame list including a first group of reference frames. The first set of reference frames includes reference frames that temporally precede and follow the current frame.

Optionally, the second reference frame list may be a backward reference frame list, or may be a reference frame list including a second group of reference frames, where the second group of reference frames includes reference frames before and after the current frame in time sequence.

It should be noted that in some scenarios, the first reference frame list and the second reference frame list may both contain reference frames that are temporally before and after the current frame, and the bidirectional prediction may refer to selecting reference frames with different reference directions from the first reference frame list and the second reference frame list.

In step S520, a target reference frame list is determined according to the reference frame list of the current block.

The target reference frame list is one of a first reference frame list and a second reference frame list. The target reference frame list may be selected randomly or according to a certain rule. For example, the following rules may be followed: if the current frame where the current block is located adopts a low-delay coding mode and the co-located frame of the current frame is the first frame in the second reference frame list, determining the second reference frame list as a target reference frame list; and/or determining the first reference frame list as the target reference frame list if the current frame of the current block does not adopt the low-delay coding mode or the co-located frame of the current frame is not the first frame in the second reference frame list.

In step S530, a temporal motion vector of the current block is determined according to the target reference frame list of the current block.

In the bi-directional prediction process, the embodiment of the present application determines the temporal motion vector of the current block according to only one of the first reference frame list and the second reference frame list; that is, scanning stops once the target reference frame list has been scanned, regardless of whether a temporal motion vector can be derived from it. In other words, the temporal motion vector of the current block is determined from the target reference frame list alone.

For example, a first candidate motion vector may be selected from the current motion vector candidate list (which may be constructed based on the motion vectors of image blocks at 4 spatially adjacent positions), and the reference frame of the first candidate motion vector is looked up in the target reference frame list. When the reference frame of the first candidate motion vector is the same as the co-located frame of the current block, the first candidate motion vector may be determined as the temporal motion vector. When the reference frame of the first candidate motion vector is different from the co-located frame of the current block, scanning also stops, instead of continuing to scan the other reference frame list of the current block as described in the embodiment of fig. 3; in this case, the 0 vector may be used as the temporal motion vector of the current block.
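
Steps S520 and S530 can be sketched together as follows, reusing the input representation of the earlier two-list sketch (an assumption for illustration). Compared with that sketch, only the target list is ever scanned.

```python
def derive_temp_mv_single_list(first_cand, ref_list1, low_delay, co_located_frame):
    """first_cand: {"list0": (mv, ref_frame), "list1": (mv, ref_frame)};
    ref_list1: the second reference frame list of the current frame."""
    # S520: select the target reference frame list (one possible rule above).
    if low_delay and co_located_frame == ref_list1[0]:
        target = "list1"                  # second reference frame list
    else:
        target = "list0"                  # first reference frame list
    # S530: scan only the target list; the other list is never scanned.
    mv, ref_frame = first_cand[target]
    return mv if ref_frame == co_located_frame else (0, 0)
```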

In step S540, motion information of a sub-block of the current block is determined according to the temporal motion vector.

For example, the corresponding block of the current block in the reference frame may be determined according to the temporal motion vector, and the motion information of the sub-blocks of the current block may then be determined from that corresponding block. The motion information includes a combination of one or more of the following: a motion vector; a motion vector difference; a reference frame index; a reference direction of inter prediction; information on whether the image block is intra-coded or inter-coded; a division mode of the image block. Step S540 may be implemented with reference to step S320 above and is not detailed here.

In step S550, inter prediction is performed on the current block according to motion information of sub-blocks of the current block.

As an example, step S550 may include: performing inter prediction according to the motion information of the sub-blocks of the current block, taking each sub-block of the current block as a unit.
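
As a sketch of this per-sub-block processing (the motion compensation itself, i.e., interpolation and reference fetching, is outside the sketch and passed in as a callable; all names are illustrative):

```python
def inter_predict_subblocks(sub_motion, motion_compensate):
    """sub_motion: {(x, y): motion information} per sub-block, e.g. the output
    of fetch_subblock_motion above; returns one prediction per sub-block."""
    return {pos: motion_compensate(info) for pos, info in sub_motion.items()}
```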

For example, as shown in fig. 1, the motion information of the sub-blocks of the current block may be inserted as the ATMVP into the affine merge candidate list of the current block, and the complete affine merge candidate list may then be constructed as in steps S120 to S160 of fig. 1. Next, the candidate motion vectors in the affine merge candidate list may be used to perform inter prediction on the current block so as to determine the optimal candidate motion vector. The detailed implementation of step S550 may follow the related art, which is not limited in the embodiments of the present application.

The embodiments of the present application can simplify the operations at the encoding and decoding ends by limiting the number of reference frame lists that need to be scanned in the bi-directional prediction process.

It can be understood that when the method of fig. 5 is applied to the encoding side and to the decoding side, the inter prediction process for the current block described in step S550 differs. For example, when the method of fig. 5 is applied to the encoding side, inter-predicting the current block may include: determining the prediction block of the current block; and calculating the residual block of the current block according to the original block and the prediction block of the current block. For another example, when the method of fig. 5 is applied to the decoding side, inter-predicting the current block may include: determining the prediction block and the residual block of the current block; and calculating the reconstructed block of the current block according to the prediction block and the residual block of the current block.
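
This encoder/decoder asymmetry reduces to the following sketch, with blocks represented as numpy arrays (an assumption made for illustration):

```python
import numpy as np

def encoder_inter_prediction(original_block, prediction_block):
    """Encoding side: the residual block is the original minus the prediction."""
    return original_block - prediction_block

def decoder_inter_prediction(prediction_block, residual_block):
    """Decoding side: the reconstructed block is the prediction plus the residual."""
    return prediction_block + residual_block

# Round trip: the reconstruction equals the original when the residual is lossless.
orig = np.arange(16).reshape(4, 4)
pred = np.ones((4, 4), dtype=int)
rec = decoder_inter_prediction(pred, encoder_inter_prediction(orig, pred))
assert (rec == orig).all()
```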

Method embodiments of the present application are described in detail above with reference to fig. 1-5, and apparatus embodiments of the present application are described in detail below with reference to fig. 6. It is to be understood that the description of the method embodiments corresponds to the description of the apparatus embodiments, and therefore reference may be made to the preceding method embodiments for parts not described in detail.

Fig. 6 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application. The apparatus 60 of fig. 6 comprises: a memory 62 and a processor 64.

The memory 62 may be used to store code.

The processor 64 may be configured to execute the code stored in the memory to perform the following operations: acquiring a reference frame list of a current block, wherein the reference frame list of the current block comprises a first reference frame list and a second reference frame list; determining a target reference frame list according to the reference frame list of the current block, wherein the target reference frame list is one of the first reference frame list and the second reference frame list; determining a temporal motion vector of the current block according to the target reference frame list of the current block; determining motion information of a sub-block of the current block according to the temporal motion vector; and performing inter prediction on the current block according to the motion information of the sub-block of the current block.

Optionally, the determining motion information of the sub-block of the current block according to the temporal motion vector includes: determining a corresponding block of the current block in a reference frame according to the temporal motion vector; and determining the motion information of the sub-block of the current block according to the corresponding block of the current block in the reference frame.

Optionally, the determining a target reference frame list according to the reference frame list of the current block includes: if the current frame where the current block is located adopts a low-delay coding mode and the co-located frame of the current frame is the first frame in the second reference frame list, determining the second reference frame list as the target reference frame list; and/or determining the first reference frame list as the target reference frame list if the current frame of the current block does not adopt the low-delay coding mode or the co-located frame of the current frame is not the first frame in the second reference frame list.

Optionally, the first reference frame list may be a forward reference frame list, or may be a reference frame list including a first group of reference frames. The first set of reference frames includes reference frames that temporally precede and follow the current frame.

Optionally, the second reference frame list may be a backward reference frame list, or may be a reference frame list including a second group of reference frames, where the second group of reference frames includes reference frames before and after the current frame in time sequence.

It should be noted that in some scenarios, the first reference frame list and the second reference frame list may both contain reference frames that are temporally before and after the current frame, and the bidirectional prediction may refer to selecting reference frames with different reference directions from the first reference frame list and the second reference frame list.

Optionally, the determining a temporal motion vector of the current block according to the target reference frame list of the current block includes: selecting a first candidate motion vector from a motion vector candidate list of the current block; searching the target reference frame list for the reference frame of the first candidate motion vector; and determining the first candidate motion vector as the temporal motion vector when the reference frame of the first candidate motion vector is the same as the co-located frame of the current block.

Optionally, the determining a temporal motion vector of the current block according to the target reference frame list of the current block further includes: determining the temporal motion vector as a 0 vector when the reference frame of the first candidate motion vector is different from the co-located frame of the current block.

Optionally, the inter-predicting the current block includes: determining a prediction block for the current block; and calculating a residual block of the current block according to the original block and the prediction block of the current block.

Optionally, the inter-predicting the current block includes: determining a prediction block and a residual block for the current block; and calculating a reconstructed block of the current block according to the prediction block and the residual block of the current block.

Optionally, the inter-predicting the current block according to the motion information of the sub-block of the current block may include: and performing inter-frame prediction according to the motion information of the sub-block of the current block by taking the sub-block of the current block as a unit.

In the above embodiments, the implementation may be realized in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave, and so on). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, a magnetic tape), an optical medium (e.g., a Digital Video Disc (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
