Decoding method, encoding method, device and equipment

Document No.: 53010 · Publication date: 2021-09-28

This technology, "Decoding method, encoding method, device and equipment" (解码方法、编码方法、装置及设备), was designed and created by 方树清 on 2020-03-26. Its main content is as follows.

Abstract: The application provides a decoding method, an encoding method, an apparatus and a device. The decoding method comprises: acquiring a code stream of a current block, and parsing index information of an enhanced temporal motion vector prediction mode from the code stream of the current block; determining a matching block of the current block based on a first surrounding block of the current block; determining candidate enhanced temporal motion vector prediction modes based on the matching block and new matching blocks obtained by offsetting the matching block, and constructing a second temporal candidate mode list based on the candidate modes; determining the enhanced temporal motion vector prediction mode from the second temporal candidate mode list based on the index information; and determining motion information of each sub-block in the current block based on the enhanced temporal motion vector prediction mode, and performing motion compensation on each sub-block in the current block based on that motion information. The method can improve decoding performance.

1. A method of decoding, comprising:

acquiring a code stream of a current block, and parsing index information of an enhanced temporal motion vector prediction mode from the code stream of the current block, wherein the index information is used for identifying the position of the enhanced temporal motion vector prediction mode in a first temporal candidate mode list constructed by an encoding-side device;

determining a matching block of the current block based on a first surrounding block of the current block;

determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a second temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

determining an enhanced temporal motion vector prediction mode from the second temporal candidate mode list based on the index information;

and determining motion information of each sub-block in the current block based on the enhanced temporal motion vector prediction mode, and performing motion compensation on each sub-block in the current block based on the motion information of each sub-block in the current block.
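For orientation, the decoding flow of claim 1 can be summarized as the short sketch below. It is only an illustrative outline in Python, not part of the claimed subject matter; the callables passed in (parse_index, derive_matching_block, build_candidate_modes, derive_subblock_motion, motion_compensate) are hypothetical stand-ins for the steps described above.

```python
from typing import Callable, List, Sequence

def decode_block_etmvp(parse_index: Callable[[], int],
                       derive_matching_block: Callable[[], object],
                       build_candidate_modes: Callable[[object], Sequence[object]],
                       derive_subblock_motion: Callable[[object, object], object],
                       motion_compensate: Callable[[object, object], None],
                       sub_blocks: List[object]) -> None:
    """Illustrative outline of the decoding method of claim 1."""
    index = parse_index()                          # index information parsed from the code stream
    matching_block = derive_matching_block()       # located from the first surrounding block
    modes = build_candidate_modes(matching_block)  # second temporal candidate mode list
    etmvp_mode = modes[index]                      # mode identified by the index information
    for sub_block in sub_blocks:
        motion = derive_subblock_motion(etmvp_mode, sub_block)
        motion_compensate(sub_block, motion)       # per-sub-block motion compensation
```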

2. The method of claim 1, wherein determining the matching block for the current block based on a first surrounding block of the current block comprises:

determining motion information of a first stage based on the first surrounding block;

determining a matching block for the current block based on the motion information of the first stage.

3. The method of claim 2, wherein the determining the motion information of the first stage based on the first surrounding block comprises:

determining the motion information of the first stage based on forward motion information and/or backward motion information of the first surrounding block.

4. The method of claim 3, wherein the determining the motion information of the first stage based on the forward motion information and/or the backward motion information of the first surrounding block comprises:

if the backward motion information of the first surrounding block is available and points to the first frame in List1, determining the motion information of the first stage to be the backward motion information of the first surrounding block;

if the backward motion information of the first surrounding block is available but does not point to the first frame in List1, scaling the backward motion information of the first surrounding block to point to the first frame in List1, and determining the motion vector of the first stage to be the scaled motion vector and the reference frame index to be the index of the first frame in List1;

if the backward motion information of the first surrounding block is not available but the forward motion information is available, scaling the forward motion information of the first surrounding block to point to the first frame in List1, and determining the motion vector of the first stage to be the scaled motion vector and the reference frame index to be the index of the first frame in List1;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining the motion vector of the first stage to be 0 and the reference frame index to be the index of the first frame in List1;

wherein the reference direction of the motion information of the first stage is the List1 direction.
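To make the List1 variant of claim 4 concrete, the sketch below walks through the four branches; claims 5-7 are the symmetric List0 and single-direction variants. The distance-proportional scaling in scale_mv is an assumption for illustration only: the claims merely state that the motion information is scaled to point to the first frame of the list, without fixing a formula.

```python
from typing import NamedTuple, Optional, Tuple

class MotionInfo(NamedTuple):
    mv: Tuple[float, float]   # (mv_x, mv_y)
    ref_poc: int              # picture order count of the referenced frame

def scale_mv(mv, cur_poc, src_ref_poc, dst_ref_poc):
    # Assumed distance-proportional scaling toward the target reference frame.
    td_src = cur_poc - src_ref_poc
    td_dst = cur_poc - dst_ref_poc
    if td_src == 0:
        return mv
    return (mv[0] * td_dst / td_src, mv[1] * td_dst / td_src)

def first_stage_motion_list1(backward: Optional[MotionInfo],
                             forward: Optional[MotionInfo],
                             cur_poc: int,
                             list1_first_poc: int):
    """Returns (motion vector, reference frame index); the reference direction is List1."""
    ref_idx = 0  # index of the first frame in List1
    if backward is not None and backward.ref_poc == list1_first_poc:
        return backward.mv, ref_idx                                   # branch 1: use as-is
    if backward is not None:
        return scale_mv(backward.mv, cur_poc, backward.ref_poc, list1_first_poc), ref_idx
    if forward is not None:
        return scale_mv(forward.mv, cur_poc, forward.ref_poc, list1_first_poc), ref_idx
    return (0, 0), ref_idx                                            # branch 4: zero motion vector

# Example: a backward MV pointing two frames ahead is rescaled to the first frame of List1.
print(first_stage_motion_list1(MotionInfo((4, -2), 12), None, cur_poc=10, list1_first_poc=11))
```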

5. The method of claim 3, wherein the determining the motion information of the first stage based on the forward motion information and/or the backward motion information of the first surrounding block comprises:

if the forward motion information of the first surrounding block is available and points to the first frame in List0, determining the motion information of the first stage to be the forward motion information of the first surrounding block;

if the forward motion information of the first surrounding block is available but does not point to the first frame in List0, scaling the forward motion information of the first surrounding block to point to the first frame in List0, and determining the motion vector of the first stage to be the scaled motion vector and the reference frame index to be the index of the first frame in List0;

if the forward motion information of the first surrounding block is not available but the backward motion information is available, scaling the backward motion information of the first surrounding block to point to the first frame in List0, and determining the motion vector of the first stage to be the scaled motion vector and the reference frame index to be the index of the first frame in List0;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining the motion vector of the first stage to be 0 and the reference frame index to be the index of the first frame in List0;

wherein the reference direction of the motion information of the first stage is the List0 direction.

6. The method of claim 3, wherein the determining the motion information of the first stage based on the forward motion information and/or the backward motion information of the first surrounding block comprises:

if the backward motion information of the first surrounding block is available and points to the first frame in List1, determining the motion information of the first stage to be the backward motion information of the first surrounding block;

if the backward motion information of the first surrounding block is available but does not point to the first frame in List1, scaling the backward motion information of the first surrounding block to point to the first frame in List1, and determining the motion vector of the first stage to be the scaled motion vector and the reference frame index to be the index of the first frame in List1;

if the backward motion information of the first surrounding block is not available, determining the motion vector of the first stage to be 0 and the reference frame index to be the index of the first frame in List1;

wherein the reference direction of the motion information of the first stage is the List1 direction.

7. The method of claim 3, wherein the determining the motion information of the first stage based on the forward motion information and/or the backward motion information of the first surrounding block comprises:

if the forward motion information of the first surrounding block is available and points to the first frame in List0, determining the motion information of the first stage to be the forward motion information of the first surrounding block;

if the forward motion information of the first surrounding block is available but does not point to the first frame in List0, scaling the forward motion information of the first surrounding block to point to the first frame in List0, and determining the motion vector of the first stage to be the scaled motion vector and the reference frame index to be the index of the first frame in List0;

if the forward motion information of the first surrounding block is not available, determining the motion vector of the first stage to be 0 and the reference frame index to be the index of the first frame in List0;

wherein the reference direction of the motion information of the first stage is the List0 direction.

8. The method of claim 2, wherein the determining the motion information of the first stage based on the first surrounding block comprises:

determining that the motion vector of the first stage is 0, the reference frame index is the index of the first frame in List0, and the reference direction of the motion information of the first stage is the List0 direction;

or, alternatively,

determining that the motion vector of the first stage is 0, the reference frame index is the index of the first frame in List1, and the reference direction of the motion information of the first stage is the List1 direction.

9. The method of any of claims 2-8, wherein determining the matching block for the current block based on the motion information of the first stage comprises:

and determining a matching block of the current block based on the position of the current block, the horizontal motion vector, the vertical motion vector and the precision of the motion vector in the first stage.

10. The method of any of claims 2-8, wherein determining the matching block for the current block based on the motion information of the first stage comprises:

and determining a matching block of the current block based on the position of the current block, the horizontal motion vector and the vertical motion vector of the first stage, the precision of the motion vector and the size of the sub-block.
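The sketch below shows one plausible reading of claims 9-10: the current block's position is displaced by the first-stage motion vector, expressed at a given motion vector precision, and (for claim 10) the result is aligned to the sub-block grid. The right-shift conversion to integer pixels and the grid alignment are assumptions; the claims do not fix the rounding.

```python
def locate_matching_block(cur_x, cur_y, mv_x, mv_y, mv_precision_bits=4, subblock_size=8):
    """Displace the current block's position by the first-stage MV (in 1/2**mv_precision_bits
    pel units) and align the result to the sub-block grid; the rounding is an assumption."""
    ref_x = cur_x + (mv_x >> mv_precision_bits)        # horizontal displacement in whole pixels
    ref_y = cur_y + (mv_y >> mv_precision_bits)        # vertical displacement in whole pixels
    ref_x = (ref_x // subblock_size) * subblock_size   # align to the sub-block grid (claim 10)
    ref_y = (ref_y // subblock_size) * subblock_size
    return ref_x, ref_y

# Example: a block at (128, 64) with a 1/16-pel first-stage MV of (18, -37).
print(locate_matching_block(128, 64, 18, -37))   # -> (128, 56)
```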

11. The method according to any of claims 1-10, wherein a new matching block, obtained by offsetting the matching block, is determined by:

pruning the first sub-block and the second sub-block to be within the range of the current coding tree unit CTU, and comparing the motion information of the pruned first sub-block and second sub-block; and pruning the third sub-block and the fourth sub-block to be within the range of the current CTU, comparing the motion information of the pruned third sub-block and fourth sub-block, and if at least one of the two comparison results is different in motion information, horizontally shifting the matching block by one unit to the right to obtain a new matching block;

pruning the fifth sub-block and the sixth sub-block to be within the range of the current CTU, and comparing the motion information of the fifth sub-block and the sixth sub-block after pruning; and pruning the seventh sub-block and the eighth sub-block to be within the range of the current CTU, comparing the motion information of the seventh sub-block and the eighth sub-block after pruning, and if at least one of the two comparison results is different in motion information, horizontally shifting the matching block by one unit to the left to obtain a new matching block;

Pruning the first sub-block and the ninth sub-block to be within the range of the current CTU, and comparing the motion information of the pruned first sub-block and ninth sub-block; and pruning the fifth sub-block and the tenth sub-block to be within the range of the current CTU, comparing the motion information of the fifth sub-block and the tenth sub-block after pruning, and if at least one comparison result of the two comparison results is different in motion information, vertically and downwards offsetting the matching block by one unit to obtain a new matching block;

pruning the third sub-block and the eleventh sub-block to be within the range of the current CTU, and comparing the motion information of the pruned third sub-block and eleventh sub-block; and pruning the seventh sub-block and the twelfth sub-block to be within the range of the current CTU, comparing the motion information of the seventh sub-block and the twelfth sub-block after pruning, and if at least one of the two comparison results is different in motion information, vertically and upwardly offsetting the matching block by one unit to obtain a new matching block;

wherein the first sub-block is the sub-block at the upper left corner of the matching block, the second sub-block is the adjacent sub-block at the top right corner of the matching block, the third sub-block is the sub-block at the lower left corner of the matching block, the fourth sub-block is the adjacent sub-block at the bottom right corner of the matching block, the fifth sub-block is the sub-block at the upper right corner of the matching block, the sixth sub-block is the adjacent sub-block at the top left corner of the matching block, the seventh sub-block is the sub-block at the lower right corner of the matching block, the eighth sub-block is the adjacent sub-block at the bottom left corner of the matching block, the ninth sub-block is the leftmost adjacent sub-block directly below the matching block, the tenth sub-block is the rightmost adjacent sub-block directly below the matching block, the eleventh sub-block is the leftmost adjacent sub-block directly above the matching block, and the twelfth sub-block is the rightmost adjacent sub-block directly above the matching block; one unit is the side length of a sub-block.
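As an illustration of claim 11, the sketch below implements one of the four symmetric checks (the rightward one): two pairs of sub-block positions are pruned to be within the current CTU (clamped, in the sketch), their stored motion information is compared, and the matching block is shifted one sub-block side length to the right if either pair differs. The interpretation of the "adjacent" sub-block positions as the column just outside the matching block, and the motion_at(x, y) lookup into the co-located motion field, are assumptions made for the sketch.

```python
def should_offset_right(matching_block, ctu, motion_at, subblock=8):
    """Rightward check of claim 11; the leftward/downward/upward checks follow the same
    pattern with the sub-block pairs listed in the claim."""
    x0, y0, w, h = matching_block      # top-left corner and size of the matching block, in samples
    ctu_x, ctu_y, ctu_size = ctu

    def clip(x, y):
        # keep the sub-block position inside the current CTU
        return (max(ctu_x, min(x, ctu_x + ctu_size - subblock)),
                max(ctu_y, min(y, ctu_y + ctu_size - subblock)))

    pairs = [((x0, y0), (x0 + w, y0)),                                # first vs. second sub-block
             ((x0, y0 + h - subblock), (x0 + w, y0 + h - subblock))]  # third vs. fourth sub-block
    # offset by one unit (one sub-block side length) to the right if either pair differs
    return any(motion_at(*clip(*a)) != motion_at(*clip(*b)) for a, b in pairs)
```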

12. The method according to any of claims 1-10, wherein a new matching block, obtained by offsetting the matching block, is determined by:

offsetting the matching block in the horizontal direction and in the vertical direction based on one or more offset pairs, respectively, to obtain one or more new matching blocks.
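Read literally, claim 12 only requires applying one or more (horizontal, vertical) offset pairs to the matching block. A minimal sketch follows, with example offset values that are not prescribed by the claim.

```python
def offset_candidates(x0, y0, offset_pairs=((8, 0), (-8, 0), (0, 8), (0, -8))):
    """Apply each (horizontal, vertical) offset pair to the matching block's top-left corner."""
    return [(x0 + dx, y0 + dy) for dx, dy in offset_pairs]

print(offset_candidates(128, 64))   # -> [(136, 64), (120, 64), (128, 72), (128, 56)]
```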

13. The method of any of claims 1-10, wherein after determining the matching block for the current block, further comprising:

pruning the matching block to be within the range of the current CTU.
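A minimal sketch of the step of claim 13, assuming the matching block is no larger than the CTU and that "pruning to be within the range of the current CTU" means clamping the block's top-left corner so the whole block stays inside the CTU:

```python
def clip_block_to_ctu(x0, y0, block_w, block_h, ctu_x, ctu_y, ctu_size):
    """Clamp the matching block so that it lies entirely inside the current CTU."""
    x0 = max(ctu_x, min(x0, ctu_x + ctu_size - block_w))
    y0 = max(ctu_y, min(y0, ctu_y + ctu_size - block_h))
    return x0, y0

print(clip_block_to_ctu(250, 10, 32, 32, 128, 0, 128))   # -> (224, 10)
```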

14. The method of claim 13, wherein a new matching block obtained by offsetting the matching block is determined by:

when the right boundary of the trimmed matching block is not located at the right boundary position of the current CTU, comparing the motion information of the thirteenth sub-block and the fourteenth sub-block, and comparing the motion information of the fifteenth sub-block and the sixteenth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the trimmed matching block by one unit to the right to obtain a new matching block;

when the left boundary of the trimmed matching block is not located at the left boundary position of the current CTU, comparing the motion information of the seventeenth sub-block and the eighteenth sub-block, and comparing the motion information of the nineteenth sub-block and the twentieth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the trimmed matching block by one unit to the left to obtain a new matching block;

When the lower boundary of the trimmed matching block is not located at the position of the lower boundary of the current CTU, comparing the motion information of the thirteenth sub-block with the motion information of the twenty-first sub-block, and comparing the motion information of the seventeenth sub-block with the motion information of the twenty-second sub-block, if at least one of the two comparison results is that the motion information is different, vertically and downwards shifting the trimmed matching block by one unit to obtain a new matching block;

when the upper boundary of the trimmed matching block is not located at the upper boundary position of the current CTU, comparing the motion information of the fifteenth sub-block with the motion information of the twenty-third sub-block, and comparing the motion information of the nineteenth sub-block with the motion information of the twenty-fourth sub-block, if at least one of the two comparison results is that the motion information is different, vertically and upwardly shifting the trimmed matching block by one unit to obtain a new matching block;

wherein the thirteenth sub-block is the sub-block at the top left corner of the trimmed matching block, the fourteenth sub-block is the adjacent sub-block at the top right corner of the trimmed matching block, the fifteenth sub-block is the sub-block at the bottom left corner of the trimmed matching block, the sixteenth sub-block is the adjacent sub-block at the bottom right corner of the trimmed matching block, the seventeenth sub-block is the sub-block at the top right corner of the trimmed matching block, the eighteenth sub-block is the adjacent sub-block at the top left corner of the trimmed matching block, the nineteenth sub-block is the sub-block at the bottom right corner of the trimmed matching block, the twentieth sub-block is the adjacent sub-block at the bottom left corner of the trimmed matching block, the twenty-first sub-block is the adjacent sub-block at the left corner below the trimmed matching block, the twenty-second sub-block is the adjacent sub-block at the right corner below the trimmed matching block, the twenty-third sub-block is an adjacent sub-block on the leftmost side right above the trimmed matching block, and the twenty-fourth sub-block is an adjacent sub-block on the rightmost side right above the trimmed matching block; one unit is the side length of the subblock.

15. The method according to any of claims 11-14, wherein said determining a candidate enhanced temporal motion vector prediction mode based on said matching block and a new matching block obtained by shifting said matching block comprises:

when at least one new matching block exists, determining a prediction mode corresponding to the matching block before shifting and a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or, alternatively,

when at least one new matching block exists, determining a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or, alternatively,

and when no new matching block exists, determining the prediction mode corresponding to the matching block before shifting as a candidate enhanced temporal motion vector prediction mode.
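The sketch below illustrates how the second temporal candidate mode list of claim 15 might be assembled, treating each (possibly offset) matching block as one candidate enhanced temporal motion vector prediction mode. Whether the pre-offset matching block is kept when offset blocks exist corresponds to the first two alternatives of the claim and is exposed here as a flag; the ordering shown is an assumption.

```python
def build_candidate_mode_list(original_block, new_blocks, keep_original=True):
    """Each entry stands for one candidate enhanced temporal motion vector prediction mode."""
    if new_blocks:
        modes = ([original_block] if keep_original else []) + list(new_blocks)
    else:
        modes = [original_block]   # no new matching block: use the pre-offset matching block
    return modes

# Example: one original block and two offset blocks give a three-entry (or two-entry) list.
print(build_candidate_mode_list("orig", ["right", "down"]))
print(build_candidate_mode_list("orig", ["right", "down"], keep_original=False))
```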

16. The method according to any of claims 1-12 or 15, wherein said determining motion information for each sub-block in the current block based on the enhanced temporal motion vector prediction mode comprises:

for any sub-block in the target matching block, pruning the sub-block to be within the range of the current CTU; the target matching block is a matching block corresponding to the enhanced temporal motion vector prediction mode;

if the forward motion information and the backward motion information of the pruned sub-block are both available, scaling the forward motion information and the backward motion information of the pruned sub-block to point to the first frame of List0 and the first frame of List1 respectively, and assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

if the forward motion information of the pruned sub-block is available but the backward motion information is not available, scaling the forward motion information of the pruned sub-block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the pruned sub-block is available but the forward motion information is not available, scaling the backward motion information of the pruned sub-block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.
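The per-sub-block rule of claim 16 reduces to a small decision table once the sub-block has been pruned to be within the current CTU. In the sketch below, scale_to_list0 and scale_to_list1 are hypothetical callables standing in for the scaling toward the first frame of List0 and of List1; how that scaling is computed is not fixed by the claim.

```python
def derive_subblock_motion(fwd, bwd, scale_to_list0, scale_to_list1):
    """fwd/bwd are the pruned co-located sub-block's forward/backward motion information,
    or None when unavailable; returns the (forward, backward) motion assigned to the
    sub-block at the corresponding position of the current block."""
    if fwd is not None and bwd is not None:
        return scale_to_list0(fwd), scale_to_list1(bwd)   # bi-directional
    if fwd is not None:
        return scale_to_list0(fwd), None                  # forward only
    if bwd is not None:
        return None, scale_to_list1(bwd)                  # backward only
    return None, None   # neither available: fall back to the alternatives of claim 17
```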

17. The method of claim 16, wherein determining motion information for each sub-block in the current block based on the enhanced temporal motion vector prediction mode further comprises:

if neither the forward motion information nor the backward motion information of the pruned sub-block is available, pruning the center position of the target matching block to be within the range of the current CTU; when the forward motion information and the backward motion information of the center position of the pruned target matching block are both available, scaling the forward motion information and the backward motion information of the center position of the pruned target matching block to point to the first frame of List0 and the first frame of List1 respectively, and assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the center position of the pruned target matching block is available but the backward motion information is not available, scaling the forward motion information of the center position of the pruned target matching block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the pruned target matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the pruned target matching block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; when neither the forward motion information nor the backward motion information of the center position of the pruned target matching block is available, assigning zero motion information to the sub-block at the corresponding position of the current block;

or, if neither the forward motion information nor the backward motion information of the pruned sub-block is available, assigning zero motion information to the sub-block at the corresponding position of the current block;

or, if neither the forward motion information nor the backward motion information of the pruned sub-block is available: when the forward motion information and the backward motion information of a second surrounding block of the current block are both available, scaling the forward motion information and the backward motion information of the second surrounding block to point to the first frame of List0 and the first frame of List1 respectively, and assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the second surrounding block is available, assigning zero motion information to the sub-block at the corresponding position of the current block.
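Claim 17 lists three alternative fallbacks for a sub-block whose pruned co-located sub-block carries no usable motion information: reuse the (pruned) center position of the target matching block, assign zero motion information, or reuse a second surrounding block of the current block. A sketch of that selection follows, with the same hypothetical scaling callables as above and a zero-motion representation chosen only for illustration.

```python
def fallback_motion(center_fwd, center_bwd, second_fwd, second_bwd,
                    scale_to_list0, scale_to_list1, policy="center"):
    """policy selects which of the three alternatives of claim 17 is modelled:
    'center', 'zero', or 'second' (the second surrounding block of the current block)."""
    zero = ((0, 0), (0, 0))            # assumed representation of zero motion information
    if policy == "zero":
        return zero
    fwd, bwd = (center_fwd, center_bwd) if policy == "center" else (second_fwd, second_bwd)
    if fwd is not None and bwd is not None:
        return scale_to_list0(fwd), scale_to_list1(bwd)
    if fwd is not None:
        return scale_to_list0(fwd), None
    if bwd is not None:
        return None, scale_to_list1(bwd)
    return zero                        # nothing available anywhere: zero motion information
```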

18. The method according to any of claims 1-15, wherein said determining motion information for each sub-block in said current block based on said enhanced temporal motion vector prediction mode comprises:

for any sub-block in the target matching block, if the forward motion information and the backward motion information of the sub-block are both available, respectively scaling the forward motion information and the backward motion information of the sub-block to point to the first frame of List0 and the first frame of List1, and respectively giving the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

if the forward motion information of the sub-block is available but the backward motion information is not available, scaling the forward motion information of the sub-block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block is available but the forward motion information is not available, scaling the backward motion information of the sub-block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.

19. The method of claim 18, wherein determining motion information for each sub-block in the current block based on the enhanced temporal motion vector prediction mode further comprises:

if neither the forward motion information nor the backward motion information of the sub-block is available: when the forward motion information and the backward motion information of the center position of the target matching block are both available, scaling the forward motion information and the backward motion information of the center position of the target matching block to point to the first frame of List0 and the first frame of List1 respectively, and assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the center position of the target matching block is available but the backward motion information is not available, scaling the forward motion information of the center position of the target matching block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the target matching block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; when neither the forward motion information nor the backward motion information of the center position of the target matching block is available, assigning zero motion information to the sub-block at the corresponding position of the current block;

or, if neither the forward motion information nor the backward motion information of the sub-block is available, assigning zero motion information to the sub-block at the corresponding position of the current block;

or, if neither the forward motion information nor the backward motion information of the sub-block is available: when the forward motion information and the backward motion information of a second surrounding block of the current block are both available, scaling the forward motion information and the backward motion information of the second surrounding block to point to the first frame of List0 and the first frame of List1 respectively, and assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the second surrounding block is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

20. The method according to any of claims 1-19, wherein said parsing index information of enhanced temporal motion vector prediction mode from the bitstream of the current block comprises:

when it is determined that the current block enables the enhanced temporal motion vector prediction technique, parsing the index information of the enhanced temporal motion vector prediction mode from the code stream of the current block.

21. The method of claim 20, wherein whether the current block enables the enhanced temporal motion vector prediction technique is indicated using a sequence parameter set level syntax or a slice level syntax.

22. The method of claim 21, wherein, when a sequence parameter set level syntax is used to indicate whether the current block enables the enhanced temporal motion vector prediction technique, whether the current block enables the enhanced temporal motion vector prediction technique is determined by:

when the image sequence to which the current block belongs enables the enhanced temporal motion vector prediction technology, determining that the current block enables the enhanced temporal motion vector prediction technology;

when the image sequence to which the current block belongs does not enable the enhanced temporal motion vector prediction technology, determining that the current block does not enable the enhanced temporal motion vector prediction technology.

23. The method of claim 21, wherein when using a slice-level syntax to indicate whether the current block enables an enhanced temporal motion vector prediction technique, whether the current block enables the enhanced temporal motion vector prediction technique is determined by:

determining that the current block enables an enhanced temporal motion vector prediction technique when the slice to which the current block belongs enables the enhanced temporal motion vector prediction technique;

when the enhanced temporal motion vector prediction technique is not enabled for the slice to which the current block belongs, determining that the enhanced temporal motion vector prediction technique is not enabled for the current block.

24. The method of claim 20, wherein whether the current block enables an enhanced temporal motion vector prediction technique is determined by:

when the size of the current block is smaller than or equal to the size of a preset maximum coding block and larger than or equal to the size of a preset minimum coding block, determining that the current block enables an enhanced time domain motion vector prediction technology;

and when the size of the current block is larger than the size of the preset maximum coding block or smaller than the size of the preset minimum coding block, determining that the enhanced temporal motion vector prediction technology is not enabled for the current block.
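Claims 20-24 gate the technique on a sequence- or slice-level flag and on the current block's size lying between preset minimum and maximum coding block sizes. A sketch of that combined check follows; the placeholder thresholds and the width/height interpretation of "size" are assumptions.

```python
def etmvp_enabled_for_block(block_w, block_h, sps_enabled=True, slice_enabled=True,
                            max_size=64, min_size=8):
    """Combined enabling condition suggested by claims 20-24 (illustrative only)."""
    if not (sps_enabled and slice_enabled):      # sequence-/slice-level switches (claims 21-23)
        return False
    # size window of claim 24, interpreting "size" as applying to both dimensions
    return max(block_w, block_h) <= max_size and min(block_w, block_h) >= min_size

print(etmvp_enabled_for_block(32, 16))     # True with the placeholder thresholds
print(etmvp_enabled_for_block(128, 128))   # False: larger than the assumed maximum
```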

25. The method of claim 24, wherein the size of the preset maximum coding block is indicated using a sequence parameter set level syntax or a slice level syntax;

and/or,

the size of the preset minimum coding block is indicated using a sequence parameter set level syntax or a slice level syntax.

26. A method of encoding, comprising:

determining a matching block of a current block based on a first surrounding block of the current block;

determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a first temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

traversing each candidate enhanced temporal motion vector prediction mode in the first temporal candidate mode list, and for any traversed candidate enhanced temporal motion vector prediction mode, determining motion information of each sub-block in the current block based on that candidate enhanced temporal motion vector prediction mode, and performing motion compensation on each sub-block in the current block based on the motion information of each sub-block in the current block;

determining the candidate enhanced temporal motion vector prediction mode with the minimum rate distortion cost as the enhanced temporal motion vector prediction mode of the current block based on the rate distortion cost corresponding to each candidate enhanced temporal motion vector prediction mode;

And carrying index information of the enhanced temporal motion vector prediction mode of the current block in a code stream of the current block, wherein the index information is used for identifying the position of the enhanced temporal motion vector prediction mode in the first temporal candidate mode list.
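On the encoding side, claim 26 amounts to a rate-distortion search over the first temporal candidate mode list; the position of the winning mode is the index information written to the code stream. In the sketch below, rd_cost is a hypothetical callable covering the per-mode sub-block motion derivation, motion compensation and cost measurement.

```python
def select_etmvp_mode(candidate_modes, rd_cost):
    """Return the index (within the first temporal candidate mode list) of the candidate
    enhanced temporal motion vector prediction mode with the smallest rate-distortion cost."""
    best_index, best_cost = 0, float("inf")
    for index, mode in enumerate(candidate_modes):
        cost = rd_cost(mode)       # derive sub-block motion, motion-compensate, measure RD cost
        if cost < best_cost:
            best_index, best_cost = index, cost
    return best_index              # index information carried in the code stream of the block
```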

27. A decoding-side device, comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being configured to execute the machine-executable instructions to implement the method of any one of claims 1-25.

28. An encoding-side device, comprising a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being configured to execute the machine-executable instructions to implement the method of claim 26.

Technical Field

The present application relates to video encoding and decoding technologies, and in particular, to a decoding method, an encoding method, an apparatus, and a device.

Background

A complete video coding process generally includes operations such as prediction, transform, quantization, entropy coding and filtering. Prediction can be divided into intra-frame prediction and inter-frame prediction: intra-frame prediction uses surrounding coded blocks as references to predict the current uncoded block, effectively removing spatial redundancy; inter-frame prediction uses neighboring coded pictures to predict the current picture, effectively removing temporal redundancy.

The Alternative Temporal Motion Vector Prediction (ATMVP) technique adopted in the Versatile Video Coding (VVC) standard uses the motion information of temporal sub-blocks to provide different motion information for each sub-block in the current coding unit. The method finds, according to the motion information of a neighboring block of the current coding block, the coding block in the co-located frame that corresponds to the current coding block, and then provides the motion information of each sub-block inside that corresponding coding block to the corresponding sub-block of the current coding block.
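For comparison with the enhanced scheme of this application, the core ATMVP idea described above can be sketched as follows; colocated_motion(x, y) is a hypothetical lookup into the co-located frame's motion field, and the sub-block handling is simplified.

```python
def atmvp_subblock_motion(cur_x, cur_y, block_w, block_h, neighbour_mv,
                          colocated_motion, subblock=8):
    """Shift the current block by a spatial neighbour's motion vector to find the corresponding
    block in the co-located frame, then copy that block's per-sub-block motion information."""
    base_x = cur_x + neighbour_mv[0]
    base_y = cur_y + neighbour_mv[1]
    motions = {}
    for dy in range(0, block_h, subblock):
        for dx in range(0, block_w, subblock):
            # motion of the co-located sub-block is given to the corresponding current sub-block
            motions[(dx, dy)] = colocated_motion(base_x + dx, base_y + dy)
    return motions
```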

However, in the conventional ATMVP technique, finding the coding block in the co-located frame that corresponds to the current coding block depends on the motion information of the surrounding blocks of the current coding block; if the motion information of the surrounding blocks is inaccurate, the motion information of the corresponding coding block that is found is unreliable, which affects coding performance.

Disclosure of Invention

In view of the above, the present application provides a decoding method, an encoding method, an apparatus and a device.

Specifically, the method is realized through the following technical scheme:

according to a first aspect of embodiments of the present application, there is provided a decoding method, including:

acquiring a code stream of a current block, and parsing index information of an enhanced temporal motion vector prediction mode from the code stream of the current block, wherein the index information is used for identifying the position of the enhanced temporal motion vector prediction mode in a first temporal candidate mode list constructed by an encoding-side device;

determining a matching block of the current block based on a first surrounding block of the current block;

determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a second temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

determining an enhanced temporal motion vector prediction mode from the second temporal candidate mode list based on the index information;

and determining motion information of each sub-block in the current block based on the enhanced temporal motion vector prediction mode, and performing motion compensation on each sub-block in the current block based on the motion information of each sub-block in the current block.

In some embodiments, said determining a matching block for the current block based on a first surrounding block of the current block comprises:

determining motion information of a first stage based on the first peripheral block;

determining a matching block for the current block based on the motion information of the first stage.

In some embodiments, the determining motion information of the first stage based on the first peripheral block includes:

determining motion information of a first stage based on the forward motion information and/or the backward motion information of the first peripheral block.

In some embodiments, the determining motion information of the first stage based on the forward motion information and/or backward motion information of the first peripheral block includes:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If the backward motion information of the first peripheral block is not available but the forward motion information is available, scaling the forward motion information of the first peripheral block to point to a first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.

In some embodiments, the determining motion information of the first stage based on the forward motion information and/or backward motion information of the first peripheral block includes:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first surrounding block is available but the forward motion information of the first surrounding block does not point to the first frame in List0, scaling the forward motion information of the first surrounding block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, with the reference frame index being the index of the first frame in List 0;

If the forward motion information of the first surrounding block is not available, but the backward motion information is available, scaling the backward motion information of the first surrounding block to point to a first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 0;

the reference direction of the motion information in the first stage is the List0 direction.

In some embodiments, the determining motion information of the first stage based on the forward motion information and/or backward motion information of the first peripheral block includes:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If the backward motion information of the first peripheral block is not available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.

In some embodiments, the determining motion information of the first stage based on the forward motion information and/or backward motion information of the first peripheral block includes:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first surrounding block is available but the forward motion information of the first surrounding block does not point to the first frame in List0, scaling the forward motion information of the first surrounding block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, with the reference frame index being the index of the first frame in List 0;

determining that the motion vector of the first stage is 0 and the reference frame index is an index of a first frame in List0 if the forward motion information of the first peripheral block is not available;

The reference direction of the motion information in the first stage is the List0 direction.

In some embodiments, the determining motion information of the first stage based on the first peripheral block includes:

determining that the motion vector of the first stage is 0, the reference frame index is an index of a first frame in List0, and the reference direction of the motion information of the first stage is a List0 direction;

or, alternatively,

it is determined that the motion vector of the first stage is 0, the reference frame index is an index of a first frame in List1, and the reference direction of the motion information of the first stage is a List1 direction.

In some embodiments, said determining a matching block for the current block based on the motion information of the first stage comprises:

and determining a matching block of the current block based on the position of the current block, the horizontal motion vector, the vertical motion vector and the precision of the motion vector in the first stage.

In some embodiments, said determining a matching block for the current block based on the motion information of the first stage comprises:

and determining a matching block of the current block based on the position of the current block, the horizontal motion vector and the vertical motion vector of the first stage, the precision of the motion vector and the size of the sub-block.

In some embodiments, a new matching block obtained by offsetting the matching block is determined by:

pruning the first sub-block and the second sub-block to be within the range of the current coding tree unit CTU, and comparing the motion information of the pruned first sub-block and second sub-block; and pruning the third sub-block and the fourth sub-block to be within the range of the current CTU, comparing the motion information of the pruned third sub-block and fourth sub-block, and if at least one of the two comparison results is different in motion information, horizontally shifting the matching block by one unit to the right to obtain a new matching block;

pruning the fifth sub-block and the sixth sub-block to be within the range of the current CTU, and comparing the motion information of the fifth sub-block and the sixth sub-block after pruning; and pruning the seventh sub-block and the eighth sub-block to be within the range of the current CTU, comparing the motion information of the seventh sub-block and the eighth sub-block after pruning, and if at least one of the two comparison results is different in motion information, horizontally shifting the matching block by one unit to the left to obtain a new matching block;

pruning the first sub-block and the ninth sub-block to be within the range of the current CTU, and comparing the motion information of the pruned first sub-block and ninth sub-block; and pruning the fifth sub-block and the tenth sub-block to be within the range of the current CTU, comparing the motion information of the fifth sub-block and the tenth sub-block after pruning, and if at least one comparison result of the two comparison results is different in motion information, vertically and downwards offsetting the matching block by one unit to obtain a new matching block;

Pruning the third sub-block and the eleventh sub-block to be within the range of the current CTU, and comparing the motion information of the pruned third sub-block and eleventh sub-block; and pruning the seventh sub-block and the twelfth sub-block to be within the range of the current CTU, comparing the motion information of the seventh sub-block and the twelfth sub-block after pruning, and if at least one of the two comparison results is different in motion information, vertically and upwardly offsetting the matching block by one unit to obtain a new matching block;

wherein the first sub-block is the sub-block at the upper left corner of the matching block, the second sub-block is the adjacent sub-block at the top right corner of the matching block, the third sub-block is the sub-block at the lower left corner of the matching block, the fourth sub-block is the adjacent sub-block at the bottom right corner of the matching block, the fifth sub-block is the sub-block at the upper right corner of the matching block, the sixth sub-block is the adjacent sub-block at the top left corner of the matching block, the seventh sub-block is the sub-block at the lower right corner of the matching block, the eighth sub-block is the adjacent sub-block at the bottom left corner of the matching block, the ninth sub-block is the leftmost adjacent sub-block directly below the matching block, the tenth sub-block is the rightmost adjacent sub-block directly below the matching block, the eleventh sub-block is the leftmost adjacent sub-block directly above the matching block, and the twelfth sub-block is the rightmost adjacent sub-block directly above the matching block; one unit is the side length of a sub-block.

In some embodiments, a new matching block obtained by offsetting the matching block is determined by:

offsetting the matching block in the horizontal direction and in the vertical direction based on one or more offset pairs, respectively, to obtain one or more new matching blocks.

In some embodiments, after determining the matching block for the current block, further comprising:

pruning the matching block to be within the range of the current CTU.

In some embodiments, a new matching block obtained by offsetting the matching block is determined by:

when the right boundary of the trimmed matching block is not located at the right boundary position of the current CTU, comparing the motion information of the thirteenth sub-block and the fourteenth sub-block, and comparing the motion information of the fifteenth sub-block and the sixteenth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the trimmed matching block by one unit to the right to obtain a new matching block;

when the left boundary of the trimmed matching block is not located at the left boundary position of the current CTU, comparing the motion information of the seventeenth sub-block and the eighteenth sub-block, and comparing the motion information of the nineteenth sub-block and the twentieth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the trimmed matching block by one unit to the left to obtain a new matching block;

When the lower boundary of the trimmed matching block is not located at the position of the lower boundary of the current CTU, comparing the motion information of the thirteenth sub-block with the motion information of the twenty-first sub-block, and comparing the motion information of the seventeenth sub-block with the motion information of the twenty-second sub-block, if at least one of the two comparison results is that the motion information is different, vertically and downwards shifting the trimmed matching block by one unit to obtain a new matching block;

when the upper boundary of the trimmed matching block is not located at the upper boundary position of the current CTU, comparing the motion information of the fifteenth sub-block with the motion information of the twenty-third sub-block, and comparing the motion information of the nineteenth sub-block with the motion information of the twenty-fourth sub-block, if at least one of the two comparison results is that the motion information is different, vertically and upwardly shifting the trimmed matching block by one unit to obtain a new matching block;

wherein the thirteenth sub-block is the sub-block at the top left corner of the trimmed matching block, the fourteenth sub-block is the adjacent sub-block at the top right corner of the trimmed matching block, the fifteenth sub-block is the sub-block at the bottom left corner of the trimmed matching block, the sixteenth sub-block is the adjacent sub-block at the bottom right corner of the trimmed matching block, the seventeenth sub-block is the sub-block at the top right corner of the trimmed matching block, the eighteenth sub-block is the adjacent sub-block at the top left corner of the trimmed matching block, the nineteenth sub-block is the sub-block at the bottom right corner of the trimmed matching block, the twentieth sub-block is the adjacent sub-block at the bottom left corner of the trimmed matching block, the twenty-first sub-block is the adjacent sub-block at the left corner below the trimmed matching block, the twenty-second sub-block is the adjacent sub-block at the right corner below the trimmed matching block, the twenty-third sub-block is an adjacent sub-block on the leftmost side right above the trimmed matching block, and the twenty-fourth sub-block is an adjacent sub-block on the rightmost side right above the trimmed matching block; one unit is the side length of the subblock.

In some embodiments, the determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block comprises:

when at least one new matching block exists, determining a prediction mode corresponding to the matching block before shifting and a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or, alternatively,

when at least one new matching block exists, determining a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or, alternatively,

and when no new matching block exists, determining the prediction mode corresponding to the matching block before shifting as a candidate enhanced temporal motion vector prediction mode.

In some embodiments, said determining motion information for each sub-block within said current block based on said enhanced temporal motion vector prediction mode comprises:

for any sub-block in the target matching block, pruning the sub-block to be within the range of the current CTU; the target matching block is a matching block corresponding to the enhanced temporal motion vector prediction mode;

if the forward motion information and the backward motion information of the pruned sub-block are both available, scaling the forward motion information and the backward motion information of the pruned sub-block to point to the first frame of List0 and the first frame of List1 respectively, and assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

if the forward motion information of the pruned sub-block is available but the backward motion information is not available, scaling the forward motion information of the pruned sub-block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the pruned sub-block is available but the forward motion information is not available, scaling the backward motion information of the pruned sub-block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.

In some embodiments, the determining motion information for each sub-block within the current block based on the enhanced temporal motion vector prediction mode further comprises:

if the forward motion information and the backward motion information of the pruned subblock are unavailable, pruning the center position of the target matching block to be within the range of the current CTU, respectively stretching the forward motion information and the backward motion information of the center position of the pruned target matching block to a first frame pointing to List0 and a first frame pointing to List1 when both the forward motion information and the backward motion information of the center position of the pruned target matching block are available, and respectively endowing the stretched forward motion information and the stretched backward motion information to the subblock at the corresponding position of the current block; when the forward motion information of the center position of the pruned target matching block is available but the backward motion information is not available, scaling the forward motion information of the center position of the pruned target matching block to a first frame pointing to a List0, and assigning the scaled forward motion information to a sub-block at a corresponding position of the current block; when the backward motion information of the center position of the pruned target matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the pruned target matching block to a first frame pointing to a List1, and assigning the scaled backward motion information to a sub-block at a corresponding position of the current block; when the forward motion information and the backward motion information of the center position of the trimmed target matching block are unavailable, giving zero motion information to the sub-block at the position corresponding to the current block;

Or if the forward motion information and the backward motion information of the pruned sub-blocks are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block;

or, if the forward motion information and the backward motion information of the pruned sub-block are unavailable, when the forward motion information and the backward motion information of the second surrounding block of the current block are both available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to a first frame pointing to List0 and a first frame pointing to List1, and respectively assigning the scaled forward motion information and the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to a first frame pointing to List0, and giving the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to a first frame pointing to List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when the forward motion information and the backward motion information of the second surrounding block are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block.
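
To make the above derivation concrete, the following is a minimal Python sketch of the per-sub-block rule: scale whatever motion information is available towards the first frame of List0 / List1, and otherwise let the caller fall back to the center position, the second surrounding block, or zero motion information. All names (MotionInfo, scale_mv, derive_subblock_motion) and the simplified POC-distance scaling are illustrative assumptions, not the codec's actual API.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class MotionInfo:                      # hypothetical container, not the real codec type
        mv: Tuple[int, int]                # (mv_x, mv_y)
        ref_poc: int                       # POC of the frame this MV points to

    def scale_mv(mi: MotionInfo, cur_poc: int, dst_ref_poc: int) -> MotionInfo:
        """Scale an MV by the ratio of temporal distances (simplified)."""
        src_dist = cur_poc - mi.ref_poc
        dst_dist = cur_poc - dst_ref_poc
        if src_dist == 0:
            return MotionInfo(mi.mv, dst_ref_poc)
        f = dst_dist / src_dist
        return MotionInfo((round(mi.mv[0] * f), round(mi.mv[1] * f)), dst_ref_poc)

    def derive_subblock_motion(fwd: Optional[MotionInfo], bwd: Optional[MotionInfo],
                               cur_poc: int, list0_first_poc: int, list1_first_poc: int):
        """Motion information assigned to the sub-block at the corresponding position of the current block."""
        if fwd is not None and bwd is not None:
            return (scale_mv(fwd, cur_poc, list0_first_poc),
                    scale_mv(bwd, cur_poc, list1_first_poc))
        if fwd is not None:
            return (scale_mv(fwd, cur_poc, list0_first_poc), None)
        if bwd is not None:
            return (None, scale_mv(bwd, cur_poc, list1_first_poc))
        return None   # caller falls back: center position, second surrounding block, or zero MV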

In some embodiments, said determining motion information for each sub-block within said current block based on said enhanced temporal motion vector prediction mode comprises:

for any sub-block in the target matching block, if the forward motion information and the backward motion information of the sub-block are both available, respectively scaling the forward motion information and the backward motion information of the sub-block to point to the first frame of List0 and the first frame of List1, and respectively giving the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

if the forward motion information of the sub-block is available, but the backward motion information is not available, the forward motion information of the sub-block is scaled to the first frame pointing to List0, and the scaled forward motion information is given to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block is available but the forward motion information is not available, the backward motion information of the sub-block is scaled to the first frame pointing to the List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

In some embodiments, the determining motion information for each sub-block within the current block based on the enhanced temporal motion vector prediction mode further comprises:

If the forward motion information and the backward motion information of the subblock are unavailable, respectively scaling the forward motion information and the backward motion information of the central position of the target matching block to a first frame pointing to List0 and a first frame pointing to List1 when the forward motion information and the backward motion information of the central position of the target matching block are both available, and respectively assigning the scaled forward motion information and the scaled backward motion information to the subblock at the corresponding position of the current block; when the forward motion information of the center position of the target matching block is available, but the backward motion information is not available, scaling the forward motion information of the center position of the target matching block to a first frame pointing to the List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the target matching block to a first frame pointing to the List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information and the backward motion information of the central position of the target matching block are unavailable, giving zero motion information to the subblock at the corresponding position of the current block;

Or if the forward motion information and the backward motion information of the sub-block are unavailable, giving zero motion information to the sub-block at the position corresponding to the current block;

or, if the forward motion information and the backward motion information of the sub-block are unavailable, when the forward motion information and the backward motion information of the second surrounding block of the current block are available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to a first frame pointing to List0 and a first frame pointing to List1, and respectively assigning the scaled forward motion information and the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to a first frame pointing to List0, and giving the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to a first frame pointing to List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when the forward motion information and the backward motion information of the second surrounding block are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block.

In some embodiments, the parsing the index information of the enhanced temporal motion vector prediction mode from the code stream of the current block includes:

and when the current block is determined to enable the enhanced time domain motion vector prediction technology, analyzing index information of an enhanced time domain motion vector prediction mode from a code stream of the current block.

In some embodiments, whether the current block enables the enhanced temporal motion vector prediction technique is indicated using a sequence parameter set level syntax or a slice level syntax.

In some embodiments, when the sequence parameter set level syntax is used to indicate whether the current block enables the enhanced temporal motion vector prediction technique, whether the current block enables the enhanced temporal motion vector prediction technique is determined by:

when the image sequence to which the current block belongs enables the enhanced temporal motion vector prediction technology, determining that the current block enables the enhanced temporal motion vector prediction technology;

when the image sequence to which the current block belongs does not enable the enhanced temporal motion vector prediction technology, determining that the current block does not enable the enhanced temporal motion vector prediction technology.

In some embodiments, when the slice-level syntax is used to indicate whether the current block enables the enhanced temporal motion vector prediction technique, whether the current block enables the enhanced temporal motion vector prediction technique is determined by:

Determining that the current block enables an enhanced temporal motion vector prediction technique when the slice to which the current block belongs enables the enhanced temporal motion vector prediction technique;

when the enhanced temporal motion vector prediction technique is not enabled for the slice to which the current block belongs, determining that the enhanced temporal motion vector prediction technique is not enabled for the current block.

In some embodiments, whether the current block enables an enhanced temporal motion vector prediction technique is determined by:

when the size of the current block is smaller than or equal to the size of a preset maximum coding block and larger than or equal to the size of a preset minimum coding block, determining that the current block enables an enhanced time domain motion vector prediction technology;

and when the size of the current block is larger than the size of a preset maximum coding block or smaller than the size of a preset minimum coding block, determining that the enhanced temporal motion vector prediction technology is not started for the current block.
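
A minimal sketch of this size check, assuming the preset maximum and minimum coding block sizes are given as plain integers (the default values shown are illustrative assumptions, not values required by the text):

    def etmvp_enabled_by_size(width: int, height: int,
                              max_cb_size: int = 64, min_cb_size: int = 8) -> bool:
        # enabled only when the block is no larger than the preset maximum
        # and no smaller than the preset minimum coding block size
        within_upper = width <= max_cb_size and height <= max_cb_size
        within_lower = width >= min_cb_size and height >= min_cb_size
        return within_upper and within_lower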

In some embodiments, the size of the preset maximum coding block is represented using a sequence parameter set level syntax or using a slice level syntax;

and/or,

the size of the preset minimum coding block is represented by using a sequence parameter set level syntax or by using a slice level syntax.

According to a second aspect of embodiments of the present application, there is provided an encoding method, including:

determining a matching block of a current block based on a first peripheral block of the current block;

determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a first temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

traversing each candidate enhanced temporal motion vector prediction mode in the first temporal candidate mode list, and for any candidate enhanced temporal motion vector prediction mode, determining the motion information of each sub-block in the current block based on the candidate enhanced temporal motion vector prediction mode, and performing motion compensation on each sub-block in the current block based on the motion information of each sub-block in the current block;

determining the candidate enhanced temporal motion vector prediction mode with the minimum rate distortion cost as the enhanced temporal motion vector prediction mode of the current block based on the rate distortion cost corresponding to each candidate enhanced temporal motion vector prediction mode;

and carrying index information of the enhanced temporal motion vector prediction mode of the current block in a code stream of the current block, wherein the index information is used for identifying the position of the enhanced temporal motion vector prediction mode in the first temporal candidate mode list.
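
A sketch of the encoder-side selection loop described by this second aspect; rd_cost and write_index are hypothetical helpers standing in for the real rate-distortion measurement and bitstream-writing routines:

    def choose_etmvp_mode(candidate_mode_list, rd_cost, write_index):
        """Try every candidate mode, keep the cheapest, and signal its index in the list."""
        best_idx, best_cost = 0, float("inf")
        for idx, mode in enumerate(candidate_mode_list):
            cost = rd_cost(mode)        # motion compensation + residual + signalling cost
            if cost < best_cost:
                best_idx, best_cost = idx, cost
        write_index(best_idx)           # index = position of the mode in the first temporal candidate mode list
        return candidate_mode_list[best_idx]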

In some embodiments, the determining a matching block for the current block based on the first peripheral block of the current block comprises:

determining motion information of a first stage based on the first peripheral block;

determining a matching block for the current block based on the motion information of the first stage.

In some embodiments, the determining motion information of the first stage based on the first peripheral block includes:

determining motion information of a first stage based on the forward motion information and/or the backward motion information of the first peripheral block.

In some embodiments, the determining motion information of the first stage based on the forward motion information and/or backward motion information of the first peripheral block includes:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If the backward motion information of the first peripheral block is not available but the forward motion information is available, scaling the forward motion information of the first peripheral block to point to a first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.
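
A sketch of this List1 variant of the first-stage motion information; fwd and bwd are hypothetical (mv_x, mv_y, ref_poc) tuples for the first surrounding block (or None when unavailable), and the POC-distance scaling is a simplification made only for illustration:

    def first_stage_motion_list1(fwd, bwd, cur_poc, list1_first_poc):
        """Return (motion vector, reference frame index, reference direction) for the first stage."""
        def scale(mv_x, mv_y, ref_poc):
            src, dst = cur_poc - ref_poc, cur_poc - list1_first_poc
            f = dst / src if src else 1.0
            return (round(mv_x * f), round(mv_y * f))
        if bwd is not None and bwd[2] == list1_first_poc:
            return ((bwd[0], bwd[1]), 0, "List1")     # already points to the first frame of List1
        if bwd is not None:
            return (scale(*bwd), 0, "List1")          # scale the backward motion information
        if fwd is not None:
            return (scale(*fwd), 0, "List1")          # fall back to scaled forward motion information
        return ((0, 0), 0, "List1")                   # zero motion vector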

In some embodiments, the determining motion information of the first stage based on the forward motion information and/or backward motion information of the first peripheral block includes:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first surrounding block is available but the forward motion information of the first surrounding block does not point to the first frame in List0, scaling the forward motion information of the first surrounding block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, with the reference frame index being the index of the first frame in List 0;

If the forward motion information of the first surrounding block is not available, but the backward motion information is available, scaling the backward motion information of the first surrounding block to point to a first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 0;

the reference direction of the motion information in the first stage is the List0 direction.

In some embodiments, the determining motion information of the first stage based on the forward motion information and/or backward motion information of the first peripheral block includes:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If the backward motion information of the first peripheral block is not available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.

In some embodiments, the determining motion information of the first stage based on the forward motion information and/or backward motion information of the first peripheral block includes:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first surrounding block is available but the forward motion information of the first surrounding block does not point to the first frame in List0, scaling the forward motion information of the first surrounding block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, with the reference frame index being the index of the first frame in List 0;

determining that the motion vector of the first stage is 0 and the reference frame index is an index of a first frame in List0 if the forward motion information of the first peripheral block is not available;

The reference direction of the motion information in the first stage is the List0 direction.

In some embodiments, the determining motion information of the first stage based on the first peripheral block includes:

determining that the motion vector of the first stage is 0, the reference frame index is an index of a first frame in List0, and the reference direction of the motion information of the first stage is a List0 direction;

or,

it is determined that the motion vector of the first stage is 0, the reference frame index is an index of a first frame in List1, and the reference direction of the motion information of the first stage is a List1 direction.

In some embodiments, said determining a matching block for the current block based on the motion information of the first stage comprises:

and determining a matching block of the current block based on the position of the current block, the horizontal motion vector, the vertical motion vector and the precision of the motion vector in the first stage.

In some embodiments, said determining a matching block for the current block based on the motion information of the first stage comprises:

and determining a matching block of the current block based on the position of the current block, the horizontal motion vector and the vertical motion vector of the first stage, the precision of the motion vector and the size of the sub-block.
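
A sketch of locating the matching block from the first-stage motion information. The motion-vector precision shift and the alignment to the sub-block grid are assumptions made only for illustration (quarter-pel MV storage, 8x8 sub-blocks); they are not the normative formula:

    def matching_block_position(cur_x, cur_y, mv_x, mv_y,
                                mv_precision_shift=2, subblock_size=8):
        ref_x = cur_x + (mv_x >> mv_precision_shift)   # integer-pel horizontal displacement
        ref_y = cur_y + (mv_y >> mv_precision_shift)   # integer-pel vertical displacement
        # align the matching block to the sub-block grid so it covers whole sub-blocks
        return ((ref_x // subblock_size) * subblock_size,
                (ref_y // subblock_size) * subblock_size)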

In some embodiments, a new matching block obtained by offsetting the matching block is determined by:

pruning the first sub-block and the second sub-block to be within the range of the current coding tree unit CTU, and comparing the motion information of the pruned first sub-block and second sub-block; and pruning the third sub-block and the fourth sub-block to be within the range of the current CTU, comparing the motion information of the pruned third sub-block and fourth sub-block, and if at least one of the two comparison results is different in motion information, horizontally shifting the matching block by one unit to the right to obtain a new matching block;

pruning the fifth sub-block and the sixth sub-block to be within the range of the current CTU, and comparing the motion information of the fifth sub-block and the sixth sub-block after pruning; and pruning the seventh sub-block and the eighth sub-block to be within the range of the current CTU, comparing the motion information of the seventh sub-block and the eighth sub-block after pruning, and if at least one of the two comparison results is different in motion information, horizontally shifting the matching block by one unit to the left to obtain a new matching block;

pruning the first sub-block and the ninth sub-block to be within the range of the current CTU, and comparing the motion information of the pruned first sub-block and ninth sub-block; and pruning the fifth sub-block and the tenth sub-block to be within the range of the current CTU, comparing the motion information of the fifth sub-block and the tenth sub-block after pruning, and if at least one comparison result of the two comparison results is different in motion information, vertically and downwards offsetting the matching block by one unit to obtain a new matching block;

Pruning the third sub-block and the eleventh sub-block to be within the range of the current CTU, and comparing the motion information of the pruned third sub-block and eleventh sub-block; and pruning the seventh sub-block and the twelfth sub-block to be within the range of the current CTU, comparing the motion information of the seventh sub-block and the twelfth sub-block after pruning, and if at least one of the two comparison results is different in motion information, vertically and upwardly offsetting the matching block by one unit to obtain a new matching block;

the first sub-block is the sub-block at the upper left corner of the matching block, the second sub-block is the adjacent sub-block at the top right corner of the matching block, the third sub-block is the sub-block at the lower left corner of the matching block, the fourth sub-block is the adjacent sub-block at the bottom right corner of the matching block, the fifth sub-block is the sub-block at the upper right corner of the matching block, the sixth sub-block is the adjacent sub-block at the top left corner of the matching block, the seventh sub-block is the sub-block at the lower right corner of the matching block, the eighth sub-block is the adjacent sub-block at the bottom left corner of the matching block, the ninth sub-block is the leftmost adjacent sub-block directly below the matching block, the tenth sub-block is the rightmost adjacent sub-block directly below the matching block, the eleventh sub-block is the leftmost adjacent sub-block directly above the matching block, and the twelfth sub-block is the rightmost adjacent sub-block directly above the matching block; one unit is the side length of the sub-block.
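
A sketch of the offset test above: for each direction, two pairs of sub-block positions are compared after being clipped into the current CTU, and the matching block is shifted by one sub-block unit in that direction only when at least one pair differs. motion_info_at and clip_to_ctu are hypothetical helpers, and the caller supplies the per-direction sub-block pairs listed above:

    def offset_candidates(match_x, match_y, unit, pairs_by_direction,
                          motion_info_at, clip_to_ctu):
        """pairs_by_direction maps a direction (dx, dy) to the sub-block position pairs to compare."""
        new_blocks = []
        for (dx, dy), pairs in pairs_by_direction.items():
            differs = any(motion_info_at(clip_to_ctu(p)) != motion_info_at(clip_to_ctu(q))
                          for p, q in pairs)
            if differs:                                   # at least one comparison result differs
                new_blocks.append((match_x + dx * unit, match_y + dy * unit))
        return new_blocks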

In some embodiments, a new matching block obtained by offsetting the matching block is determined by:

and offsetting the matching block in the horizontal direction and the vertical direction based on one or more offset pairs, respectively, to obtain one or more new matching blocks.
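
A sketch of this offset-pair variant; the offset amounts shown are illustrative assumptions only:

    def offset_by_pairs(match_x, match_y,
                        offset_pairs=((8, 0), (-8, 0), (0, 8), (0, -8))):
        # each pair gives a horizontal and a vertical offset applied to the matching block
        return [(match_x + dx, match_y + dy) for dx, dy in offset_pairs]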

In some embodiments, after determining the matching block for the current block, further comprising:

pruning the matching block to be within the range of the current CTU.
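
A sketch of clamping the matching block into the current CTU; the 128x128 CTU size and the helper name are illustrative assumptions:

    def clip_block_to_ctu(x, y, block_w, block_h, ctu_x, ctu_y, ctu_size=128):
        # keep the whole matching block inside the current CTU
        clipped_x = max(ctu_x, min(x, ctu_x + ctu_size - block_w))
        clipped_y = max(ctu_y, min(y, ctu_y + ctu_size - block_h))
        return clipped_x, clipped_y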

In some embodiments, a new matching block obtained by offsetting the matching block is determined by:

when the right boundary of the trimmed matching block is not located at the right boundary position of the current CTU, comparing the motion information of the thirteenth sub-block and the fourteenth sub-block, and comparing the motion information of the fifteenth sub-block and the sixteenth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the trimmed matching block by one unit to the right to obtain a new matching block;

when the left boundary of the trimmed matching block is not located at the left boundary position of the current CTU, comparing the motion information of the seventeenth sub-block and the eighteenth sub-block, and comparing the motion information of the nineteenth sub-block and the twentieth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the trimmed matching block by one unit to the left to obtain a new matching block;

When the lower boundary of the trimmed matching block is not located at the position of the lower boundary of the current CTU, comparing the motion information of the thirteenth sub-block with the motion information of the twenty-first sub-block, and comparing the motion information of the seventeenth sub-block with the motion information of the twenty-second sub-block, if at least one of the two comparison results is that the motion information is different, vertically and downwards shifting the trimmed matching block by one unit to obtain a new matching block;

when the upper boundary of the trimmed matching block is not located at the upper boundary position of the current CTU, comparing the motion information of the fifteenth sub-block with the motion information of the twenty-third sub-block, and comparing the motion information of the nineteenth sub-block with the motion information of the twenty-fourth sub-block, if at least one of the two comparison results is that the motion information is different, vertically and upwardly shifting the trimmed matching block by one unit to obtain a new matching block;

wherein the thirteenth sub-block is the sub-block at the top left corner of the trimmed matching block, the fourteenth sub-block is the adjacent sub-block at the top right corner of the trimmed matching block, the fifteenth sub-block is the sub-block at the bottom left corner of the trimmed matching block, the sixteenth sub-block is the adjacent sub-block at the bottom right corner of the trimmed matching block, the seventeenth sub-block is the sub-block at the top right corner of the trimmed matching block, the eighteenth sub-block is the adjacent sub-block at the top left corner of the trimmed matching block, the nineteenth sub-block is the sub-block at the bottom right corner of the trimmed matching block, the twentieth sub-block is the adjacent sub-block at the bottom left corner of the trimmed matching block, the twenty-first sub-block is the leftmost adjacent sub-block directly below the trimmed matching block, the twenty-second sub-block is the rightmost adjacent sub-block directly below the trimmed matching block, the twenty-third sub-block is the leftmost adjacent sub-block directly above the trimmed matching block, and the twenty-fourth sub-block is the rightmost adjacent sub-block directly above the trimmed matching block; one unit is the side length of the sub-block.
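
A sketch of the boundary check used above: a direction is only tested (and the corresponding motion-information comparison only performed) when the trimmed matching block does not already touch that boundary of the current CTU; the 128x128 CTU size is again an illustrative assumption:

    def directions_to_test(clipped_x, clipped_y, block_w, block_h,
                           ctu_x, ctu_y, ctu_size=128):
        tests = []
        if clipped_x + block_w < ctu_x + ctu_size:   # right boundary of the block not at the CTU right boundary
            tests.append((1, 0))
        if clipped_x > ctu_x:                        # left boundary not at the CTU left boundary
            tests.append((-1, 0))
        if clipped_y + block_h < ctu_y + ctu_size:   # lower boundary not at the CTU lower boundary
            tests.append((0, 1))
        if clipped_y > ctu_y:                        # upper boundary not at the CTU upper boundary
            tests.append((0, -1))
        return tests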

In some embodiments, the determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block comprises:

when at least one new matching block exists, determining a prediction mode corresponding to the matching block before shifting and a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or,

when at least one new matching block exists, determining a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or,

and when no new matching block exists, determining the prediction mode corresponding to the matching block before shifting as a candidate enhanced temporal motion vector prediction mode.
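
A sketch of assembling the candidate enhanced temporal motion vector prediction modes into the temporal candidate mode list according to the three alternatives above; which alternative is used, and any de-duplication, are assumptions made only for illustration:

    def build_candidate_mode_list(original_mode, new_modes, keep_original_with_new=True):
        candidates = []
        if new_modes:                                 # at least one new matching block exists
            if keep_original_with_new:                # first alternative: original mode plus the new modes
                candidates.append(original_mode)
            for mode in new_modes:                    # second alternative when keep_original_with_new is False
                if mode not in candidates:
                    candidates.append(mode)
        else:                                         # no new matching block: only the original mode
            candidates.append(original_mode)
        return candidates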

In some embodiments, the determining motion information of each sub-block in the current block based on the candidate enhanced temporal motion vector prediction mode includes:

for any sub-block in the target candidate matching block, pruning the sub-block to be within the range of the current CTU; the target candidate matching block is a matching block corresponding to the candidate enhanced temporal motion vector prediction mode;

If the forward motion information and the backward motion information of the pruned subblock are both available, respectively scaling the forward motion information and the backward motion information of the pruned subblock to point to a first frame of List0 and a first frame of List1, and respectively endowing the scaled forward motion information and backward motion information to the subblock at the corresponding position of the current block;

if the forward motion information of the pruned sub-block is available but the backward motion information is not available, scaling the forward motion information of the pruned sub-block to a first frame pointing to List0, and giving the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the pruned sub-block is available but the forward motion information is not available, the backward motion information of the pruned sub-block is scaled to the first frame pointing to the List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

In some embodiments, the determining motion information of each sub-block in the current block based on the candidate enhanced temporal motion vector prediction mode further comprises:

if the forward motion information and the backward motion information of the pruned sub-block are unavailable, pruning the center position of the target candidate matching block to be within the range of the current CTU, respectively scaling the forward motion information and the backward motion information of the center position of the pruned target candidate matching block to a first frame pointing to List0 and a first frame pointing to List1 when both the forward motion information and the backward motion information of the center position of the pruned target candidate matching block are available, and respectively assigning the scaled forward motion information and the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the center position of the pruned target candidate matching block is available but the backward motion information is not available, scaling the forward motion information of the center position of the pruned target candidate matching block to a first frame pointing to List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the pruned target candidate matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the pruned target candidate matching block to a first frame pointing to List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information and the backward motion information of the center position of the pruned target candidate matching block are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block;

Or if the forward motion information and the backward motion information of the pruned sub-blocks are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block;

or, if the forward motion information and the backward motion information of the pruned sub-block are unavailable, when the forward motion information and the backward motion information of the second surrounding block of the current block are both available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to a first frame pointing to List0 and a first frame pointing to List1, and respectively assigning the scaled forward motion information and the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to a first frame pointing to List0, and giving the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to a first frame pointing to List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when the forward motion information and the backward motion information of the second surrounding block are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block.

In some embodiments, the determining motion information of each sub-block in the current block based on the candidate enhanced temporal motion vector prediction mode includes:

for any sub-block in the target candidate matching block, if the forward motion information and the backward motion information of the sub-block are both available, respectively scaling the forward motion information and the backward motion information of the sub-block to point to the first frame of List0 and the first frame of List1, and respectively giving the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

if the forward motion information of the sub-block is available, but the backward motion information is not available, the forward motion information of the sub-block is scaled to the first frame pointing to List0, and the scaled forward motion information is given to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block is available but the forward motion information is not available, the backward motion information of the sub-block is scaled to the first frame pointing to the List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

In some embodiments, the determining motion information of each sub-block in the current block based on the candidate enhanced temporal motion vector prediction mode further comprises:

If the forward motion information and the backward motion information of the subblock are unavailable, respectively scaling the forward motion information and the backward motion information of the center position of the target candidate matching block to a first frame pointing to List0 and a first frame pointing to List1 when the forward motion information and the backward motion information of the center position of the target candidate matching block are both available, and respectively giving the scaled forward motion information and the scaled backward motion information to the subblock at the corresponding position of the current block; when the forward motion information of the center position of the target candidate matching block is available but the backward motion information is not available, scaling the forward motion information of the center position of the target candidate matching block to a first frame pointing to the List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target candidate matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the target candidate matching block to a first frame pointing to List1 and assigning the scaled backward motion information to a sub-block at a corresponding position of the current block; when the forward motion information and the backward motion information of the central position of the target candidate matching block are unavailable, giving zero motion information to the subblock at the corresponding position of the current block;

Or if the forward motion information and the backward motion information of the sub-block are unavailable, giving zero motion information to the sub-block at the position corresponding to the current block;

or, if the forward motion information and the backward motion information of the sub-block are unavailable, when the forward motion information and the backward motion information of the second surrounding block of the current block are available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to a first frame pointing to List0 and a first frame pointing to List1, and respectively assigning the scaled forward motion information and the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to a first frame pointing to List0, and giving the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to a first frame pointing to List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when the forward motion information and the backward motion information of the second surrounding block are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block.

In some embodiments, the determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a first temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode includes:

when the current block enables an enhanced temporal motion vector prediction technology, a candidate enhanced temporal motion vector prediction mode is determined based on the matching block and a new matching block obtained by offsetting the matching block, and a first temporal candidate mode list is constructed based on the candidate enhanced temporal motion vector prediction mode.

In some embodiments, whether the current block enables the enhanced temporal motion vector prediction technique is indicated using a sequence parameter set level syntax or a slice level syntax.

In some embodiments, when the sequence parameter set level syntax is used to indicate whether the current block enables the enhanced temporal motion vector prediction technique, whether the current block enables the enhanced temporal motion vector prediction technique is determined by:

when the image sequence to which the current block belongs enables the enhanced temporal motion vector prediction technology, determining that the current block enables the enhanced temporal motion vector prediction technology;

When the image sequence to which the current block belongs does not enable the enhanced temporal motion vector prediction technology, determining that the current block does not enable the enhanced temporal motion vector prediction technology.

In some embodiments, when the slice-level syntax is used to indicate whether the current block enables the enhanced temporal motion vector prediction technique, whether the current block enables the enhanced temporal motion vector prediction technique is determined by:

determining that the current block enables an enhanced temporal motion vector prediction technique when the slice to which the current block belongs enables the enhanced temporal motion vector prediction technique;

when the enhanced temporal motion vector prediction technique is not enabled for the slice to which the current block belongs, determining that the enhanced temporal motion vector prediction technique is not enabled for the current block.

In some embodiments, whether the current block enables an enhanced temporal motion vector prediction technique is determined by:

when the size of the current block is smaller than or equal to the size of a preset maximum coding block and larger than or equal to the size of a preset minimum coding block, determining that the current block enables an enhanced time domain motion vector prediction technology;

and when the size of the current block is larger than the size of a preset maximum coding block or smaller than the size of a preset minimum coding block, determining that the enhanced temporal motion vector prediction technology is not started for the current block.

In some embodiments, the size of the preset maximum coding block is represented using a sequence parameter set level syntax or using a slice level syntax;

and/or,

the size of the preset minimum coding block is represented by using a sequence parameter set level syntax or by using a slice level syntax.

According to a third aspect of embodiments of the present application, there is provided a decoding apparatus including:

the acquisition unit is used for acquiring the code stream of the current block;

a decoding unit, configured to parse, from the code stream of the current block, index information of an enhanced temporal motion vector prediction mode, where the index information is used to identify a position of the enhanced temporal motion vector prediction mode in a first temporal candidate mode list constructed by a coding-end device;

a first determining unit configured to determine a matching block of the current block based on a first surrounding block of the current block;

a construction unit, configured to determine a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and construct a second temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

a second determining unit configured to determine an enhanced temporal motion vector prediction mode from the second temporal candidate mode list based on index information of the enhanced temporal motion vector prediction mode;

And the prediction unit is used for determining the motion information of each subblock in the current block based on the enhanced time domain motion vector prediction mode and performing motion compensation on each subblock in the current block based on the motion information of each subblock in the current block.

In some embodiments, the first determining unit is specifically configured to determine motion information of a first stage based on the first surrounding block; determining a matching block for the current block based on the motion information of the first stage.

In some embodiments, the first determining unit is specifically configured to determine the motion information of the first stage based on the forward motion information and/or the backward motion information of the first surrounding block.

In some embodiments, the first determining unit is specifically configured to:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If the backward motion information of the first peripheral block is not available but the forward motion information is available, scaling the forward motion information of the first peripheral block to point to a first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.

In some embodiments, the first determining unit is specifically configured to:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first peripheral block is available, but the forward motion information of the first peripheral block does not point to the first frame in List0, scaling the forward motion information of the first peripheral block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, with the reference frame index being the index of the first frame in List 0;

If the forward motion information of the first surrounding block is not available, but the backward motion information is available, scaling the backward motion information of the first surrounding block to point to a first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 0;

the reference direction of the motion information in the first stage is the List0 direction.

In some embodiments, the first determining unit is specifically configured to:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If the backward motion information of the first peripheral block is not available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.

In some embodiments, the first determining unit is specifically configured to:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first peripheral block is available, but the forward motion information of the first peripheral block does not point to the first frame in List0, scaling the forward motion information of the first peripheral block to point to the first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

determining that the motion vector of the first stage is 0 and the reference frame index is an index of a first frame in List0 if the forward motion information of the first peripheral block is not available;

the reference direction of the motion information in the first stage is the List0 direction.

In some embodiments, the first determining unit is specifically configured to:

determining that the motion vector of the first stage is 0, the reference frame index is an index of a first frame in List0, and the reference direction of the motion information of the first stage is a List0 direction;

or,

it is determined that the motion vector of the first stage is 0, the reference frame index is an index of the first frame in List1, and the reference direction of the motion information of the first stage is the List1 direction.

In some embodiments, the first determining unit is specifically configured to determine the matching block of the current block based on the position of the current block, the horizontal motion vector and the vertical motion vector of the first stage, and the precision of the motion vector.

In some embodiments, the first determining unit is specifically configured to determine the matching block of the current block based on the position of the current block, the horizontal motion vector and the vertical motion vector of the first stage, the precision of the motion vector, and the size of the sub-block.

In some embodiments, the building unit is specifically configured to:

Clip the first sub-block and the second sub-block to be within the range of the current CTU, and compare the motion information of the first sub-block and the second sub-block after the Clip; Clip the third sub-block and the fourth sub-block to be within the range of the current CTU, compare the motion information of the third sub-block and the fourth sub-block after the Clip, and if at least one of the two comparison results is that the motion information is different, horizontally shift the matching block by one unit to the right to obtain a new matching block;

Clip the fifth sub-block and the sixth sub-block to be within the range of the current CTU, and compare the motion information of the fifth sub-block and the sixth sub-block after the Clip; Clip the seventh sub-block and the eighth sub-block to be within the range of the current CTU, compare the motion information of the seventh sub-block and the eighth sub-block after the Clip, and if at least one of the two comparison results is that the motion information is different, horizontally shift the matching block by one unit to the left to obtain a new matching block;

Clip the first sub-block and the ninth sub-block to be within the range of the current CTU, and compare the motion information of the first sub-block and the ninth sub-block after the Clip; Clip the fifth sub-block and the tenth sub-block to be within the range of the current CTU, compare the motion information of the fifth sub-block and the tenth sub-block after the Clip, and if at least one of the two comparison results is that the motion information is different, vertically shift the matching block downward by one unit to obtain a new matching block;

Clip the third sub-block and the eleventh sub-block to be within the range of the current CTU, and compare the motion information of the third sub-block and the eleventh sub-block after the Clip; Clip the seventh sub-block and the twelfth sub-block to be within the range of the current CTU, compare the motion information of the seventh sub-block and the twelfth sub-block after the Clip, and if at least one of the two comparison results is that the motion information is different, vertically shift the matching block upward by one unit to obtain a new matching block;

The first sub-block is the sub-block at the upper left corner of the matching block, the second sub-block is the adjacent sub-block at the top right corner of the matching block, the third sub-block is the sub-block at the lower left corner of the matching block, the fourth sub-block is the adjacent sub-block at the bottom right corner of the matching block, the fifth sub-block is the sub-block at the upper right corner of the matching block, the sixth sub-block is the adjacent sub-block at the top left corner of the matching block, the seventh sub-block is the sub-block at the lower right corner of the matching block, the eighth sub-block is the adjacent sub-block at the bottom left corner of the matching block, the ninth sub-block is the leftmost adjacent sub-block directly below the matching block, the tenth sub-block is the rightmost adjacent sub-block directly below the matching block, the eleventh sub-block is the leftmost adjacent sub-block directly above the matching block, and the twelfth sub-block is the rightmost adjacent sub-block directly above the matching block; one unit is the side length of the sub-block.

In some embodiments, the building unit is specifically configured to perform horizontal and vertical shifting on the matching block based on one or more shift amount pairs, respectively, to obtain one or more new matching blocks.

In some embodiments, the building unit is specifically configured to Clip the matching block to be within the range of the current CTU.

In some embodiments, the building unit is specifically configured to:

when the right boundary of the matching block after the Clip is not positioned at the right boundary position of the current CTU, comparing the motion information of the thirteenth sub-block and the fourteenth sub-block, and comparing the motion information of the fifteenth sub-block and the sixteenth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the right to obtain a new matching block;

when the left boundary of the matching block after the Clip is not positioned at the left boundary position of the current CTU, comparing the motion information of the seventeenth sub-block and the eighteenth sub-block, and comparing the motion information of the nineteenth sub-block and the twentieth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the left to obtain a new matching block;

when the lower boundary of the matching block after the Clip is not positioned at the position of the lower boundary of the current CTU, comparing the motion information of the thirteenth sub-block and the twenty-first sub-block, and comparing the motion information of the seventeenth sub-block and the twenty-second sub-block, if at least one of the two comparison results is different in motion information, vertically and downwardly offsetting the matching block after the Clip by one unit to obtain a new matching block;

When the upper boundary of the matching block after the Clip is not positioned at the upper boundary position of the current CTU, comparing the motion information of the fifteenth sub-block and the twenty-third sub-block, and comparing the motion information of the nineteenth sub-block and the twenty-fourth sub-block, and if at least one of the two comparison results is different in motion information, vertically and upwardly shifting the matching block after the Clip by one unit to obtain a new matching block;

wherein, the thirteenth sub-block is the upper left corner sub-block of the matching block after the Clip, the fourteenth sub-block is the top right adjacent sub-block of the matching block after the Clip, the fifteenth sub-block is the lower left corner sub-block of the matching block after the Clip, the sixteenth sub-block is the bottom right adjacent sub-block of the matching block after the Clip, the seventeenth sub-block is the upper right corner sub-block of the matching block after the Clip, the eighteenth sub-block is the top left adjacent sub-block of the matching block after the Clip, the nineteenth sub-block is the lower right corner sub-block of the matching block after the Clip, the twentieth sub-block is the bottom left adjacent sub-block of the matching block after the Clip, the twenty-first sub-block is the leftmost adjacent sub-block directly below the matching block after the Clip, the twenty-second sub-block is the rightmost adjacent sub-block directly below the matching block after the Clip, the twenty-third sub-block is the leftmost adjacent sub-block directly above the matching block after the Clip, and the twenty-fourth sub-block is the rightmost adjacent sub-block directly above the matching block after the Clip; one unit is the side length of the sub-block.

In some embodiments, the building unit is specifically configured to:

when at least one new matching block exists, determining a prediction mode corresponding to the matching block before shifting and a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced time domain motion vector prediction mode;

or,

when at least one new matching block exists, determining a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or,

and when no new matching block exists, determining the prediction mode corresponding to the matching block before shifting as a candidate enhanced temporal motion vector prediction mode.

In some embodiments, the prediction unit is specifically configured to, for any sub-block in the target matching block, clip the sub-block to the range of the current CTU; the target matching block is a matching block corresponding to the enhanced temporal motion vector prediction mode;

if the forward motion information and the backward motion information of the clipped sub-block are both available, respectively scaling the forward motion information and the backward motion information of the clipped sub-block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

If the forward motion information of the clipped sub-block is available but the backward motion information is not available, scaling the forward motion information of the clipped sub-block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the clipped sub-block is available but the forward motion information is not available, scaling the backward motion information of the clipped sub-block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.
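
As a minimal sketch of the per-sub-block assignment described above, the following C-style fragment assumes the availability flags and motion information of the clipped sub-block have already been derived; ScaleToFirstFrame and AssignToCurSubBlk are hypothetical helper names.

typedef struct { int mvx, mvy, refIdx; } MotionInfo;     /* hypothetical layout */

typedef struct {
    int        fwdAvailable, bwdAvailable;               /* forward (List0) / backward (List1) availability */
    MotionInfo fwd, bwd;                                  /* motion information of the clipped sub-block */
} SubBlkMotion;

/* assumed helpers: scale motion information so that it points to the first frame of
 * List0 (listIdx = 0) or List1 (listIdx = 1), and write the result into the sub-block
 * at the corresponding position of the current block                                  */
extern MotionInfo ScaleToFirstFrame(MotionInfo mi, int listIdx);
extern void       AssignToCurSubBlk(int subBlkIdx, int listIdx, MotionInfo mi);

static void DeriveSubBlkMotion(int subBlkIdx, SubBlkMotion clipped)
{
    if (clipped.fwdAvailable)
        AssignToCurSubBlk(subBlkIdx, 0, ScaleToFirstFrame(clipped.fwd, 0));
    if (clipped.bwdAvailable)
        AssignToCurSubBlk(subBlkIdx, 1, ScaleToFirstFrame(clipped.bwd, 1));
    /* when neither direction is available, one of the fallbacks described in the
     * following embodiments (center position, second surrounding block, or zero
     * motion information) applies instead                                        */
}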

In some embodiments, the prediction unit is further configured to:

if the forward motion information and the backward motion information of the clipped sub-block are unavailable, clipping the center position of the target matching block to the range of the current CTU; when the forward motion information and the backward motion information of the center position of the clipped target matching block are both available, respectively scaling them to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the center position of the clipped target matching block is available but the backward motion information is not available, scaling the forward motion information to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the clipped target matching block is available but the forward motion information is not available, scaling the backward motion information to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; when neither the forward motion information nor the backward motion information of the center position of the clipped target matching block is available, assigning zero motion information to the sub-block at the corresponding position of the current block;

Or, if the forward motion information and the backward motion information of the clipped sub-block are unavailable, assigning zero motion information to the sub-block at the corresponding position of the current block;

or, if the forward motion information and the backward motion information of the clipped sub-block are unavailable: when the forward motion information and the backward motion information of the second surrounding block of the current block are both available, respectively scaling them to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the second surrounding block is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

In some embodiments, the prediction unit is specifically configured to, for any sub-block in the target matching block, if both forward motion information and backward motion information of the sub-block are available, scale forward motion information and backward motion information of the sub-block to a first frame of List0 and a first frame of List1, respectively, and assign the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block, respectively;

if the forward motion information of the sub-block is available but the backward motion information is not available, scaling the forward motion information of the sub-block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block is available but the forward motion information is not available, the backward motion information of the sub-block is scaled to the first frame pointing to the List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

In some embodiments, the prediction unit is further configured to:

if the forward motion information and the backward motion information of the subblock are unavailable, respectively scaling the forward motion information and the backward motion information of the central position of the target matching block to a first frame pointing to List0 and a first frame pointing to List1 when the forward motion information and the backward motion information of the central position of the target matching block are both available, and respectively assigning the scaled forward motion information and the scaled backward motion information to the subblock at the corresponding position of the current block; when the forward motion information of the center position of the target matching block is available, but the backward motion information is not available, scaling the forward motion information of the center position of the target matching block to a first frame pointing to the List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the target matching block to a first frame pointing to the List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information and the backward motion information of the central position of the target matching block are unavailable, giving zero motion information to the subblock at the corresponding position of the current block;

Or if the forward motion information and the backward motion information of the sub-block are unavailable, giving zero motion information to the sub-block at the position corresponding to the current block;

or, if the forward motion information and the backward motion information of the sub-block are unavailable: when the forward motion information and the backward motion information of the second surrounding block of the current block are both available, respectively scaling them to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the second surrounding block is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

In some embodiments, the decoding unit is specifically configured to, when it is determined that the enhanced temporal motion vector prediction technique is enabled for the current block, parse index information of an enhanced temporal motion vector prediction mode from a code stream of the current block.

In some embodiments, whether the current block enables the enhanced temporal motion vector prediction technique is indicated using a sequence parameter set level syntax or a Slice level syntax.

In some embodiments, when the sequence parameter set level syntax is used to indicate whether the current block enables an enhanced temporal motion vector prediction technique, the decoding unit is specifically configured to:

when the image sequence to which the current block belongs enables the enhanced temporal motion vector prediction technology, determining that the current block enables the enhanced temporal motion vector prediction technology;

when the image sequence to which the current block belongs does not enable the enhanced temporal motion vector prediction technology, determining that the current block does not enable the enhanced temporal motion vector prediction technology.

In some embodiments, when Slice-level syntax is used to indicate whether the current block enables an enhanced temporal motion vector prediction technique, the decoding unit is specifically configured to:

When Slice to which the current block belongs enables an enhanced temporal motion vector prediction technology, determining that the current block enables the enhanced temporal motion vector prediction technology;

when the Slice to which the current block belongs does not enable the enhanced temporal motion vector prediction technology, determining that the current block does not enable the enhanced temporal motion vector prediction technology.

In some embodiments, the decoding unit is specifically configured to:

when the size of the current block is smaller than or equal to the size of a preset maximum block and larger than or equal to the size of a preset minimum block, determining that the current block enables an enhanced temporal motion vector prediction technology;

when the size of the current block is larger than the size of a preset maximum block or smaller than the size of a preset minimum block, determining that the enhanced temporal motion vector prediction technology is not enabled for the current block.
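
A minimal sketch of this size check is given below, assuming that the "size" condition is checked against both the width and the height of the current block and that the preset values come from sequence parameter set level or Slice level syntax; the function name is hypothetical.

static int EtmvpEnabledBySize(int blkWidth, int blkHeight,
                              int presetMinSize, int presetMaxSize)
{
    int notTooLarge = (blkWidth <= presetMaxSize) && (blkHeight <= presetMaxSize);
    int notTooSmall = (blkWidth >= presetMinSize) && (blkHeight >= presetMinSize);
    return notTooLarge && notTooSmall;   /* 1: technique enabled, 0: not enabled */
}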

In some embodiments, the size of the preset maximum block is represented using a sequence parameter set level syntax or using a Slice level syntax;

and/or,

the size of the preset minimum block is expressed using a sequence parameter set level syntax or using a Slice level syntax.

According to a fourth aspect of embodiments of the present application, there is provided an encoding apparatus comprising:

A first determining unit, configured to determine a matching block of a current block based on a first peripheral block of the current block;

a construction unit, configured to determine a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and construct a first temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

a prediction unit, configured to traverse each candidate enhanced temporal motion vector prediction mode in the first temporal candidate mode list, determine, for any candidate enhanced temporal motion vector prediction mode, motion information of each subblock in the current block based on the candidate enhanced temporal motion vector prediction mode, and perform motion compensation on each subblock in the current block based on the motion information of each subblock in the current block;

a second determining unit, configured to determine, based on rate distortion costs corresponding to the candidate enhanced temporal motion vector prediction modes, a candidate enhanced temporal motion vector prediction mode with a smallest rate distortion cost as an enhanced temporal motion vector prediction mode of the current block;

and the encoding unit is used for carrying index information of the enhanced temporal motion vector prediction mode of the current block in a code stream of the current block, wherein the index information is used for identifying the position of the enhanced temporal motion vector prediction mode in the first temporal candidate mode list.

In some embodiments, the first determining unit is specifically configured to determine motion information of a first stage based on the first surrounding block; determining a matching block for the current block based on the motion information of the first stage.

In some embodiments, the first determining unit is specifically configured to determine the motion information of the first stage based on the forward motion information and/or the backward motion information of the first surrounding block.

In some embodiments, the first determining unit is specifically configured to:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if the backward motion information of the first peripheral block is not available but the forward motion information is available, scaling the forward motion information of the first peripheral block to point to a first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.
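
The List1 variant described above may be sketched as follows. The field names, the scaling helper MvScaleToFrame and the POC-based interface are assumptions for illustration; "the first frame in List1" corresponds to reference index 0 of List1.

typedef struct { int mvx, mvy; } Mv;

/* assumed helper: scale a motion vector from its original reference frame to a
 * target reference frame, based on the picture order counts involved            */
extern Mv MvScaleToFrame(Mv mv, int srcRefPoc, int dstRefPoc, int curPoc);

typedef struct {
    int fwdAvailable, bwdAvailable;
    Mv  fwdMv, bwdMv;
    int fwdRefPoc, bwdRefPoc;
} NeighborInfo;                         /* first surrounding block, hypothetical layout */

static void DeriveFirstStageMvL1(NeighborInfo nb, int curPoc, int list1FirstPoc,
                                 Mv *outMv, int *outRefIdx, int *outRefList)
{
    *outRefIdx  = 0;                    /* index of the first frame in List1 */
    *outRefList = 1;                    /* reference direction: List1 */
    if (nb.bwdAvailable && nb.bwdRefPoc == list1FirstPoc) {
        *outMv = nb.bwdMv;                                               /* use as-is */
    } else if (nb.bwdAvailable) {
        *outMv = MvScaleToFrame(nb.bwdMv, nb.bwdRefPoc, list1FirstPoc, curPoc);
    } else if (nb.fwdAvailable) {
        *outMv = MvScaleToFrame(nb.fwdMv, nb.fwdRefPoc, list1FirstPoc, curPoc);
    } else {
        outMv->mvx = 0;                                                  /* zero motion vector */
        outMv->mvy = 0;
    }
}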

In some embodiments, the first determining unit is specifically configured to:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first peripheral block is available, but the forward motion information of the first peripheral block does not point to the first frame in List0, scaling the forward motion information of the first peripheral block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, with the reference frame index being the index of the first frame in List 0;

if the forward motion information of the first surrounding block is not available, but the backward motion information is available, scaling the backward motion information of the first surrounding block to point to a first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

If neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 0;

the reference direction of the motion information in the first stage is the List0 direction.

In some embodiments, the first determining unit is specifically configured to:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if the backward motion information of the first peripheral block is not available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

The reference direction of the motion information in the first stage is the List1 direction.

In some embodiments, the first determining unit is specifically configured to:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first peripheral block is available, but the forward motion information of the first peripheral block does not point to the first frame in List0, scaling the forward motion information of the first peripheral block to point to the first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

determining that the motion vector of the first stage is 0 and the reference frame index is an index of a first frame in List0 if the forward motion information of the first peripheral block is not available;

the reference direction of the motion information in the first stage is the List0 direction.

In some embodiments, the first determining unit is specifically configured to:

determining that the motion vector of the first stage is 0, the reference frame index is an index of a first frame in List0, and the reference direction of the motion information of the first stage is a List0 direction;

Or,

it is determined that the motion vector of the first stage is 0, the reference frame index is an index of the first frame in List1, and the reference direction of the motion information of the first stage is the List1 direction.

In some embodiments, the building unit is specifically configured to determine a matching block of the current block based on the horizontal motion vector, the vertical motion vector, the precision of the motion vector, and the position of the current block in the first stage.

In some embodiments, the building unit is specifically configured to determine the matching block of the current block based on the horizontal motion vector, the vertical motion vector, the precision of the motion vector, and the sub-block size of the first stage.
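
One plausible way to derive the matching block position from the first-stage motion vector is sketched below. The 1/16-pel precision (right shift by 4) and the 8 × 8 sub-block alignment are assumptions borrowed from the ATMVP description later in this specification; other precisions or sub-block sizes follow the same pattern with different shift amounts.

static void DeriveMatchingBlockPos(int curX, int curY,       /* top-left corner of the current block */
                                   int mvX,  int mvY,        /* first-stage MV, assumed 1/16-pel      */
                                   int *matchX, int *matchY) /* top-left corner of the matching block */
{
    const int mvShift  = 4;   /* 1/16-pel -> integer-pel */
    const int subShift = 3;   /* align to an 8 x 8 sub-block grid */
    *matchX = ((curX + (mvX >> mvShift)) >> subShift) << subShift;
    *matchY = ((curY + (mvY >> mvShift)) >> subShift) << subShift;
}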

In some embodiments, the building unit is specifically configured to:

clipping the first sub-block and the second sub-block to the range of the current CTU, and comparing the motion information of the first sub-block and the second sub-block after the Clip; clipping the third sub-block and the fourth sub-block to the range of the current CTU, and comparing the motion information of the third sub-block and the fourth sub-block after the Clip; if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block by one unit to the right to obtain a new matching block;

Clipping the fifth sub-block and the sixth sub-block to the range of the current CTU, and comparing the motion information of the fifth sub-block and the sixth sub-block after the Clip; clipping the seventh sub-block and the eighth sub-block to the range of the current CTU, and comparing the motion information of the seventh sub-block and the eighth sub-block after the Clip; if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block by one unit to the left to obtain a new matching block;

clipping the first sub-block and the ninth sub-block to the range of the current CTU, and comparing the motion information of the first sub-block and the ninth sub-block after the Clip; clipping the fifth sub-block and the tenth sub-block to the range of the current CTU, and comparing the motion information of the fifth sub-block and the tenth sub-block after the Clip; if at least one of the two comparison results is that the motion information is different, vertically shifting the matching block downward by one unit to obtain a new matching block;

clipping the third sub-block and the eleventh sub-block to the range of the current CTU, and comparing the motion information of the third sub-block and the eleventh sub-block after the Clip; clipping the seventh sub-block and the twelfth sub-block to the range of the current CTU, and comparing the motion information of the seventh sub-block and the twelfth sub-block after the Clip; if at least one of the two comparison results is that the motion information is different, vertically shifting the matching block upward by one unit to obtain a new matching block;

The first sub-block is the sub-block at the upper left corner of the matching block, the second sub-block is the adjacent sub-block at the top right corner of the matching block, the third sub-block is the sub-block at the lower left corner of the matching block, the fourth sub-block is the adjacent sub-block at the bottom right corner of the matching block, the fifth sub-block is the sub-block at the upper right corner of the matching block, the sixth sub-block is the adjacent sub-block at the top left corner of the matching block, the seventh sub-block is the sub-block at the lower right corner of the matching block, the eighth sub-block is the adjacent sub-block at the bottom left corner of the matching block, the ninth sub-block is the adjacent sub-block at the left corner of the matching block, the tenth sub-block is the adjacent sub-block at the right corner of the matching block, the eleventh sub-block is the adjacent sub-block at the left corner of the matching block, and the twelfth sub-block is the adjacent sub-block at the top right corner of the matching block; one unit is the side length of the subblock.

In some embodiments, the building unit is specifically configured to perform horizontal and vertical shifting on the matching block based on one or more shift amount pairs, respectively, to obtain one or more new matching blocks.
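
A simple sketch of this offset-based shifting is shown below; the structure name OffsetPair and the function name are hypothetical, and the concrete offset values (for example, multiples of the sub-block side length) are not specified here.

typedef struct { int dx, dy; } OffsetPair;    /* one horizontal/vertical offset pair */

static void ShiftMatchingBlock(int matchX, int matchY,
                               const OffsetPair *offsets, int numOffsets,
                               int newX[], int newY[])
{
    for (int i = 0; i < numOffsets; i++) {
        newX[i] = matchX + offsets[i].dx;     /* horizontal shift */
        newY[i] = matchY + offsets[i].dy;     /* vertical shift   */
    }
}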

In some embodiments, the building unit is specifically configured to clip the matching block to the range of the current CTU.

In some embodiments, the building unit is specifically configured to:

when the right boundary of the matching block after the Clip is not positioned at the right boundary position of the current CTU, comparing the motion information of the thirteenth sub-block and the fourteenth sub-block, and comparing the motion information of the fifteenth sub-block and the sixteenth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the right to obtain a new matching block;

when the left boundary of the matching block after the Clip is not positioned at the left boundary position of the current CTU, comparing the motion information of the seventeenth sub-block and the eighteenth sub-block, and comparing the motion information of the nineteenth sub-block and the twentieth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the left to obtain a new matching block;

when the lower boundary of the matching block after the Clip is not positioned at the position of the lower boundary of the current CTU, comparing the motion information of the thirteenth sub-block and the twenty-first sub-block, and comparing the motion information of the seventeenth sub-block and the twenty-second sub-block, if at least one of the two comparison results is different in motion information, vertically and downwardly offsetting the matching block after the Clip by one unit to obtain a new matching block;

When the upper boundary of the matching block after the Clip is not positioned at the upper boundary position of the current CTU, comparing the motion information of the fifteenth sub-block and the twenty-third sub-block, and comparing the motion information of the nineteenth sub-block and the twenty-fourth sub-block, if at least one of the two comparison results is that the motion information is different, vertically shifting the matching block after the Clip upward by one unit to obtain a new matching block;

wherein, the thirteenth sub-block is the upper left corner sub-block of the matching block after the Clip, the fourteenth sub-block is the top right adjacent sub-block of the matching block after the Clip, the fifteenth sub-block is the lower left corner sub-block of the matching block after the Clip, the sixteenth sub-block is the bottom right adjacent sub-block of the matching block after the Clip, the seventeenth sub-block is the upper right corner sub-block of the matching block after the Clip, the eighteenth sub-block is the top left adjacent sub-block of the matching block after the Clip, the nineteenth sub-block is the bottom right adjacent sub-block of the matching block after the Clip, the twentieth sub-block is the bottom left adjacent sub-block of the matching block after the Clip, the twenty-first sub-block is the left most adjacent sub-block right below the matching block after the Clip, the twenty-second sub-block is the right most adjacent sub-block below the matching block after the Clip, the twenty-third sub-block is the left most adjacent sub-block above the matching block after the Clip, the twenty-fourth sub-block is the adjacent sub-block on the rightmost side right above the matched block after the Clip; one unit is the side length of the subblock.

In some embodiments, the building unit is specifically configured to:

when at least one new matching block exists, determining a prediction mode corresponding to the matching block before shifting and a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or,

when at least one new matching block exists, determining a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or,

and when no new matching block exists, determining the prediction mode corresponding to the matching block before shifting as a candidate enhanced temporal motion vector prediction mode.

In some embodiments, the prediction unit is specifically configured to, for any sub-block in the target candidate matching block, clip the sub-block to the range of the current CTU; the target candidate matching block is a matching block corresponding to the candidate enhanced temporal motion vector prediction mode;

if the forward motion information and the backward motion information of the clipped sub-block are both available, respectively scaling the forward motion information and the backward motion information of the clipped sub-block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

If the forward motion information of the clipped sub-block is available but the backward motion information is not available, scaling the forward motion information of the clipped sub-block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the clipped sub-block is available but the forward motion information is not available, scaling the backward motion information of the clipped sub-block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.

In some embodiments, the prediction unit is further configured to:

if the forward motion information and the backward motion information of the clipped sub-block are unavailable, clipping the center position of the target candidate matching block to the range of the current CTU; when the forward motion information and the backward motion information of the center position of the clipped target candidate matching block are both available, respectively scaling them to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the center position of the clipped target candidate matching block is available but the backward motion information is not available, scaling the forward motion information to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the clipped target candidate matching block is available but the forward motion information is not available, scaling the backward motion information to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; when neither the forward motion information nor the backward motion information of the center position of the clipped target candidate matching block is available, assigning zero motion information to the sub-block at the corresponding position of the current block;

Or, if the forward motion information and the backward motion information of the clipped sub-block are unavailable, assigning zero motion information to the sub-block at the corresponding position of the current block;

or, if the forward motion information and the backward motion information of the clipped sub-block are unavailable: when the forward motion information and the backward motion information of the second surrounding block of the current block are both available, respectively scaling them to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the second surrounding block is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

In some embodiments, the prediction unit is specifically configured to, for any sub-block in the target candidate matching block, if both forward motion information and backward motion information of the sub-block are available, scale forward motion information and backward motion information of the sub-block to a first frame pointing to List0 and a first frame pointing to List1, and assign the scaled forward motion information and backward motion information to the sub-block at a corresponding position of the current block;

if the forward motion information of the sub-block is available but the backward motion information is not available, scaling the forward motion information of the sub-block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block is available but the forward motion information is not available, the backward motion information of the sub-block is scaled to the first frame pointing to the List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

In some embodiments, the prediction unit is further configured to:

if the forward motion information and the backward motion information of the subblock are unavailable, respectively scaling the forward motion information and the backward motion information of the center position of the target candidate matching block to a first frame pointing to List0 and a first frame pointing to List1 when the forward motion information and the backward motion information of the center position of the target candidate matching block are both available, and respectively giving the scaled forward motion information and the scaled backward motion information to the subblock at the corresponding position of the current block; when the forward motion information of the center position of the target candidate matching block is available but the backward motion information is not available, scaling the forward motion information of the center position of the target candidate matching block to a first frame pointing to the List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target candidate matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the target candidate matching block to a first frame pointing to List1 and assigning the scaled backward motion information to a sub-block at a corresponding position of the current block; when the forward motion information and the backward motion information of the central position of the target candidate matching block are unavailable, giving zero motion information to the subblock at the corresponding position of the current block;

Or if the forward motion information and the backward motion information of the sub-block are unavailable, giving zero motion information to the sub-block at the position corresponding to the current block;

or, if the forward motion information and the backward motion information of the sub-block are unavailable: when the forward motion information and the backward motion information of the second surrounding block of the current block are both available, respectively scaling them to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the second surrounding block is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

In an optional embodiment, the constructing unit is specifically configured to, when the enhanced temporal motion vector prediction technique is enabled for the current block, determine a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and construct a first temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode.

In some embodiments, whether the current block enables an enhanced temporal motion vector prediction technique is indicated using a sequence parameter set level syntax or Slice level syntax.

In some embodiments, the constructing unit is specifically configured to determine that the current block enables an enhanced temporal motion vector prediction technique when the size of the current block is smaller than or equal to a preset maximum block size and is greater than or equal to a preset minimum block size;

when the size of the current block is larger than the size of a preset maximum block or smaller than the size of a preset minimum block, determining that the enhanced temporal motion vector prediction technology is not enabled for the current block.

In some embodiments, the size of the preset maximum block is represented using a sequence parameter set level syntax or using Slice level syntax;

And/or,

the size of the preset minimum block is expressed using a sequence parameter set level syntax or using a Slice level syntax.

According to a fifth aspect of embodiments of the present application, there is provided a decoding-side device, including a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being configured to execute the machine-executable instructions to implement the decoding method described above.

According to a sixth aspect of embodiments of the present application, there is provided an encoding-side device, including a processor and a machine-readable storage medium, the machine-readable storage medium storing machine-executable instructions executable by the processor, the processor being configured to execute the machine-executable instructions to implement the encoding method described above.

In the decoding method of the embodiments of the present application, the code stream of the current block is acquired and the index information of the enhanced temporal motion vector prediction mode is parsed from the code stream of the current block; a matching block of the current block is determined based on a first surrounding block of the current block; candidate enhanced temporal motion vector prediction modes are determined based on the matching block and the new matching blocks obtained by shifting the matching block, and a second temporal candidate mode list is constructed based on the candidate enhanced temporal motion vector prediction modes; the enhanced temporal motion vector prediction mode is determined from the second temporal candidate mode list based on the index information; the motion information of each sub-block in the current block is determined based on the enhanced temporal motion vector prediction mode, and motion compensation is performed on each sub-block in the current block based on the motion information of each sub-block in the current block. In this way, the probability that the motion information of the matching block is inaccurate due to inaccurate motion information of surrounding blocks is reduced, and the decoding performance is improved.

Drawings

FIGS. 1A-1B are schematic diagrams of block partitions shown in exemplary embodiments of the present application;

FIG. 2 is a diagram illustrating a method of encoding and decoding according to an exemplary embodiment of the present application;

FIG. 3 is a schematic diagram illustrating a Clip operation according to an exemplary embodiment of the present application;

FIG. 4 is a flow chart diagram illustrating a decoding method according to an exemplary embodiment of the present application;

FIG. 5 is a diagram illustrating surrounding blocks of a current block according to an exemplary embodiment of the present application;

FIG. 6 is a flowchart illustrating an exemplary embodiment of determining a matching block for a current block based on a first peripheral block of the current block;

FIG. 7 is a diagram illustrating a reference block in shifting a matching block according to an exemplary embodiment of the present application;

FIGS. 8A-8E are diagrams illustrating shifting of matching blocks according to an exemplary embodiment of the present application;

FIG. 9 is a schematic structural diagram of a decoding apparatus according to an exemplary embodiment of the present application;

FIG. 10 is a schematic diagram illustrating a hardware structure of a decoding-side device according to an exemplary embodiment of the present application;

FIG. 11 is a schematic structural diagram of an encoding apparatus according to an exemplary embodiment of the present application;

FIG. 12 is a schematic diagram of a hardware structure of an encoding-side device according to an exemplary embodiment of the present application.

Detailed Description

Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

In order to make those skilled in the art better understand the technical solutions provided by the embodiments of the present application, some technical terms related to the embodiments of the present application and the main flow of existing video coding and decoding are briefly described below.

Technical terms:

1. Prediction pixel (Prediction Signal): a pixel value derived from pixels that have already been coded and decoded. The residual is obtained as the difference between the original pixel and the predicted pixel, and then residual transform, quantization and coefficient coding are performed.

For example, the inter-frame prediction pixel refers to a pixel value of a current image block derived from a reference frame (reconstructed pixel frame), and due to the discrete pixel positions, a final prediction pixel needs to be obtained through an interpolation operation. The closer the predicted pixel is to the original pixel, the smaller the residual energy obtained by subtracting the predicted pixel and the original pixel is, and the higher the coding compression performance is.

2. Motion Vector (MV): in inter coding, the MV represents the relative displacement between the current coding block and the best matching block in its reference picture. Each divided block (which may be referred to as a sub-block) has a corresponding motion vector that needs to be transmitted to the decoding end. If the MV of each sub-block is coded and transmitted independently, especially when the block is divided into sub-blocks of small size, a considerable number of bits is consumed. In order to reduce the number of bits used for coding MVs, video coding exploits the spatial correlation between adjacent image blocks: the MV of the current block to be coded is predicted from the MVs of adjacent coded blocks, and only the prediction difference is coded. This effectively reduces the number of bits representing the MV. Therefore, when coding the MV of the current image block, the MV is generally predicted from the MVs of adjacent coded blocks, and the difference between the Motion Vector Prediction (MVP) and the actual motion vector, i.e., the Motion Vector Difference (MVD), is coded, thereby effectively reducing the number of coding bits for the MV.
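
The MVP/MVD relationship described above amounts to the following, shown here only as an illustrative sketch:

/* Encoder side: only the difference between the MV and its prediction is coded. */
static void ComputeMvd(int mvX, int mvY, int mvpX, int mvpY, int *mvdX, int *mvdY)
{
    *mvdX = mvX - mvpX;
    *mvdY = mvY - mvpY;
}

/* Decoder side: the MV is reconstructed from the parsed MVD and the same MVP. */
static void ReconstructMv(int mvdX, int mvdY, int mvpX, int mvpY, int *mvX, int *mvY)
{
    *mvX = mvpX + mvdX;
    *mvY = mvpY + mvdY;
}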

3. Motion Information (Motion Information): since the MV represents the relative displacement between the current image block and the best matching block in a certain reference picture, index information of the reference picture is needed in addition to the MV to indicate which reference picture is used. In video coding, a reference picture list is usually established for the current picture based on a certain principle, and the reference picture index indicates which reference picture in the reference picture list is used by the current image block. In addition, many coding techniques support multiple reference picture lists, so an index value is also needed to indicate which reference picture list is used; this index value may be referred to as the reference direction. In video coding, motion-related coding information such as the MV, the reference frame index and the reference direction is collectively referred to as motion information.

4. Rate-Distortion Optimization (RDO): the indices for evaluating coding efficiency include the bit rate and the Peak Signal to Noise Ratio (PSNR). The lower the bit rate, the higher the compression ratio; the higher the PSNR, the better the reconstructed image quality. In mode selection, the decision formula is essentially a comprehensive evaluation of the two.

Cost corresponding to a mode: J(mode) = D + λ·R, where D denotes Distortion, usually measured by the SSE (Sum of Squared Errors) between the reconstructed block and the source image block; λ is the Lagrangian multiplier; and R is the actual number of bits required for coding the image block in this mode, including the bits required for coding the mode information, the motion information, the residual, and so on.

When selecting the mode, if the RDO principle is used to make a comparison decision on the coding mode, the best coding performance can be ensured.
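
As an illustrative sketch, the cost J(mode) = D + λ·R above can be computed as follows, with D measured as the SSE between the reconstructed block and the source block:

static double RdoCost(const int *recon, const int *source, int numPixels,
                      double lambda, int bits)
{
    double sse = 0.0;
    for (int i = 0; i < numPixels; i++) {
        int diff = recon[i] - source[i];
        sse += (double)diff * diff;            /* distortion D as SSE */
    }
    return sse + lambda * (double)bits;        /* the mode with the smallest J is selected */
}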

The following is a brief description of the block partitioning technique in the existing video coding standard and the main flow of the existing video coding and decoding.

The block division technology in the existing video coding standard:

in HEVC, a Coding Tree Unit (CTU) is recursively divided into CUs using a quadtree. It is determined at the leaf node CU level whether to use intra-coding or inter-coding. A CU may be further divided into two or four Prediction Units (PUs), and the same Prediction information is used in the same PU. After residual information is obtained after prediction is completed, a CU may be further divided into a plurality of Transform Units (TUs). For example, the current block in this application is a PU.

However, the block partitioning technique has changed substantially in the recently proposed Versatile Video Coding. A mixed binary tree/ternary tree/quadtree partition structure replaces the original partition mode, removes the original distinction between the CU, PU and TU concepts, and supports a more flexible partitioning of CUs. A CU may be a square or a rectangular partition. The CTU first performs quadtree partitioning, and the leaf nodes of the quadtree partitioning may then be further partitioned by binary tree and ternary tree. As shown in FIG. 1A, there are five partition types for a CU: quadtree partition, horizontal binary tree partition, vertical binary tree partition, horizontal ternary tree partition and vertical ternary tree partition. As shown in FIG. 1B, the CU partition within a CTU may be any combination of the above five partition types, so different partition manners lead to PUs of different shapes, such as rectangles or squares of different sizes.

The main flow of the existing video coding and decoding is as follows:

referring to fig. 2 (a), taking video coding as an example, video coding generally includes processes of prediction, transformation, quantization, entropy coding, and the like, and further, the coding process can be implemented according to the framework of fig. 2 (b).

The prediction can be divided into intra-frame prediction and inter-frame prediction, wherein the intra-frame prediction is to predict a current uncoded block by using surrounding coded blocks as references, and effectively remove redundancy on a spatial domain. Inter-frame prediction is to use neighboring coded pictures to predict the current picture, effectively removing redundancy in the temporal domain.

The transformation is to transform an image from a spatial domain to a transform domain and to represent the image by using transform coefficients. Most images contain more flat areas and slowly-changing areas, the images can be converted from the dispersed distribution in a space domain into the relatively concentrated distribution in a transform domain through proper transformation, the frequency domain correlation among signals is removed, and code streams can be effectively compressed by matching with a quantization process.

Entropy coding is a lossless coding method that converts a series of element symbols into a binary code stream for transmission or storage, and the input symbols may include quantized transform coefficients, motion vector information, prediction mode information, transform quantization related syntax, and the like. Entropy coding can effectively remove redundancy of the symbols of the video elements.

The above is described by taking encoding as an example. Video decoding is the inverse of the video encoding process: video decoding generally includes processes such as entropy decoding, prediction, inverse quantization, inverse transformation and filtering, and the implementation principle of each process is the same as or similar to that of the corresponding process at the encoding end.

The implementation of the conventional ATMVP technology will be briefly described below.

The existing ATMVP technology is mainly realized by the following procedures:

1) determining Temporal Motion Vector Prediction (TMVP): judging whether the motion information of the A0 position of the current coding block meets the following conditions:

a) the A0 position exists and is in the same Slice and the same Tile as the current coding unit;

b) the prediction mode at the position A0 is an inter mode;

c) the reference frame index at the A0 position is consistent with the reference frame index of the co-located frame of the current frame (the L0 direction is determined first, and then the L1 direction is determined).

Wherein, the A0 position is the position of (xCb-1, yCb + CbHeight-1); (xCb, yCb) is the coordinate of the upper left corner of the current block, and CbHeight is the height of the current block.

2) Calculating the position of the central reference block: the precision of the TMVP acquired in step 1) is 1/16 pel, so a right shift by 4 bits is required to convert it to integer-pel precision. Meanwhile, the position of the reference block needs to be clipped (Clip) to the range of the current CTU; that is, when the position of the reference block is not within the range of the current CTU, the reference block is translated horizontally or/and vertically so that it lies just within the range of the current CTU, as schematically shown in FIG. 3.

The position of the central reference block is calculated as follows:

xColCb = Clip3(xCtb, Min(PicWidth - 1, xCtb + (1 << CtbLog2Size) + 3), xColCtrCb + (tempMv[0] >> 4))

yColCb = Clip3(yCtb, Min(PicHeight - 1, yCtb + (1 << CtbLog2Size) - 1), yColCtrCb + (tempMv[1] >> 4))

wherein (xColCb, yColCb) is the coordinate of the top left corner of the central reference block, (xCtb, yCtb) is the coordinate of the top left corner of the current CTU, PicWidth and PicHeight are the width and height of the current frame, respectively, CtbLog2Size is the base-2 logarithm of the size of the current CTU, (xColCtrCb, yColCtrCb) is the coordinate of the central position of the current block, and tempMv[0] and tempMv[1] are the horizontal motion vector and vertical motion vector of the A0 position, respectively.

3) Judging the prediction mode of the central reference block: if the prediction mode is not an inter prediction mode, both ctrPredFlagL0 and ctrPredFlagL1 are set to 0; otherwise, the prediction mode is an inter prediction mode, and the process goes to step 4).

4) Adjusting the reference position: since the subblock size is 8 × 8, the motion information is stored in units of 8 × 8. Therefore, the coordinates of the upper left corner of the center reference block need to be adjusted to a position that is an integer multiple of 8. The adjustment formula is as follows:

xColCb=((xColCb>>3)<<3)

yColCb=((yColCb>>3)<<3)
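The following is a minimal C++-style sketch of steps 2) to 4) above. Clip3 and the variable names follow the formulas in the text; the picture and CTU parameters are assumed to be supplied by the caller, so this is an illustrative sketch rather than a normative implementation.

#include <algorithm>

// Clamp x into [lo, hi], as used by the formulas above.
static inline int Clip3(int lo, int hi, int x) { return std::min(std::max(x, lo), hi); }

struct ColPos { int x, y; };

// Steps 2) to 4): derive the 8-aligned top-left position of the central reference
// block from the A0 motion vector tempMv, which has 1/16-pel precision.
ColPos centerRefBlockPos(int xCtb, int yCtb, int ctbLog2Size,
                         int picWidth, int picHeight,
                         int xColCtrCb, int yColCtrCb,
                         const int tempMv[2]) {
    // Step 2): 1/16-pel -> integer-pel (>> 4), then clip into the allowed range.
    int xColCb = Clip3(xCtb, std::min(picWidth - 1, xCtb + (1 << ctbLog2Size) + 3),
                       xColCtrCb + (tempMv[0] >> 4));
    int yColCb = Clip3(yCtb, std::min(picHeight - 1, yCtb + (1 << ctbLog2Size) - 1),
                       yColCtrCb + (tempMv[1] >> 4));
    // Step 4): align to the 8 x 8 sub-block grid (motion information is stored per 8 x 8 unit).
    xColCb = (xColCb >> 3) << 3;
    yColCb = (yColCb >> 3) << 3;
    return {xColCb, yColCb};
}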

5) Acquiring the motion information of the adjusted central reference block: if the prediction mode of the adjusted central reference block is an intra prediction or Intra Block Copy (IBC) mode, FlagLXCol is set to 0; otherwise, it is judged separately whether the motion information of the adjusted central reference block in the L0 direction and in the L1 direction exists; if it exists, FlagLXCol is set to 1 and the motion information of the adjusted central reference block in that direction is obtained.

Illustratively, LX = L0 or LX = L1. When the prediction mode of the adjusted center reference block is the intra prediction or intra block copy mode, FlagL0Col = 0 and FlagL1Col = 0.

When the prediction mode of the adjusted center reference block is neither intra prediction nor intra block copy mode, FlagL0Col = 1 when the motion information of the adjusted center reference block in the L0 direction exists, and FlagL0Col = 0 when it does not exist; FlagL1Col = 1 when the motion information of the adjusted center reference block in the L1 direction exists, and FlagL1Col = 0 when it does not exist.

When FlagLXCol is 1: if the long-term reference frame of the current frame and the long-term reference frame of the co-located frame are not the same, the motion information of the adjusted center reference block is determined to be unavailable, and ctrPredFlagLX = 0; otherwise, the motion information of the adjusted center reference block is scaled (scale) to point to the first frame of ListX (X = 0 or 1) and used as the motion information at the current center reference block position, and ctrPredFlagLX = 1.

6) When ctrPredFlagLX is 1, calculating the motion information of each sub-block: each sub-block in the matching block is traversed; for any sub-block, the sub-block is clipped into the range of the current CTU; if the motion information of the clipped sub-block is available, it is scaled to point to the first frame of ListX, and the scaled motion information is assigned to the sub-block at the corresponding position of the current block; if the motion information of the clipped sub-block is not available, the motion information at the center position of the adjusted center reference block is scaled to point to the first frame of ListX, and the scaled motion information is assigned to the sub-block at the corresponding position of the current block.
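The following C++-style sketch illustrates the availability logic of step 5). The MotionInfo structure and the scaleToFirstRef() placeholder are assumptions introduced here for illustration; a real implementation would scale the motion vector by the ratio of POC distances.

struct MotionInfo { int mvx = 0, mvy = 0; int refIdx = -1; bool available = false; };

// Placeholder for the codec's MV scaling: a real implementation scales the motion
// vector by the ratio of POC distances; here we only retarget the reference index.
static MotionInfo scaleToFirstRef(const MotionInfo& mi, int /*listX*/) {
    MotionInfo out = mi;
    out.refIdx = 0;  // first frame of ListX
    return out;
}

// Step 5) sketch: returns ctrPredFlagLX and, when true, the scaled center motion information.
static bool deriveCtrMotion(const MotionInfo& ctrRefMi, bool isIntraOrIbc,
                            bool longTermMismatch, int listX, MotionInfo& ctrMiOut) {
    if (isIntraOrIbc || !ctrRefMi.available) return false;  // FlagLXCol = 0
    if (longTermMismatch) return false;                     // ctrPredFlagLX = 0
    ctrMiOut = scaleToFirstRef(ctrRefMi, listX);            // point to the first frame of ListX
    return true;                                            // ctrPredFlagLX = 1
}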

In order to make the aforementioned objects, features and advantages of the embodiments of the present application more comprehensible, embodiments of the present application are described in detail below with reference to the accompanying drawings.

Please refer to fig. 4, which is a flowchart illustrating a decoding method according to an embodiment of the present disclosure, where the decoding method may be applied to a decoding-side device, and as shown in fig. 4, the decoding method may include the following steps:

Step S400, a code stream of the current block is obtained, and index information of the enhanced temporal motion vector prediction mode is parsed from the code stream of the current block, wherein the index information is used to indicate the position of the enhanced temporal motion vector prediction mode in the first temporal candidate mode list constructed by the encoding-side device.

In the embodiment of the present application, the current block may be any image block in an image to be processed. In implementation, an image to be processed may be divided into different image blocks, and then each image block may be sequentially processed in a certain order. The size and shape of each image block may be set according to a preset partition rule.

For the decoding-end device, the current block is a block to be decoded.

The code stream is sent by the encoding side and may be a binary code stream; it may carry information that the decoding-side device needs for decoding, for example, the encoding mode used by the encoding-side device, the size of the current block, and the like.

In order to improve the reliability of the motion information of the matching block, the number of matching blocks determined based on the surrounding blocks of the current block is no longer limited to one, but may be multiple. The encoding-side device may construct a temporal candidate mode list (referred to herein as the first temporal candidate mode list) based on the temporal prediction modes corresponding to the multiple matching blocks (referred to herein as candidate enhanced temporal motion vector prediction modes), and encode the index information of the candidate enhanced temporal motion vector prediction mode corresponding to the finally used matching block (referred to herein as the enhanced temporal motion vector prediction mode) into the code stream of the current block, where the index information is used to identify the position of the enhanced temporal motion vector prediction mode in the first temporal candidate mode list constructed by the encoding-side device.

When the decoding end device obtains the code stream of the current block, the index information of the enhanced temporal motion vector prediction mode can be analyzed from the code stream of the current block.

Step S410, determining a matching block of the current block based on the first peripheral block of the current block.

In the embodiment of the present application, the first surrounding block of the current block may include any decoded neighboring block or non-neighboring block of the current block.

For example, referring to fig. 5, the first surrounding block of the current block may be any one of A, B, C, D, F and G.

The decoding-side device may determine a matching block of the current block based on a first surrounding block of the current block.

As a possible embodiment, as shown in fig. 6, in step S410, determining a matching block of the current block based on the first peripheral block of the current block may be implemented by:

step S411, determining motion information of a first stage based on a first surrounding block of a current block;

step S412, determining a matching block of the current block based on the motion information of the first stage.

For example, the first stage may refer to the process of determining the matching block of the current block based on the surrounding blocks of the current block, and the motion information of the first stage may refer to the motion information, determined based on the first surrounding block of the current block, that is used to determine the matching block of the current block.

For example, the motion information of the first stage may be determined based on the forward motion information and/or the backward motion information of the first surrounding block of the current block, and specific implementations thereof may include, but are not limited to, the following implementations:

Mode one

If the backward motion information of the first peripheral block is available and the backward motion information of the first peripheral block points to the first frame in List1, determining the motion information of the first stage as the backward motion information of the first peripheral block;

If the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if the backward motion information of the first peripheral block is not available but the forward motion information is available, scaling the forward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if neither the forward motion information nor the backward motion information of the first peripheral block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List1;

the reference direction of the motion information in the first stage is the List1 direction.

Illustratively, List0 and List1 are both used to record information of reference frames in the bidirectional prediction mode, for example identification information of the reference frames such as frame numbers. Based on the index of a frame in List0 or List1, the information of the reference frame, such as its frame number, can be determined, and the corresponding reference frame can then be obtained based on the frame number.

In the first mode, the first-stage motion information is preferentially determined based on the backward motion information of the first peripheral block of the current block.

For example, the decoding-side device may first determine whether backward motion information of a first surrounding block of the current block is available. When the backward motion information of the first surrounding block is available, it is determined whether the backward motion information of the first surrounding block points to the first frame in the List 1.

When the backward motion information of the first surrounding block points to the first frame in the List1, the decoding-side device may determine the backward motion information of the first surrounding block as the motion information of the first stage.

When the backward motion information of the first surrounding block does not point to the first frame in the List1, the decoding-side device may scale the backward motion information of the first surrounding block so that the scaled backward motion information points to the first frame in the List1, determine a motion vector in the scaled backward motion information as a motion vector of the first-stage motion information (may be referred to as a motion vector of the first stage), and determine an index of the first frame in the List1 as a reference frame index of the first-stage motion information.

When the backward motion information of the first surrounding block is not available, the decoding-side device may determine whether the forward motion information of the first surrounding block is available.

When the forward motion information of the first peripheral block is available, the decoding-side device may scale the forward motion information of the first peripheral block so that the scaled forward motion information points to the first frame in the List1, determine a motion vector in the scaled forward motion information as a motion vector of the first stage, and determine an index of the first frame in the List1 as a reference frame index of the first stage motion information.

When the forward motion information of the first peripheral block is not available, that is, when neither the backward motion information nor the forward motion information of the first peripheral block is available, the decoding-side device may determine a zero motion vector as the motion vector of the first stage and determine the index of the first frame in the List1 as the reference frame index of the motion information of the first stage.

Illustratively, the reference direction of the motion information of the first stage is a List1 direction.
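The following C++-style sketch illustrates the mode-one derivation above, reusing the MotionInfo structure and the scaleToFirstRef() placeholder from the earlier sketch; whether the backward motion information points to the first frame of List1 is passed in as a flag, since in practice it would be decided by comparing reference indices or POCs. Modes two to four below follow the same pattern, with the roles of List0 and List1 swapped or with only one direction considered.

struct FirstStageMotion { MotionInfo mi; int listDir; };  // listDir: 0 = List0, 1 = List1

// Mode one: prefer the backward (List1) motion information of the first surrounding block.
FirstStageMotion deriveFirstStageModeOne(const MotionInfo& fwd, const MotionInfo& bwd,
                                         bool bwdPointsToFirstL1) {
    FirstStageMotion out;
    out.listDir = 1;                       // the reference direction is the List1 direction
    out.mi.available = true;
    out.mi.refIdx = 0;                     // index of the first frame in List1
    if (bwd.available) {
        out.mi = bwdPointsToFirstL1 ? bwd : scaleToFirstRef(bwd, 1);
        out.mi.refIdx = 0;
    } else if (fwd.available) {
        out.mi = scaleToFirstRef(fwd, 1);  // scale the forward MV to point to List1's first frame
        out.mi.refIdx = 0;
    } else {
        out.mi.mvx = out.mi.mvy = 0;       // zero motion vector
    }
    return out;
}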

Mode two

If the forward motion information of the first peripheral block of the current block is available and the forward motion information of the first peripheral block points to the first frame in the List0, determining the motion information of the first stage as the forward motion information of the first peripheral block;

if the forward motion information of the first surrounding block is available but the forward motion information of the first surrounding block does not point to the first frame in List0, scaling the forward motion information of the first surrounding block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, the reference frame index being an index of the first frame in List 0;

If the forward motion information of the first surrounding block is not available, but the backward motion information is available, scaling the backward motion information of the first surrounding block to point to the first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List0;

the reference direction of the motion information in the first stage is the List0 direction.

In the second mode, the first-stage motion information is preferentially determined based on the forward motion information of the first peripheral block of the current block.

For example, the decoding-side device may first determine whether forward motion information of a first surrounding block of the current block is available. When the forward motion information of the first surrounding block is available, it is determined whether the forward motion information of the first surrounding block points to the first frame in the List 0.

When the forward motion information of the first surrounding block points to the first frame in the List0, the decoding-side device may determine the forward motion information of the first surrounding block as the motion information of the first stage.

When the forward motion information of the first surrounding block does not point to the first frame in the List0, the decoding-side device may scale the forward motion information of the first surrounding block such that the scaled forward motion information points to the first frame in the List0, determine a motion vector in the scaled forward motion information as a motion vector of the first stage, and determine an index of the first frame in the List0 as a reference frame index of the first stage motion information.

When the forward motion information of the first surrounding block is not available, the decoding-side device may determine whether the backward motion information of the first surrounding block is available.

When the backward motion information of the first surrounding block is available, the decoding-side device may scale the backward motion information of the first surrounding block so that the scaled backward motion information points to the first frame in the List0, determine a motion vector in the scaled backward motion information as a motion vector of the first stage, and determine an index of the first frame in the List0 as a reference frame index of the motion information of the first stage.

When the backward motion information of the first surrounding block is also unavailable, that is, when neither the forward motion information nor the backward motion information of the first surrounding block is available, the decoding-side device may determine a zero motion vector as the motion vector of the first stage and determine the index of the first frame in the List0 as the reference frame index of the motion information of the first stage.

Illustratively, the reference direction of the motion information of the first stage is a List0 direction.

Mode three

If the backward motion information of the first peripheral block of the current block is available and points to the first frame in the List1, determining the motion information of the first stage as the backward motion information of the first peripheral block;

if the backward motion information of the first surrounding block is available but the backward motion information of the first surrounding block does not point to the first frame in List1, scaling the backward motion information of the first surrounding block to point to the first frame in List1, and determining the motion vector of the first stage as a scaled motion vector, with the reference frame index being the index of the first frame in List 1;

if the backward motion information of the first surrounding block is not available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.

In the third mode, the first-stage motion information is determined based on backward motion information of a first surrounding block of the current block.

For example, the decoding-side device may determine whether backward motion information of a first surrounding block of the current block is available. When the backward motion information of the first surrounding block is available, it is determined whether the backward motion information of the first surrounding block points to the first frame in the List 1.

When the backward motion information of the first surrounding block points to the first frame in the List1, the decoding-side device may determine the backward motion information of the first surrounding block as the motion information of the first stage.

When the backward motion information of the first surrounding block does not point to the first frame in the List1, the decoding-side device may scale the backward motion information of the first surrounding block so that the scaled backward motion information points to the first frame in the List1, determine a motion vector in the scaled backward motion information as a motion vector of the first-stage motion information (may be referred to as a motion vector of the first stage), and determine an index of the first frame in the List1 as a reference frame index of the first-stage motion information.

When the backward motion information of the first surrounding block is not available, the decoding-side device may determine a zero motion vector as the motion vector of the first stage and determine the index of the first frame in the List1 as the reference frame index of the motion information of the first stage.

Illustratively, the reference direction of the motion information of the first stage is a List1 direction.

Mode four

If the forward motion information of the first peripheral block of the current block is available and the forward motion information of the first peripheral block points to the first frame in the List0, determining the motion information of the first stage as the forward motion information of the first peripheral block;

If the forward motion information of the first surrounding block is available but the forward motion information of the first surrounding block does not point to the first frame in List0, scaling the forward motion information of the first surrounding block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, the reference frame index being the index of the first frame in List 0;

if the forward motion information of the first surrounding block is not available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 0;

the reference direction of the motion information in the first stage is the List0 direction.

In mode four, the first stage motion information is determined based on forward motion information of a first surrounding block of the current block.

For example, the decoding-side device may determine whether forward motion information of a first surrounding block of the current block is available. When the forward motion information of the first surrounding block is available, it is determined whether the forward motion information of the first surrounding block points to the first frame in the List 0.

When the forward motion information of the first surrounding block points to the first frame in the List0, the decoding-side device may determine the forward motion information of the first surrounding block as the motion information of the first stage.

When the forward motion information of the first surrounding block does not point to the first frame in the List0, the decoding-side device may scale the forward motion information of the first surrounding block such that the scaled forward motion information points to the first frame in the List0, determine a motion vector in the scaled forward motion information as a motion vector of the first stage, and determine an index of the first frame in the List0 as a reference frame index of the first stage motion information.

When the forward motion information of the first peripheral block is not available, the decoding-side device may determine a zero motion vector as the motion vector of the first stage and determine the index of the first frame in the List0 as the reference frame index of the first stage motion information.

Illustratively, the reference direction of the motion information of the first stage is a List0 direction.

It should be noted that, in the embodiment of the present application, when the motion information of the first stage is determined based on the first surrounding block of the current block, the determination may also include:

determining that the motion vector of the first stage is 0, the reference frame index is an index of a first frame in the List0, and the reference direction of the motion information of the first stage is the List0 direction;

or,

it is determined that the motion vector of the first stage is 0, the reference frame index is an index of the first frame in List1, and the reference direction of the motion information of the first stage is the List1 direction.

As a possible embodiment, in step S412, determining a matching block of the current block based on the motion information of the first stage may include:

and determining the matching block of the current block based on the position of the current block, the horizontal motion vector and the vertical motion vector of the first stage, and the precision of the motion vector.

For example, when the motion information of the first stage is determined, the decoding-end device may determine the matching block of the current block based on the horizontal motion vector of the first stage (i.e., the horizontal component of the motion vector of the first stage), the vertical motion vector of the first stage (i.e., the vertical component of the motion vector of the first stage), the precision of the motion vector, and the position of the current block; that is, the position corresponding to the motion information of the first stage is adjusted to obtain the matching block.

Illustratively, the precision of the motion vector may take on values including, but not limited to, 4, 2, 1, 1/2, 1/4, 1/8, or 1/16.

For example, the position of the current block may be represented by coordinates of the upper left corner of the current block.

The decoding-end device may, based on the precision of the motion vector, left-shift (when the precision is greater than 1) or right-shift (when the precision is less than 1) the horizontal motion vector and the vertical motion vector of the first stage by several bits to convert them into integer pixels, determine a reference position based on the position of the current block and the converted horizontal and vertical motion vectors of the first stage, adjust the reference position based on a preset value (which may be set according to the actual scene, for example 3), and then determine the position of the matching block based on the adjusted reference position; specific implementations are given in the following embodiments.

As another possible embodiment, in step S412, determining a matching block of the current block based on the motion information of the first stage may include:

and determining a matching block of the current block based on the position of the current block, the horizontal motion vector and the vertical motion vector of the first stage, the precision of the motion vector and the size of the sub-block.

For example, when the motion information of the first stage is determined, the decoding-end device may determine the matching block of the current block based on the horizontal motion vector of the first stage (i.e., the horizontal component of the motion vector of the first stage), the vertical motion vector of the first stage (i.e., the vertical component of the motion vector of the first stage), the precision of the motion vector, the position of the current block, and the size of the sub-block; that is, the position corresponding to the motion information of the first stage is adjusted and aligned to obtain the matching block.

Illustratively, the precision of the motion vector may take on values including, but not limited to, 4, 2, 1, 1/2, 1/4, 1/8, or 1/16.

The position of the current block may be represented by coordinates of the upper left corner of the current block.

The decoding-side device may, based on the precision of the motion vector, left-shift (when the precision is greater than 1) or right-shift (when the precision is less than 1) the horizontal motion vector and the vertical motion vector of the first stage by several bits to convert them into integer pixels, determine a reference position based on the position of the current block and the converted horizontal and vertical motion vectors of the first stage, adjust the reference position based on the size of the sub-block (which may be referred to as alignment adjustment), and then determine the position of the matching block based on the adjusted reference position.

Illustratively, assuming the sub-block size is 2^N × 2^N, the reference position may be shifted right by N bits and then left by N bits, where N is a positive integer.

For example, when the alignment adjustment is performed on the reference position based on the side length of the sub-block, the remainder of the abscissa and the ordinate of the reference position modulo the side length of the sub-block is usually discarded directly; however, when that remainder is greater than half of the side length, taking the remainder into account (i.e., rounding to the nearer multiple of the side length) gives a better result than directly discarding it, and the matching block determined in this way is usually better.

For example, after the motion information of the first stage is determined, the horizontal motion vector and the vertical motion vector of the first stage may be adjusted based on a preset adjustment value, respectively, and the matching block of the current block may be determined based on the adjusted horizontal motion vector and vertical motion vector of the first stage, the precision of the motion vector, and the position of the current block.

Or, the horizontal and vertical coordinates of the upper left corner of the matching block may be preliminarily determined based on the horizontal motion vector and the vertical motion vector in the first stage and the precision of the motion vector, the preliminarily determined horizontal and vertical coordinates of the upper left corner of the matching block may be adjusted based on preset adjustment values, and finally, the adjusted horizontal and vertical coordinates may be aligned and adjusted based on the side length of the subblock.

For example, after the horizontal motion vector and the vertical motion vector of the first stage are converted into integer pixels based on the precision of the motion vector, 2^(N-1) may be added to the horizontal motion vector and the vertical motion vector of the first stage, respectively; or 2^(N-1) may be added to the horizontal and vertical coordinates of the upper left corner of the matching block preliminarily determined in the above manner, respectively, where 2^N is the side length of the sub-block.

For example, the adjustment value for adjusting the horizontal motion vector in the first stage may be the same as or different from the adjustment value for adjusting the vertical motion vector in the first stage;

or, the adjustment value for adjusting the abscissa of the top left corner of the preliminarily determined matching block may be the same as or different from the adjustment value for adjusting the ordinate of the top left corner of the preliminarily determined matching block.
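As a concrete illustration of the second embodiment above, the following C++-style sketch derives the matching block position from the first-stage motion vector, assuming the motion vector precision is expressed as the number of bits to right-shift to reach integer pixels (for example 4 for 1/16-pel precision) and the sub-block side length is 2^n; the rounding offset 2^(n-1) corresponds to the optional adjustment described above, and all names are illustrative.

struct BlockPos { int x, y; };

// Sketch: derive the top-left position of the matching block from the first-stage
// motion vector. mvPrecShift is the number of bits to right-shift the MV to reach
// integer pixels (e.g. 4 for 1/16-pel); n gives the sub-block side length 1 << n.
BlockPos deriveMatchingBlockPos(int xCb, int yCb,            // top-left of the current block
                                int mvxFirstStage, int mvyFirstStage,
                                int mvPrecShift, int n) {
    // Convert the first-stage MV to integer pixels.
    int dx = mvxFirstStage >> mvPrecShift;
    int dy = mvyFirstStage >> mvPrecShift;
    // Reference position, with the optional rounding offset of 2^(n-1).
    int x = xCb + dx + (1 << (n - 1));
    int y = yCb + dy + (1 << (n - 1));
    // Alignment adjustment to the sub-block grid: shift right and then left by n bits.
    x = (x >> n) << n;
    y = (y >> n) << n;
    return {x, y};
}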

Step S420, determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a second temporal candidate mode list based on the determined candidate enhanced temporal motion vector prediction mode.

In the embodiment of the present application, when the matching block in step S410 is determined, a new matching block may be obtained by shifting the matching block.

For example, the number of new matching blocks obtained by offsetting the matching blocks may be one or more.

The decoding-side device may determine, based on the matching block determined in step S410 and the new matching block obtained in step S420, a temporal prediction mode corresponding to the matching block and a prediction mode corresponding to the new matching block as candidate enhanced temporal motion vector prediction modes, and construct a temporal candidate mode list (referred to as a second temporal candidate mode list herein) based on the determined candidate enhanced temporal motion vector prediction modes.

For example, the number of candidate enhanced temporal motion vector prediction modes and the order of adding each candidate enhanced temporal motion vector prediction mode to the temporal candidate mode list may not be limited, but the decoding side device and the encoding side device need to be consistent.

In a specific implementation scenario, the motion information of a new matching block obtained by shifting the matching block may be the same as or similar to that of the original matching block; in this case, the prediction mode corresponding to the new matching block may not be used as a candidate enhanced temporal motion vector prediction mode.

As a possible embodiment, shifting the matching block to obtain a new matching block can be implemented by:

clipping the first sub-block and the second sub-block into the range of the current CTU, and comparing the motion information of the clipped first sub-block and second sub-block; clipping the third sub-block and the fourth sub-block into the range of the current CTU, and comparing the motion information of the clipped third sub-block and fourth sub-block; and if at least one of the two comparison results shows different motion information, shifting the matching block horizontally to the right by one unit to obtain a new matching block;

clipping the fifth sub-block and the sixth sub-block into the range of the current CTU, and comparing the motion information of the clipped fifth sub-block and sixth sub-block; clipping the seventh sub-block and the eighth sub-block into the range of the current CTU, and comparing the motion information of the clipped seventh sub-block and eighth sub-block; and if at least one of the two comparison results shows different motion information, shifting the matching block horizontally to the left by one unit to obtain a new matching block;

clipping the first sub-block and the ninth sub-block into the range of the current CTU, and comparing the motion information of the clipped first sub-block and ninth sub-block; clipping the fifth sub-block and the tenth sub-block into the range of the current CTU, and comparing the motion information of the clipped fifth sub-block and tenth sub-block; and if at least one of the two comparison results shows different motion information, shifting the matching block vertically downward by one unit to obtain a new matching block;

clipping the third sub-block and the eleventh sub-block into the range of the current CTU, and comparing the motion information of the clipped third sub-block and eleventh sub-block; clipping the seventh sub-block and the twelfth sub-block into the range of the current CTU, and comparing the motion information of the clipped seventh sub-block and twelfth sub-block; and if at least one of the two comparison results shows different motion information, shifting the matching block vertically upward by one unit to obtain a new matching block.

Illustratively, the first sub-block is the sub-block at the top left corner of the matching block, the second sub-block is the adjacent sub-block at the top right corner of the matching block, the third sub-block is the sub-block at the bottom left corner of the matching block, the fourth sub-block is the adjacent sub-block at the bottom right corner of the matching block, the fifth sub-block is the sub-block at the top right corner of the matching block, the sixth sub-block is the adjacent sub-block at the top left corner of the matching block, the seventh sub-block is the sub-block at the bottom right corner of the matching block, the eighth sub-block is the adjacent sub-block at the bottom left corner of the matching block, the ninth sub-block is the adjacent sub-block at the left corner below the matching block, the tenth sub-block is the adjacent sub-block at the right corner below the matching block, the eleventh sub-block is the adjacent sub-block at the left corner above the matching block, and the twelfth sub-block is the adjacent sub-block at the right corner above the matching block.

Illustratively, one unit is the side length of the subblock.

For example, for 8 × 8 sub-blocks, one unit is 8 pixels; for 4 × 4 sub-blocks, one unit is 4 pixels; for 16 × 16 sub-blocks, one unit is 16 pixels.

It should be noted that the shift applied to the matching block is not limited to one unit as described in the foregoing embodiments and may take other values; for example, when the sub-block is an 8 × 8 sub-block, the matching block may also be shifted by values other than 8 pixels, and the specific implementation is not described here again.

Taking fig. 7 as an example, the first sub-block is A1, the second sub-block is B2, the third sub-block is A3, the fourth sub-block is B4, the fifth sub-block is A2, the sixth sub-block is B1, the seventh sub-block is A4, the eighth sub-block is B3, the ninth sub-block is C3, the tenth sub-block is C4, the eleventh sub-block is C1, and the twelfth sub-block is C2.

Taking A1, B2, A3 and B4 as examples, the decoding-end device can clip A1, B2, A3 and B4 into the range of the current CTU (i.e. the CTU to which the current block belongs), compare the motion information of the clipped A1 with that of the clipped B2, and compare the motion information of the clipped A3 with that of the clipped B4. If the motion information of the clipped A1 and B2 is the same and the motion information of the clipped A3 and B4 is the same, then with high probability the motion information of the other sub-blocks between A1 and B2 is also the same, as is that of the other sub-blocks between A3 and B4; in that case, shifting the matching block horizontally to the right by one unit would, with high probability, yield a matching block whose motion information is the same as that of the original matching block, so the shift would not provide new motion information.

If the motion information of the clipped A1 and B2 is different, and/or the motion information of the clipped A3 and B4 is different, a new matching block can be obtained by shifting the matching block horizontally to the right by one unit.

Similarly, it may be determined according to the above logic whether a corresponding new matching block may be obtained by shifting the matching block horizontally one unit to the left, vertically one unit up, or vertically one unit down.

It should be noted that, in this embodiment, when a new matching block is not obtained in the foregoing manner, the original matching block may be determined to be a final candidate matching block, or the current matching block may be shifted again according to another shifting strategy to obtain the new matching block.
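The following C++-style sketch summarizes the above offset decision using the A/B/C sub-block labels of fig. 7. The sameMotion predicate is supplied by the caller and is assumed to clip both sub-block positions into the current CTU and compare their motion information; the positions and the predicate are illustrative assumptions, not part of the original description.

#include <vector>
#include <functional>

struct Pt { int x, y; };
struct Shift { int dx, dy; };

// Decide which one-unit shifts of the matching block yield candidate new matching
// blocks. `unit` is the sub-block side length in pixels; y grows downward.
std::vector<Shift> candidateShifts(Pt A1, Pt A2, Pt A3, Pt A4,
                                   Pt B1, Pt B2, Pt B3, Pt B4,
                                   Pt C1, Pt C2, Pt C3, Pt C4,
                                   int unit,
                                   const std::function<bool(Pt, Pt)>& sameMotion) {
    std::vector<Shift> shifts;
    if (!sameMotion(A1, B2) || !sameMotion(A3, B4)) shifts.push_back({+unit, 0});  // shift right
    if (!sameMotion(A2, B1) || !sameMotion(A4, B3)) shifts.push_back({-unit, 0});  // shift left
    if (!sameMotion(A1, C3) || !sameMotion(A2, C4)) shifts.push_back({0, +unit});  // shift down
    if (!sameMotion(A3, C1) || !sameMotion(A4, C2)) shifts.push_back({0, -unit});  // shift up
    return shifts;                                   // empty: keep the original matching block
}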

As another possible embodiment, a new matching block obtained by offsetting the matching block may be determined by:

and offsetting the matching block in the horizontal direction and the vertical direction based on one or more offset pairs, respectively, to obtain one or more new matching blocks.

For example, the decoding-end device may further obtain a new matching block by respectively shifting the matching block by a preset shift amount in the horizontal direction and the vertical direction.

Illustratively, an offset amount by which the matching block is offset in the horizontal direction and an offset amount by which the matching block is offset in the vertical direction constitute one offset pair.

The decoding-end device may offset the matching block in the horizontal direction and the vertical direction based on one or more offset pairs, respectively, to obtain one or more new matching blocks.
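A minimal sketch of this alternative follows. The set of offset pairs is an illustrative assumption (the text leaves it open); for example, with a sub-block side of 8, pairs such as (8, 0), (-8, 0), (0, 8) and (0, -8) could be used.

#include <vector>

struct OffsetPair { int dx, dy; };
struct MatchBlock { int x, y, w, h; };

// Generate new matching blocks by applying each (horizontal, vertical) offset pair
// to the original matching block.
std::vector<MatchBlock> offsetMatchingBlocks(const MatchBlock& m,
                                             const std::vector<OffsetPair>& pairs) {
    std::vector<MatchBlock> out;
    for (const OffsetPair& p : pairs) out.push_back({m.x + p.dx, m.y + p.dy, m.w, m.h});
    return out;
}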

As another possible embodiment, after determining the matching block of the current block in step S410, the method may further include:

the determined matching block is Clip-in to the range of the current CTU.

For example, in order to improve the efficiency of determining the motion information of each sub-block of the current block in the subsequent process, when the matching block in step S410 is determined, a Clip operation may be performed on the determined matching block, and the matching block is clipped to the current CTU, so as to ensure that each sub-block of the matching block is within the current CTU.

In one example, a new matching block obtained by offsetting the matching block may be determined by:

when the right boundary of the matching block after the Clip is not positioned at the right boundary position of the current CTU, comparing the motion information of the thirteenth sub-block and the fourteenth sub-block, and comparing the motion information of the fifteenth sub-block and the sixteenth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the right to obtain a new matching block;

When the left boundary of the matching block after the Clip is not positioned at the left boundary position of the current CTU, comparing the motion information of the seventeenth sub-block and the eighteenth sub-block, and comparing the motion information of the nineteenth sub-block and the twentieth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the left to obtain a new matching block;

when the lower boundary of the matching block after the Clip is not positioned at the lower boundary position of the current CTU, comparing the motion information of the thirteenth sub-block and the twenty-first sub-block, and comparing the motion information of the seventeenth sub-block and the twenty-second sub-block, and if at least one of the two comparison results shows different motion information, shifting the matching block after the Clip vertically downward by one unit to obtain a new matching block;

when the upper boundary of the matching block after the Clip is not positioned at the upper boundary position of the current CTU, comparing the motion information of the fifteenth sub-block and the twenty-third sub-block, and comparing the motion information of the nineteenth sub-block and the twenty-fourth sub-block, and if at least one of the two comparison results shows different motion information, shifting the matching block after the Clip vertically upward by one unit to obtain a new matching block.

Illustratively, the thirteenth sub-block is the sub-block at the upper left corner of the matching block after the Clip, the fourteenth sub-block is the adjacent sub-block at the top right corner of the matching block after the Clip, the fifteenth sub-block is the sub-block at the lower left corner of the matching block after the Clip, the sixteenth sub-block is the adjacent sub-block at the bottom right corner of the matching block after the Clip, the seventeenth sub-block is the sub-block at the upper right corner of the matching block after the Clip, the eighteenth sub-block is the adjacent sub-block at the top left corner of the matching block after the Clip, the nineteenth sub-block is the sub-block at the bottom right corner of the matching block after the Clip, the twentieth sub-block is the adjacent sub-block at the bottom left corner of the matching block after the Clip, the twenty-first sub-block is the adjacent sub-block at the left corner below the matching block after the Clip, the twenty-second sub-block is the adjacent sub-block at the right corner below the matching block after the Clip, the twenty-third sub-block is the adjacent sub-block at the left corner above the matching block after the Clip, and the twenty-fourth sub-block is the adjacent sub-block at the right corner above the matching block after the Clip.

Illustratively, one unit is the side length of the subblock.

Still taking fig. 7 as an example, assuming that the matching block in fig. 7 is the matching block after the Clip, the thirteenth sub-block is A1, the fourteenth sub-block is B2, the fifteenth sub-block is A3, the sixteenth sub-block is B4, the seventeenth sub-block is A2, the eighteenth sub-block is B1, the nineteenth sub-block is A4, the twentieth sub-block is B3, the twenty-first sub-block is C3, the twenty-second sub-block is C4, the twenty-third sub-block is C1, and the twenty-fourth sub-block is C2.

In order to ensure that each sub-block in the matching block is within the range of the current CTU, so as to improve the efficiency of determining the motion information of each sub-block in the current block, when the matching block in step S410 is determined, a Clip operation may be performed on the matching block, so that the matching block after the Clip is within the range of the current CTU.

Considering that the vertex coordinates of the matching blocks are all aligned according to integral multiples of the side lengths of the sub-blocks, namely the vertex coordinates of the matching blocks are all integral multiples of the side lengths of the sub-blocks, the distances from the boundary of the matching block after the Clip to the boundary positions of the current CTU are all integral multiples of the side lengths of the sub-blocks.

In order to ensure that each sub-block in the matching block obtained by offsetting is within the range of the current CTU, when offsetting the matching block after the Clip, it is required to ensure that the distance between the boundary of the matching block after the Clip in the offsetting direction and the corresponding current CTU boundary position is greater than 0, that is, the boundary of the matching block after the Clip in the offsetting direction is not located at the corresponding current CTU boundary position.

Illustratively, when the right boundary of the post-Clip matching block is at the right boundary position of the current CTU, the post-Clip matching block need not be shifted to the right; when the left boundary of the post-Clip matching block is at the left boundary position of the current CTU, the post-Clip matching block need not be shifted to the left; when the upper boundary of the post-Clip matching block is at the upper boundary position of the current CTU, the post-Clip matching block need not be shifted upward; and when the lower boundary of the post-Clip matching block is at the lower boundary position of the current CTU, the post-Clip matching block need not be shifted downward.

For example, when the right boundary of the post-Clip matching block is not located at the right boundary position of the current CTU, the decoding-end device may determine whether a horizontal rightward shift of the post-Clip matching block is required based on the comparison result of the motion information of A1 and B2 and the comparison result of the motion information of A3 and B4. If at least one of the two comparison results shows different motion information, the post-Clip matching block may be shifted horizontally to the right by one unit.

Similarly, when the left boundary of the post-Clip matching block is not located at the left boundary position of the current CTU, the decoding-end device may determine whether a horizontal leftward shift of the post-Clip matching block is required based on the comparison result of the motion information of A2 and B1 and the comparison result of the motion information of A4 and B3. If at least one of the two comparison results shows different motion information, the post-Clip matching block may be shifted horizontally to the left by one unit.

When the lower boundary of the post-Clip matching block is not at the lower boundary position of the current CTU, the decoding-end device may determine whether a vertical downward shift of the post-Clip matching block is required based on the comparison result of the motion information of A1 and C3 and the comparison result of the motion information of A2 and C4. If at least one of the two comparison results shows different motion information, the post-Clip matching block may be shifted vertically downward by one unit.

When the upper boundary of the post-Clip matching block is not at the upper boundary position of the current CTU, the decoding-end device may determine whether a vertical upward shift of the post-Clip matching block is required based on the comparison result of the motion information of A3 and C1 and the comparison result of the motion information of A4 and C2. If at least one of the two comparison results shows different motion information, the post-Clip matching block may be shifted vertically upward by one unit.

In one example, determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block in step S420 may include:

when at least one new matching block exists, determining a prediction mode corresponding to the matching block before shifting and a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or,

when at least one new matching block exists, determining a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or,

and when no new matching block exists, determining the prediction mode corresponding to the matching block before shifting as a candidate enhanced temporal motion vector prediction mode.

For example, when a Clip operation is performed on the matching block before it is shifted, the matching block before the shift refers to the matching block after the Clip; when no Clip operation is performed on the matching block before it is shifted, the matching block before the shift refers to the original matching block.

For example, when the decoding-end device obtains at least one new matching block in the manner described in the above embodiment, the decoding-end device may determine, as candidate enhanced temporal motion vector prediction modes, a prediction mode corresponding to the matching block before shifting and a prediction mode corresponding to the new matching block obtained by shifting; or, the prediction mode corresponding to the new matching block obtained by shifting may be determined as the candidate enhanced temporal motion vector prediction mode, and the prediction mode corresponding to the matching block before shifting may not be determined as the candidate enhanced temporal motion vector prediction mode.

It should be noted that, when there are multiple new matching blocks, the decoding-end device may determine some or all of the prediction modes corresponding to the multiple new matching blocks as candidate enhanced temporal motion vector prediction modes.

When the decoding-end device does not obtain a new matching block in the manner described in the above embodiment, the decoding-end device may determine the prediction mode corresponding to the matching block before the offset as the candidate enhanced temporal motion vector prediction mode.

Illustratively, the second temporal candidate mode list includes at least one candidate enhanced temporal motion vector prediction mode, and each candidate enhanced temporal motion vector prediction mode corresponds to a different matching block.
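The following C++-style sketch illustrates the construction of the second temporal candidate mode list and the subsequent selection by index; the CandidateMode representation and the includeOriginal flag are illustrative assumptions, and the only hard requirement stated above is that the insertion order be identical at the encoder and the decoder.

#include <vector>

struct CandBlock { int x, y, w, h; };
struct CandidateMode { CandBlock matchingBlock; };  // a candidate ETMVP mode is identified by its matching block

// Build the second temporal candidate mode list from the matching block before
// shifting and the new matching blocks obtained by shifting. Whether the original
// matching block is kept when shifted candidates exist is a configuration choice,
// as described above.
std::vector<CandidateMode> buildSecondTemporalCandidateList(
        const CandBlock& original, const std::vector<CandBlock>& shifted, bool includeOriginal) {
    std::vector<CandidateMode> list;
    if (shifted.empty() || includeOriginal) list.push_back({original});
    for (const CandBlock& b : shifted) list.push_back({b});  // order must match the encoder's
    return list;
}

// The decoder then selects list[index] as the enhanced temporal motion vector
// prediction mode, where index is the index information parsed from the code stream.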

Step S430, determining an enhanced temporal motion vector prediction mode from the second temporal candidate mode list based on the index information of the enhanced temporal motion vector prediction mode.

In the embodiment of the present application, considering that, when the technical solution provided by the embodiment of the present application is adopted, the encoding-side device and the decoding-side device construct the temporal candidate mode list for the same image block with the candidate enhanced temporal motion vector prediction modes in the same order, the position of the enhanced temporal motion vector prediction mode in the temporal candidate mode list constructed by the encoding-side device is the same as the position of the enhanced temporal motion vector prediction mode in the temporal candidate mode list constructed by the decoding-side device.

Therefore, the decoding-end device may determine the enhanced temporal motion vector prediction mode from the second temporal candidate mode list constructed by the decoding-end device based on the index information of the enhanced temporal motion vector prediction mode parsed from the code stream of the current block.

Step S440, determining the motion information of each sub-block in the current block based on the enhanced temporal motion vector prediction mode, and performing motion compensation on each sub-block in the current block based on the motion information of each sub-block in the current block.

In this embodiment, when the enhanced temporal motion vector prediction mode is determined, motion information of each sub-block in the current block may be determined based on a matching block corresponding to the enhanced temporal motion vector prediction mode (the matching block may be referred to as a target matching block herein), and motion compensation may be performed on each sub-block in the current block based on the motion information of each sub-block in the current block.

Illustratively, each candidate enhanced temporal motion vector prediction mode in the second temporal candidate mode list corresponds to a different matching block.

As a possible embodiment, the determining the motion information of each sub-block in the current block based on the enhanced temporal motion vector prediction mode in step S440 may include:

for any sub-block in the target matching block, clipping the sub-block into the range of the current CTU;

if both the forward motion information and the backward motion information of the clipped sub-block are available, scaling the forward motion information and the backward motion information of the clipped sub-block to point to the first frame of List0 and the first frame of List1, respectively, and assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

if the forward motion information of the clipped sub-block is available but its backward motion information is not available, scaling the forward motion information of the clipped sub-block to point to the first frame of List0 and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the clipped sub-block is available but its forward motion information is not available, scaling the backward motion information of the clipped sub-block to point to the first frame of List1 and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.

For example, to improve the accuracy of the determined motion information of each sub-block of the current block, for any sub-block in the target matching block, the decoding-side device may Clip the sub-block into the range of the current CTU, and determine whether the forward motion information and the backward motion information of the sub-block after the Clip are available.

When both the forward motion information and the backward motion information of the sub-block after the Clip are available, the decoding-side device may respectively scale the forward motion information and the backward motion information of the sub-block after the Clip, so as to scale the forward motion information of the sub-block after the Clip to the first frame pointing to the List0, scale the backward motion information of the sub-block after the Clip to the first frame pointing to the List1, and further respectively give the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

When the forward motion information of the sub-block after the Clip is available but the backward motion information is unavailable, the forward motion information of the sub-block after the Clip is scaled to point to the first frame of List0, and the scaled forward motion information is given to the sub-block at the corresponding position of the current block;

and when the backward motion information of the sub-block after the Clip is available but the forward motion information is unavailable, the backward motion information of the sub-block after the Clip is scaled to point to the first frame of List1, and the scaled backward motion information is given to the sub-block at the corresponding position of the current block.

In this embodiment, when the forward motion information of the sub-block after the Clip is available but the backward motion information is not, the forward motion information of the sub-block after the Clip may be respectively scaled to point to the first frame of List0 and the first frame of List1, and the scaled motion information may be assigned to the sub-block at the corresponding position of the current block (the motion information scaled to point to the first frame of List0 is used as the forward motion information, and the motion information scaled to point to the first frame of List1 is used as the backward motion information).

The same applies to the case where the backward motion information of the sub-block after the Clip is available but the forward motion information is not.
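As an illustration of the per-sub-block assignment described above, the following C sketch shows one possible shape of the logic. It is a minimal sketch under stated assumptions: the MvField and MotionInfo structures, the ScaleMv() helper and its POC-distance based scaling, and the function names are all introduced here for illustration only and are not defined by this application.

/* Minimal sketch of assigning the motion information of one clipped sub-block
 * of the target matching block to the co-located sub-block of the current
 * block. All types, field names and the scaling rule are illustrative
 * assumptions, not the normative derivation. */
typedef struct {
    int mvx, mvy;     /* motion vector                           */
    int ref_poc;      /* POC of the reference frame it points to */
    int available;    /* non-zero if this direction is valid     */
} MvField;

typedef struct {
    MvField fwd;      /* List0 (forward) motion information      */
    MvField bwd;      /* List1 (backward) motion information     */
} MotionInfo;

/* Scale mv so that it points to dst_poc instead of its own reference; a
 * simple linear scaling by POC distance is assumed here for illustration. */
static MvField ScaleMv(MvField in, int cur_poc, int dst_poc)
{
    MvField out = in;
    int td = cur_poc - in.ref_poc;   /* distance to the original reference */
    int tb = cur_poc - dst_poc;      /* distance to the target reference   */
    if (td != 0) {
        out.mvx = in.mvx * tb / td;
        out.mvy = in.mvy * tb / td;
    }
    out.ref_poc = dst_poc;
    out.available = 1;
    return out;
}

void AssignSubBlockMotion(const MotionInfo *clipped, MotionInfo *cur_sub,
                          int cur_poc, int list0_first_poc, int list1_first_poc)
{
    if (clipped->fwd.available && clipped->bwd.available) {
        /* both directions available: scale each to the first frame of its list */
        cur_sub->fwd = ScaleMv(clipped->fwd, cur_poc, list0_first_poc);
        cur_sub->bwd = ScaleMv(clipped->bwd, cur_poc, list1_first_poc);
    } else if (clipped->fwd.available) {
        /* only forward available: scale it to the first frame of List0 */
        cur_sub->fwd = ScaleMv(clipped->fwd, cur_poc, list0_first_poc);
        cur_sub->bwd.available = 0;
    } else if (clipped->bwd.available) {
        /* only backward available: scale it to the first frame of List1 */
        cur_sub->bwd = ScaleMv(clipped->bwd, cur_poc, list1_first_poc);
        cur_sub->fwd.available = 0;
    }
    /* the case where neither direction is available is handled by the
     * fallback manners described below */
}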

For example, for any sub-block in the target matching block, if, after the sub-block is clipped into the range of the current CTU, neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, the motion information of the sub-block at the corresponding position in the current block may be determined in at least one of the following manners:

Mode one

If neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, the center position of the target matching block is clipped into the range of the current CTU. When both the forward motion information and the backward motion information of the center position of the target matching block after the Clip are available, the forward motion information and the backward motion information of the center position of the target matching block after the Clip are respectively scaled to point to the first frame of List0 and the first frame of List1, and the scaled forward motion information and backward motion information are respectively assigned to the sub-block at the corresponding position of the current block; when the forward motion information of the center position of the target matching block after the Clip is available but the backward motion information is unavailable, the forward motion information of the center position of the target matching block after the Clip is scaled to point to the first frame of List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target matching block after the Clip is available but the forward motion information is unavailable, the backward motion information of the center position of the target matching block after the Clip is scaled to point to the first frame of List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the center position of the target matching block after the Clip is available, zero motion information is assigned to the sub-block at the corresponding position of the current block.

For example, when neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, the decoding-side device may clip the center position of the target matching block into the range of the current CTU, and determine whether the forward motion information and the backward motion information of the center position of the target matching block after the Clip are available; if both are available, the decoding-side device may scale the forward motion information and the backward motion information of the center position of the target matching block after the Clip to point to the first frame of List0 and the first frame of List1 respectively, and respectively assign the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

If the forward motion information of the center position of the target matching block after the Clip is available but the backward motion information is not, the forward motion information of the center position of the target matching block after the Clip is scaled to point to the first frame of List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block.

If the backward motion information of the center position of the target matching block after the Clip is available but the forward motion information is not, the backward motion information of the center position of the target matching block after the Clip is scaled to point to the first frame of List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

If neither the forward motion information nor the backward motion information of the center position of the target matching block after the Clip is available, the decoding-end device may assign zero motion information to the sub-block at the corresponding position of the current block.

Mode two

And if neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

For example, when neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, the decoding-side device may assign zero motion information to the sub-block at the position corresponding to the current block.

Mode three

If neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, then when both the forward motion information and the backward motion information of the second surrounding block of the current block are available, the forward motion information and the backward motion information of the second surrounding block are respectively scaled to point to the first frame of List0 and the first frame of List1, and the scaled forward motion information and backward motion information are respectively assigned to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is unavailable, the forward motion information of the second surrounding block is scaled to point to the first frame of List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is unavailable, the backward motion information of the second surrounding block is scaled to point to the first frame of List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the second surrounding block is available, zero motion information is assigned to the sub-block at the corresponding position of the current block.

For example, the first surrounding block of the current block may include any decoded neighboring block or non-neighboring block of the current block.

The second peripheral block may be the same as or different from the first peripheral block.

For example, when both the forward motion information and the backward motion information of the sub-block after the Clip are unavailable, the decoding-side device may determine whether the forward motion information and the backward motion information of the second peripheral block of the current block are available, and if both the forward motion information and the backward motion information of the second peripheral block are available, the decoding-side device may respectively scale the forward motion information and the backward motion information of the second peripheral block to the first frame pointing to the List0 and the first frame pointing to the List1, and respectively assign the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

If the forward motion information of the second surrounding block is available but the backward motion information is not, the forward motion information of the second surrounding block is scaled to point to the first frame of List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block.

If the backward motion information of the second surrounding block is available but the forward motion information is not, the backward motion information of the second surrounding block is scaled to point to the first frame of List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

If neither the forward motion information nor the backward motion information of the second surrounding block is available, the decoding-end device may assign zero motion information to the sub-block at the corresponding position of the current block.

In this embodiment, when the forward motion information of the center position of the target matching block after the Clip is available but the backward motion information is not available, the forward motion information of the center position of the target matching block after the Clip may be respectively scaled to the first frame of List0 and the first frame of List1, and the scaled motion information may be assigned to the sub-block corresponding to the current block (the scaled motion information of the first frame of List0 is used as the forward motion information and the scaled motion information of the first frame of List1 is used as the backward motion information).

The same applies to the case where the backward motion information of the center position of the target matching block after the Clip is available but the forward motion information is not, and to the case where one of the forward motion information and the backward motion information of the second surrounding block is available but the other is not.
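To make the fallback of mode one concrete, the following sketch continues the previous one (it reuses the illustrative MvField/MotionInfo types and ScaleMv() helper defined there). Mode three has the same shape with the second surrounding block in place of the clipped center position, and mode two simply assigns zero motion information directly; none of this is a normative definition.

/* Hedged sketch of mode one: when neither direction of the clipped sub-block
 * is available, fall back to the clipped center position of the target
 * matching block, and to zero motion information as the last resort.
 * Reuses the illustrative MvField/MotionInfo types and ScaleMv() above. */
void FallbackToCenter(const MotionInfo *center,   /* clipped center of the target matching block */
                      MotionInfo *cur_sub,
                      int cur_poc, int list0_first_poc, int list1_first_poc)
{
    if (center->fwd.available && center->bwd.available) {
        cur_sub->fwd = ScaleMv(center->fwd, cur_poc, list0_first_poc);
        cur_sub->bwd = ScaleMv(center->bwd, cur_poc, list1_first_poc);
    } else if (center->fwd.available) {
        cur_sub->fwd = ScaleMv(center->fwd, cur_poc, list0_first_poc);
        cur_sub->bwd.available = 0;
    } else if (center->bwd.available) {
        cur_sub->bwd = ScaleMv(center->bwd, cur_poc, list1_first_poc);
        cur_sub->fwd.available = 0;
    } else {
        /* neither direction available at the center position: zero motion */
        MvField zero0 = { 0, 0, list0_first_poc, 1 };
        MvField zero1 = { 0, 0, list1_first_poc, 1 };
        cur_sub->fwd = zero0;
        cur_sub->bwd = zero1;
    }
}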

As another possible embodiment, determining the motion information of each sub-block in the current block based on the enhanced temporal motion vector prediction mode in step S440 may include:

For any sub-block in the target matching block, if the forward motion information and the backward motion information of the sub-block are both available, respectively scaling the forward motion information and the backward motion information of the sub-block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

If the forward motion information of the subblock is available but the backward motion information is not available, scaling the forward motion information of the subblock to a first frame pointing to the List0, and giving the scaled forward motion information to the subblock at the corresponding position of the current block;

if the backward motion information of the sub-block is available but the forward motion information is not available, the backward motion information of the sub-block is scaled to the first frame pointing to the List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

For example, to improve the efficiency of determining the motion information of each sub-block of the current block, for any sub-block in the target matching block, the decoding-side device may determine whether the motion information of the sub-block is available.

When both the forward motion information and the backward motion information of the subblock are available, the decoding-side device may respectively scale the forward motion information and the backward motion information of the subblock to scale the forward motion information of the subblock to a first frame pointing to the List0, and scale the backward motion information of the subblock to a first frame pointing to the List1, and further respectively give the scaled forward motion information and the scaled backward motion information to the subblock at the corresponding position of the current block.

When the forward motion information of the subblock is available but the backward motion information is unavailable, scaling the forward motion information of the subblock to a first frame pointing to the List0, and assigning the scaled forward motion information to the subblock at the corresponding position of the current block;

when the backward motion information of the subblock is available but the forward motion information is not available, the backward motion information of the subblock is scaled to point to the first frame of the List1, and the scaled backward motion information is assigned to the subblock at the corresponding position of the current block.

In this embodiment, for the case that the forward motion information of the sub-block is available but the backward motion information is not available, the forward motion information of the sub-block may be respectively scaled to the first frame pointing to List0 and the first frame pointing to List1, and the scaled motion information may be assigned to the sub-block corresponding to the current block (the scaled motion information pointing to the first frame of List0 is used as the forward motion information, and the scaled motion information pointing to the first frame of List1 is used as the backward motion information).

The same applies to the case where the backward motion information of the sub-block is available but the forward motion information is not.

For example, for any sub-block in the target matching block, if neither the forward motion information nor the backward motion information of the sub-block is available, the motion information of the sub-block at the corresponding position in the current block may be determined in at least one of the following manners:

Mode one

If neither the forward motion information nor the backward motion information of the sub-block is available, then when both the forward motion information and the backward motion information of the center position of the target matching block are available, the forward motion information and the backward motion information of the center position of the target matching block are respectively scaled to point to the first frame of List0 and the first frame of List1, and the scaled forward motion information and backward motion information are respectively assigned to the sub-block at the corresponding position of the current block; when the forward motion information of the center position of the target matching block is available but the backward motion information is not, the forward motion information of the center position of the target matching block is scaled to point to the first frame of List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target matching block is available but the forward motion information is not, the backward motion information of the center position of the target matching block is scaled to point to the first frame of List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the center position of the target matching block is available, zero motion information is assigned to the sub-block at the corresponding position of the current block.

For example, when both the forward motion information and the backward motion information of the subblock are unavailable, the decoding-side device may determine whether the forward motion information and the backward motion information of the center position of the target matching block are available, and if both the forward motion information and the backward motion information of the center position of the target matching block are available, the decoding-side device may respectively scale the forward motion information and the backward motion information of the center position of the target matching block to the first frame pointing to the List0 and the first frame pointing to the List1, and respectively assign the scaled forward motion information and the scaled backward motion information to the subblock at the corresponding position of the current block.

If the forward motion information of the center position of the target matching block is available but the backward motion information is not available, the forward motion information of the center position of the target matching block is scaled to the first frame pointing to the List0, and the scaled forward motion information is given to the sub-block at the corresponding position of the current block.

If the backward motion information of the center position of the target matching block is available but the forward motion information is not available, the backward motion information of the center position of the target matching block is scaled to the first frame pointing to the List1, and the scaled backward motion information is given to the sub-block at the corresponding position of the current block.

If the forward motion information and the backward motion information of the central position of the target matching block are unavailable, the decoding-end device can give zero motion information to the subblock at the corresponding position of the current block.

Mode two

And if the forward motion information and the backward motion information of the subblock are unavailable, giving zero motion information to the subblock at the corresponding position of the current block.

For example, when neither the forward motion information nor the backward motion information of the subblock is available, the decoding-side device may assign zero motion information to the subblock at the corresponding position of the current block.

Mode three

If neither the forward motion information nor the backward motion information of the sub-block is available, then when both the forward motion information and the backward motion information of the second surrounding block are available, the forward motion information and the backward motion information of the second surrounding block are respectively scaled to point to the first frame of List0 and the first frame of List1, and the scaled forward motion information and backward motion information are respectively assigned to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not, the forward motion information of the second surrounding block is scaled to point to the first frame of List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not, the backward motion information of the second surrounding block is scaled to point to the first frame of List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the second surrounding block is available, zero motion information is assigned to the sub-block at the corresponding position of the current block.

For example, when both the forward motion information and the backward motion information of the subblock are unavailable, the decoding-side device may determine whether the forward motion information and the backward motion information of the second surrounding block of the current block are available, and if both the forward motion information and the backward motion information of the second surrounding block are available, the decoding-side device may scale the forward motion information and the backward motion information of the second surrounding block to the first frame pointing to the List0 and the first frame pointing to the List1, respectively, and assign the scaled forward motion information and backward motion information to the subblock at the corresponding position of the current block.

If the forward motion information of the second surrounding block is available but the backward motion information is not, the forward motion information of the second surrounding block is scaled to point to the first frame of List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block.

If the backward motion information of the second surrounding block is available but the forward motion information is not, the backward motion information of the second surrounding block is scaled to point to the first frame of List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

If neither the forward motion information nor the backward motion information of the second surrounding block is available, the decoding-end device may assign zero motion information to the sub-block at the corresponding position of the current block.

In this embodiment, for the case that the forward motion information of the center position of the target matching block is available but the backward motion information is not available, the forward motion information of the center position of the target matching block may be respectively scaled to the first frame pointing to the List0 and the first frame pointing to the List1, and the scaled motion information may be assigned to the sub-block corresponding to the current block (the scaled motion information of the first frame pointing to the List0 is used as the forward motion information, and the scaled motion information of the first frame pointing to the List1 is used as the backward motion information).

The same applies to the case where the backward motion information of the center position of the target matching block is available but the forward motion information is not, and to the case where one of the forward motion information and the backward motion information of the second surrounding block is available but the other is not.

As a possible embodiment, in step S400, parsing the index information of the enhanced temporal motion vector prediction mode from the code stream of the current block may include:

and when the current block is determined to enable the enhanced time domain motion vector prediction technology, analyzing the index information of the enhanced time domain motion vector prediction mode from the code stream of the current block.

For example, in order to enhance the controllability and flexibility of the technique provided in the embodiments of the present application (referred to herein as the Enhanced Temporal Motion Vector Prediction (ETMVP) technique) and to remain compatible with the conventional ATMVP technique, whether the enhanced temporal motion vector prediction technique is enabled may be controlled according to the actual scene.

When the decoding end device receives the code stream of the current block and determines that the current block enables the enhanced temporal motion vector prediction technology, the decoding end device may analyze the index information of the enhanced temporal motion vector prediction mode from the code stream of the current block and perform subsequent processing according to the manner described in the above embodiments.

When the decoding-end device determines that the enhanced temporal motion vector prediction technique is not enabled for the current block, the decoding-end device may process according to the conventional ATMVP technique.

In one example, whether the current block enables the enhanced temporal motion vector prediction technique is represented using a Sequence Parameter Set (SPS) level syntax.

For example, to reduce the bit consumption of the control of whether the enhanced temporal motion vector prediction technique is enabled, the SPS-level syntax may be utilized to control whether the current block enables the enhanced temporal motion vector prediction technique.

For a picture sequence, a flag bit of SPS level may be set, and the flag bit of SPS level indicates whether the picture sequence enables the enhanced temporal motion vector prediction technique.

For example, the flag bit of the SPS level may be a flag bit with a length of 1 bit, and when the value of the flag bit is a first value (e.g., 1), it indicates that the corresponding image sequence enables the enhanced temporal motion vector prediction technique; when the value of the flag bit is a second value (e.g., 0), it indicates that the corresponding image sequence does not enable the enhanced temporal motion vector prediction technique.

For any block, the decoding-side device may determine whether the current block enables the enhanced temporal motion vector prediction technique based on whether the image sequence to which the block belongs enables the enhanced temporal motion vector prediction technique.

Illustratively, when the image sequence to which the current block belongs enables the enhanced temporal motion vector prediction technology, determining that the current block enables the enhanced temporal motion vector prediction technology;

when the enhanced temporal motion vector prediction technology is not enabled in the image sequence to which the current block belongs, determining that the enhanced temporal motion vector prediction technology is not enabled in the current block.

In another example, Slice level syntax is used to indicate whether the current block enables the enhanced temporal motion vector prediction technique.

For example, in order to refine the granularity and increase the flexibility of controlling whether the enhanced temporal motion vector prediction technology is enabled, Slice-level syntax may be used to control whether the current block enables the enhanced temporal motion vector prediction technology.

For a Slice, a Slice-level flag may be set, and the Slice-level flag indicates whether the Slice enables the enhanced temporal motion vector prediction technique.

For example, the flag bit of the Slice level may be a flag bit with a length of 1 bit, and when the value of the flag bit is a first value (e.g., 1), it indicates that the corresponding Slice enables the enhanced temporal motion vector prediction technique; when the value of the flag bit is a second value (e.g., 0), it indicates that the corresponding Slice does not enable the enhanced temporal motion vector prediction technique.

For any block, the decoding-end device may determine whether the current block enables the enhanced temporal motion vector prediction technique based on whether Slice to which the block belongs enables the enhanced temporal motion vector prediction technique.

Exemplarily, when Slice to which the current block belongs enables the enhanced temporal motion vector prediction technique, determining that the current block enables the enhanced temporal motion vector prediction technique;

and when the Slice to which the current block belongs does not enable the enhanced temporal motion vector prediction technology, determining that the current block does not enable the enhanced temporal motion vector prediction technology.

In yet another example, whether the current block enables the enhanced temporal motion vector prediction technique may be determined by:

when the size of the current block is smaller than or equal to the size of a preset maximum block and larger than or equal to the size of a preset minimum block, determining that the current block enables an enhanced time domain motion vector prediction technology;

And when the size of the current block is larger than the preset maximum block size or smaller than the preset minimum block size, determining that the enhanced temporal motion vector prediction technology is not enabled for the current block.

For example, considering that the performance of encoding and decoding using the enhanced temporal motion vector prediction technique may not be guaranteed when the size of the image block is too large or too small, the maximum size and/or the minimum size of the image block in which the enhanced temporal motion vector prediction technique is enabled may be preset, and it may be determined whether the current block enables the enhanced temporal motion vector prediction technique based on the size of the current block.

When the decoding-end device acquires the code stream of the current block, the size of the current block can be analyzed, and the size of the current block is compared with the size of a preset maximum block and/or the size of a preset minimum block.

When the size of the current block is smaller than or equal to the size of a preset maximum block and larger than or equal to the size of a preset minimum block, determining that the current block enables an enhanced time domain motion vector prediction technology;

and when the size of the current block is larger than the preset maximum block size or smaller than the preset minimum block size, determining that the enhanced temporal motion vector prediction technology is not enabled for the current block.
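The three example ways of deciding whether the current block enables the enhanced temporal motion vector prediction technique can be sketched as follows. The structure and field names (EtmvpCtrl, sps_etmvp_flag, slice_etmvp_flag, the preset block-size fields) are illustrative assumptions; comparing width and height separately is also only an illustrative choice, and a real implementation would take these values from its own parsed SPS, Slice header or configuration.

/* Hedged sketch of the three example enable decisions described above. */
typedef struct {
    int sps_etmvp_flag;         /* SPS-level 1-bit flag (1 = sequence enables ETMVP) */
    int slice_etmvp_flag;       /* Slice-level 1-bit flag (1 = slice enables ETMVP)  */
    int max_blk_w, max_blk_h;   /* preset maximum block size                          */
    int min_blk_w, min_blk_h;   /* preset minimum block size                          */
} EtmvpCtrl;

/* example 1: sequence-level control */
int EnabledBySps(const EtmvpCtrl *c)   { return c->sps_etmvp_flag != 0; }

/* example 2: slice-level control */
int EnabledBySlice(const EtmvpCtrl *c) { return c->slice_etmvp_flag != 0; }

/* example 3: size-based control, enabled only when
 * preset minimum size <= current block size <= preset maximum size */
int EnabledBySize(const EtmvpCtrl *c, int cur_w, int cur_h)
{
    if (cur_w > c->max_blk_w || cur_h > c->max_blk_h) return 0;
    if (cur_w < c->min_blk_w || cur_h < c->min_blk_h) return 0;
    return 1;
}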

Illustratively, the size of the preset maximum block is represented using a sequence parameter set level syntax or using a Slice level syntax;

and/or,

the size of the preset minimum block is expressed using a sequence parameter set level syntax or using a Slice level syntax.

In the embodiment of the application, a code stream of a current block is obtained, and index information of an enhanced time domain motion vector prediction mode is analyzed from the code stream of the current block; determining a matching block of the current block based on a first peripheral block of the current block; determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a second temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode; determining an enhanced temporal motion vector prediction mode from the second temporal candidate mode list based on index information of the enhanced temporal motion vector prediction mode; the method comprises the steps of determining motion information of each subblock in a current block based on an enhanced time domain motion vector prediction mode, performing motion compensation on each subblock in the current block based on the motion information of each subblock in the current block, and offsetting a matching block determined based on surrounding blocks of the current block to obtain a new matching block, so that the probability of inaccurate motion information of the matching block caused by inaccurate motion information of the surrounding blocks is reduced, and the decoding performance is improved.

As an example, an encoding method provided in an embodiment of the present application is also provided, where the encoding method may be applied to an encoding end device, and the method may include the following steps:

Step A1, determining a matching block for the current block based on the first peripheral block of the current block.

The specific implementation manner of this embodiment may refer to step S410 in fig. 4, which is not described herein again in this embodiment of the application.

Step A2, determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a first temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode.

The specific implementation manner of this embodiment may refer to step S420 in fig. 4, which is not described herein again in this embodiment of the application.

Step A3, traversing each candidate enhanced temporal motion vector prediction mode in the first temporal candidate mode list, determining the motion information of each sub-block in the current block based on the candidate enhanced temporal motion vector prediction mode for any candidate enhanced temporal motion vector prediction mode, and performing motion compensation on each sub-block in the current block based on the motion information of each sub-block in the current block.

For example, when the first temporal candidate mode list is constructed as described in step a2, the encoding-side device may traverse each candidate enhanced temporal motion vector prediction mode in the first temporal candidate mode list.

For any candidate enhanced temporal motion vector prediction mode, the encoding end device may determine motion information of each sub-block in the current block based on the candidate enhanced temporal motion vector prediction mode, and perform motion compensation on each sub-block in the current block based on the motion information of each sub-block in the current block, and a specific implementation manner of the encoding end device may refer to step S440 in the embodiment of fig. 4, which is not described herein again in this embodiment of the present application.

Step A4, based on the rate distortion cost corresponding to each candidate enhanced temporal motion vector prediction mode, determining the candidate enhanced temporal motion vector prediction mode with the minimum rate distortion cost as the enhanced temporal motion vector prediction mode of the current block.

For example, when the encoding-side device performs motion compensation on each subblock in the current block based on each candidate enhanced temporal motion vector prediction mode respectively according to the manner described in step a3, the encoding-side device may determine rate distortion costs corresponding to each candidate enhanced temporal motion vector prediction mode respectively, and determine the candidate enhanced temporal motion vector prediction mode with the smallest rate distortion cost as the enhanced temporal motion vector prediction mode of the current block.

Step A5, carrying index information of the enhanced temporal motion vector prediction mode of the current block in the code stream of the current block, where the index information is used to identify the position of the enhanced temporal motion vector prediction mode in the first temporal candidate mode list.

For example, when the encoding-side device determines the enhanced temporal motion vector prediction mode of the current block, the index information of the enhanced temporal motion vector prediction mode of the current block may be carried in a code stream of the current block, so that the decoding-side device may determine the enhanced temporal motion vector prediction mode of the current block based on the index information in the code stream.
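The selection in steps A3 to A5 is a rate-distortion choice over the first temporal candidate mode list. The C sketch below shows the loop only; CandidateMode, MotionCompensate(), ComputeRdCost() and WriteIndex() are hypothetical names introduced for this illustration and are not an API defined by this application.

/* Hedged sketch of the encoder-side selection in steps A3-A5: traverse the
 * candidate list, motion-compensate the current block with each candidate,
 * keep the candidate with the smallest RD cost, and signal its index. */
typedef struct CandidateMode CandidateMode;   /* opaque candidate ETMVP mode */

extern void   MotionCompensate(const CandidateMode *m, void *cur_block);
extern double ComputeRdCost(const void *cur_block);
extern void   WriteIndex(void *bitstream, int index);

int SelectEtmvpMode(CandidateMode *const *list, int list_size,
                    void *cur_block, void *bitstream)
{
    int best_idx = 0;
    double best_cost = 1e300;                 /* effectively +infinity */

    for (int i = 0; i < list_size; i++) {
        /* derive sub-block motion information from candidate i and
         * motion-compensate every sub-block of the current block */
        MotionCompensate(list[i], cur_block);
        double cost = ComputeRdCost(cur_block);
        if (cost < best_cost) {               /* keep the minimum RD cost */
            best_cost = cost;
            best_idx  = i;
        }
    }
    /* step A5: signal the position of the chosen mode in the list */
    WriteIndex(bitstream, best_idx);
    return best_idx;
}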

In the embodiment of the application, a matching block of the current block is determined based on a first peripheral block of the current block; a candidate enhanced temporal motion vector prediction mode is determined based on the matching block and a new matching block obtained by shifting the matching block, and a first temporal candidate mode list is constructed based on the candidate enhanced temporal motion vector prediction mode; each candidate enhanced temporal motion vector prediction mode in the first temporal candidate mode list is traversed, and for any candidate enhanced temporal motion vector prediction mode, the motion information of each subblock in the current block is determined based on the candidate enhanced temporal motion vector prediction mode and motion compensation is performed on each subblock in the current block based on the motion information of each subblock in the current block; based on the rate distortion cost corresponding to each candidate enhanced temporal motion vector prediction mode, the candidate enhanced temporal motion vector prediction mode with the minimum rate distortion cost is determined as the enhanced temporal motion vector prediction mode of the current block; and the index information of the enhanced temporal motion vector prediction mode of the current block is carried in the code stream of the current block. By offsetting the matching block determined based on the surrounding blocks of the current block to obtain a new matching block, the probability of inaccurate motion information of the matching block caused by inaccurate motion information of the surrounding blocks is reduced, and the coding performance is improved.

In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below with reference to specific examples.

Example one

The encoding method provided by the embodiment of the application can comprise the following steps:

1. determining motion information of a first stage using motion information of surrounding blocks (such as the first surrounding block) of the current block;

2. determining a matching block of the current block by using the motion information of the first stage;

3. determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a first temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

4. traversing each candidate enhanced temporal motion vector prediction mode in the first temporal candidate mode list in sequence, and for any candidate enhanced temporal motion vector prediction mode, acquiring motion information of all sub-blocks in the current block according to motion information of a matched block;

5. performing motion compensation on each subblock in the current block according to the motion information of each subblock in the current block;

6. determining the rate distortion cost corresponding to the candidate enhanced temporal motion vector prediction mode, comparing the rate distortion cost value with the rate distortion costs of other candidate enhanced temporal motion vector prediction modes, taking the candidate enhanced temporal motion vector prediction mode with the minimum rate distortion cost value as the enhanced temporal motion vector prediction mode of the current block, carrying index information of the enhanced temporal motion vector prediction mode in a code stream, and transmitting the index information to decoding end equipment, wherein the index information is used for identifying the position of the enhanced temporal motion vector prediction mode in the first temporal candidate mode list.

Example two

The decoding method provided by the embodiment of the application can comprise the following steps:

1. acquiring a code stream of a current block, and analyzing index information of an enhanced temporal motion vector prediction mode from the code stream of the current block, wherein the index information is used for identifying the position of the enhanced temporal motion vector prediction mode in a first temporal candidate mode list constructed by the encoding end device;

2. determining the motion information of the first stage using the motion information of the surrounding blocks (such as the first surrounding block described above) of the current block;

3. Determining a matching block of the current block by using the motion information of the first stage;

4. determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a second temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

5. acquiring a corresponding enhanced temporal motion vector prediction mode in a second temporal candidate mode list according to the index information obtained in the step 1, and acquiring motion information of all sub-blocks in the current block according to the motion information of a target matching block corresponding to the enhanced temporal motion vector prediction mode;

6. and performing motion compensation on each subblock in the current block according to the motion information of each subblock in the current block.
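The six decoding steps above can be read as the following control flow. Every helper name in the sketch (ParseEtmvpIndex, DeriveFirstStageMv, FindMatchingBlock, BuildCandidateList, DeriveSubBlockMotion, MotionCompensateSubBlocks) is a hypothetical placeholder used only to make the flow explicit; none of them is an API defined by this application.

/* Hedged sketch of the decoder-side flow of Example two. */
typedef struct FirstStageMv  FirstStageMv;    /* opaque illustrative types */
typedef struct MatchingBlock MatchingBlock;
typedef struct CandidateMode CandidateMode;

extern int            ParseEtmvpIndex(void *bitstream);                        /* step 1 */
extern FirstStageMv  *DeriveFirstStageMv(const void *first_surrounding_block); /* step 2 */
extern MatchingBlock *FindMatchingBlock(const FirstStageMv *mv);               /* step 3 */
extern int            BuildCandidateList(const MatchingBlock *m,
                                         CandidateMode **list, int max);       /* step 4 */
extern void           DeriveSubBlockMotion(const CandidateMode *m, void *cur); /* step 5 */
extern void           MotionCompensateSubBlocks(void *cur_block);              /* step 6 */

void DecodeCurrentBlock(void *bs, const void *first_surrounding_block, void *cur_block)
{
    CandidateMode *list[8];                    /* illustrative upper bound        */

    int index = ParseEtmvpIndex(bs);           /* position in the mode list       */
    FirstStageMv  *mv  = DeriveFirstStageMv(first_surrounding_block);
    MatchingBlock *blk = FindMatchingBlock(mv);
    int n = BuildCandidateList(blk, list, 8);  /* second temporal candidate list  */

    if (index < n) {                           /* select the signalled ETMVP mode */
        DeriveSubBlockMotion(list[index], cur_block);   /* motion info per sub-block */
        MotionCompensateSubBlocks(cur_block);           /* compensate each sub-block */
    }
}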

The following describes a specific implementation of some steps.

Firstly, motion information of a first stage is determined by using motion information of surrounding blocks of a current block.

For example, a schematic diagram of the spatial neighboring blocks of the current block may be as shown in fig. 5; the following takes the F block as the surrounding block of the current block as an example.

Example three

If the backward motion information of the F block is available and points to the first frame in List1 (List1 of the current frame, the same below), it is determined that the motion information of the first stage is the backward motion information of the F block.

If backward motion information of the F block is available and does not point to the first frame in List1, the backward motion information of the F block is scaled to point to the first frame in List1 and the motion vector of the first stage is determined to be the scaled motion vector, and the reference frame index is the index of the first frame in List 1.

If the backward motion information of the F block is not available but the forward motion information is available, the forward motion information of the F block is scaled to point to the first frame of List1, and the motion vector of the first stage is determined to be a scaled motion vector, and the reference frame index is the index of the first frame in List 1.

If neither the forward motion information nor the backward motion information of the F block is available, the motion vector of the first stage is determined to be 0, and the reference frame index is the index of the first frame in List 1.

The reference direction of the motion information of the first stage is the List1 direction.
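A compact sketch of the Example three derivation follows. The DirInfo/FBlockInfo structures and the ScaleToList1First() helper are illustrative assumptions; in particular, the helper only stands for "scale the motion vector so that it points to the first frame of List1", whose exact rule is not restated here.

/* Hedged sketch of Example three: derive the first-stage motion information
 * (List1 direction) from the F block. */
typedef struct {
    int mvx, mvy;
    int available;               /* non-zero if this direction exists                      */
    int points_to_list1_first;   /* non-zero if it already points to List1's first frame   */
} DirInfo;

typedef struct { DirInfo fwd, bwd; } FBlockInfo;

/* scale (mvx, mvy) so that it points to the first frame of List1; the actual
 * scaling rule is outside the scope of this sketch */
extern void ScaleToList1First(int *mvx, int *mvy);

void FirstStageMotionList1(const FBlockInfo *f, int *mvx, int *mvy, int *ref_idx)
{
    if (f->bwd.available) {
        *mvx = f->bwd.mvx;
        *mvy = f->bwd.mvy;
        if (!f->bwd.points_to_list1_first)
            ScaleToList1First(mvx, mvy);      /* scale to the first frame of List1    */
    } else if (f->fwd.available) {
        *mvx = f->fwd.mvx;
        *mvy = f->fwd.mvy;
        ScaleToList1First(mvx, mvy);          /* scale forward MV to List1's first frame */
    } else {
        *mvx = 0;                             /* neither direction available: zero MV */
        *mvy = 0;
    }
    *ref_idx = 0;   /* reference frame index = index of the first frame in List1 */
    /* the reference direction of the first-stage motion information is List1  */
}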

It should be noted that the surrounding blocks of the current block are not limited to the F block, and may also be other spatial neighboring blocks or other non-spatial neighboring blocks in fig. 5, and the specific implementation thereof is not described herein again.

Example four

If the forward motion information of the F block is available and points to the first frame in the List0 (List 0 of the current frame, the same below), the motion information of the first stage is determined to be the forward motion information of the F block.

If the forward motion information of the F-block is available and does not point to the first frame in List0, the forward motion information of the F-block is scaled to point to the first frame in List0 and the motion vector of the first stage is determined to be the scaled motion vector, and the reference frame index is the index of the first frame in List 0.

If the forward motion information of the F block is not available but the backward motion information is available, the backward motion information of the F block is scaled to point to the first frame of List0, and the motion vector of the first stage is determined to be a scaled motion vector, and the reference frame index is the index of the first frame in List 0.

If neither the forward motion information nor the backward motion information of the F block is available, the motion vector of the first stage is determined to be 0, and the reference frame index is the index of the first frame in List 0.

The reference direction of the motion information of the first stage is the List0 direction.

Example five

If backward motion information of the F block is available and points to the first frame in the List1, the motion information of the first stage is determined to be the backward motion information of the F block.

If backward motion information of the F block is available and does not point to the first frame in List1, the backward motion information of the F block is scaled to point to the first frame in List1 and the motion vector of the first stage is determined to be the scaled motion vector, and the reference frame index is the index of the first frame in List 1.

If the backward motion information of the F block is not available, the motion vector of the first stage is determined to be 0, and the reference frame index is the index of the first frame in List1.

The reference direction of the motion information of the first stage is the List1 direction.

Example six

If the forward motion information of the F block is available and points to the first frame in the List0, the motion information of the first stage is determined to be the forward motion information of the F block.

If the forward motion information of the F-block is available and does not point to the first frame in List0, the forward motion information of the F-block is scaled to point to the first frame in List0 and the motion vector of the first stage is determined to be the scaled motion vector, and the reference frame index is the index of the first frame in List 0.

If the forward motion information of the F block is not available, the motion vector of the first stage is determined to be 0, and the reference frame index is the index of the first frame in List0.

The reference direction of the motion information of the first stage is the List0 direction.

Example seven

It is determined that the motion vector of the first stage is 0, the reference frame index is an index of the first frame in List0, and the reference direction of the motion information of the first stage is the List0 direction.

Example eight

It is determined that the motion vector of the first stage is 0, the reference frame index is an index of the first frame in List1, and the reference direction of the motion information of the first stage is the List1 direction.

Secondly, determining a matching block of the current block by utilizing the motion information of the first stage

Example nine

Suppose that the horizontal motion vector of the first stage is MVx, the vertical motion vector is MVy, the coordinates of the top left corner of the current block in the image are (Xpos, Ypos), and the precision of the motion vector is Precision.

Illustratively, Precision may take on a value of 4, 2, 1, 1/2, 1/4, 1/8, or 1/16.

When Precision takes a value of 1, 1/2, 1/4, 1/8 or 1/16, the coordinates of the upper left corner of the matching block corresponding to the current block are:

Mx0=((Xpos+(MVx>>shift)+4)>>3)<<3

My0=((Ypos+(MVy>>shift)+4)>>3)<<3

the shift value corresponds to Precision, and when Precision takes values of 1, 1/2, 1/4, 1/8 and 1/16, the shift value is 0, 1, 2, 3 and 4 respectively.

When Precision takes a value of 4 or 2, the coordinates of the upper left corner of the matching block corresponding to the current block are:

Mx0=((Xpos+(MVx<<shift)+4)>>3)<<3

My0=((Ypos+(MVy<<shift)+4)>>3)<<3

The shift value corresponds to Precision one-to-one; when Precision takes values of 4 and 2, the shift value is 2 and 1 respectively.

In this embodiment, an adjustment value of 2^(N-1) (where N is the base-2 logarithm of the side length of a sub-block; for the 8 x 8 sub-blocks of this example, 2^(N-1) = 4) is added to the horizontal reference position Xpos + (MVx >> shift) and the vertical reference position Ypos + (MVy >> shift) (or to Xpos + (MVx << shift) and Ypos + (MVy << shift) when Precision takes a value of 4 or 2);

the adjusted reference positions are then aligned using the value N (N is the base-2 logarithm of the side length of a sub-block, i.e. N = 3 for the 8 x 8 sub-blocks of this embodiment), that is, by a right shift of N bits followed by a left shift of N bits.
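The coordinate derivation of Example nine can be written directly from the formulas above. In the sketch below, the flag precision_is_4_or_2 is an illustrative parameter that selects between the two formula branches; shift is the value that corresponds to Precision as described above.

/* Sketch of Example nine: top-left corner (Mx0, My0) of the matching block.
 * The +4 term is the adjustment value 2^(N-1) and the >>3 / <<3 pair aligns
 * the result to the 8-sample sub-block grid (N = 3). */
void MatchingBlockTopLeft(int Xpos, int Ypos, int MVx, int MVy,
                          int shift, int precision_is_4_or_2,
                          int *Mx0, int *My0)
{
    if (!precision_is_4_or_2) {
        /* Precision of 1, 1/2, 1/4, 1/8 or 1/16: right-shift the motion vector */
        *Mx0 = ((Xpos + (MVx >> shift) + 4) >> 3) << 3;
        *My0 = ((Ypos + (MVy >> shift) + 4) >> 3) << 3;
    } else {
        /* Precision of 4 or 2: left-shift the motion vector */
        *Mx0 = ((Xpos + (MVx << shift) + 4) >> 3) << 3;
        *My0 = ((Ypos + (MVy << shift) + 4) >> 3) << 3;
    }
}

Example ten below differs only in that the +4 adjustment term is omitted.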

Example ten

Suppose that the horizontal motion vector of the first stage is MVx, the vertical motion vector is MVy, the coordinates of the top left corner of the current block in the image are (Xpos, Ypos), and the precision of the motion vector is Precision.

Illustratively, Precision may take on a value of 4, 2, 1, 1/2, 1/4, 1/8, or 1/16.

When Precision takes a value of 1, 1/2, 1/4, 1/8 or 1/16, the coordinates of the upper left corner of the matching block corresponding to the current block are:

Mx0=((Xpos+(MVx>>shift))>>3)<<3

My0=((Ypos+(MVy>>shift))>>3)<<3

the shift value corresponds to Precision, and when Precision takes values of 1, 1/2, 1/4, 1/8 and 1/16, the shift value is 0, 1, 2, 3 and 4 respectively.

When Precision takes a value of 4 or 2, the coordinates of the upper left corner of the matching block corresponding to the current block are:

Mx0=((Xpos+(MVx<<shift))>>3)<<3

My0=((Ypos+(MVy<<shift))>>3)<<3

wherein the shift value corresponds to Precision one-to-one; when Precision takes values of 4 and 2, the shift value is 2 and 1 respectively.

In this embodiment, the determined reference positions (Xpos + (MVx >> shift)) and (Ypos + (MVy >> shift)) are aligned using a constant of 3 (a right shift of 3 bits followed by a left shift of 3 bits).

Thirdly, determining a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and constructing a temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode (the first temporal candidate mode list for the encoding end device and the second temporal candidate mode list for the decoding end device)

For example, taking the matching block shown in fig. 7 as an example, it is assumed that the sub-block size is 8 × 8.

Example eleven

Clipping A1 and B2 into the range of the current CTU, obtaining the motion information of A1 and B2 after the Clip, and comparing the motion information of A1 and B2, denoted as r1 (r1 records whether the two pieces of motion information are the same or different, the same below);

Clipping A3 and B4 into the range of the current CTU, obtaining the motion information of A3 and B4 after the Clip, and comparing the motion information of A3 and B4, denoted as r2;

and if at least one of r1 and r2 indicates that the motion information is different, horizontally shifting the matching block by 8 pixels to the right to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

Clipping A2 and B1 into the range of the current CTU, obtaining the motion information of A2 and B1 after the Clip, and comparing the motion information of A2 and B1, denoted as r3;

Clipping A4 and B3 into the range of the current CTU, obtaining the motion information of A4 and B3 after the Clip, and comparing the motion information of A4 and B3, denoted as r4;

and if at least one of r3 and r4 indicates that the motion information is different, horizontally shifting the matching block by 8 pixels to the left to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

Clipping A1 and C3 into the range of the current CTU, obtaining the motion information of A1 and C3 after the Clip, and comparing the motion information of A1 and C3, denoted as r5;

Clipping A2 and C4 into the range of the current CTU, obtaining the motion information of A2 and C4 after the Clip, and comparing the motion information of A2 and C4, denoted as r6;

and if at least one of r5 and r6 indicates that the motion information is different, vertically shifting the matching block downward by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

Clipping A3 and C1 into the range of the current CTU, obtaining the motion information of A3 and C1 after the Clip, and comparing the motion information of A3 and C1, denoted as r7;

Clipping A4 and C2 into the range of the current CTU, obtaining the motion information of A4 and C2 after the Clip, and comparing the motion information of A4 and C2, denoted as r8;

and if at least one of r7 and r8 indicates that the motion information is different, vertically shifting the matching block upward by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

And taking the prediction mode corresponding to the original matching block as a candidate enhanced time domain motion vector prediction mode, and adding the candidate enhanced time domain motion vector prediction mode into a time domain candidate mode list.

It should be noted that the order in which the prediction mode corresponding to the original matching block and the prediction mode corresponding to the new matching block are added to the time domain candidate mode list is not limited.

In addition, when there are a plurality of new matching blocks, the prediction mode corresponding to a part of the new matching blocks may be added to the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

And when no new matching block exists, taking the prediction mode corresponding to the original matching block as a candidate enhanced temporal motion vector prediction mode, and adding the candidate enhanced temporal motion vector prediction mode into a temporal candidate mode list.

For example, a schematic diagram of matching blocks determined based on surrounding blocks of the current block may be as shown in fig. 8A; a schematic diagram of a new matching block obtained by horizontally shifting the matching block to the right by 8 pixels may be shown in fig. 8B, and a schematic diagram of a new matching block obtained by horizontally shifting the matching block to the left by 8 pixels may be shown in fig. 8C; a schematic diagram of a new match block obtained by vertically shifting the match block up by 8 pixels may be shown in fig. 8D; a schematic diagram of a new matching block obtained by vertically shifting the matching block downward by 8 pixels may be shown in fig. 8E.
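The comparisons of Example eleven reduce to: compare two pairs of clipped positions per direction, and add the corresponding 8-pixel shift as a candidate whenever any pair differs. A hedged sketch follows; PosMotion, MotionInfoEqual() and AppendCandidate() are illustrative placeholders, and the order in which candidates are appended is not restricted by this example.

/* Sketch of the candidate construction of Example eleven.  A1..A4, B1..B4 and
 * C1..C4 stand for the motion information at the corresponding positions after
 * they have been clipped into the range of the current CTU (see fig. 7). */
enum ShiftDir { SHIFT_NONE, SHIFT_RIGHT_8, SHIFT_LEFT_8, SHIFT_DOWN_8, SHIFT_UP_8 };

typedef struct PosMotion PosMotion;             /* opaque for this sketch   */
extern int  MotionInfoEqual(const PosMotion *a, const PosMotion *b);
extern void AppendCandidate(enum ShiftDir dir); /* add one mode to the list */

void BuildTemporalCandidates(const PosMotion *A1, const PosMotion *A2,
                             const PosMotion *A3, const PosMotion *A4,
                             const PosMotion *B1, const PosMotion *B2,
                             const PosMotion *B3, const PosMotion *B4,
                             const PosMotion *C1, const PosMotion *C2,
                             const PosMotion *C3, const PosMotion *C4)
{
    /* r1 / r2: A1 vs B2 and A3 vs B4 gate the right shift by 8 pixels    */
    if (!MotionInfoEqual(A1, B2) || !MotionInfoEqual(A3, B4))
        AppendCandidate(SHIFT_RIGHT_8);

    /* r3 / r4: A2 vs B1 and A4 vs B3 gate the left shift by 8 pixels     */
    if (!MotionInfoEqual(A2, B1) || !MotionInfoEqual(A4, B3))
        AppendCandidate(SHIFT_LEFT_8);

    /* r5 / r6: A1 vs C3 and A2 vs C4 gate the downward shift by 8 pixels */
    if (!MotionInfoEqual(A1, C3) || !MotionInfoEqual(A2, C4))
        AppendCandidate(SHIFT_DOWN_8);

    /* r7 / r8: A3 vs C1 and A4 vs C2 gate the upward shift by 8 pixels   */
    if (!MotionInfoEqual(A3, C1) || !MotionInfoEqual(A4, C2))
        AppendCandidate(SHIFT_UP_8);

    /* the prediction mode of the original (unshifted) matching block is
     * always added as well; its position in the list is not fixed here   */
    AppendCandidate(SHIFT_NONE);
}

Example twelve below uses the same comparisons, but first clips the matching block itself into the current CTU and only tests a direction when the corresponding boundary of the clipped matching block is not already on the CTU boundary.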

Example twelve

The matching block is clipped into the range of the current CTU.

Assume that the matching block shown in fig. 7 is a matching block after Clip.

When the right boundary of the matching block after the Clip is not at the right boundary position of the current CTU, acquiring the motion information of A1 and B2, and comparing the motion information of A1 and B2, denoted as r9 (r9 records whether the two pieces of motion information are the same or different, the same below);

acquiring the motion information of A3 and B4, and comparing the motion information of A3 and B4, denoted as r10;

and if at least one of r9 and r10 indicates that the motion information is different, horizontally shifting the matching block after the Clip by 8 pixels to the right to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

When the left boundary of the matching block after the Clip is not at the left boundary position of the current CTU, acquiring the motion information of A2 and B1, and comparing the motion information of A2 and B1, denoted as r11;

acquiring the motion information of A4 and B3, and comparing the motion information of A4 and B3, denoted as r12;

and if at least one of r11 and r12 indicates that the motion information is different, horizontally shifting the matching block after the Clip by 8 pixels to the left to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

When the lower boundary of the matching block after the Clip is not at the lower boundary position of the current CTU, acquiring the motion information of A1 and C3, and comparing the motion information of A1 and C3, denoted as r13;

acquiring the motion information of A2 and C4, and comparing the motion information of A2 and C4, denoted as r14;

and if at least one of r13 and r14 indicates that the motion information is different, vertically shifting the matching block after the Clip downward by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

When the upper boundary of the matching block after the Clip is not at the upper boundary position of the current CTU, acquiring the motion information of A3 and C1, and comparing the motion information of A3 and C1, denoted as r15;

acquiring the motion information of A4 and C2, and comparing the motion information of A4 and C2, denoted as r16;

and if at least one of r15 and r16 indicates that the motion information is different, vertically shifting the matching block after the Clip upward by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

In addition, taking the prediction mode corresponding to the matching block before the shift as a candidate enhanced temporal motion vector prediction mode, and adding the candidate enhanced temporal motion vector prediction mode into the temporal candidate mode list.

It should be noted that the order in which the prediction mode corresponding to the matching block before the shift and the prediction mode corresponding to the new matching block are added to the temporal candidate mode list is not limited.

In addition, when there are a plurality of new matching blocks, the prediction mode corresponding to a part of the new matching blocks may be added to the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

And when no new matching block exists, taking the prediction mode corresponding to the matching block before the shift as a candidate enhanced temporal motion vector prediction mode, and adding the candidate enhanced temporal motion vector prediction mode into a temporal candidate mode list.
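The procedure of Example twelve can be summarized as four boundary checks combined with four pairs of motion-information comparisons. The following C++ sketch is illustrative only: it assumes the motion information of the sub-blocks A1-A4, B1-B4 and C1-C4 of fig. 7 has already been obtained for the matching block after the Clip, and that comparing motion information reduces to an equality test; none of the type or parameter names come from a published specification.

```cpp
#include <vector>

struct Mv { int x = 0, y = 0; };
struct MotionInfo { Mv mv[2]; int refIdx[2] = {-1, -1}; };   // [0] forward / List0, [1] backward / List1
struct Offset { int dx, dy; };                               // offset in pixels

// "Same or different" test used for r9..r16 (an assumption about what is compared).
static bool sameMotion(const MotionInfo& a, const MotionInfo& b) {
    for (int l = 0; l < 2; ++l)
        if (a.refIdx[l] != b.refIdx[l] || a.mv[l].x != b.mv[l].x || a.mv[l].y != b.mv[l].y)
            return false;
    return true;
}

// Offsets applied to the matching block after the Clip to obtain new matching blocks.
std::vector<Offset> deriveOffsets(const MotionInfo& A1, const MotionInfo& A2,
                                  const MotionInfo& A3, const MotionInfo& A4,
                                  const MotionInfo& B1, const MotionInfo& B2,
                                  const MotionInfo& B3, const MotionInfo& B4,
                                  const MotionInfo& C1, const MotionInfo& C2,
                                  const MotionInfo& C3, const MotionInfo& C4,
                                  bool rightAtCtu, bool leftAtCtu,
                                  bool bottomAtCtu, bool topAtCtu) {
    std::vector<Offset> offsets;
    if (!rightAtCtu  && (!sameMotion(A1, B2) || !sameMotion(A3, B4)))   // r9 / r10
        offsets.push_back({8, 0});                                      // shift right by 8 pixels
    if (!leftAtCtu   && (!sameMotion(A2, B1) || !sameMotion(A4, B3)))   // r11 / r12
        offsets.push_back({-8, 0});                                     // shift left by 8 pixels
    if (!bottomAtCtu && (!sameMotion(A1, C3) || !sameMotion(A2, C4)))   // r13 / r14
        offsets.push_back({0, 8});                                      // shift down by 8 pixels
    if (!topAtCtu    && (!sameMotion(A3, C1) || !sameMotion(A4, C2)))   // r15 / r16
        offsets.push_back({0, -8});                                     // shift up by 8 pixels
    return offsets;
}
```

Each returned offset yields a new matching block whose prediction mode is added to the temporal candidate mode list; the prediction mode of the matching block before the shift is added as well, as described above.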

Example thirteen

A new matching block is obtained by shifting the original matching block.

Illustratively, the offset is as follows:

Mx1=Mx0+offset1

My1=My0+offset2

where Mx0 and My0 are respectively the horizontal and vertical coordinates of the top-left corner of the original matching block, Mx1 and My1 are respectively the horizontal and vertical coordinates of the top-left corner of the new matching block, and offset1 and offset2 are offset values that may be arbitrary integers.

Taking the prediction mode corresponding to the new matching block as a candidate enhanced temporal motion vector prediction mode, and adding the candidate enhanced temporal motion vector prediction mode into the temporal candidate mode list.

In addition, taking the prediction mode corresponding to the original matching block as a candidate enhanced temporal motion vector prediction mode, and adding the candidate enhanced temporal motion vector prediction mode into the temporal candidate mode list.

It should be noted that the order in which the prediction mode corresponding to the original matching block and the prediction mode corresponding to the new matching block are added to the temporal candidate mode list is not limited.

Offset1 and offset2 may take multiple sets of values to obtain multiple new matching blocks; the number of new matching blocks is not limited.

When there are a plurality of new matching blocks, the prediction mode corresponding to a part of the new matching blocks may also be used as a candidate enhanced temporal motion vector prediction mode and added to the temporal candidate mode list.
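A minimal sketch of the offset rule Mx1 = Mx0 + offset1, My1 = My0 + offset2 of Example thirteen is shown below. The particular (offset1, offset2) pairs are arbitrary illustrative values; the example allows any integers and any number of pairs.

```cpp
#include <utility>
#include <vector>

struct Pos { int x, y; };   // top-left corner coordinates of a matching block

std::vector<Pos> buildOffsetMatchingBlocks(Pos orig /* Mx0, My0 */) {
    const std::pair<int, int> offsets[] = {{8, 0}, {-8, 0}, {0, 8}, {0, -8}};   // illustrative values only
    std::vector<Pos> blocks{orig};                          // the original matching block is also kept
    for (const auto& [off1, off2] : offsets)
        blocks.push_back({orig.x + off1, orig.y + off2});   // Mx1 = Mx0 + offset1, My1 = My0 + offset2
    return blocks;
}
```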

Example fourteen

Clipping A1 and B2 into the range of the current CTU, acquiring the motion information of A1 and B2 after the Clip, and comparing the motion information of A1 and B2, denoted as r1 (r1 indicates whether the two pieces of motion information are the same or different; the same applies below);

Clipping A3 and B4 into the range of the current CTU, acquiring the motion information of A3 and B4 after the Clip, and comparing the motion information of A3 and B4, denoted as r2;

If at least one of r1 and r2 indicates that the motion information is different, horizontally shifting the matching block to the right by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

Clipping A2 and B1 into the range of the current CTU, acquiring the motion information of A2 and B1 after the Clip, and comparing the motion information of A2 and B1, denoted as r3;

Clipping A4 and B3 into the range of the current CTU, acquiring the motion information of A4 and B3 after the Clip, and comparing the motion information of A4 and B3, denoted as r4;

If at least one of r3 and r4 indicates that the motion information is different, horizontally shifting the matching block to the left by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

Clipping A1 and C3 into the range of the current CTU, acquiring the motion information of A1 and C3 after the Clip, and comparing the motion information of A1 and C3, denoted as r5;

Clipping A2 and C4 into the range of the current CTU, acquiring the motion information of A2 and C4 after the Clip, and comparing the motion information of A2 and C4, denoted as r6;

If at least one of r5 and r6 indicates that the motion information is different, vertically shifting the matching block downward by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

Clipping A3 and C1 into the range of the current CTU, acquiring the motion information of A3 and C1 after the Clip, and comparing the motion information of A3 and C1, denoted as r7;

Clipping A4 and C2 into the range of the current CTU, acquiring the motion information of A4 and C2 after the Clip, and comparing the motion information of A4 and C2, denoted as r8;

If at least one of r7 and r8 indicates that the motion information is different, vertically shifting the matching block upward by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

When there are a plurality of new matching blocks, the order in which the prediction modes corresponding to the new matching blocks are added to the temporal candidate mode list is not limited.

In addition, when there are a plurality of new matching blocks, the prediction mode corresponding to a part of the new matching blocks may be added to the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

And when no new matching block exists, taking the prediction mode corresponding to the original matching block as a candidate enhanced temporal motion vector prediction mode, and adding the candidate enhanced temporal motion vector prediction mode into a temporal candidate mode list.
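The Clip operation used in Examples twelve to fifteen amounts to clamping a position into a permitted rectangular range, taken here to be the current CTU. Below is a minimal sketch under the assumptions of an 8×8 sub-block size and top-left-based coordinates; it is not a normative definition.

```cpp
#include <algorithm>

struct Pos { int x, y; };

// Clamp a sub-block position so that the whole sub-block stays inside the given range.
Pos clipToRange(Pos p, int rangeX, int rangeY, int rangeW, int rangeH, int subSize = 8) {
    p.x = std::clamp(p.x, rangeX, rangeX + rangeW - subSize);
    p.y = std::clamp(p.y, rangeY, rangeY + rangeH - subSize);
    return p;
}
```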

Example fifteen

Clip the matching block into the range of the current CTU.

Assume that the matching block shown in fig. 7 is a matching block after Clip.

When the right boundary of the matching block after the Clip is not located at the right boundary of the current CTU, acquiring the motion information of A1 and B2, and comparing the motion information of A1 and B2, denoted as r9 (r9 indicates whether the two pieces of motion information are the same or different; the same applies below);

Acquiring the motion information of A3 and B4, and comparing the motion information of A3 and B4, denoted as r10;

If at least one of r9 and r10 indicates that the motion information is different, horizontally shifting the matching block after the Clip to the right by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

When the left boundary of the matching block after the Clip is not located at the left boundary of the current CTU, acquiring the motion information of A2 and B1, and comparing the motion information of A2 and B1, denoted as r11;

Acquiring the motion information of A4 and B3, and comparing the motion information of A4 and B3, denoted as r12;

If at least one of r11 and r12 indicates that the motion information is different, horizontally shifting the matching block after the Clip to the left by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

When the lower boundary of the matching block after the Clip is not located at the lower boundary of the current CTU, acquiring the motion information of A1 and C3, and comparing the motion information of A1 and C3, denoted as r13;

Acquiring the motion information of A2 and C4, and comparing the motion information of A2 and C4, denoted as r14;

If at least one of r13 and r14 indicates that the motion information is different, vertically shifting the matching block after the Clip downward by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

When the upper boundary of the matching block after the Clip is not located at the upper boundary of the current CTU, acquiring the motion information of A3 and C1, and comparing the motion information of A3 and C1, denoted as r15;

Acquiring the motion information of A4 and C2, and comparing the motion information of A4 and C2, denoted as r16;

If at least one of r15 and r16 indicates that the motion information is different, vertically shifting the matching block after the Clip upward by 8 pixels to obtain a new matching block, and adding the prediction mode corresponding to the new matching block into the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

It should be noted that the order in which the prediction mode corresponding to the matching block before the shift and the prediction mode corresponding to the new matching block are added to the temporal candidate mode list is not limited.

In addition, when there are a plurality of new matching blocks, the prediction mode corresponding to a part of the new matching blocks may be added to the temporal candidate mode list as a candidate enhanced temporal motion vector prediction mode.

And when no new matching block exists, taking the prediction mode corresponding to the matching block before the shift as a candidate enhanced temporal motion vector prediction mode, and adding the candidate enhanced temporal motion vector prediction mode into a temporal candidate mode list.

Example sixteen

A new matching block is obtained by shifting the original matching block.

Illustratively, the offset is as follows:

Mx1=Mx0+offset1

My1=My0+offset2

where Mx0 and My0 are respectively the horizontal and vertical coordinates of the top-left corner of the original matching block, Mx1 and My1 are respectively the horizontal and vertical coordinates of the top-left corner of the new matching block, and offset1 and offset2 are offset values that may be arbitrary integers.

Taking the prediction mode corresponding to the new matching block as a candidate enhanced temporal motion vector prediction mode, and adding the candidate enhanced temporal motion vector prediction mode into the temporal candidate mode list.

It should be noted that offset1 and offset2 may take multiple sets of values to obtain multiple new matching blocks; the number of new matching blocks is not limited.

When there are a plurality of new matching blocks, the order in which the prediction modes corresponding to the new matching blocks are added to the temporal candidate mode list is not limited.

When there are a plurality of new matching blocks, the prediction mode corresponding to a part of the new matching blocks may also be used as a candidate enhanced temporal motion vector prediction mode and added to the temporal candidate mode list.

Fourthly, acquiring the motion information of all sub-blocks in the current block according to the motion information of the matching block, and performing motion compensation on each sub-block in the current block according to the motion information of each sub-block in the current block

Illustratively, for the encoding-side device, the matching block includes a matching block (referred to as a target candidate matching block herein) corresponding to any candidate enhanced temporal motion vector prediction mode in the temporal candidate mode list;

for the decoding device, the matching block includes a matching block corresponding to the enhanced temporal motion vector prediction mode (i.e., the target matching block).

Example seventeen

For any sub-block in the matching block, clipping the sub-block into the range of the current CTU;

If the forward motion information and the backward motion information of the sub-block after the Clip are both available, respectively scaling the forward motion information and the backward motion information of the sub-block after the Clip to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

If the forward motion information of the sub-block after the Clip is available but the backward motion information is not available, scaling the forward motion information of the sub-block after the Clip to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block.

If the backward motion information of the sub-block after the Clip is available but the forward motion information is not available, scaling the backward motion information of the sub-block after the Clip to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.

If neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, clipping the center position of the matching block into the range of the current CTU; when both the forward motion information and the backward motion information of the center position of the matching block after the Clip are available, respectively scaling the forward motion information and the backward motion information of the center position of the matching block after the Clip to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

when the forward motion information of the center position of the matching block after the Clip is available but the backward motion information is not available, scaling the forward motion information of the center position of the matching block after the Clip to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

when the backward motion information of the center position of the matching block after the Clip is available but the forward motion information is not available, scaling the backward motion information of the center position of the matching block after the Clip to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block;

and when neither the forward motion information nor the backward motion information of the center position of the matching block after the Clip is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

And performing motion compensation on each sub-block in the current block according to the motion information of each sub-block in the current block.
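The fallback order of Example seventeen for a single sub-block of the current block can be sketched as follows. DirInfo and MotionInfo are illustrative types, and scaleToFirstRef is a caller-supplied routine standing for "scale the available direction(s) so that they point to the first frame of List0/List1" (one possible form is sketched after Example twenty-one); this is a sketch under those assumptions, not the normative derivation.

```cpp
struct Mv { int x = 0, y = 0; };
struct DirInfo { bool available = false; Mv mv; int refIdx = -1; };
struct MotionInfo { DirInfo fwd, bwd; };   // forward = List0, backward = List1

MotionInfo deriveSubBlockMotion(const MotionInfo& clippedSub,      // sub-block after the Clip
                                const MotionInfo& clippedCenter,   // matching-block center after the Clip
                                MotionInfo (*scaleToFirstRef)(const MotionInfo&)) {
    if (clippedSub.fwd.available || clippedSub.bwd.available)
        return scaleToFirstRef(clippedSub);      // use whichever direction(s) the sub-block provides
    if (clippedCenter.fwd.available || clippedCenter.bwd.available)
        return scaleToFirstRef(clippedCenter);   // otherwise fall back to the clipped center position
    return MotionInfo{};                         // otherwise assign zero motion information
}
```

Examples eighteen to twenty-two vary only in whether the Clip is applied and in the fallback source (zero motion directly, the matching-block center, or the second surrounding block of the current block).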

Example eighteen

For any sub-block in the matching block, clipping the sub-block into the range of the current CTU;

If the forward motion information and the backward motion information of the sub-block after the Clip are both available, respectively scaling the forward motion information and the backward motion information of the sub-block after the Clip to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

If the forward motion information of the sub-block after the Clip is available but the backward motion information is not available, scaling the forward motion information of the sub-block after the Clip to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block after the Clip is available but the forward motion information is not available, scaling the backward motion information of the sub-block after the Clip to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.

If neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

And performing motion compensation on each sub-block in the current block according to the motion information of each sub-block in the current block.

Example nineteen

For any sub-block in the matching block, clipping the sub-block into the range of the current CTU;

If the forward motion information and the backward motion information of the sub-block after the Clip are both available, respectively scaling the forward motion information and the backward motion information of the sub-block after the Clip to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

If the forward motion information of the sub-block after the Clip is available but the backward motion information is not available, scaling the forward motion information of the sub-block after the Clip to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block after the Clip is available but the forward motion information is not available, scaling the backward motion information of the sub-block after the Clip to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.

If neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, when both the forward motion information and the backward motion information of the second surrounding block of the current block are available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block;

and when neither the forward motion information nor the backward motion information of the second surrounding block is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

And performing motion compensation on each sub-block in the current block according to the motion information of each sub-block in the current block.

Example twenty

For any sub-block in the matching block, if the forward motion information and the backward motion information of the sub-block are available, respectively scaling the forward motion information and the backward motion information of the sub-block to point to the first frame of List0 and the first frame of List1, and respectively giving the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

If the forward motion information of the sub-block is available but the backward motion information is not available, the forward motion information of the sub-block is scaled to point to the first frame of the List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block.

If the backward motion information of the subblock is available but the forward motion information is not available, the backward motion information of the subblock is scaled to point to the first frame of the List1, and the scaled backward motion information is assigned to the subblock at the corresponding position of the current block.

If the forward motion information and the backward motion information of the sub-block are unavailable, when both the forward motion information and the backward motion information of the center position of the matching block are available, respectively scaling the forward motion information and the backward motion information of the center position of the matching block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

when the forward motion information of the center position of the matching block is available but the backward motion information is not available, scaling the forward motion information of the center position of the matching block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

when the backward motion information of the center position of the matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the matching block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block;

And when the forward motion information and the backward motion information of the central position of the matching block are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block.

And performing motion compensation on each sub-block in the current block according to the motion information of each sub-block in the current block.

Example twenty-one

For any sub-block in the matching block, if the forward motion information and the backward motion information of the sub-block are both available, respectively scaling the forward motion information and the backward motion information of the sub-block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

If the forward motion information of the sub-block is available but the backward motion information is not available, scaling the forward motion information of the sub-block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the subblock is available but the forward motion information is not available, the backward motion information of the subblock is scaled to point to the first frame of the List1, and the scaled backward motion information is assigned to the subblock at the corresponding position of the current block.

And if the forward motion information and the backward motion information of the subblock are unavailable, giving zero motion information to the subblock at the corresponding position of the current block.

And performing motion compensation on each sub-block in the current block according to the motion information of each sub-block in the current block.
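The examples above repeatedly scale motion information so that it points to the first frame of List0 or List1. The document does not spell the operation out, so the sketch below uses a plain POC-distance proportion purely as an illustration; practical codecs use a fixed-point approximation of this ratio with rounding and clipping.

```cpp
struct Mv { int x = 0, y = 0; };

// mv spans the temporal distance (curPoc - srcRefPoc); rescale it to span (curPoc - dstRefPoc).
Mv scaleMvToRef(Mv mv, int curPoc, int srcRefPoc, int dstRefPoc) {
    const int src = curPoc - srcRefPoc;
    const int dst = curPoc - dstRefPoc;
    if (src == 0 || src == dst)
        return mv;                                // degenerate case or already pointing at the target
    auto sc = [&](int v) { return static_cast<int>(static_cast<long long>(v) * dst / src); };
    return Mv{sc(mv.x), sc(mv.y)};
}
```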

Example twenty-two

For any sub-block in the matching block, if the forward motion information and the backward motion information of the sub-block are available, respectively scaling the forward motion information and the backward motion information of the sub-block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block.

If the forward motion information of the sub-block is available but the backward motion information is not available, the forward motion information of the sub-block is scaled to point to the first frame of the List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block.

If the backward motion information of the subblock is available but the forward motion information is not available, the backward motion information of the subblock is scaled to point to the first frame of the List1, and the scaled backward motion information is assigned to the subblock at the corresponding position of the current block.

If the forward motion information and the backward motion information of the sub-block are unavailable, when both the forward motion information and the backward motion information of the second surrounding block of the current block are available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block;

and when the forward motion information and the backward motion information of the second surrounding block are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block.

And performing motion compensation on each sub-block in the current block according to the motion information of each sub-block in the current block.

The control of whether the enhanced temporal motion vector prediction technique is enabled is explained below.

Example twenty-three

SPS level syntax may be added to control whether ETMVP is enabled.

For example, u(n), u(v), ue(n), or ue(v) may be selected for encoding, where u(n) indicates that n consecutive bits are read and decoded as an unsigned number, and ue(n) indicates unsigned exponential-Golomb entropy coding. When the parameter in the parentheses of the descriptor is n, the syntax element uses fixed-length coding; when the parameter is v, the syntax element uses variable-length coding. The encoding method selected is not limited.

In one example, u(1) is used for encoding, so that whether ETMVP is enabled is controlled by a single bit.
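A minimal sketch of the one-bit switch described above is given below. The bit reader and the syntax element name etmvp_enabled_flag are assumptions made for illustration; they are not taken from any published syntax table.

```cpp
#include <cstdint>
#include <vector>

class BitReader {
public:
    explicit BitReader(std::vector<uint8_t> data) : buf_(std::move(data)) {}
    // u(n): read n consecutive bits and interpret them as an unsigned number (no bounds checking here).
    uint32_t u(int n) {
        uint32_t v = 0;
        for (int i = 0; i < n; ++i, ++pos_)
            v = (v << 1) | ((buf_[pos_ >> 3] >> (7 - (pos_ & 7))) & 1u);
        return v;
    }
private:
    std::vector<uint8_t> buf_;
    uint32_t pos_ = 0;   // current bit position
};

// SPS-level parsing: a single u(1) flag controls whether ETMVP is enabled for the sequence.
bool parseEtmvpEnabledFlag(BitReader& br) {
    return br.u(1) != 0;
}
```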

Example twenty-four

SPS-level syntax is added to control the maximum block size for which the ETMVP technique is allowed to be enabled. For example, the maximum block size for which the ETMVP technique is allowed to be enabled is 32 × 32.

For example, u(n), u(v), ue(n), or ue(v) may be selected for encoding.

In one example, ue(v) is used for encoding to improve flexibility in setting the maximum block size and to avoid wasting bits.

Example twenty-five

SPS-level syntax is added to control the minimum block size for which the ETMVP technique is allowed to be enabled. For example, the minimum block size for which the ETMVP technique is allowed to be enabled is 32 × 32.

For example, u(n), u(v), ue(n), or ue(v) may be selected for encoding.

In one example, ue(v) is used for encoding to improve flexibility in setting the minimum block size and to avoid wasting bits.
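For the block-size limits of Examples twenty-four and twenty-five, ue(v) denotes unsigned exponential-Golomb coding. The sketch below shows the decoding side; readBits(n) stands for any fixed-length bit reader such as BitReader::u() sketched earlier, and coding the size as a log2 value is only an assumption used to make the usage example concrete.

```cpp
#include <cstdint>
#include <functional>

// ue(v): count leading zero bits, then read the same number of suffix bits (well-formed input assumed).
uint32_t ue(const std::function<uint32_t(int)>& readBits) {
    int leadingZeros = 0;
    while (readBits(1) == 0)
        ++leadingZeros;
    return (1u << leadingZeros) - 1 + readBits(leadingZeros);
}

// Hypothetical mapping of the decoded value to a block size, e.g. 5 -> 32 x 32.
uint32_t parseEtmvpMaxBlockSize(const std::function<uint32_t(int)>& readBits) {
    return 1u << ue(readBits);
}
```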

Example twenty-six

Slice level syntax may be added to control whether ETMVP is enabled.

For example, u(n), u(v), ue(n), or ue(v) may be selected for encoding, where u(n) indicates that n consecutive bits are read and decoded as an unsigned number, and ue(n) indicates unsigned exponential-Golomb entropy coding. When the parameter in the parentheses of the descriptor is n, the syntax element uses fixed-length coding; when the parameter is v, the syntax element uses variable-length coding. The encoding method selected is not limited.

In one example, u(1) is used for encoding, so that whether ETMVP is enabled is controlled by a single bit.

Example twenty-seven

Slice-level syntax is added to control the maximum block size for which the ETMVP technique is allowed to be enabled. For example, the maximum block size for which the ETMVP technique is allowed to be enabled is 32 × 32.

For example, u(n), u(v), ue(n), or ue(v) may be selected for encoding.

In one example, ue(v) is used for encoding to improve flexibility in setting the maximum block size and to avoid wasting bits.

Example twenty-eight

Slice-level syntax is added to control the minimum block size for which the ETMVP technique is allowed to be enabled. For example, the minimum block size for which the ETMVP technique is allowed to be enabled is 32 × 32.

For example, u(n), u(v), ue(n), or ue(v) may be selected for encoding.

In one example, ue(v) is used for encoding to improve flexibility in setting the minimum block size and to avoid wasting bits.

The methods provided herein are described above. The following describes the apparatus provided in the present application:

referring to fig. 9, fig. 9 is a schematic structural diagram of a decoding apparatus according to an embodiment of the present application, where the decoding apparatus may include:

an obtaining unit 910, configured to obtain a code stream of a current block;

A decoding unit 920, configured to parse, from the code stream of the current block, index information of an enhanced temporal motion vector prediction mode, where the index information is used to identify a position of the enhanced temporal motion vector prediction mode in a first temporal candidate mode list constructed by a coding-end device;

a first determining unit 930 configured to determine a matching block of the current block based on a first surrounding block of the current block;

a constructing unit 940, configured to determine a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and construct a second temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

a second determining unit 950 for determining an enhanced temporal motion vector prediction mode from the second temporal candidate mode list based on the index information;

a prediction unit 960, configured to determine motion information of each sub-block in the current block based on the enhanced temporal motion vector prediction mode, and perform motion compensation on each sub-block in the current block based on the motion information of each sub-block in the current block.
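Purely for orientation, the units listed above can be sketched as one interface whose steps are executed in the order described; every type name below is a placeholder and the concrete data structures are left open.

```cpp
#include <vector>

struct Bitstream {}; struct Block {}; struct MatchingBlock {};
struct PredictionMode {}; struct MotionInfo {};

struct DecodingApparatus {
    virtual ~DecodingApparatus() = default;

    virtual int parseEtmvpIndex(const Bitstream& bs) = 0;                                 // units 910 / 920
    virtual MatchingBlock determineMatchingBlock(const Block& cur) = 0;                   // unit 930
    virtual std::vector<PredictionMode> buildCandidateList(const MatchingBlock& m) = 0;   // unit 940
    virtual MotionInfo deriveSubBlockMotion(const PredictionMode& mode, int subIdx) = 0;  // unit 960
    virtual void motionCompensate(Block& cur, int subIdx, const MotionInfo& mi) = 0;      // unit 960

    void decode(const Bitstream& bs, Block& cur, int numSubBlocks) {
        const int index = parseEtmvpIndex(bs);
        const MatchingBlock match = determineMatchingBlock(cur);
        const std::vector<PredictionMode> list = buildCandidateList(match);   // second temporal candidate mode list
        const PredictionMode& mode = list.at(index);                          // unit 950: select by index information
        for (int i = 0; i < numSubBlocks; ++i)
            motionCompensate(cur, i, deriveSubBlockMotion(mode, i));
    }
};
```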

In a possible embodiment, the first determining unit 930 is specifically configured to determine motion information of a first phase based on the first surrounding block; determining a matching block for the current block based on the motion information of the first stage.

In a possible embodiment, the first determining unit 930 is specifically configured to determine the motion information of the first stage based on the forward motion information and/or the backward motion information of the first surrounding block.

In a possible embodiment, the first determining unit 930 is specifically configured to:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if the backward motion information of the first peripheral block is not available but the forward motion information is available, scaling the forward motion information of the first peripheral block to point to a first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.
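The List1-direction rules above can be sketched as a single fall-through chain. scaleMv is a caller-supplied routine (one possible form is sketched after Example twenty-one), and testing "points to the first frame in List1" via POC equality is an assumption of this sketch.

```cpp
struct Mv { int x = 0, y = 0; };
struct DirInfo { bool available = false; Mv mv; int refPoc = 0; };
struct Stage1Motion { Mv mv; int refIdx; int listDir; };   // listDir: 1 == List1 direction

Stage1Motion deriveStage1List1(const DirInfo& fwd, const DirInfo& bwd,
                               int curPoc, int list1FirstFramePoc,
                               Mv (*scaleMv)(Mv mv, int curPoc, int srcRefPoc, int dstRefPoc)) {
    Stage1Motion s{Mv{0, 0}, /* index of the first frame in List1 */ 0, 1};
    if (bwd.available && bwd.refPoc == list1FirstFramePoc)
        s.mv = bwd.mv;                                                   // use the backward MV as-is
    else if (bwd.available)
        s.mv = scaleMv(bwd.mv, curPoc, bwd.refPoc, list1FirstFramePoc);  // scale the backward MV
    else if (fwd.available)
        s.mv = scaleMv(fwd.mv, curPoc, fwd.refPoc, list1FirstFramePoc);  // scale the forward MV
    // else: neither direction available -> zero motion vector (already initialised above)
    return s;
}
```

The List0-direction embodiment that follows mirrors this sketch with the roles of the two reference lists exchanged.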

In a possible embodiment, the first determining unit 930 is specifically configured to:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first peripheral block is available, but the forward motion information of the first peripheral block does not point to the first frame in List0, scaling the forward motion information of the first peripheral block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, with the reference frame index being the index of the first frame in List 0;

if the forward motion information of the first surrounding block is not available, but the backward motion information is available, scaling the backward motion information of the first surrounding block to point to a first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

If neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 0;

the reference direction of the motion information in the first stage is the List0 direction.

In a possible embodiment, the first determining unit 930 is specifically configured to:

determining the motion information of the first stage as backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to a first frame in a List 1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if the backward motion information of the first peripheral block is not available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

The reference direction of the motion information in the first stage is the List1 direction.

In a possible embodiment, the first determining unit 930 is specifically configured to:

determining the motion information of the first stage as forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to a first frame in a List 0;

if the forward motion information of the first peripheral block is available, but the forward motion information of the first peripheral block does not point to the first frame in List0, scaling the forward motion information of the first peripheral block to point to the first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

determining that the motion vector of the first stage is 0 and the reference frame index is an index of a first frame in List0 if the forward motion information of the first peripheral block is not available;

the reference direction of the motion information in the first stage is the List0 direction.

In a possible embodiment, the first determining unit 930 is specifically configured to:

determining that the motion vector of the first stage is 0, the reference frame index is an index of a first frame in List0, and the reference direction of the motion information of the first stage is a List0 direction;

or, alternatively,

it is determined that the motion vector of the first stage is 0, the reference frame index is an index of the first frame in List1, and the reference direction of the motion information of the first stage is the List1 direction.

In a possible embodiment, the first determining unit 930 is specifically configured to determine a matching block of the current block based on the horizontal motion vector, the vertical motion vector, the precision of the motion vector, and the position of the current block in the first stage.

In a possible embodiment, the first determining unit 930 is specifically configured to determine the matching block of the current block based on the horizontal motion vector, the vertical motion vector, the precision of the motion vector, and the sub-block size of the first stage.

In a possible embodiment, the constructing unit 940 is specifically configured to:

clipping the first sub-block and the second sub-block into the range of the current CTU, and comparing the motion information of the first sub-block and the second sub-block after the Clip; clipping the third sub-block and the fourth sub-block into the range of the current CTU, and comparing the motion information of the third sub-block and the fourth sub-block after the Clip; and if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block to the right by one unit to obtain a new matching block;

clipping the fifth sub-block and the sixth sub-block into the range of the current CTU, and comparing the motion information of the fifth sub-block and the sixth sub-block after the Clip; clipping the seventh sub-block and the eighth sub-block into the range of the current CTU, and comparing the motion information of the seventh sub-block and the eighth sub-block after the Clip; and if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block to the left by one unit to obtain a new matching block;

clipping the first sub-block and the ninth sub-block into the range of the current CTU, and comparing the motion information of the first sub-block and the ninth sub-block after the Clip; clipping the fifth sub-block and the tenth sub-block into the range of the current CTU, and comparing the motion information of the fifth sub-block and the tenth sub-block after the Clip; and if at least one of the two comparison results is that the motion information is different, vertically shifting the matching block downward by one unit to obtain a new matching block;

clipping the third sub-block and the eleventh sub-block into the range of the current CTU, and comparing the motion information of the third sub-block and the eleventh sub-block after the Clip; clipping the seventh sub-block and the twelfth sub-block into the range of the current CTU, and comparing the motion information of the seventh sub-block and the twelfth sub-block after the Clip; and if at least one of the two comparison results is that the motion information is different, vertically shifting the matching block upward by one unit to obtain a new matching block;

The first sub-block is the sub-block at the upper left corner of the matching block, the second sub-block is the adjacent sub-block at the top right corner of the matching block, the third sub-block is the sub-block at the lower left corner of the matching block, the fourth sub-block is the adjacent sub-block at the bottom right corner of the matching block, the fifth sub-block is the sub-block at the upper right corner of the matching block, the sixth sub-block is the adjacent sub-block at the top left corner of the matching block, the seventh sub-block is the sub-block at the lower right corner of the matching block, the eighth sub-block is the adjacent sub-block at the bottom left corner of the matching block, the ninth sub-block is the leftmost adjacent sub-block right below the matching block, the tenth sub-block is the rightmost adjacent sub-block right below the matching block, the eleventh sub-block is the leftmost adjacent sub-block right above the matching block, and the twelfth sub-block is the rightmost adjacent sub-block right above the matching block; one unit is the side length of a sub-block.

In a possible embodiment, the constructing unit 940 is specifically configured to perform horizontal and vertical shifting on the matching block based on one or more shifting amount pairs, respectively, to obtain one or more new matching blocks.

In a possible embodiment, the constructing unit 940 is specifically configured to Clip the matching block into the range of the current CTU.

In a possible embodiment, the constructing unit 940 is specifically configured to:

when the right boundary of the matching block after the Clip is not positioned at the right boundary position of the current CTU, comparing the motion information of the thirteenth sub-block and the fourteenth sub-block, and comparing the motion information of the fifteenth sub-block and the sixteenth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the right to obtain a new matching block;

when the left boundary of the matching block after the Clip is not positioned at the left boundary position of the current CTU, comparing the motion information of the seventeenth sub-block and the eighteenth sub-block, and comparing the motion information of the nineteenth sub-block and the twentieth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the left to obtain a new matching block;

when the lower boundary of the matching block after the Clip is not positioned at the position of the lower boundary of the current CTU, comparing the motion information of the thirteenth sub-block and the twenty-first sub-block, and comparing the motion information of the seventeenth sub-block and the twenty-second sub-block, if at least one of the two comparison results is different in motion information, vertically and downwardly offsetting the matching block after the Clip by one unit to obtain a new matching block;

When the upper boundary of the matching block after the Clip is not located at the upper boundary of the current CTU, comparing the motion information of the fifteenth sub-block and the twenty-third sub-block, and comparing the motion information of the nineteenth sub-block and the twenty-fourth sub-block, and if at least one of the two comparison results is that the motion information is different, vertically shifting the matching block after the Clip upward by one unit to obtain a new matching block;

wherein the thirteenth sub-block is the sub-block at the upper left corner of the matching block after the Clip, the fourteenth sub-block is the adjacent sub-block at the top right corner of the matching block after the Clip, the fifteenth sub-block is the sub-block at the lower left corner of the matching block after the Clip, the sixteenth sub-block is the adjacent sub-block at the bottom right corner of the matching block after the Clip, the seventeenth sub-block is the sub-block at the upper right corner of the matching block after the Clip, the eighteenth sub-block is the adjacent sub-block at the top left corner of the matching block after the Clip, the nineteenth sub-block is the sub-block at the lower right corner of the matching block after the Clip, the twentieth sub-block is the adjacent sub-block at the bottom left corner of the matching block after the Clip, the twenty-first sub-block is the leftmost adjacent sub-block right below the matching block after the Clip, the twenty-second sub-block is the rightmost adjacent sub-block right below the matching block after the Clip, the twenty-third sub-block is the leftmost adjacent sub-block right above the matching block after the Clip, and the twenty-fourth sub-block is the rightmost adjacent sub-block right above the matching block after the Clip; one unit is the side length of a sub-block.

In a possible embodiment, the constructing unit 940 is specifically configured to:

when at least one new matching block exists, determining a prediction mode corresponding to the matching block before shifting and a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced time domain motion vector prediction mode;

or, alternatively,

when at least one new matching block exists, determining a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or, alternatively,

and when no new matching block exists, determining the prediction mode corresponding to the matching block before shifting as a candidate enhanced temporal motion vector prediction mode.

In a possible embodiment, the prediction unit 960 is specifically configured to, for any sub-block in the target matching block, Clip the sub-block into the range of the current CTU; the target matching block is a matching block corresponding to the enhanced temporal motion vector prediction mode;

if the forward motion information and the backward motion information of the sub-block after the Clip are both available, respectively scaling the forward motion information and the backward motion information of the sub-block after the Clip to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

if the forward motion information of the sub-block after the Clip is available but the backward motion information is not available, scaling the forward motion information of the sub-block after the Clip to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block after the Clip is available but the forward motion information is not available, scaling the backward motion information of the sub-block after the Clip to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block.

In a possible embodiment, the prediction unit 960 is further configured to:

if neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, clipping the center position of the target matching block into the range of the current CTU; when both the forward motion information and the backward motion information of the center position of the target matching block after the Clip are available, respectively scaling the forward motion information and the backward motion information of the center position of the target matching block after the Clip to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the center position of the target matching block after the Clip is available but the backward motion information is not available, scaling the forward motion information of the center position of the target matching block after the Clip to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target matching block after the Clip is available but the forward motion information is not available, scaling the backward motion information of the center position of the target matching block after the Clip to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the center position of the target matching block after the Clip is available, assigning zero motion information to the sub-block at the corresponding position of the current block;

Or, if neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, assigning zero motion information to the sub-block at the corresponding position of the current block;

or, if neither the forward motion information nor the backward motion information of the sub-block after the Clip is available, when both the forward motion information and the backward motion information of the second surrounding block of the current block are available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to point to the first frame of List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to point to the first frame of List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when neither the forward motion information nor the backward motion information of the second surrounding block is available, assigning zero motion information to the sub-block at the corresponding position of the current block.

In a possible embodiment, the prediction unit 960 is specifically configured to, for any sub-block in the target matching block, if the forward motion information and the backward motion information of the sub-block are both available, scale the forward motion information and the backward motion information of the sub-block to point to the first frame of List0 and the first frame of List1, respectively, and assign the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

if the forward motion information of the sub-block is available but the backward motion information is not available, the forward motion information of the sub-block is scaled to point to the first frame of List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block is available but the forward motion information is not available, the backward motion information of the sub-block is scaled to point to the first frame of List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

In a possible embodiment, the prediction unit 960 is further configured to:

if the forward motion information and the backward motion information of the subblock are unavailable, respectively scaling the forward motion information and the backward motion information of the central position of the target matching block to a first frame pointing to List0 and a first frame pointing to List1 when the forward motion information and the backward motion information of the central position of the target matching block are both available, and respectively assigning the scaled forward motion information and the scaled backward motion information to the subblock at the corresponding position of the current block; when the forward motion information of the center position of the target matching block is available, but the backward motion information is not available, scaling the forward motion information of the center position of the target matching block to a first frame pointing to the List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the target matching block to a first frame pointing to the List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information and the backward motion information of the central position of the target matching block are unavailable, giving zero motion information to the subblock at the corresponding position of the current block;

Or if the forward motion information and the backward motion information of the sub-block are unavailable, giving zero motion information to the sub-block at the position corresponding to the current block;

or, if the forward motion information and the backward motion information of the sub-block are unavailable, when the forward motion information and the backward motion information of the second surrounding block of the current block are both available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to a first frame pointing to List0 and a first frame pointing to List1, and respectively assigning the scaled forward motion information and the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to a first frame pointing to List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to a first frame pointing to List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when the forward motion information and the backward motion information of the second surrounding block are unavailable, assigning zero motion information to the sub-block at the corresponding position of the current block.
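
The per-sub-block derivation above follows a fixed fallback order. The sketch below is a non-normative illustration of that order, assuming hypothetical MotionInfo and scale_to_first_frame helpers; the actual temporal scaling and the choice between the center-position fallback and the second-surrounding-block fallback are as described in the embodiments, not in this sketch.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionInfo:
    mv_x: int
    mv_y: int
    ref_idx: int  # reference index within List0 or List1

def scale_to_first_frame(mi: MotionInfo) -> MotionInfo:
    # Placeholder for temporal scaling: make the motion information point to
    # the first frame of its reference list (ref_idx = 0).
    return MotionInfo(mi.mv_x, mi.mv_y, 0)

def derive_subblock_motion(sub_fwd: Optional[MotionInfo],
                           sub_bwd: Optional[MotionInfo],
                           fallback_fwd: Optional[MotionInfo],
                           fallback_bwd: Optional[MotionInfo]
                           ) -> Tuple[Optional[MotionInfo], Optional[MotionInfo]]:
    # 1) Use the matching-block sub-block's own motion information when any
    #    direction is available, scaled to the first frame of List0/List1.
    if sub_fwd is not None or sub_bwd is not None:
        return (scale_to_first_frame(sub_fwd) if sub_fwd else None,
                scale_to_first_frame(sub_bwd) if sub_bwd else None)
    # 2) Otherwise use the fallback source (the center position of the matching
    #    block, or the second surrounding block, depending on the embodiment).
    if fallback_fwd is not None or fallback_bwd is not None:
        return (scale_to_first_frame(fallback_fwd) if fallback_fwd else None,
                scale_to_first_frame(fallback_bwd) if fallback_bwd else None)
    # 3) Last resort: zero motion information for the co-located sub-block.
    zero = MotionInfo(0, 0, 0)
    return zero, zero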

In a possible embodiment, the decoding unit 920 is specifically configured to, when it is determined that the enhanced temporal motion vector prediction technique is enabled for the current block, parse index information of an enhanced temporal motion vector prediction mode from a code stream of the current block.

In a possible embodiment, the sequence parameter set level syntax or Slice level syntax is used to indicate whether the current block enables an enhanced temporal motion vector prediction technique.

In a possible embodiment, when the sequence parameter set level syntax is used to indicate whether the current block enables the enhanced temporal motion vector prediction technique, the decoding unit 920 is specifically configured to:

when the image sequence to which the current block belongs enables the enhanced temporal motion vector prediction technology, determining that the current block enables the enhanced temporal motion vector prediction technology;

when the image sequence to which the current block belongs does not enable the enhanced temporal motion vector prediction technology, determining that the current block does not enable the enhanced temporal motion vector prediction technology.

In a possible embodiment, when Slice-level syntax is used to indicate whether the current block enables an enhanced temporal motion vector prediction technique, the decoding unit 920 is specifically configured to:

When Slice to which the current block belongs enables an enhanced temporal motion vector prediction technology, determining that the current block enables the enhanced temporal motion vector prediction technology;

when the Slice to which the current block belongs does not enable the enhanced temporal motion vector prediction technology, determining that the current block does not enable the enhanced temporal motion vector prediction technology.

In a possible embodiment, the decoding unit 920 is specifically configured to:

when the size of the current block is smaller than or equal to the size of a preset maximum block and larger than or equal to the size of a preset minimum block, determining that the current block enables an enhanced temporal motion vector prediction technology;

when the size of the current block is larger than the size of a preset maximum block or smaller than the size of a preset minimum block, determining that the enhanced temporal motion vector prediction technology is not enabled for the current block.
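
As a rough illustration of this size gate, the sketch below checks both dimensions of the current block against the preset bounds; treating "size" as the block width and height, and the default bound values, are assumptions made only for illustration (the actual bounds would be signalled by sequence parameter set level or Slice level syntax).

def etmvp_enabled_for_block(width: int, height: int,
                            max_block_size: int = 64,   # assumed preset maximum
                            min_block_size: int = 8     # assumed preset minimum
                            ) -> bool:
    # Enabled only when the block is no larger than the preset maximum block
    # and no smaller than the preset minimum block; otherwise disabled.
    within_upper = width <= max_block_size and height <= max_block_size
    within_lower = width >= min_block_size and height >= min_block_size
    return within_upper and within_lower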

In a possible embodiment, the size of the preset maximum block is represented using a sequence parameter set level syntax or using a Slice level syntax;

and/or,

the size of the preset minimum block is expressed using a sequence parameter set level syntax or using a Slice level syntax.

In a possible embodiment, the decoding device may comprise a video decoder.

Fig. 10 is a schematic diagram of a hardware structure of a decoding-side device according to an embodiment of the present application. The decoding-side device may include a processor 1001 and a machine-readable storage medium 1002 having stored thereon machine-executable instructions. The processor 1001 and the machine-readable storage medium 1002 may communicate via a system bus 1003. Also, the processor 1001 may perform the decoding method described above by reading and executing the machine-executable instructions corresponding to the decoding control logic in the machine-readable storage medium 1002.

The machine-readable storage medium 1002 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disk (e.g., an optical disc, a DVD, etc.), or a similar storage medium, or a combination thereof.

In some embodiments, there is also provided a machine-readable storage medium having stored therein machine-executable instructions which, when executed by a processor, implement the decoding method described above. For example, the machine-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and so forth.

Referring to fig. 11, fig. 11 is a schematic structural diagram of an encoding apparatus according to an embodiment of the present disclosure, where the encoding apparatus may include:

a first determining unit 1110, configured to determine a matching block of a current block based on a first peripheral block of the current block;

a constructing unit 1120, configured to determine a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and construct a first temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode;

a predicting unit 1130, configured to traverse each candidate enhanced temporal motion vector prediction mode in the first temporal candidate mode list, determine, for any candidate enhanced temporal motion vector prediction mode, motion information of each subblock in the current block based on the candidate enhanced temporal motion vector prediction mode, and perform motion compensation on each subblock in the current block based on the motion information of each subblock in the current block;

a second determining unit 1140, configured to determine, based on a rate distortion cost corresponding to each candidate enhanced temporal motion vector prediction mode, a candidate enhanced temporal motion vector prediction mode with a minimum rate distortion cost as an enhanced temporal motion vector prediction mode of the current block;

An encoding unit 1150, configured to carry index information of the enhanced temporal motion vector prediction mode of the current block in a code stream of the current block, where the index information is used to identify a position of the enhanced temporal motion vector prediction mode in the first temporal candidate mode list.
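
Taken together, units 1110-1150 amount to a rate-distortion search over the first temporal candidate mode list. The sketch below shows that loop with hypothetical callables (derive_motion, motion_compensate, rd_cost); these names are illustrative stand-ins, not interfaces defined by the application.

def choose_etmvp_mode(candidate_modes, current_block,
                      derive_motion, motion_compensate, rd_cost):
    # candidate_modes: the first temporal candidate mode list (unit 1120).
    best_index, best_cost = -1, float("inf")
    for index, mode in enumerate(candidate_modes):
        sub_motion = derive_motion(mode, current_block)             # per-sub-block motion info
        prediction = motion_compensate(current_block, sub_motion)   # unit 1130
        cost = rd_cost(current_block, prediction, index)            # distortion + bits for the index
        if cost < best_cost:                                        # unit 1140: keep the minimum cost
            best_index, best_cost = index, cost
    # best_index is the index information carried in the bitstream (unit 1150).
    return best_index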

In a possible embodiment, the first determining unit 1110 is specifically configured to determine, based on the first surrounding block, motion information of a first phase; determining a matching block for the current block based on the motion information of the first stage.

In a possible embodiment, the first determining unit 1110 is specifically configured to determine the motion information of the first stage based on the forward motion information and/or the backward motion information of the first surrounding block.

In a possible embodiment, the first determining unit 1110 is specifically configured to:

determining the motion information of the first stage as the backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to the first frame in List1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If the backward motion information of the first peripheral block is not available but the forward motion information is available, scaling the forward motion information of the first peripheral block to point to a first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.

In a possible embodiment, the first determining unit 1110 is specifically configured to:

determining the motion information of the first stage as the forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to the first frame in List0;

if the forward motion information of the first peripheral block is available, but the forward motion information of the first peripheral block does not point to the first frame in List0, scaling the forward motion information of the first peripheral block to point to the first frame in List0, and determining the motion vector of the first stage as a scaled motion vector, with the reference frame index being the index of the first frame in List 0;

If the forward motion information of the first surrounding block is not available, but the backward motion information is available, scaling the backward motion information of the first surrounding block to point to a first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

if neither the forward motion information nor the backward motion information of the first surrounding block is available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 0;

the reference direction of the motion information in the first stage is the List0 direction.
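
For the List0 variant just described (the List1 variant above is symmetric), a minimal sketch of the decision chain might look as follows; MotionInfo and scale_to_first_frame are illustrative helpers, not names from the application.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MotionInfo:
    mv_x: int
    mv_y: int
    ref_idx: int  # index into List0 here

def scale_to_first_frame(mi: MotionInfo) -> MotionInfo:
    # Placeholder for temporal scaling of the motion vector so that it points
    # to the first frame of List0 (ref_idx = 0).
    return MotionInfo(mi.mv_x, mi.mv_y, 0)

def first_stage_motion_list0(fwd: Optional[MotionInfo],
                             bwd: Optional[MotionInfo]) -> MotionInfo:
    # Reference direction is List0; the reference frame is always the first
    # frame in List0.
    if fwd is not None:
        return fwd if fwd.ref_idx == 0 else scale_to_first_frame(fwd)
    if bwd is not None:
        return scale_to_first_frame(bwd)
    return MotionInfo(0, 0, 0)  # zero motion vector, first frame in List0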

In a possible embodiment, the first determining unit 1110 is specifically configured to:

determining the motion information of the first stage as the backward motion information of the first surrounding block if the backward motion information of the first surrounding block is available and points to the first frame in List1;

if the backward motion information of the first peripheral block is available but the backward motion information of the first peripheral block does not point to the first frame in List1, scaling the backward motion information of the first peripheral block to point to the first frame in List1, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 1;

If the backward motion information of the first peripheral block is not available, determining that the motion vector of the first stage is 0, and the reference frame index is the index of the first frame in List 1;

the reference direction of the motion information in the first stage is the List1 direction.

In a possible embodiment, the first determining unit 1110 is specifically configured to:

determining the motion information of the first stage as the forward motion information of the first surrounding block if the forward motion information of the first surrounding block is available and the forward motion information of the first surrounding block points to the first frame in List0;

if the forward motion information of the first peripheral block is available, but the forward motion information of the first peripheral block does not point to the first frame in List0, scaling the forward motion information of the first peripheral block to point to the first frame in List0, and determining that the motion vector of the first stage is a scaled motion vector, and the reference frame index is an index of the first frame in List 0;

determining that the motion vector of the first stage is 0 and the reference frame index is an index of a first frame in List0 if the forward motion information of the first peripheral block is not available;

the reference direction of the motion information in the first stage is the List0 direction.

In a possible embodiment, the first determining unit 1110 is specifically configured to:

determining that the motion vector of the first stage is 0, the reference frame index is an index of a first frame in List0, and the reference direction of the motion information of the first stage is a List0 direction;

or the like, or, alternatively,

it is determined that the motion vector of the first stage is 0, the reference frame index is an index of the first frame in List1, and the reference direction of the motion information of the first stage is the List1 direction.

In a possible embodiment, the constructing unit 1120 is specifically configured to determine a matching block of the current block based on the horizontal motion vector and the vertical motion vector of the first stage, the precision of the motion vector, and the position of the current block.

In a possible embodiment, the constructing unit 1120 is specifically configured to determine a matching block of the current block based on the horizontal motion vector and the vertical motion vector of the first stage, the precision of the motion vector, and the sub-block size.
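
One plausible reading of these two embodiments is sketched below: the first-stage motion vector is rounded to integer precision, added to the current block position, and aligned to the sub-block grid. The precision shift, the alignment rule, and the sub-block size are assumptions for illustration only; the application merely lists the inputs used.

def locate_matching_block(cur_x: int, cur_y: int,
                          mv_x: int, mv_y: int,
                          mv_precision_shift: int = 4,  # e.g. 1/16-pel MV storage (assumed)
                          subblock_size: int = 8        # assumed sub-block side length
                          ) -> tuple:
    # Round the first-stage motion vector down to integer-pel precision.
    int_mv_x = mv_x >> mv_precision_shift
    int_mv_y = mv_y >> mv_precision_shift
    # Displace the current block position and align it to the sub-block grid.
    match_x = ((cur_x + int_mv_x) // subblock_size) * subblock_size
    match_y = ((cur_y + int_mv_y) // subblock_size) * subblock_size
    return match_x, match_y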

In a possible embodiment, the constructing unit 1120 is specifically configured to:

clipping the first sub-block and the second sub-block into the range of the current CTU, and comparing the motion information of the clipped first sub-block and second sub-block; clipping the third sub-block and the fourth sub-block into the range of the current CTU, and comparing the motion information of the clipped third sub-block and fourth sub-block; and if at least one of the two comparison results shows different motion information, horizontally shifting the matching block to the right by one unit to obtain a new matching block;

clipping the fifth sub-block and the sixth sub-block into the range of the current CTU, and comparing the motion information of the clipped fifth sub-block and sixth sub-block; clipping the seventh sub-block and the eighth sub-block into the range of the current CTU, and comparing the motion information of the clipped seventh sub-block and eighth sub-block; and if at least one of the two comparison results shows different motion information, horizontally shifting the matching block to the left by one unit to obtain a new matching block;

clipping the first sub-block and the ninth sub-block into the range of the current CTU, and comparing the motion information of the clipped first sub-block and ninth sub-block; clipping the fifth sub-block and the tenth sub-block into the range of the current CTU, and comparing the motion information of the clipped fifth sub-block and tenth sub-block; and if at least one of the two comparison results shows different motion information, vertically shifting the matching block downward by one unit to obtain a new matching block;

clipping the third sub-block and the eleventh sub-block into the range of the current CTU, and comparing the motion information of the clipped third sub-block and eleventh sub-block; clipping the seventh sub-block and the twelfth sub-block into the range of the current CTU, and comparing the motion information of the clipped seventh sub-block and twelfth sub-block; and if at least one of the two comparison results shows different motion information, vertically shifting the matching block upward by one unit to obtain a new matching block;

wherein the first sub-block is the sub-block at the upper left corner of the matching block, the second sub-block is the adjacent sub-block at the top right of the matching block, the third sub-block is the sub-block at the lower left corner of the matching block, the fourth sub-block is the adjacent sub-block at the bottom right of the matching block, the fifth sub-block is the sub-block at the upper right corner of the matching block, the sixth sub-block is the adjacent sub-block at the top left of the matching block, the seventh sub-block is the sub-block at the lower right corner of the matching block, the eighth sub-block is the adjacent sub-block at the bottom left of the matching block, the ninth sub-block is the leftmost adjacent sub-block directly below the matching block, the tenth sub-block is the rightmost adjacent sub-block directly below the matching block, the eleventh sub-block is the leftmost adjacent sub-block directly above the matching block, and the twelfth sub-block is the rightmost adjacent sub-block directly above the matching block; one unit is the side length of a sub-block.
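
The four directional checks above share one pattern: clip two sub-block pairs into the current CTU, compare their motion information, and emit a one-unit shift in that direction if any pair differs. The sketch below captures only that pattern; the pair positions, clip_to_ctu and get_motion are caller-supplied, illustrative placeholders rather than interfaces from the application.

def candidate_offsets(pairs_by_direction, clip_to_ctu, get_motion):
    # pairs_by_direction maps a shift (dx, dy), in sub-block units, to the two
    # sub-block position pairs compared for that direction, e.g.
    # (1, 0) -> [(first, second), (third, fourth)] for the shift to the right.
    offsets = []
    for (dx, dy), pairs in pairs_by_direction.items():
        for pos_a, pos_b in pairs:
            if get_motion(clip_to_ctu(pos_a)) != get_motion(clip_to_ctu(pos_b)):
                # At least one comparison differs: this shift yields a new
                # matching block.
                offsets.append((dx, dy))
                break
    return offsets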

In a possible embodiment, the constructing unit 1120 is specifically configured to perform horizontal and vertical shifting on the matching block based on one or more shifting amount pairs, respectively, to obtain one or more new matching blocks.

In a possible embodiment, the constructing unit 1120 is specifically configured to Clip the matching block into the range of the current CTU.
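
A minimal sketch of the Clip operation, assuming top-left block coordinates and a square CTU; the coordinate convention is an assumption made for illustration.

def clip_block_to_ctu(x: int, y: int, block_w: int, block_h: int,
                      ctu_x: int, ctu_y: int, ctu_size: int) -> tuple:
    # Constrain the block's top-left corner so that the whole block stays
    # inside the current CTU.
    clipped_x = max(ctu_x, min(x, ctu_x + ctu_size - block_w))
    clipped_y = max(ctu_y, min(y, ctu_y + ctu_size - block_h))
    return clipped_x, clipped_y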

In a possible embodiment, the constructing unit 1120 is specifically configured to:

when the right boundary of the matching block after the Clip is not positioned at the right boundary position of the current CTU, comparing the motion information of the thirteenth sub-block and the fourteenth sub-block, and comparing the motion information of the fifteenth sub-block and the sixteenth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the right to obtain a new matching block;

when the left boundary of the matching block after the Clip is not positioned at the left boundary position of the current CTU, comparing the motion information of the seventeenth sub-block and the eighteenth sub-block, and comparing the motion information of the nineteenth sub-block and the twentieth sub-block, if at least one of the two comparison results is that the motion information is different, horizontally shifting the matching block after the Clip by one unit to the left to obtain a new matching block;

when the lower boundary of the matching block after the Clip is not positioned at the position of the lower boundary of the current CTU, comparing the motion information of the thirteenth sub-block and the twenty-first sub-block, and comparing the motion information of the seventeenth sub-block and the twenty-second sub-block, if at least one of the two comparison results is different in motion information, vertically and downwardly offsetting the matching block after the Clip by one unit to obtain a new matching block;

when the upper boundary of the matching block after the Clip is not positioned at the upper boundary position of the current CTU, comparing the motion information of the fifteenth sub-block and the twenty-third sub-block, and comparing the motion information of the nineteenth sub-block and the twenty-fourth sub-block, and if at least one of the two comparison results is different in motion information, vertically shifting the matching block after the Clip upward by one unit to obtain a new matching block;

wherein the thirteenth sub-block is the sub-block at the upper left corner of the matching block after the Clip, the fourteenth sub-block is the adjacent sub-block at the top right of the matching block after the Clip, the fifteenth sub-block is the sub-block at the lower left corner of the matching block after the Clip, the sixteenth sub-block is the adjacent sub-block at the bottom right of the matching block after the Clip, the seventeenth sub-block is the sub-block at the upper right corner of the matching block after the Clip, the eighteenth sub-block is the adjacent sub-block at the top left of the matching block after the Clip, the nineteenth sub-block is the sub-block at the lower right corner of the matching block after the Clip, the twentieth sub-block is the adjacent sub-block at the bottom left of the matching block after the Clip, the twenty-first sub-block is the leftmost adjacent sub-block directly below the matching block after the Clip, the twenty-second sub-block is the rightmost adjacent sub-block directly below the matching block after the Clip, the twenty-third sub-block is the leftmost adjacent sub-block directly above the matching block after the Clip, and the twenty-fourth sub-block is the rightmost adjacent sub-block directly above the matching block after the Clip; one unit is the side length of a sub-block.

In a possible embodiment, the constructing unit 1120 is specifically configured to:

when at least one new matching block exists, determining a prediction mode corresponding to the matching block before shifting and a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or the like, or, alternatively,

when at least one new matching block exists, determining a prediction mode corresponding to the new matching block obtained by shifting as a candidate enhanced temporal motion vector prediction mode;

or the like, or, alternatively,

and when no new matching block exists, determining the prediction mode corresponding to the matching block before shifting as a candidate enhanced temporal motion vector prediction mode.
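
The three alternatives above differ only in whether the pre-shift matching block is kept when shifted matching blocks exist. A sketch of the list construction, with that choice exposed as a flag (an illustrative parameter, not something defined by the application):

def build_candidate_mode_list(original_block, shifted_blocks,
                              keep_original_with_shifts=True):
    # Each matching block corresponds to one candidate enhanced temporal
    # motion vector prediction mode in the first temporal candidate mode list.
    if shifted_blocks:
        candidates = ([original_block] if keep_original_with_shifts else []) + list(shifted_blocks)
    else:
        # No new matching block: only the pre-shift matching block's mode.
        candidates = [original_block]
    return candidates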

In a possible embodiment, the prediction unit 1130 is specifically configured to, for any sub-block in the target candidate matching block, prune the sub-block to be within the range of the current CTU; the target candidate matching block is a matching block corresponding to the enhanced temporal motion vector prediction mode;

if the forward motion information and the backward motion information of the pruned sub-block are both available, respectively scaling the forward motion information and the backward motion information of the pruned sub-block to point to the first frame of List0 and the first frame of List1, and respectively assigning the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

If the forward motion information of the pruned sub-block is available but the backward motion information is not available, scaling the forward motion information of the pruned sub-block to a first frame pointing to List0, and giving the scaled forward motion information to the sub-block at the corresponding position of the current block;

if the backward motion information of the pruned sub-block is available but the forward motion information is not available, the backward motion information of the pruned sub-block is scaled to the first frame pointing to the List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

In one possible embodiment, the prediction unit 1130 is further configured to:

if the forward motion information and the backward motion information of the pruned sub-block are unavailable, pruning the center position of the target candidate matching block to the range of the current CTU; when the forward motion information and the backward motion information of the center position of the pruned target candidate matching block are both available, respectively scaling the forward motion information and the backward motion information of the center position of the pruned target candidate matching block to a first frame pointing to List0 and a first frame pointing to List1, and respectively assigning the scaled forward motion information and the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the center position of the pruned target candidate matching block is available but the backward motion information is not available, scaling the forward motion information of the center position of the pruned target candidate matching block to a first frame pointing to List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the pruned target candidate matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the pruned target candidate matching block to a first frame pointing to List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information and the backward motion information of the center position of the pruned target candidate matching block are unavailable, assigning zero motion information to the sub-block at the corresponding position of the current block;

Or if the forward motion information and the backward motion information of the pruned sub-blocks are unavailable, giving zero motion information to the sub-block at the corresponding position of the current block;

or, if the forward motion information and the backward motion information of the pruned sub-block are unavailable, when the forward motion information and the backward motion information of the second surrounding block of the current block are both available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to a first frame pointing to List0 and a first frame pointing to List1, and respectively assigning the scaled forward motion information and the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to a first frame pointing to List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to a first frame pointing to List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when the forward motion information and the backward motion information of the second surrounding block are unavailable, assigning zero motion information to the sub-block at the corresponding position of the current block.

In a possible embodiment, the prediction unit 1130 is specifically configured to, for any sub-block in the target candidate matching block, if both forward motion information and backward motion information of the sub-block are available, scale forward motion information and backward motion information of the sub-block to point to the first frame of List0 and the first frame of List1, respectively, and assign the scaled forward motion information and backward motion information to the sub-block at the corresponding position of the current block;

if the forward motion information of the sub-block is available, but the backward motion information is not available, the forward motion information of the sub-block is scaled to point to the first frame of List0, and the scaled forward motion information is assigned to the sub-block at the corresponding position of the current block;

if the backward motion information of the sub-block is available but the forward motion information is not available, the backward motion information of the sub-block is scaled to the first frame pointing to the List1, and the scaled backward motion information is assigned to the sub-block at the corresponding position of the current block.

In one possible embodiment, the prediction unit 1130 is further configured to:

if the forward motion information and the backward motion information of the subblock are unavailable, respectively scaling the forward motion information and the backward motion information of the center position of the target candidate matching block to a first frame pointing to List0 and a first frame pointing to List1 when the forward motion information and the backward motion information of the center position of the target candidate matching block are both available, and respectively giving the scaled forward motion information and the scaled backward motion information to the subblock at the corresponding position of the current block; when the forward motion information of the center position of the target candidate matching block is available but the backward motion information is not available, scaling the forward motion information of the center position of the target candidate matching block to a first frame pointing to the List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the center position of the target candidate matching block is available but the forward motion information is not available, scaling the backward motion information of the center position of the target candidate matching block to a first frame pointing to List1 and assigning the scaled backward motion information to a sub-block at a corresponding position of the current block; when the forward motion information and the backward motion information of the central position of the target candidate matching block are unavailable, giving zero motion information to the subblock at the corresponding position of the current block;

Or if the forward motion information and the backward motion information of the sub-block are unavailable, giving zero motion information to the sub-block at the position corresponding to the current block;

or, if the forward motion information and the backward motion information of the sub-block are unavailable, when the forward motion information and the backward motion information of the second surrounding block of the current block are both available, respectively scaling the forward motion information and the backward motion information of the second surrounding block to a first frame pointing to List0 and a first frame pointing to List1, and respectively assigning the scaled forward motion information and the scaled backward motion information to the sub-block at the corresponding position of the current block; when the forward motion information of the second surrounding block is available but the backward motion information is not available, scaling the forward motion information of the second surrounding block to a first frame pointing to List0, and assigning the scaled forward motion information to the sub-block at the corresponding position of the current block; when the backward motion information of the second surrounding block is available but the forward motion information is not available, scaling the backward motion information of the second surrounding block to a first frame pointing to List1, and assigning the scaled backward motion information to the sub-block at the corresponding position of the current block; and when the forward motion information and the backward motion information of the second surrounding block are unavailable, assigning zero motion information to the sub-block at the corresponding position of the current block.

In an optional embodiment, the constructing unit 1120 is specifically configured to, when the enhanced temporal motion vector prediction technique is enabled for the current block, determine a candidate enhanced temporal motion vector prediction mode based on the matching block and a new matching block obtained by shifting the matching block, and construct a first temporal candidate mode list based on the candidate enhanced temporal motion vector prediction mode.

In one possible embodiment, whether the current block enables an enhanced temporal motion vector prediction technique is controlled using a sequence parameter set level syntax or Slice level syntax.

In a possible embodiment, the constructing unit 1120 is specifically configured to determine that the enhanced temporal motion vector prediction technique is enabled for the current block when the size of the current block is smaller than or equal to a preset maximum block size and is greater than or equal to a preset minimum block size;

when the size of the current block is larger than the size of a preset maximum block or smaller than the size of a preset minimum block, determining that the enhanced temporal motion vector prediction technology is not enabled for the current block.

In a possible embodiment, the size of the preset maximum block is represented using a sequence parameter set level syntax or using Slice level syntax;

and/or,

the size of the preset minimum block is expressed using a sequence parameter set level syntax or using a Slice level syntax.

In a possible embodiment, the encoding device may include a video encoder.

Fig. 12 is a schematic diagram of a hardware structure of an encoding-side device according to an embodiment of the present disclosure. The encoding-side device may include a processor 1201 and a machine-readable storage medium 1202 storing machine-executable instructions. The processor 1201 and the machine-readable storage medium 1202 may communicate via a system bus 1203. Also, the processor 1201 may perform the encoding method described above by reading and executing the machine-executable instructions in the machine-readable storage medium 1202 corresponding to the encoding control logic.

The machine-readable storage medium 1202, as referred to herein, may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be: a RAM (Random Access Memory), a volatile memory, a non-volatile memory, a flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disk (e.g., an optical disc, a DVD, etc.), or a similar storage medium, or a combination thereof.

In some embodiments, there is also provided a machine-readable storage medium having stored therein machine-executable instructions that, when executed by a processor, implement the encoding method described above. For example, the machine-readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and so forth.

In some embodiments, there is also provided a camera device including the encoding apparatus in any of the above embodiments and the decoding apparatus in any of the above embodiments.

It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.

The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.
