Video decoding method and device, and video encoding method and device

Document No.: 1078607 | Publication date: 2020-10-16

Note: This technology, "Video decoding method and device, and video encoding method and device", was designed and created by 朴银姬 on 2018-03-30. Abstract: Disclosed is a video decoding method including the steps of: determining a quadrilateral scan area including all significant transform coefficients within a current block; scanning information related to the transform coefficients within the quadrilateral scan area in a predetermined scanning order; obtaining the transform coefficients of the current block based on the scanned information related to the transform coefficients; performing inverse quantization and inverse transformation on the transform coefficients of the current block to generate a residual block of the current block; and restoring the current block based on the generated residual block.

1. A video decoding method, comprising the steps of:

determining a quadrilateral scan area comprising all significant transform coefficients within the current block;

scanning information related to the transform coefficients within the quadrilateral scanning area in a predetermined scanning order;

obtaining transform coefficients of the current block based on the scanned information on the transform coefficients;

performing inverse quantization and inverse transformation on the transform coefficients of the current block to generate a residual block of the current block; and

restoring the current block based on the generated residual block.

2. The video decoding method of claim 1,

all significant transform coefficients within the current block are included in the quadrilateral scan area,

the regions of the current block other than the quadrangular scanning region include only transform coefficients that are not significant transform coefficients, i.e., transform coefficients having a value of 0.

3. The video decoding method of claim 1,

the step of determining the quadrangular scanning area comprises the following steps:

obtaining information on coordinates specifying a quadrangular scanning area from a bitstream; and

determining a quadrangular scanning area including all significant transform coefficients within the current block based on information on coordinates specifying the quadrangular scanning area,

the coordinates specifying the quadrangular scanning area indicate the horizontal-direction coordinates for the significant transform coefficient located at the rightmost side within the current block and the vertical-direction coordinates for the significant transform coefficient located at the lowermost side within the current block.

4. The video decoding method of claim 3,

the step of obtaining information on coordinates specifying the quadrangular scanning area from the bitstream further includes:

performing context model-based binary arithmetic decoding on information related to coordinates specifying a quadrangular scanning area; and

obtaining inverse-binarized information regarding the coordinates specifying the quadrangular scanning area by performing inverse binarization, using a predetermined inverse binarization method, on the information subjected to binary arithmetic decoding.

5. The video decoding method of claim 4,

the context model is determined based on at least one of a size of the current block, a color component of the current block, and a binary index.

6. The video decoding method of claim 4,

the predetermined inverse binarization method is at least one of a fixed length inverse binarization method and a truncated unary code inverse binarization method.

7. The video decoding method of claim 1,

the predetermined scan order is an order according to an inverse zigzag scan or an order according to an inverse diagonal scan.

8. The video decoding method of claim 3,

the predetermined scan order is determined based on at least one of the horizontal-direction coordinates of the significant transform coefficient pixel located on the rightmost side within the current block and the vertical-direction coordinates of the significant transform coefficient pixel located on the lowermost side within the current block.

9. The video decoding method of claim 1,

the information related to the transform coefficient includes at least one of flag information indicating whether an absolute value of the transform coefficient is greater than a predetermined value, remaining level information related to the absolute value of the transform coefficient, sign information of the transform coefficient, and binarization parameter information for inverse binarization of the transform coefficient, wherein,

the predetermined value is at least one of 0, 1, and 2.

10. The video decoding method of claim 1,

the step of obtaining the transform coefficient of the current block based on the scanned information on the transform coefficient includes the steps of:

the transform coefficient of the current block is obtained by performing at least one of binary arithmetic decoding and inverse binarization based on a context model related to flag information indicating whether an absolute value of the transform coefficient is greater than a predetermined value.

11. The video decoding method of claim 9,

the flag information indicating whether the absolute value of the transform coefficient is greater than a predetermined value includes flag information for a first transform coefficient,

the context model related to the flag information indicating whether an absolute value of the first transform coefficient is greater than a predetermined value is determined based on at least one of: information on at least one second transform coefficient previously scanned in the predetermined scanning order, a position and a color component of the first transform coefficient within the current block, information on surrounding transform coefficients on the right or lower side, a scanning position of the first transform coefficient, a relative position of the first transform coefficient within the scanning area, and whether the first transform coefficient is at the first scanning position.

12. The video decoding method of claim 1,

the information related to the transform coefficient includes flag information indicating whether an absolute value of the transform coefficient is greater than a predetermined value,

wherein the maximum number of flag information (maximum count) obtainable from the bitstream is determined based on the size of the scanning area.

13. The video decoding method of claim 1,

the scanning order of the individual transform coefficients within the quadrilateral scanning area is determined according to a predetermined scanning order,

wherein the step of scanning the information about the transform coefficients within the quadrangular scanning area in a predetermined scanning order comprises the steps of:

scanning the information related to the transform coefficients within the quadrilateral scanning area according to the determined scanning order of each transform coefficient within the quadrilateral scanning area.

14. The video decoding method of claim 1,

wherein at least one coefficient group including a predetermined plurality of transform coefficients within the quadrilateral scanning area is determined in a predetermined forward scanning order, and

whether to conceal information on at least one transform coefficient is determined in units of the coefficient groups.

15. A video encoding method, said video encoding method comprising the steps of:

obtaining a transformation coefficient of a current block;

determining a quadrilateral scan area comprising all significant transform coefficients within the current block;

scanning information on transform coefficients included in a quadrangular scanning area in a predetermined scanning order;

performing entropy encoding on the scanned information related to the transform coefficients to generate entropy-encoded information; and

generating a bitstream including the entropy-encoded information.

Technical Field

The present disclosure relates to a video decoding method and a video encoding method, and more particularly, to entropy decoding and entropy encoding.

Background

With the development and popularization of hardware capable of reproducing and storing high-resolution or high-quality video content, the need for a video codec for efficiently encoding or decoding the high-resolution or high-quality video content is increasing. In the existing video codec, video is encoded according to a limited encoding method based on a tree-structured coding unit.

The image data in the spatial domain is transformed into coefficients in the frequency domain via frequency transformation. For fast computation of the frequency transform, a video codec divides an image into blocks having a predetermined size, performs a Discrete Cosine Transform (DCT) on each block, and encodes the frequency coefficients on a block-by-block basis. The coefficients of the frequency domain are more easily compressed than the image data of the spatial domain. In particular, image pixel values of the spatial domain are expressed as a prediction error through inter prediction or intra prediction of the video codec, so when the frequency transform is performed on the prediction error, a large amount of data may be transformed into 0. The video codec can reduce the amount of data by replacing continuously and repeatedly generated data with smaller data.
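As a hedged illustration of the behavior described above, and not part of the claimed method, the following sketch applies a 2D DCT-II and a coarse quantization step to a synthetic 8x8 prediction-error block; the helper names (dct_matrix, dct2d) and the quantization step of 4 are assumptions made only for this example.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def dct2d(block: np.ndarray) -> np.ndarray:
    """Separable 2D DCT-II of a square block."""
    d = dct_matrix(block.shape[0])
    return d @ block @ d.T

# A smooth 8x8 prediction error: after the transform and coarse quantization,
# most of the energy collapses into a few low-frequency coefficients.
rng = np.random.default_rng(0)
residual = np.outer(np.linspace(1, 4, 8), np.linspace(1, 2, 8)) + rng.normal(0, 0.1, (8, 8))
coeffs = np.round(dct2d(residual) / 4).astype(int)   # quantization step of 4 (illustrative)
print(coeffs)                                        # mostly zeros away from the top-left corner
print(np.count_nonzero(coeffs), "significant (non-zero) coefficients")
```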

Disclosure of Invention

Technical problem

According to various embodiments, by determining a scanning area based on various elements, scanning information related to a coefficient, performing binarization/inverse binarization and context model-based binary arithmetic encoding/decoding, entropy encoding/decoding efficiency can be improved.

A computer-readable recording medium having recorded thereon a program for executing the method according to various embodiments may be included.

Here, aspects of the various embodiments are not limited thereto, and other technical problems not mentioned will be clearly understood by those skilled in the art from the following description.

Technical scheme

Technical problems of the present invention are not limited to the above-mentioned features, and those skilled in the art can clearly understand the non-mentioned or other technical problems from the following description.

A video decoding method according to various embodiments includes the steps of: determining a quadrilateral scan area comprising all significant transform coefficients within the current block; scanning information related to the transform coefficients within the quadrilateral scanning area in a predetermined scanning order; and obtaining transform coefficients of the current block based on the scanned information related to the transform coefficients; performing inverse quantization and inverse transformation on the transform coefficient of the current block to generate a residual block of the current block; and restoring the current block based on the generated residual block.

All significant transform coefficients within the current block are included within the quadrilateral scanning area, and the other areas within the current block than the quadrilateral scanning area include only transform coefficients that are not significant transform coefficients and have a value of 0.

The step of determining the quadrangular scanning area comprises the following steps: obtaining information on coordinates specifying a quadrangular scanning area from a bitstream; and

a quadrangular scanning area including all the significant transform coefficients within the current block is determined based on information on coordinates specifying the quadrangular scanning area, which may indicate the horizontal-direction coordinate of the significant transform coefficient located at the rightmost side within the current block and the vertical-direction coordinate of the significant transform coefficient located at the lowermost side within the current block.

The obtaining of the information on the coordinates specifying the quadrangular scanning area from the bitstream may further include: performing context model-based binary arithmetic decoding on information related to coordinates specifying a quadrangular scanning area; and obtaining inverse binarization information regarding coordinates of the specified quadrangular scanning area by performing inverse binarization using a predetermined inverse binarization method on the binary arithmetically decoded information.

The context model may be determined based on at least one of a size of the current block, a color component (color component) of the current block, and a binary index (bin index).

The predetermined inverse binarization method may be at least one of a fixed length (fixed length) inverse binarization method and a truncated unary inverse binarization method.

The predetermined scan order may be in an order of inverse zig-zag scan or in an order of inverse diagonal scan.

The predetermined scan order may be determined based on at least one of a horizontal-direction coordinate of a significant transform coefficient pixel located on the rightmost side within the current block and a vertical-direction coordinate of a significant transform coefficient pixel located on the lowermost side within the current block.

The information on the transform coefficients includes at least one of: flag information indicating whether an absolute value of the transform coefficient is greater than a predetermined value (the predetermined value may be at least one of 0, 1, and 2), remaining level information related to the absolute value of the transform coefficient, sign information of the transform coefficient, and binarization parameter information for inverse binarization of the transform coefficient.

The step of obtaining the transform coefficient of the current block based on the scanned information on the transform coefficient may include the steps of: the transform coefficient of the current block is obtained by performing at least one of binary arithmetic decoding and inverse binarization based on a context model related to flag information indicating whether an absolute value of the transform coefficient is greater than a predetermined value.

The flag information indicating whether the absolute value of the transform coefficient is greater than a predetermined value may include flag information for a first transform coefficient,

and the context model related to the flag information indicating whether the absolute value of the first transform coefficient is greater than the predetermined value may be determined based on at least one of: information on at least one second transform coefficient previously scanned in the predetermined scanning order, a position and a color component of the first transform coefficient within the current block, information on surrounding transform coefficients on the right or lower side and the scanning position of the first transform coefficient, a relative position of the first transform coefficient within the scanning area, and whether the first transform coefficient is at the first scanning position.

The information related to the transform coefficient includes flag information indicating whether an absolute value of the transform coefficient is greater than a predetermined value, and a maximum number of flag information (maximum count) that can be obtained from the bitstream is determinable based on a size of the scan area.

The scanning order of each transform coefficient within the quadrangular scanning area is determined according to a predetermined scanning order, and the step of scanning the information on the transform coefficients within the quadrangular scanning area in the predetermined scanning order may include the step of: scanning the information related to the transform coefficients within the quadrangular scanning area according to the determined scanning order of each transform coefficient within the quadrangular scanning area.

At least one coefficient group including a predetermined plurality of transform coefficients within a quadrangular scanning area may be determined in a predetermined forward scanning order,

whether to conceal the information on the sign of the at least one transform coefficient may be determined in units of coefficient groups.

A video decoding apparatus according to various embodiments may include: an entropy decoding unit which determines a quadrangular scanning area including all significant transform coefficients within the current block, scans information related to the transform coefficients within the quadrangular scanning area in a predetermined scanning order, and obtains the transform coefficients of the current block based on the scanned information related to the transform coefficients; and an image restoring unit which performs inverse quantization and inverse transformation on the transform coefficients of the current block to generate a residual block of the current block, and restores the current block based on the generated residual block.

A video encoding method according to various embodiments may include the steps of: obtaining a transformation coefficient of a current block; determining a quadrilateral scan area comprising all significant transform coefficients within the current block; scanning information on transform coefficients included in the quadrangular scanning area in a predetermined scanning order; entropy-encoding information on the basis of the scanned transform coefficient to generate entropy-encoded information; and generating a bitstream including the entropy-encoded information.

According to another aspect of the present disclosure, a computer-readable recording medium has recorded thereon a program for executing the method according to various embodiments.

Advantageous effects

According to various embodiments, by determining a scanning area based on various elements, scanning information related to the coefficients, and performing binarization/inverse binarization and context-model-based binary arithmetic encoding/decoding, the efficiency of entropy encoding/decoding can be improved. In particular, when entropy encoding and decoding information related to transform coefficients, a quadrilateral scan area including all significant transform coefficients within the current block may be determined, unnecessary scans may be reduced by scanning only the transform coefficients within the quadrilateral scan area, and entropy encoding and decoding efficiency may be improved by scanning mutually associated transform coefficients adjacently.

Drawings

Fig. 1a illustrates a block diagram of a video decoding device, according to various embodiments.

Fig. 1b illustrates a flow diagram of a video decoding method according to various embodiments.

Fig. 1c illustrates a block diagram of a video encoding device, according to various embodiments.

Fig. 1d illustrates a flow diagram of a video encoding method according to various embodiments.

Fig. 1e illustrates a block diagram of an image decoding section according to various embodiments.

Fig. 1f illustrates a block diagram of an image decoding section according to various embodiments.

Fig. 2 is a diagram for explaining a method of scanning intra-block transform coefficients according to an embodiment.

Fig. 3a is a diagram for explaining a method of scanning intra-block transform coefficients, according to another embodiment.

Fig. 3b is a diagram for explaining an operation of determining a coefficient group (sub-block) within a block and an operation performed in accordance with the coefficient group according to another embodiment.

Fig. 4 is a diagram for explaining a process of determining a context model for context-based binary arithmetic coding of information related to transform coefficients according to an embodiment.

Fig. 5 is a diagram for explaining a process of determining a context model for context-based binary arithmetic coding of information related to transform coefficients according to another embodiment.

Fig. 6a is a diagram for explaining a horizontal-first zigzag scanning order for scanning information on intra-block transform coefficients according to an embodiment.

Fig. 6b is a diagram for explaining a vertical-first zigzag scanning order for scanning information related to intra-block transform coefficients, according to an embodiment.

Fig. 7a is a diagram for explaining a horizontal scanning order in which information on intra-block transform coefficients is scanned, according to an embodiment.

Fig. 7b is a diagram for explaining a vertical scanning order in which information relating to intra-block transform coefficients is scanned, according to an embodiment.

Fig. 8 is a diagram for explaining a diagonal scanning order for scanning information relating to intra-block transform coefficients according to an embodiment.

Fig. 9a to 9c are diagrams for explaining a residual coding syntax structure according to an embodiment.

Fig. 9d to 9f are diagrams for explaining a residual coding syntax structure according to another embodiment.

Fig. 10 illustrates a process of dividing a current coding unit to determine at least one coding unit according to an embodiment.

Fig. 11 illustrates a process of determining at least one coding unit by dividing coding units having a non-square shape according to an embodiment.

Fig. 12 illustrates a process of dividing a coding unit based on at least one of block shape information and division shape information according to an embodiment.

Fig. 13 illustrates a method of determining a predetermined coding unit from an odd number of coding units according to an embodiment.

Fig. 14 illustrates an order in which a plurality of coding units are processed when a current coding unit is divided to determine the plurality of coding units according to an embodiment.

Fig. 15 illustrates a process of determining that a current coding unit is to be divided into an odd number of coding units when the coding units cannot be processed in a predetermined order according to an embodiment.

Fig. 16 illustrates a process of dividing a first coding unit to determine at least one coding unit according to an embodiment.

Fig. 17 illustrates that shapes into which a second coding unit having a non-square shape, determined by dividing a first coding unit, can be divided are limited when the second coding unit satisfies a predetermined condition, according to an embodiment.

Fig. 18 illustrates a process of dividing coding units having a square shape when the division shape information does not indicate that the coding units are divided into four square shapes, according to an embodiment.

Fig. 19 illustrates that the processing order among a plurality of coding units may be changed according to the division process of the coding units according to an embodiment.

Fig. 20 illustrates a process of determining a depth of a coding unit as a shape and a size of the coding unit change when a plurality of coding units are determined by recursively dividing the coding unit according to an embodiment.

Fig. 21 illustrates a depth and an index (PID) for distinguishing coding units, which may be determined according to the shape and size of the coding unit, according to an embodiment.

Fig. 22 illustrates determining a plurality of coding units from a plurality of specific data units included in a picture according to an embodiment.

Fig. 23 illustrates a processing block used as a reference for determining the determination order of reference coding units included in a picture according to an embodiment.

Best mode

A video decoding method according to various embodiments includes: determining a quadrilateral scan area comprising all significant transform coefficients within the current block; scanning information related to the transform coefficients within the quadrilateral scanning area in a predetermined scanning order; and obtaining a transform coefficient of the current block based on the scanned transform coefficient-related information; performing inverse quantization and inverse transformation on the transform coefficients of the current block to generate a residual block of the current block; and restoring the current block based on the generated residual block.

A video decoding apparatus according to various embodiments may include: an entropy decoding unit that determines a quadrangular scanning area including all significant transform coefficients within a current block, scans information on the transform coefficients within the quadrangular scanning area in a predetermined scanning order, and obtains the transform coefficients of the current block based on the scanned information on the transform coefficients; and an image restoring unit which performs inverse quantization and inverse transformation on the transform coefficients of the current block to generate a residual block of the current block, and restores the current block based on the generated residual block.

A video encoding method according to various embodiments may include: a step of obtaining a transform coefficient of a current block; a step of determining a quadrangular scanning area including all significant transform coefficients within the current block; a step of scanning information on transform coefficients included in the quadrangular scanning area in a predetermined scanning order; a step of performing entropy encoding on the basis of the information on the scanned transform coefficients to generate entropy-encoded information; and a step of generating a bitstream including the entropy-encoded information.

According to another aspect of the present disclosure, a computer-readable recording medium has recorded thereon a program for executing the method according to various embodiments.

Detailed Description

Hereinafter, "image" may indicate a still image or a moving image of a video, i.e., the video itself.

Hereinafter, "sample" indicates data that is assigned to a sampling position of an image and is to be processed. For example, a pixel in an image of the spatial domain may be a sample.

Hereinafter, "current block" may indicate a block of an image to be encoded or decoded.

Fig. 1a illustrates a block diagram of a video decoding device, according to various embodiments.

The image decoding apparatus according to various embodiments may include an entropy decoding unit 105 and an image restoration unit 120.

The entropy decoding unit 105 may obtain syntax element (syntax element) information received from a bitstream and entropy-decode the syntax element information. At this time, syntax element information received from the bitstream may be information on various syntax elements regarding the picture.

The entropy decoding unit 105 may obtain syntax element information related to a current intra-block transform coefficient from a bitstream and may entropy-decode the syntax element information related to the current intra-block transform coefficient. At this time, the current block may be an encoding unit usable in the process of encoding/decoding an image described with reference to fig. 10 to 23.

The entropy decoding unit 105 may scan the entropy-decoded syntax element information on the transform coefficients within the current block in a predetermined scanning order to obtain information on the transform coefficients within the current block. The predetermined scan order may be the reverse of one of various scan orders. Here, a reverse scan order may be an order in which pixels are scanned from the lower-right end of the block to the upper-left end of the block. The order in which the transform coefficients are scanned from the upper-left transform coefficient pixel to the transform coefficient located at the lower right may be referred to as a forward scan order, and the order in which the transform coefficients are scanned from the last transform coefficient located at the lower right to the upper left may be referred to as a reverse scan order. The syntax element information related to the transform coefficients within the current block may be flag information indicating whether a transform coefficient within the current block is greater than a predetermined value. At this time, the predetermined value may be an integer greater than or equal to 0. For example, it may be 0, 1, or 2.

Also, the syntax element information related to the transform coefficient within the current block may be syntax element information indicating a remaining level absolute value. The remaining level absolute value may indicate a difference between the absolute value of the level of the transform coefficient and the absolute value of a base level. The absolute value of the base level may be determined based on syntax element information indicating whether the absolute value of the transform coefficient is greater than a predetermined value. For example, the sum of the values of flag information indicating whether the absolute value of a transform coefficient is greater than 0 (or equivalently, whether the transform coefficient is a significant transform coefficient, where a significant transform coefficient is a transform coefficient whose absolute value is greater than 0) (Greater than 0 flag or sig_coeff_flag; hereinafter referred to as GT0 flag), flag information indicating whether it is greater than 1 (Greater than 1 flag or coeff_abs_level_greater1_flag; hereinafter referred to as GT1 flag), and flag information indicating whether it is greater than 2 (Greater than 2 flag or coeff_abs_level_greater2_flag; hereinafter referred to as GT2 flag) may be the absolute value of the base level. Here, the flag information indicating whether the absolute value of the transform coefficient is greater than the predetermined value may have a value of 1 when it is greater than the predetermined value, and may have a value of 0 when it is not greater than the predetermined value. A part of the flag information may not be available from the bitstream. According to an embodiment, when a flag indicating whether the size of a transform coefficient is greater than n (n is an integer) is referred to as GTn, a flag indicating whether it is greater than (n+1) is referred to as GT(n+1), and a flag indicating whether it is greater than (n+2) is referred to as GT(n+2), only the flags for the relatively smaller reference values n and n+1 among n, n+1, and n+2 (i.e., GTn and GT(n+1)) may be transmitted, without including GT(n+2) in the bitstream.

For example, when the GT0 flag information and the GT1 flag information indicate that the absolute value of the transform coefficient is greater than 0 and greater than 1, respectively, and the GT2 flag is not included in the bitstream, only the GT0 flag information and the GT1 flag information are used; since the absolute value of the transform coefficient is known to be greater than 1, the entropy decoding unit 105 determines the value obtained by subtracting 2 from the absolute value of the transform coefficient as the remaining level absolute value of the transform coefficient.

When the GT0 flag information, the GT1 flag information, and the GT2 flag information indicate that the absolute value is greater than 0, 1, and 2, respectively, the entropy decoding unit 105 may determine the value obtained by subtracting 3 from the absolute value of the transform coefficient as the remaining level absolute value of the transform coefficient, since the absolute value of the transform coefficient is greater than 2. In other words, the remaining level absolute value may indicate the difference between the absolute value of the significant transform coefficient and a predetermined absolute value determined based on the information indicating whether the absolute value of the transform coefficient is greater than a predetermined value.
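The following minimal sketch, under the naming assumptions of this description (GT0/GT1/GT2 flags and a remaining level), reconstructs the absolute value of a transform coefficient from the flags and the remaining level absolute value; the function name coeff_abs_level and the use of None to model a flag absent from the bitstream are illustrative choices, not the disclosed syntax.

```python
from typing import Optional

def coeff_abs_level(gt0: int, gt1: Optional[int], gt2: Optional[int],
                    remaining: int = 0) -> int:
    """Reconstruct |level| from GT0/GT1/GT2 flags plus the remaining level.

    A flag of None models the case where the flag is not present in the
    bitstream (e.g. GT2 is not signalled); the base level then stops at the
    last flag that was actually parsed.
    """
    if gt0 == 0:
        return 0                      # not a significant coefficient
    base = 1                          # GT0 == 1 -> |level| >= 1
    if gt1 is None:
        return base + remaining       # remaining carries |level| - 1
    base += gt1                       # GT1 == 1 -> |level| >= 2
    if gt1 == 0:
        return base
    if gt2 is None:
        return base + remaining       # remaining carries |level| - 2
    base += gt2                       # GT2 == 1 -> |level| >= 3
    if gt2 == 0:
        return base
    return base + remaining           # remaining carries |level| - 3

# Example from the text: GT0 = GT1 = 1 and GT2 absent, |level| = 2 + remaining.
assert coeff_abs_level(1, 1, None, remaining=3) == 5
# GT0 = GT1 = GT2 = 1: |level| = 3 + remaining.
assert coeff_abs_level(1, 1, 1, remaining=2) == 5
```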

The entropy decoding unit 105 may obtain syntax element information received from a bitstream, perform binary arithmetic decoding (bin arithmetic decoding) on the syntax element information, and may perform inverse binarization on an output generated by the binary arithmetic decoding (i.e., a binary string). At this time, the binary arithmetic decoding operation may be performed at the binary arithmetic decoding unit 110, and the inverse binarization operation may be performed at the inverse binarization unit 115.

The binary arithmetic decoding unit 110 may perform binary arithmetic decoding based on a predetermined context model (context model) on the syntax element information obtained from the bitstream. Here, the context model may be information related to the occurrence probability of a binary value (bin). The information on the occurrence probability of the binary value may include information (valMPS) indicating, among the two symbols 0 and 1, which one is the symbol having the relatively high occurrence probability (i.e., the Most Probable Symbol (MPS)), as opposed to the symbol having the relatively low occurrence probability (i.e., the Least Probable Symbol (LPS)), and information on the occurrence probability of one of the two symbols. The occurrence probability has a value between 0 and 1. Since the occurrence probability of the other symbol is the residual probability obtained by subtracting the determined occurrence probability from 1, when the occurrence probability of one of the MPS and the LPS is determined, the binary arithmetic decoding unit 110 may determine the occurrence probability of the remaining symbol. At this time, the occurrence probability determined first may be the occurrence probability of the LPS (Least Probable Symbol). The occurrence probability corresponding to each index value may be predetermined in a table, and the information on the occurrence probability of the symbol may be information (pStateIdx) indicating the index, in the table, of the determined occurrence probability of the symbol.

The predetermined context model may be determined based on a binary index (bin index) indicating the position of the binary value, occurrence probabilities of binary values included in a neighboring block of the block including the binary value, various elements of the current block or the neighboring blocks, and the like.

Alternatively, the binary arithmetic decoding unit 110 may perform binary arithmetic decoding on syntax element information obtained from the bitstream according to a bypass (by-pass) mode. At this time, for the binary value that has been subjected to the binary arithmetic decoding at present, the probability of obtaining 0 or 1 is fixed to 0.5, and the binary arithmetic decoding may be performed on the syntax element information based on such probability.
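A highly simplified sketch of the bookkeeping described above follows: a context model carries the MPS value (valMPS) and a probability-state index (pStateIdx) into a predetermined table of LPS probabilities, while bypass-coded bins use a fixed probability of 0.5. The class and field names are illustrative, and the actual arithmetic decoding, renormalization, and state-transition rules are omitted.

```python
from dataclasses import dataclass

@dataclass
class ContextModel:
    """Bookkeeping described above: the MPS value and an index into a
    predefined table of LPS occurrence probabilities."""
    val_mps: int = 0       # valMPS: value (0 or 1) of the most probable symbol
    p_state_idx: int = 0   # pStateIdx: index of the LPS occurrence probability

    def lps_probability(self, lps_table) -> float:
        # The LPS probability is looked up from a predetermined table;
        # the MPS probability is simply 1 - p(LPS).
        return lps_table[self.p_state_idx]

# Bypass-coded bins do not use a context: 0 and 1 are both assumed to occur
# with probability 0.5.
BYPASS_PROBABILITY = 0.5

ctx = ContextModel(val_mps=1, p_state_idx=3)   # illustrative initialization
```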

The inverse binarization unit 115 may perform inverse binarization on an output value (i.e., a binary string) generated by performing binary arithmetic decoding. The inverse binarization unit 115 may perform inverse binarization on the binary string based on an inverse binarization method corresponding to a predetermined binarization method. The predetermined binarization methods may include a Fixed Length binarization method, a Rice binarization method, an Exponential Golomb (Exp-Golomb) binarization method, and a Golomb-Rice binarization method. Alternatively, the predetermined binarization method may be a binarization method in which a first binarization method and a second binarization method are combined with each other. For example, the inverse binarization unit 115 may perform inverse binarization on a part of the binary string of the syntax element (i.e., a first binary string) based on an inverse binarization method corresponding to the first binarization method, and perform inverse binarization on another part of the binary string of the syntax element (i.e., a second binary string) based on an inverse binarization method corresponding to the second binarization method. The part of the binary string may be a prefix (prefix) or a suffix (suffix) of the binary string of the syntax element.

The binarization method or the inverse binarization method is related to a code word that specifies a one-to-one (1: 1) correspondence of a binary string (bin string) comprising at least one binary value corresponding to a syntax element value. In terms of encoding, a binary string including at least one binary value corresponding to a syntax element value may be determined according to one of the above-described various manners of binarization methods, and in terms of decoding, a syntax element value corresponding to a binary string may be determined according to a reverse binarization method. For example, when a binary string a corresponding to a syntax element value a (a is a real number) is determined according to a predetermined binarization/inverse binarization method, a process of determining the binary string a with reference to the syntax element value a is referred to as a binarization process, and a process of determining the syntax element value a with reference to the binary string a may be referred to as an inverse binarization process. However, as described above, binarization and inverse binarization are basically both used to specify a mapping relationship between a syntax element value and a binary string, and those skilled in the art will readily understand that binarization and inverse binarization are basically the same.
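As a sketch of the one-to-one mapping between syntax element values and bin strings discussed above, the following uses the truncated unary method mentioned in this disclosure; the cMax parameter and the string representation of bins are assumptions made for readability.

```python
def tu_binarize(value: int, c_max: int) -> str:
    """Truncated unary: `value` ones followed by a terminating zero,
    except that the codeword for c_max drops the trailing zero."""
    if value < c_max:
        return "1" * value + "0"
    return "1" * c_max

def tu_inverse_binarize(bins: str, c_max: int) -> int:
    """Inverse binarization: count leading ones, up to at most c_max."""
    value = 0
    for b in bins:
        if b == "0" or value == c_max:
            break
        value += 1
    return value

# One-to-one mapping between syntax element values and bin strings.
for v in range(6):
    assert tu_inverse_binarize(tu_binarize(v, c_max=5), c_max=5) == v
```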

The entropy decoding unit 105 may perform entropy decoding by scanning the syntax element information on the transform coefficients within the current block in a predetermined scanning order, thereby obtaining information on the transform coefficients within the current block. The predetermined scan order may be an order according to a reverse zigzag scan or an order according to a reverse diagonal scan. However, without being limited thereto, the predetermined scan order may be one of various scan orders, such as an order according to a reverse horizontal scan and an order according to a reverse vertical scan. The predetermined scan order may be determined based on at least one of the following items: the horizontal-direction coordinate (for example, the x coordinate (x is an integer) of the rectangular coordinate system) of the significant transform coefficient pixel located at the rightmost side in the current block and the vertical-direction coordinate (for example, the y coordinate (y is an integer) of the rectangular coordinate system) of the significant transform coefficient pixel located at the lowermost side in the current block. For example, the entropy decoding unit 105 may determine the predetermined scanning order based on the magnitude of the horizontal-direction coordinate value and the magnitude of the vertical-direction coordinate value. When the horizontal-direction coordinate value is greater than the vertical-direction coordinate value, the entropy decoding unit 105 may determine the reverse vertical scanning order as the predetermined scanning order. When the vertical-direction coordinate value is greater than the horizontal-direction coordinate value, the entropy decoding unit 105 may determine the reverse horizontal scanning order as the predetermined scanning order.

Alternatively, when the horizontal-direction coordinate value is greater than the vertical-direction coordinate value, the entropy decoding unit 105 may determine a reverse vertical-first zigzag scan order (vertical first zigzag scan order) as the predetermined scan order. Details regarding the vertical-first zigzag scan order are described with reference to Fig. 6b. When the vertical-direction coordinate value is greater than the horizontal-direction coordinate value, the entropy decoding unit 105 may determine a reverse horizontal-first zigzag scan order (horizontal first zigzag scan order) as the predetermined scan order. Details regarding the horizontal-first zigzag scan order are described with reference to Fig. 6a. When the vertical-direction coordinate value is equal to the horizontal-direction coordinate value, the entropy decoding unit 105 may determine one of the reverse vertical-first zigzag scanning order and the reverse horizontal-first zigzag scanning order as the predetermined scanning order.

Alternatively, when the horizontal direction coordinate value is greater than the vertical direction coordinate value, the entropy decoding unit 105 may determine a reverse horizontal first zigzag scanning order (horizontal first zigzag scan order) as the predetermined scanning order. When the horizontal direction coordinate value is not greater than the vertical direction coordinate value, the entropy decoding unit 105 may determine a reverse vertical first zigzag scan order (vertical first zigzag scan order) as a predetermined scan order.
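The sketch below implements the first of the alternatives described above (a reverse vertical scan when the horizontal coordinate SRx is larger, a reverse horizontal scan when the vertical coordinate SRy is larger); the tie-breaking choice and the function names are assumptions for illustration only.

```python
from typing import List, Tuple

def horizontal_scan(width: int, height: int) -> List[Tuple[int, int]]:
    """Forward horizontal scan: row by row, left to right."""
    return [(x, y) for y in range(height) for x in range(width)]

def vertical_scan(width: int, height: int) -> List[Tuple[int, int]]:
    """Forward vertical scan: column by column, top to bottom."""
    return [(x, y) for x in range(width) for y in range(height)]

def reverse_scan_order(srx: int, sry: int) -> List[Tuple[int, int]]:
    """Pick a reverse scan order for the (srx+1) x (sry+1) scan area,
    following the first alternative described above."""
    width, height = srx + 1, sry + 1
    if srx > sry:
        order = vertical_scan(width, height)    # reverse vertical scan
    elif sry > srx:
        order = horizontal_scan(width, height)  # reverse horizontal scan
    else:
        # The equal case is not specified by that alternative; horizontal
        # is chosen here arbitrarily.
        order = horizontal_scan(width, height)
    return list(reversed(order))

# Example: scan area whose bottom-right corner is (SRx, SRy) = (3, 1).
print(reverse_scan_order(3, 1)[:4])   # starts from the bottom-right pixel (3, 1)
```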

Prior to scanning the information related to the transform coefficients, entropy decoding unit 105 may determine a quadrilateral scanning area that includes all significant transform coefficients within the current block. At this time, all significant transform coefficients within the current block are included within the quadrangular scanning area, and the remaining areas within the current block other than the quadrangular scanning area may include only transform coefficients of 0 which are not significant transform coefficients.

The entropy decoding unit 105 may obtain information on coordinates specifying the quadrangular scanning area from the bitstream. The information on the coordinates specifying the quadrangular scanning area may include information on the horizontal-direction coordinate of the significant transform coefficient located at the rightmost side within the current block and information on the vertical-direction coordinate of the significant transform coefficient located at the lowermost side within the current block. However, without being limited thereto, the information on the coordinates specifying the quadrangular scanning area may include only information on the larger of the horizontal-direction coordinate value of the significant transform coefficient located at the rightmost side within the current block and the vertical-direction coordinate value of the significant transform coefficient located at the lowermost side within the current block. In this case, when the specified quadrangular scanning area is a square scanning area, the coordinate value for one direction obtained from the bitstream may be used to determine the coordinate value for the other direction, and the square scanning area may be determined based on the determined horizontal-direction coordinate value and vertical-direction coordinate value. At this time, the determined horizontal-direction coordinate value and vertical-direction coordinate value may indicate the coordinate values of the pixel located at the lower-right corner of the quadrangular scanning area. In other words, when only the coordinate for one direction is obtained from the bitstream, the entropy decoding unit 105 may determine that the specified scanning area is a square, may determine the coordinate values specifying the square scanning area based on the obtained coordinate value for one direction, and may determine the square scanning area based on the coordinate values specifying the square scanning area.

The entropy decoding unit 105 may determine a quadrangular scanning area including all significant transform coefficients within the current block based on information on coordinates specifying the quadrangular scanning area.
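The following sketch shows, under illustrative naming assumptions, how the encoder-side coordinates (SRx, SRy) of such a scanning area can be computed as the bounding rectangle of the non-zero coefficients, and how the decoder-side property stated above (everything outside the area is zero) holds.

```python
import numpy as np

def scan_area_coordinates(block: np.ndarray):
    """Encoder-side view: (SRx, SRy) of the smallest rectangle anchored at the
    top-left corner of the block containing every significant coefficient."""
    ys, xs = np.nonzero(block)
    if len(xs) == 0:
        return None                      # no significant coefficients
    return int(xs.max()), int(ys.max())  # rightmost x, lowermost y

block = np.zeros((8, 8), dtype=int)
block[0, 0], block[1, 3], block[2, 1] = 7, -2, 1
srx, sry = scan_area_coordinates(block)
# Decoder-side view: the (srx+1) x (sry+1) region holds all significant
# coefficients; everything outside it is known to be zero.
assert np.count_nonzero(block[sry + 1:, :]) == 0
assert np.count_nonzero(block[:, srx + 1:]) == 0
print(srx, sry)   # 3 2
```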

The entropy decoding unit 105 may perform context model-based binary arithmetic decoding on information related to coordinates specifying a quadrangular scanning area. The entropy decoding unit 105 may perform inverse binarization on the binary arithmetically decoded information based on an inverse binarization method corresponding to a predetermined binarization method to obtain inverse binarization information about coordinates specifying a quadrangular scanning area, and may obtain the coordinates specifying the quadrangular scanning area from the inverse binarization information about the coordinates. At this time, the context model may be determined based on one of a size of the current block, a color component (color component) of the current block, and a binary index (bin index). The color components may include a luminance component and a color difference component. The binary index may be information indicating a position of a binary to be currently binary arithmetic-decoded in a binary string related to the syntax element. The inverse binarization method corresponding to the predetermined binarization method may be at least one of a fixed length (fixed length) inverse binarization method and a truncated unary (truncated unary) inverse binarization method. For example, the entropy decoding unit 105 may perform inverse binarization using a fixed length inverse binarization method on a first binary string in the binary arithmetic decoded information to obtain first inverse binarization information, and perform inverse binarization using a truncated unary code inverse binarization method on a second binary string in the binary arithmetic decoded information to obtain second inverse binarization information. The entropy decoding unit 105 can obtain the coordinates specifying the quadrangular scanning area based on the first inverse binarization information and the second inverse binarization information.
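A possible way to combine the two inverse-binarized parts into a coordinate value is sketched below. The grouping rule (a truncated-unary-coded prefix selecting a range, and a fixed-length suffix selecting the offset within it) mirrors HEVC's last-significant-coefficient position coding and is only an assumption; the disclosure itself states only that fixed-length and truncated unary inverse binarization may be combined.

```python
def decode_coordinate(prefix: int, suffix_bits: str) -> int:
    """Combine a truncated-unary prefix and a fixed-length suffix into a
    coordinate value (HEVC-style grouping, used here only as an example)."""
    if prefix < 4:
        return prefix                          # small values need no suffix
    suffix_len = (prefix >> 1) - 1             # fixed-length part
    suffix = int(suffix_bits, 2) if suffix_bits else 0
    return ((2 + (prefix & 1)) << suffix_len) + suffix

# prefix 5 with one suffix bit '1' -> ((2 + 1) << 1) + 1 = 7
print(decode_coordinate(5, "1"))   # 7
```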

The entropy decoding unit 105 may perform at least one of binary arithmetic decoding and inverse binarization based on a context model related to the transform coefficient to obtain the transform coefficient of the current block based on information related to the effective transform coefficient scanned in the quadrangular scan area.

When the information related to the transform coefficient is the information indicating whether the absolute value of the current transform coefficient is greater than a predetermined value, the remaining level absolute value information, and the sign information of the current transform coefficient, the entropy decoding unit 105 may perform context-model-based binary arithmetic decoding on the scanned information related to the transform coefficient. The entropy decoding unit 105 may perform inverse binarization on the binary arithmetic decoded information related to the transform coefficient to obtain the transform coefficient of the current block. When the first information related to the transform coefficient is the information indicating whether the absolute value of the current transform coefficient is greater than a predetermined value, the remaining level absolute value information, and the sign information of the current transform coefficient, and the second information related to the transform coefficient is binarization parameter information related to the current transform coefficient, the entropy decoding unit 105 may perform context-model-based binary arithmetic decoding on the first information related to the transform coefficient.

The entropy decoding unit 105 may perform inverse binarization based on binarization parameter information included in the second information on the binary arithmetic decoded first information to obtain transform coefficients of the current block. The binarization Parameter information may be Rice Parameter (Rice Parameter) information for the current transform coefficient. The rice parameter may be information for determining the length of a prefix included in the binary string. However, without being limited thereto, the binarization parameter information may be various binarization parameter information for the current transform coefficient.

When the first information on the transform coefficient is the remaining level absolute value information of the current transform coefficient, the entropy decoding unit 105 may obtain the value of the remaining level absolute value of the current transform coefficient using an inverse binarization method corresponding to a (truncated) Rice binarization method and an inverse binarization method corresponding to an Exponential Golomb binarization method for the remaining level absolute value information of the current transform coefficient. For example, the entropy decoding unit 105 may perform inverse binarization, using an inverse binarization method corresponding to the Rice binarization method based on a Rice parameter, on a prefix (prefix) of the binary string of the remaining level absolute value information of the current transform coefficient to obtain a first value related to the remaining level absolute value of the current transform coefficient; may perform inverse binarization, using an inverse binarization method corresponding to the Exponential Golomb method, on a suffix (suffix) of the binary string of the remaining level absolute value information to obtain a second value related to the remaining level absolute value of the current transform coefficient; and may obtain the value of the remaining level absolute value of the current transform coefficient based on the first value and the second value.
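A simplified sketch of Golomb-Rice decoding of the remaining level is given below: a unary prefix gives the quotient and k fixed-length bits give the remainder. The escape to an Exponential Golomb suffix for large values, described above, is omitted here, and the bit-string interface is an assumption for illustration.

```python
def read_unary(bits: str, pos: int):
    """Count ones up to the terminating zero; return (count, next position)."""
    count = 0
    while pos < len(bits) and bits[pos] == "1":
        count += 1
        pos += 1
    return count, pos + 1          # skip the terminating '0'

def decode_rice(bits: str, k: int) -> int:
    """Golomb-Rice decoding with parameter k: a unary quotient (prefix)
    followed by a k-bit fixed-length remainder (suffix)."""
    q, pos = read_unary(bits, 0)
    r = int(bits[pos:pos + k], 2) if k else 0
    return (q << k) + r

# k = 1: bins '110' + '1' -> quotient 2, remainder 1 -> value 5
print(decode_rice("1101", 1))   # 5
```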

Among the context models related to the transform coefficients, the context model related to the first transform coefficient may be determined based on at least one of: information on at least one second transform coefficient, a position and a color component of a first transform coefficient within a current block, information on surrounding transform coefficients on the right or lower side, and a scanning position of the first transform coefficient, which are previously scanned in a predetermined scanning order.

For example, the context model related to the flag information indicating whether the first transform coefficient is greater than 0 may be determined based on the number of transform coefficients whose absolute value is greater than 0 among the transform coefficients at predetermined positions on the right side or lower side of the first transform coefficient.

However, without being limited thereto, the context model related to the flag information indicating whether the first transform coefficient is greater than 0 may be determined based on the number of significant transform coefficients having an absolute value greater than 0 among n (n is a positive integer) transform coefficients previously scanned in a predetermined scanning order.

Alternatively, the context model related to the flag information indicating whether the first transform coefficient is greater than 0 may be determined based on the position of the first transform coefficient within the corresponding coefficient group and the flag of the neighboring significant coefficient group on the right or lower side corresponding thereto.

Alternatively, the flag information indicating whether the first transform coefficient is greater than 0 (GT0 flag information) may be determined based on at least one of the following items: GT0 flag information for n (n is a positive integer) transform coefficients previously scanned in a predetermined scan order, the position and color component of the first transform coefficient within the current block and GT0 flag information for n (n is a positive integer) neighboring transform coefficients on the right or lower side, whether the first transform coefficient is the transform coefficient at the initial position in the scan area in the scan order, and whether the first transform coefficient is the transform coefficient at the final position in the scan area in the scan order.
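The sketch below illustrates the first option above: deriving a context index for the GT0 flag from the number of significant coefficients among already-decoded neighbours to the right and below. The five-position neighbour template and the number of contexts are assumptions; the text only states that the count of significant right/lower neighbours may drive the choice.

```python
import numpy as np

def gt0_context_index(levels: np.ndarray, x: int, y: int, num_ctx: int = 5) -> int:
    """Context index for the GT0 flag of the coefficient at (x, y), derived
    from already-decoded neighbours to the right and below (those positions
    come earlier in a reverse scan)."""
    h, w = levels.shape
    neighbours = [(x + 1, y), (x + 2, y), (x, y + 1), (x, y + 2), (x + 1, y + 1)]
    count = sum(1 for nx, ny in neighbours
                if nx < w and ny < h and abs(levels[ny, nx]) > 0)
    return min(count, num_ctx - 1)
```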

The context model related to the flag information indicating whether the absolute value of the first transform coefficient is greater than 1 may be determined based on the number of significant transform coefficients whose absolute value is greater than 1 among the transform coefficients at predetermined positions on the right or lower side.

However, without being limited thereto, the context model related to the flag information indicating whether the absolute value of the first transform coefficient is greater than 1 may be determined based on the number of significant transform coefficients having an absolute value greater than 1 among n (n is a positive integer) transform coefficients previously scanned in a predetermined scanning order.

Alternatively, a context model related to the flag information indicating whether the absolute value of the first transform coefficient is greater than 1 may be determined based on GT1 flag information previously decoded within the corresponding coefficient group and GT1 flag information within a previously decoded group.

Alternatively, the flag information indicating whether the absolute value of the first transform coefficient is greater than 1 may be determined based on at least one of the following items: GT1 flag information for n (n is a positive integer) transform coefficients previously scanned in a predetermined scanning order, the position and color component of the first transform coefficient within the current block and GT1 flag information for n (n is a positive integer) adjacent transform coefficients on the right or lower side, whether the first transform coefficient is the transform coefficient at the initial position within the scanning region, and whether the first transform coefficient is the transform coefficient at the final position within the scanning region.

The context model related to the flag information indicating whether the absolute value of the first transform coefficient is greater than 2 may be determined based on the number of significant transform coefficients on the right or lower side of which the absolute value is greater than 2.

But not limited thereto, the context model related to the flag information indicating whether the first transform coefficient is greater than 2 may be determined based on the number of significant transform coefficients having an absolute value greater than 2 among n (n is a positive integer) transform coefficients previously scanned in a predetermined scan order.

Alternatively, a context model related to the flag information indicating whether the absolute value of the first transform coefficient is greater than 2 may be determined based on GT2 flag information previously decoded within the corresponding coefficient group and GT2 flag information within a previously decoded group.

Alternatively, the flag information indicating whether the absolute value of the first transform coefficient is greater than 2 (hereinafter referred to as GT2 flag information) may be determined based on at least one of the following items: GT2 flag information for n (n is a positive integer) transform coefficients previously scanned in a predetermined scanning order, the position and color component of the first transform coefficient within the current block and GT2 flag information for n (n is a positive integer) adjacent transform coefficients on the right or lower side, whether the first transform coefficient is the transform coefficient at the initial position within the scanning region, and whether the first transform coefficient is the transform coefficient at the final position within the scanning region.

Alternatively, flag information indicating whether the first transform coefficient is greater than m (m is an integer greater than 2) may be determined based on at least one of: GTm flag information of n (n is a positive integer) transform coefficients previously scanned in a predetermined scanning order, GTm flag information of the position, color component, and adjacent right or lower n (n is a positive integer) transform coefficients of the first transform coefficient in the current block, whether the first transform coefficient is a transform coefficient at an initial position in the scanning area, and whether the first transform coefficient is a transform coefficient at a final position in the scanning area.

The binarization parameter information related to the remaining level value of the first transform coefficient may be determined based on the level absolute value of the surrounding significant transform coefficient on the right or lower side of the first transform coefficient.

For example, the binarization parameter information related to the first transform coefficient may be determined based on the sum of the level absolute values of predetermined surrounding significant transform coefficients to the right or lower side of the first transform coefficient.

Alternatively, binarization parameter information regarding the remaining level values of the first transform coefficient may be determined based on a level value previously encoded.

Alternatively, the binarization parameter information related to the first transform coefficient may be determined based on at least one of the following: the levels of n (n is an integer) transform coefficients previously scanned in the predetermined scan order; the position of the first transform coefficient in the current block; the color component; the levels of the n (n is a positive integer) transform coefficients adjacent to the right or lower side; whether the first transform coefficient is the transform coefficient at the initial position within the scan area in the scan order; whether the first transform coefficient is the transform coefficient at the final position within the scan area in the scan order; whether the first transform coefficient is the coefficient at the initial position within a coefficient group in the scan order; and the relative position of the first transform coefficient within the scan area.
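For instance, the dependence on surrounding absolute levels could be realised as in the following C++ sketch, which maps the sum of the absolute levels of the right and lower neighbours to a small Rice-style binarization parameter. The function name riceParamFromNeighbours and the concrete thresholds are assumptions for illustration only.

    // Hypothetical binarization (Rice) parameter for the coefficient at (x, y), derived
    // from the absolute levels of its right and lower neighbours, which are already
    // known when decoding in a reverse scan order. absLevel is row-major; positions not
    // yet decoded are assumed to hold 0.
    int riceParamFromNeighbours(const int* absLevel, int width, int height, int x, int y) {
        int sum = 0;
        if (x + 1 < width)  sum += absLevel[y * width + (x + 1)];
        if (y + 1 < height) sum += absLevel[(y + 1) * width + x];
        if (sum < 3)  return 0;   // the thresholds below are illustrative assumptions
        if (sum < 10) return 1;
        if (sum < 30) return 2;
        return 3;
    }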

When the position of the transform coefficient currently scanned within the scan area is [SRx, 0] (SRx is an integer indicating the horizontal direction coordinate value of the pixel at the right side boundary of the scan area with the upper left corner coordinate of the scan area as a reference), and the transform coefficients at the positions [SRx, Y] (Y is an integer greater than 0 and less than or equal to SRy, where SRy indicates the vertical direction coordinate value of the pixel at the lower side boundary of the scan area with the upper left corner coordinate of the scan area as a reference) previously scanned in the predetermined scan order are all coefficients of 0, the entropy decoding unit 105 may determine the value of the GT0 flag information to be 1 without obtaining the GT0 flag information of the transform coefficient currently scanned from the bitstream.

Similarly, when the position of the currently scanned transform coefficient within the scan area is [0, SRy] (SRy is an integer indicating the vertical direction coordinate value of the pixel at the lower side boundary of the scan area with the upper left corner coordinate of the scan area as a reference), and the transform coefficients at the positions [X, SRy] (X is an integer greater than 0 and less than or equal to SRx, where SRx indicates the horizontal direction coordinate value of the pixel at the right side boundary of the scan area with the upper left corner coordinate of the scan area as a reference) previously scanned in the predetermined scan order are all coefficients of 0, the entropy decoding unit 105 may determine the value of the GT0 flag information as 1 without obtaining the GT0 flag information of the currently scanned transform coefficient from the bitstream.
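A minimal C++ sketch of this inference rule is given below; the function name gt0FlagInferredAsOne and the array layout are assumptions made for illustration.

    // Returns true when the GT0 flag of the coefficient at (x, y) can be inferred to be 1
    // instead of being read from the bitstream. absLevel holds the absolute levels decoded
    // so far (0 for positions not yet decoded), stride is the row stride of the block, and
    // (SRx, SRy) is the lower-right corner of the scan region relative to its upper-left corner.
    bool gt0FlagInferredAsOne(const int* absLevel, int stride, int SRx, int SRy, int x, int y) {
        if (x == SRx && y == 0) {
            // Right boundary column: the column must contain at least one significant
            // coefficient, so if every [SRx, Y] with 0 < Y <= SRy is zero, [SRx, 0] is significant.
            for (int Y = 1; Y <= SRy; ++Y)
                if (absLevel[Y * stride + SRx] != 0) return false;
            return true;
        }
        if (x == 0 && y == SRy) {
            // Lower boundary row: same reasoning for [X, SRy] with 0 < X <= SRx.
            for (int X = 1; X <= SRx; ++X)
                if (absLevel[SRy * stride + X] != 0) return false;
            return true;
        }
        return false;
    }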

In addition, the entropy decoding unit 105 may determine the maximum number of GT1 flag information for the significant transform coefficients within the current block, and may receive, from the bitstream, GT1 flag information for up to the determined maximum number of significant transform coefficients within the current block. In other words, after the entropy decoding unit 105 receives GT1 flag information for up to the maximum number of significant transform coefficients from the bitstream, the entropy decoding unit 105 may no longer check whether GT1 flag information of a significant transform coefficient exists in the bitstream.

The entropy decoding unit 105 may determine the number of all non-zero significant transform coefficients within the current block as the maximum number of GT1 flag information. Alternatively, the entropy decoding unit 105 may determine the maximum number of GT1 flag information within the current block based on the size of the scan area. For example, the entropy decoding unit 105 may determine the maximum number of GT1 flag information, MaxCount_GT1, based on the following mathematical formula 1.

[Mathematical Formula 1]

MaxCount_GT1 = f(sizeSR, K1, Th1)

At this time, sizeSR may refer to the size (area) of the quadrangular scanning region, and sizeSR may be (Sr_x + 1) × (Sr_y + 1). Sr_x may indicate the horizontal direction coordinate of the rightmost significant transform coefficient pixel with the coordinate of the upper left corner of the scan area as a reference. In other words, Sr_x may indicate the horizontal direction coordinate of the pixel at the right side boundary with the coordinate of the upper left corner of the scan area as a reference. Sr_y may indicate the vertical direction coordinate of the lowermost significant transform coefficient pixel with the coordinate of the upper left corner of the scan area as a reference. In other words, Sr_y may indicate the vertical direction coordinate of the pixel at the lower boundary with the coordinate of the upper left corner of the scan area as a reference.

K1 may be an adjustment factor relating the size of the scan area to the number of GT1 flags. For example, K1 may be an integer greater than 1. Th1 may be a predetermined threshold. For example, Th1 may be 16 or 8. However, without being limited thereto, those skilled in the art should readily appreciate that K1 and Th1 may have various values.
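Since the exact form of Mathematical Formula 1 is given only as an image in the source, the C++ sketch below is merely one hypothetical instantiation consistent with the description of sizeSR, K1, and Th1 above; the division by K1 and the use of Th1 as an upper bound are assumptions, not the formula of the method.

    // Hypothetical instantiation of Mathematical Formula 1 (the real formula may differ):
    // the maximum number of GT1 flags grows with the scan-region size, scaled by K1 and
    // assumed here to be bounded by Th1.
    int maxCountGt1(int srX, int srY, int k1, int th1) {
        int sizeSR = (srX + 1) * (srY + 1);   // size (area) of the quadrilateral scan region
        int scaled = sizeSR / k1;             // K1 relates the region size to the flag count
        return scaled < th1 ? scaled : th1;   // Th1 used as an assumed upper bound
    }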

The entropy decoding unit 105 may determine the maximum number of GT2 flag information within the current block, and may receive, from the bitstream, GT2 flag information for up to the determined maximum number of significant transform coefficients within the current block. In other words, after the entropy decoding unit 105 receives GT2 flag information for up to the maximum number of significant transform coefficients from the bitstream, the entropy decoding unit 105 may no longer check whether GT2 flag information of a significant transform coefficient exists in the bitstream.

The entropy decoding unit 105 may determine the number of all non-zero significant transform coefficients within the current block as the maximum number of GT2 flag information. Alternatively, the entropy decoding unit 105 may determine the maximum number of GT2 flag information within the current block based on the size of the scan area. For example, the entropy decoding unit 105 may determine the maximum number of GT2 flag information, MaxCount_GT2, based on the following mathematical formula 2.

[Mathematical Formula 2]

MaxCount_GT2 = f(sizeSR, K2, Th2)

At this time, sizeSR may refer to the size (area) of the quadrangular scanning region, and sizeSR may be (Sr_x + 1) × (Sr_y + 1). Sr_x may indicate the horizontal direction coordinate of the rightmost significant transform coefficient pixel with the coordinate of the upper left corner of the scan area as a reference. In other words, Sr_x may indicate the horizontal direction coordinate of the pixel at the right side boundary with the coordinate of the upper left corner of the scan area as a reference. Sr_y may indicate the vertical direction coordinate of the lowermost significant transform coefficient pixel with the coordinate of the upper left corner of the scan area as a reference. In other words, Sr_y may indicate the vertical direction coordinate of the pixel at the lower boundary with the coordinate of the upper left corner of the scan area as a reference. K2 may be an adjustment factor relating the size of the scan area to the number of GT2 flags. For example, K2 may be an integer greater than 1. Th2 may be a predetermined threshold. For example, Th2 may be 16 or 8. However, without being limited thereto, those skilled in the art should readily appreciate that K2 and Th2 may have various values.

The entropy decoding unit 105 may obtain information on a coefficient group within the current block from the bitstream. Here, the information on the coefficient group may be flag information indicating whether at least one significant transform coefficient is included in the coefficient group (or whether the coefficient group includes only transform coefficients that are 0). The flag information may be referred to as significant coefficient group flag information. The entropy decoding unit 105 may scan information on the transform coefficients within the current coefficient group based on the information on the current coefficient group obtained from the bitstream. For example, when the information on the coefficient group indicates that the coefficient group does not include at least one significant transform coefficient, the entropy decoding unit 105 may infer the values of the transform coefficients within the coefficient group to be 0 without scanning the information on the transform coefficients. When the information on the coefficient group indicates that at least one significant transform coefficient is included in the coefficient group, the entropy decoding unit 105 may scan the information on the transform coefficients in the predetermined scan order to obtain the transform coefficients within the coefficient group.

At this time, for the transform coefficients included in the scan region, the entropy decoding unit 105 may determine one coefficient group for every K (K is an integer) transform coefficients scanned in a predetermined scan order. In other words, one coefficient group may include K transform coefficients. At this time, the scan order is a forward scan order in a direction opposite to the predetermined reverse scan order.

Accordingly, the entropy decoding unit 105 may determine the coefficient groups by scanning in the forward scan order from the coefficient adjacent to the upper left corner within the current block (i.e., the DC coefficient). At this time, when the number of transform coefficients included in the current block is not an integer multiple of K, the final coefficient group among the coefficient groups scanned in the forward scan order may include fewer than K transform coefficients.
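The grouping just described can be illustrated by the following C++ sketch, which partitions the scan positions of a scan region, listed in forward scan order, into groups of K positions; the container layout and the function name buildCoefficientGroups are assumptions for illustration.

    #include <vector>

    // Splits the scan positions of a scan region, given in forward scan order, into
    // coefficient groups of K positions each; the final group may contain fewer than K
    // positions when the number of coefficients is not an integer multiple of K.
    // K is assumed to be a positive integer.
    std::vector<std::vector<int>> buildCoefficientGroups(const std::vector<int>& scanOrder, int K) {
        std::vector<std::vector<int>> groups;
        for (int i = 0; i < static_cast<int>(scanOrder.size()); ++i) {
            if (i % K == 0) groups.emplace_back();   // start a new group every K positions
            groups.back().push_back(scanOrder[i]);
        }
        return groups;
    }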

However, without being limited thereto, those skilled in the art should readily appreciate that the entropy decoding unit 105 may determine the coefficient groups by scanning in a reverse scan order from the coefficient located at the lower right end to the coefficient located at the upper left end among the transform coefficients included in the current block.

However, without being limited thereto, those skilled in the art should readily appreciate that the entropy decoding unit 105 may determine the coefficient groups in a predetermined scan order without obtaining information about the coefficient groups within the current block from the bitstream. The reason for determining the coefficient groups is to allow the entropy decoding unit 105 to perform processing carried out per coefficient group (such as sign data hiding). When one coefficient group among the coefficient groups scanned in the predetermined scanning order includes only one transform coefficient, the entropy decoding unit 105 may obtain the GT0 flag information from the bitstream instead of obtaining information about that coefficient group from the bitstream.

The entropy decoding unit 105 may determine whether the sign of at least one transform coefficient is hidden for each coefficient group included in the current block. For example, the entropy decoding unit 105 may determine that the signs of one or two transform coefficients are hidden for each coefficient group included within the current block.
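The document does not fix a concrete hiding rule, so the C++ sketch below assumes an HEVC-style convention in which the hidden sign of one coefficient per group is derived from the parity of the sum of absolute levels in that group; this convention and the function name inferHiddenSign are illustrative assumptions only.

    // Infers the hidden sign for one coefficient of a coefficient group, assuming an
    // HEVC-style parity rule: an even sum of absolute levels means a positive sign,
    // an odd sum means a negative sign. The rule itself is an assumption, not a
    // requirement of the method described above.
    int inferHiddenSign(const int* absLevelsInGroup, int groupSize) {
        int sum = 0;
        for (int i = 0; i < groupSize; ++i)
            sum += absLevelsInGroup[i];
        return (sum % 2 == 0) ? +1 : -1;   // +1: positive, -1: negative
    }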

The entropy decoding unit 105 may scan information about the transform coefficients within the quadrangular scanning area in a predetermined scanning order. The predetermined scan order may include a reverse zigzag scan order, a reverse diagonal scan order, a reverse vertical scan order, and a reverse horizontal scan order. However, it should be easily understood by those skilled in the art that the predetermined scan order is not limited to the reverse scan orders mentioned above but may include various reverse scan orders.
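As one example of such an order, the following C++ sketch enumerates the positions of a quadrilateral scan region in a reverse diagonal scan order, starting from the lower-right corner and ending at the DC position; the traversal direction within each anti-diagonal is an assumption chosen for illustration.

    #include <utility>
    #include <vector>

    // Lists the positions of a (srX + 1) x (srY + 1) scan region in a reverse diagonal
    // scan order: anti-diagonals (x + y = d) are visited from the lower-right corner
    // back to the upper-left (DC) position.
    std::vector<std::pair<int, int>> reverseDiagonalScan(int srX, int srY) {
        std::vector<std::pair<int, int>> order;          // (x, y) positions
        for (int d = srX + srY; d >= 0; --d) {           // anti-diagonal index
            for (int y = srY; y >= 0; --y) {
                int x = d - y;
                if (x >= 0 && x <= srX)
                    order.emplace_back(x, y);
            }
        }
        return order;
    }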

The entropy decoding unit 105 may obtain the transform coefficient of the current block based on the scanned information about the transform coefficient.

The image restoring unit 120 may inverse-quantize and inverse-transform the transform coefficient of the current block to generate a residual block of the current block.

The image restoring unit 120 may restore the current block based on the residual block of the current block.

The image restoring unit 120 may perform inter prediction or intra prediction on the current block to generate a prediction block of the current block. The image restoring unit 120 may restore the current block based on the prediction block of the current block and the residual block of the current block. In other words, the image restoring unit 120 may add values of pixels included in the prediction block and values of pixels included in the residual block to restore values of pixels included in the current block.

The video decoding apparatus 100 may include an image decoding unit (not shown), which may include the entropy decoding unit 105 and the image restoration unit 120. The image decoding unit is described with reference to fig. 1e.

Fig. 1b illustrates a flow diagram of a video decoding method according to various embodiments.

In step S105, the video decoding apparatus 100 may determine a quadrangular scanning area including all the significant transform coefficients within the current block. At this time, the coordinates specifying the quadrangular scanning area may indicate the horizontal-direction coordinates of the significant transform coefficient pixel located on the rightmost side among the significant transform coefficients in the current block and the vertical-direction coordinates of the significant transform coefficient pixel located on the lowermost side among the significant transform coefficients in the current block. The video decoding apparatus 100 may obtain information on the coordinates specifying the quadrangular scanning area from the bitstream, and may determine the quadrangular scanning area based on the obtained coordinates specifying the quadrangular scanning area.

In step S110, the video decoding apparatus 100 may scan information on the transform coefficients within the quadrangular scanning area in a predetermined scanning order. The predetermined scan order may comprise a reverse zig-zag scan order or a reverse diagonal scan order. The zigzag scan order may include a vertical-first zigzag scan order or a horizontal-first zigzag scan order.

In step S115, the video decoding apparatus 100 may obtain the transform coefficient of the current block based on the scanned transform coefficient-related information.

In step S120, the video decoding apparatus 100 may perform inverse quantization and inverse transformation on the transform coefficient of the current block to generate a residual block of the current block.

In step S125, the video decoding apparatus 100 may restore the current block based on the residual block. The video decoding apparatus 100 may generate a prediction block of the current block by performing inter prediction or intra prediction. Also, the video decoding apparatus 100 may further add pixel values included in a prediction block of the current block and pixel values of the residual block to generate pixel values of a restored block of the current block.

Fig. 1c illustrates a block diagram of a video encoding device, according to various embodiments.

The video encoding apparatus 150 according to various embodiments includes an entropy encoding unit 155 and a bitstream generation unit 170.

The entropy-encoding unit 155 may entropy-encode a syntax element related to a transform coefficient within the current block. The entropy encoding unit 155 generates one-dimensional arrangement information on the transform coefficients within the current block by scanning the two-dimensional arrangement information on the transform coefficients within the current block in a predetermined scanning order, and entropy encodes the one-dimensional arrangement information on the transform coefficients within the current block.

The syntax element related to the transform coefficient may be a flag indicating whether the transform coefficient is greater than a predetermined value. At this time, the predetermined value may be a value greater than or equal to 0. For example, it may be 0, 1, or 2. Also, the syntax element may indicate the absolute value of the remaining level of the transform coefficient. In other words, the remaining level absolute value may indicate the difference between the absolute value of the transform coefficient and a predetermined absolute value determined based on whether the transform coefficient is greater than the predetermined value. Also, the syntax element related to the significant transform coefficient may be a syntax element related to the sign of the significant transform coefficient.

First, the entropy encoding unit 155 may generate entropy-encoded information for a syntax element by performing binarization on the syntax element to generate a binary string (bin string) and performing binary arithmetic coding on the binary string. At this time, binarization may be performed at the binarization unit 160, and binary arithmetic encoding may be performed at the binary arithmetic encoding unit 165.

The binarization unit 160 may perform binarization on a predetermined syntax element to generate a binary string. The binarization unit 160 may perform binarization on a predetermined syntax element based on a predetermined binarization method. The predetermined binarization methods may include a Fixed Length (Fixed Length) binarization method, a Rice (Rice) binarization method, an exponential Golomb (exponential Golomb) binarization method, and a Golomb-Rice (Golomb-Rice) binarization method. Alternatively, the predetermined binarization method may be a method in which the first binarization method and the second binarization method are combined with each other. For example, the binarization unit 160 may binarize a part of the syntax element based on a first binarization method to generate a first binary string (bin string), and may perform binarization on another part of the syntax element based on a second binarization method to generate a second binary string (bin string). At this time, the first binary string may be a part of a binary string of the syntax element, and the second binary string may be another part of the binary string of the syntax element. Part of the binary string may be a prefix (prefix) or suffix (suffix).
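To make the combination of two binarization methods concrete, the following C++ sketch binarizes a value with a truncated unary prefix up to an assumed cutoff and a fixed-length suffix for the remainder; the cutoff, the suffix length, and the function name combinedBinarize are illustrative assumptions rather than values prescribed by the method.

    #include <string>

    // Combined binarization sketch: a truncated unary prefix for values up to cutoff,
    // followed by a fixed-length suffix of suffixBits bits for the remainder.
    // The caller is assumed to guarantee that (value - cutoff) fits in suffixBits bits.
    std::string combinedBinarize(unsigned value, unsigned cutoff, unsigned suffixBits) {
        std::string bins;
        unsigned prefix = value < cutoff ? value : cutoff;
        bins.append(prefix, '1');              // truncated unary prefix
        if (prefix < cutoff) {
            bins.push_back('0');               // terminating zero: value fully coded by the prefix
            return bins;
        }
        unsigned rest = value - cutoff;        // fixed-length suffix for the remaining part
        for (int b = static_cast<int>(suffixBits) - 1; b >= 0; --b)
            bins.push_back(((rest >> b) & 1u) ? '1' : '0');
        return bins;
    }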

The binary arithmetic coding unit 165 may perform binary arithmetic coding based on a predetermined context model on a binary string related to a predetermined syntax element. Alternatively, the binary arithmetic coding unit 165 may perform binary arithmetic coding on a binary string related to a predetermined syntax element without using a predetermined context model. At this time, for the binary (bin) currently being binary arithmetic coded, the probability of obtaining 0 or 1 is fixed to 0.5, and binary arithmetic coding may be performed based on such probability.

The entropy encoding unit 155 may obtain the transform coefficients of the current block. In other words, the video encoding apparatus 150 may generate a prediction block of the current block by performing inter prediction or intra prediction, and generate a residual block of the current block based on the original block of the current block and the prediction block of the current block. The video encoding apparatus 150 may generate the transform coefficients of the current block by performing transform and quantization on the residual block of the current block. The entropy encoding unit 155 may obtain the generated transform coefficients of the current block.

The entropy encoding unit 155 may determine information related to the transform coefficient of the current block, scan the information related to the transform coefficient of the current block in a predetermined scan order to generate one-dimensional arrangement information related to the transform coefficient of the current block, and perform entropy encoding on the one-dimensional arrangement information. The predetermined scan order may be an order according to an inverse zigzag scan or an order according to an inverse diagonal scan. However, the present invention is not limited to this, and various scanning orders such as an order based on reverse horizontal scanning and an order based on reverse vertical scanning may be used. The predetermined scan order may be determined based on at least one of horizontal-direction coordinates of a significant transform coefficient pixel located at the rightmost side within the current block and vertical-direction coordinates of a significant transform coefficient pixel located at the lowermost side within the current block. The entropy encoding unit 155 may determine a predetermined scanning order based on the magnitude of the horizontal direction coordinate value and the magnitude of the vertical direction coordinate value. For example, when the horizontal direction coordinate value is greater than the vertical direction coordinate value, the entropy encoding unit 155 may determine the reverse vertical scanning order as the predetermined scanning order. When the vertical direction coordinate value is greater than the horizontal direction coordinate value, the entropy encoding unit 155 may determine the reverse horizontal scanning order as the predetermined scanning order.

Alternatively, when the horizontal direction coordinate value is greater than the vertical direction coordinate value, the entropy encoding unit 155 may determine the reverse vertical-priority zigzag scanning order as the predetermined scanning order. When the vertical direction coordinate value is greater than the horizontal direction coordinate value, the entropy encoding unit 155 may determine the reverse horizontal-priority zigzag scanning order as the predetermined scanning order. When the vertical direction coordinate value is the same as the horizontal direction coordinate value, the entropy encoding unit 155 may determine one of the reverse vertical-priority zigzag scanning order and the reverse horizontal-priority zigzag scanning order as the predetermined scanning order.

Alternatively, when the horizontal direction coordinate value is greater than the vertical direction coordinate value, the entropy encoding unit 155 may determine the reverse horizontal-priority zigzag scanning order as the predetermined scanning order. When the horizontal direction coordinate value is not greater than the vertical direction coordinate value, the entropy encoding unit 155 may determine the reverse vertical-priority zigzag scanning order (vertical first zigzag scanning order) as the predetermined scanning order.
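One of the alternatives above, in which the zigzag direction follows the comparison of the two scan-region coordinates, can be written as the small C++ helper below; the enumerator names and the tie-breaking choice for equal coordinates are assumptions for illustration.

    enum class ScanOrder {
        ReverseVerticalPriorityZigzag,    // vertical-first zigzag, traversed in reverse
        ReverseHorizontalPriorityZigzag   // horizontal-first zigzag, traversed in reverse
    };

    // Implements the alternative in which a vertical-priority zigzag is chosen when the
    // horizontal coordinate Sr_x of the rightmost significant coefficient exceeds the
    // vertical coordinate Sr_y of the lowermost one, and vice versa. The tie-break for
    // Sr_x == Sr_y is an assumed choice.
    ScanOrder chooseScanOrder(int srX, int srY) {
        if (srX > srY) return ScanOrder::ReverseVerticalPriorityZigzag;
        if (srY > srX) return ScanOrder::ReverseHorizontalPriorityZigzag;
        return ScanOrder::ReverseVerticalPriorityZigzag;   // assumed tie-break
    }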

First, the entropy encoding unit 155 may determine a quadrangular scanning area including all significant transform coefficients within the current block. At this time, all significant transform coefficients within the current block may be included within the quadrangular scan area, and the remaining areas within the current block other than the quadrangular scan area may include only transform coefficients having a value of 0 that are not significant transform coefficients. The entropy encoding unit 155 may determine coordinates specifying the quadrangular scanning area and entropy encode information on the coordinates specifying the quadrangular scanning area. In other words, the entropy encoding unit 155 may perform binarization based on a predetermined binarization method on syntax elements related to coordinates specifying a quadrangular scanning area to generate a binary string (bin string). Here, the predetermined binarization method may be at least one of a fixed length binarization method or a truncated unary code binarization method.

The entropy encoding unit 155 may perform context model-based binary arithmetic encoding on a binary string regarding a syntax element related to coordinates specifying a quadrangular scanning area. At this time, the context model may be determined based on at least one of a size of the current block, a color component of the current block, and a binary index. The color components may include a luminance component and a color difference component. The binary index may be information indicating a position of a binary currently being binary arithmetic-coded in a binary string related to the syntax element.

The entropy encoding unit 155 may generate the entropy-encoded information by performing at least one of binarization based on the scanned information about the transform coefficient and binary arithmetic encoding based on a context model about the transform coefficient.

When the first information related to the transform coefficient is information indicating whether the absolute value of the current transform coefficient is greater than 1, the remaining level absolute value information, and the sign information of the current transform coefficient, and the second information related to the transform coefficient is binarization parameter information, the entropy encoding unit 155 may perform binarization based on the second information on the first information related to the transform coefficient to generate a binary string (bin string) related to the first information. The entropy-encoding unit 155 may perform binary arithmetic encoding on a binary string (bin string) related to the first information to generate entropy-encoded information.

The second information may be determined based on at least one of: information on at least one second transform coefficient previously scanned in a predetermined scanning order, a position of the first transform coefficient within the current block, a color component, information on surrounding transform coefficients on the right or lower side, and a scanning position of the first transform coefficient. For example, the binarization parameter information related to the first transform coefficient may be determined based on the level absolute value of the surrounding significant transform coefficient on the right or lower side of the first transform coefficient. The binarization parameter information related to the first transform coefficient may be determined based on a sum of level absolute values of surrounding effective transform coefficients on a right side or a lower side of the first transform coefficient.

Alternatively, the binarization parameter information related to the first transform coefficient may be determined based on at least one of the following: the levels of n (n is an integer) transform coefficients previously scanned in the predetermined scan order; the position of the first transform coefficient in the current block; the color component; the levels of the n (n is a positive integer) transform coefficients adjacent to the right or lower side; whether the first transform coefficient is the transform coefficient at the initial position within the scan area in the scan order; whether the first transform coefficient is the transform coefficient at the final position within the scan area in the scan order; and whether the first transform coefficient is the coefficient at the first position within a coefficient group in the scan order.

When the information related to the transform coefficient is information indicating whether the absolute value of the current transform coefficient is greater than a predetermined value, remaining level absolute value information, and sign information of the current transform coefficient, the entropy encoding unit 155 may perform binarization on the information related to the transform coefficient to generate a binary string (bin string). The entropy encoding unit 155 may generate binary arithmetic encoded information by performing binary arithmetic encoding based on a context model related to the transform coefficient on the binary string (bin string). Among the context models related to the transform coefficients, the context model related to the first transform coefficient may be determined based on at least one of: information on at least one second transform coefficient previously scanned in the predetermined scanning order, the position of the first transform coefficient within the current block, the color component, information on the surrounding transform coefficients on the right or lower side, and the scanning position of the first transform coefficient.

The context model related to the flag information indicating whether the first transform coefficient is greater than 0 may be determined based on the number of significant transform coefficients on the right or lower side whose absolute value is greater than 0.

However, without being limited thereto, the context model related to the flag information indicating whether the first transform coefficient is greater than 0 may be determined based on the number of significant transform coefficients having an absolute value greater than 0 among n (n is a positive integer) transform coefficients previously scanned in a predetermined scanning order.

The context model related to the flag information indicating whether the absolute value of the first transform coefficient is greater than 1 may be determined based on the number of surrounding significant transform coefficients on the right or lower side whose absolute value is greater than 1.

But not limited thereto, the context model related to the flag information indicating whether the first transform coefficient is greater than 1 may be determined based on the number of significant transform coefficients having an absolute value greater than 1 among n (n is a positive integer) transform coefficients previously scanned in a predetermined scanning order.

The context model related to the flag information indicating whether the absolute value of the first transform coefficient is greater than 2 may be determined based on the number of surrounding significant transform coefficients on the right or lower side whose absolute value is greater than 2.

But not limited thereto, the context model related to the flag information indicating whether the first transform coefficient is greater than 2 may be determined based on the number of significant transform coefficients having an absolute value greater than 2 among n (n is a positive integer) transform coefficients previously scanned in a predetermined scanning order.

When the position of the currently scanned transform coefficient within the scan area is [SRx, 0] (SRx is an integer indicating the horizontal direction coordinate value of the pixel at the right side boundary of the scan area with the coordinate of the upper left corner of the scan area as a reference), and the transform coefficients at the positions [SRx, Y] (Y is an integer greater than 0 and less than or equal to SRy, where SRy indicates the vertical direction coordinate value of the pixel at the lower side boundary of the scan area with the coordinate of the upper left corner of the scan area as a reference) previously scanned in the predetermined scan order are all coefficients of 0, the entropy encoding unit 155 may not generate the GT0 flag information of the currently scanned transform coefficient.

Similarly, when the position of the transform coefficient currently scanned within the scan region is [0, SRy] (SRy is an integer indicating the vertical direction coordinate value of the pixel at the lower side boundary of the scan region with the coordinate of the upper left corner of the scan region as a reference), and the transform coefficients at the positions [X, SRy] (X is an integer greater than 0 and less than or equal to SRx, where SRx indicates the horizontal direction coordinate value of the pixel at the right side boundary of the scan region with the coordinate of the upper left corner of the scan region as a reference) previously scanned in the predetermined scan order are all coefficients of 0, the entropy encoding unit 155 may not generate the GT0 flag information of the currently scanned transform coefficient.

In addition, the entropy encoding unit 155 may determine the maximum number of GT1 flag information for the significant transform coefficients within the current block, and generate GT1 flag information for up to the determined maximum number of significant transform coefficients within the current block. In other words, after the entropy encoding unit 155 generates GT1 flag information for up to the maximum number of significant transform coefficients, the entropy encoding unit 155 may not generate GT1 flag information for the remaining significant transform coefficients any more.

The entropy encoding unit 155 may determine the number of all non-zero significant transform coefficients within the current block as the maximum number of GT1 flag information. Alternatively, the entropy encoding unit 155 may determine the maximum number of GT1 flag information within the current block based on the size of the scanning area. For example, the entropy encoding unit 155 may determine the maximum number of GT1 flag information, MaxCount_GT1, based on the following mathematical formula 3.

[Mathematical Formula 3]

MaxCount_GT1 = f(sizeSR, K1, Th1)

At this time, sizeSR may indicate the size (area) of the quadrangular scanning region, and sizeSR may be (Sr_x + 1) × (Sr_y + 1). Sr_x may indicate the horizontal direction coordinate of the rightmost significant transform coefficient pixel with the coordinate of the upper left corner of the scan area as a reference. In other words, Sr_x may indicate the horizontal direction coordinate of the pixel at the right side boundary with the coordinate of the upper left corner of the scan area as a reference. Sr_y may indicate the vertical direction coordinate of the lowermost significant transform coefficient pixel with the coordinate of the upper left corner of the scan area as a reference. In other words, Sr_y may indicate the vertical direction coordinate of the pixel at the lower boundary with the coordinate of the upper left corner of the scan area as a reference. K1 may be an adjustment factor relating the size of the scan area to the number of GT1 flags. For example, K1 may be an integer greater than 1. Th1 may be a predetermined threshold value. For example, Th1 may be 16 or 8. However, without being limited thereto, those of ordinary skill should readily appreciate that K1 and Th1 may have various values.

The entropy encoding unit 155 may determine the maximum number of GT2 flag information within the current block, and generate GT2 flag information for up to the determined maximum number of significant transform coefficients within the current block. In other words, after the entropy encoding unit 155 generates GT2 flag information for up to the maximum number of significant transform coefficients, the entropy encoding unit 155 may not generate GT2 flag information for the remaining significant transform coefficients any more.

The entropy encoding unit 155 may determine the number of all non-zero significant transform coefficients within the current block as the maximum number of GT2 flag information. Alternatively, the entropy encoding unit 155 may determine the maximum number of GT2 flag information within the current block based on the size of the scanning area. For example, the entropy encoding unit 155 may determine the maximum number of GT2 flag information, MaxCount_GT2, based on the following mathematical formula 4.

[Mathematical Formula 4]

MaxCount_GT2 = f(sizeSR, K2, Th2)

At this time, sizeSR may indicate the size (area) of the quadrangular scanning region, and sizeSR may be (Sr_x + 1) × (Sr_y + 1). Sr_x may indicate the horizontal direction coordinate of the rightmost significant transform coefficient pixel with the coordinate of the upper left corner of the scan area as a reference. In other words, Sr_x may indicate the horizontal direction coordinate of the pixel at the right side boundary with the coordinate of the upper left corner of the scan area as a reference. Sr_y may indicate the vertical direction coordinate of the lowermost significant transform coefficient pixel with the coordinate of the upper left corner of the scan area as a reference. In other words, Sr_y may indicate the vertical direction coordinate of the pixel at the lower boundary with the coordinate of the upper left corner of the scan area as a reference. K2 may be an adjustment factor relating the size of the scan area to the number of GT2 flags. For example, K2 may be an integer greater than 1. Th2 may be a predetermined threshold. For example, Th2 may be 16 or 8. However, without being limited thereto, those skilled in the art should readily appreciate that K2 and Th2 may have various values.

The entropy encoding unit 155 may generate information on a coefficient group within the current block. The information on the coefficient group may be flag information indicating whether the coefficient group includes at least one significant transform coefficient. At this time, for the transform coefficients included in the scan area, the entropy encoding unit 155 may determine one coefficient group for every K (K is an integer) transform coefficients scanned in the scan order. In other words, one coefficient group may include K transform coefficients. At this time, the scan order is a forward scan order in a direction opposite to the predetermined reverse scan order. Here, the predetermined reverse scan order may indicate an order in which scanning is performed from the coefficient located at the lower right end among the transform coefficients included in the current block to the coefficient located at the upper left end among the transform coefficients included in the current block.

Accordingly, the entropy encoding unit 155 may determine the coefficient groups by scanning in the forward scanning order starting from the coefficient adjacent to the upper left corner in the current block (i.e., the DC coefficient). At this time, when the number of transform coefficients included in the current block is not an integer multiple of K, the final coefficient group among the coefficient groups scanned in the forward scan order may include fewer than K transform coefficients. However, without being limited thereto, those skilled in the art should readily appreciate that the coefficient groups may be determined in a reverse scan order in which the transform coefficients included in the current block are scanned from the coefficient at the lower right end to the coefficient at the upper left end.

When one coefficient group among the coefficient groups scanned in the predetermined scanning order includes only one transform coefficient, the entropy encoding unit 155 may directly generate the GT0 flag information without generating information about that coefficient group.

The entropy encoding unit 155 may determine whether to conceal the sign of at least one transform coefficient for each coefficient group included in the current block. For example, the entropy encoding unit 155 may determine, depending on the circumstances, to conceal the signs of one or two transform coefficients for each coefficient group included in the current block.

The entropy encoding unit 155 may scan information on the transform coefficients within the quadrangular scanning area in a predetermined scanning order. The predetermined scan order may include a reverse zigzag scan order, a reverse diagonal scan order, a reverse vertical scan order, and a reverse horizontal scan order. However, those skilled in the art should readily appreciate that the predetermined scan order is not limited to the reverse scan orders mentioned above but may include various reverse scan orders.

The bitstream generation unit 170 may generate a bitstream including the entropy-encoded information. In other words, the bitstream generation unit 170 may generate a bitstream including the information that the entropy encoding unit 155 produced by scanning the information related to the transform coefficients and entropy encoding it based on the scanned information related to the transform coefficients.

The bitstream generation unit 170 may generate a bitstream including information on the coordinates specifying the quadrangular scanning area, which is the area within which the information related to the transform coefficients is scanned. At this time, the information on the coordinates specifying the quadrangular scanning area may include information indicating the horizontal direction coordinate value of the significant transform coefficient pixel located on the rightmost side among the significant transform coefficient pixels included in the current block and the vertical direction coordinate value of the significant transform coefficient pixel located on the lowermost side among the significant transform coefficient pixels included in the current block. However, without being limited thereto, the information on the coordinates specifying the quadrangular scanning area may instead include information indicating only the larger of the horizontal direction coordinate value of the rightmost significant transform coefficient pixel and the vertical direction coordinate value of the lowermost significant transform coefficient pixel included in the current block. In other words, even if the horizontal direction coordinate value of the rightmost significant transform coefficient pixel and the vertical direction coordinate value of the lowermost significant transform coefficient pixel included in the current block differ from each other, the bitstream generation unit 170 may, depending on the circumstances, determine not a rectangular scan area but a square scan area based on the larger of the two values. At this time, a square scan area larger than the rectangular scan area may be determined; however, the video encoding apparatus 150 may reduce the number of bits to be signaled by transmitting only the information indicating the coordinate value for one direction.
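A minimal C++ sketch of this square scan-area variant is shown below; the two helper names are placeholders used only to illustrate that a single signalled value determines both sides of the region.

    // Encoder side: only the larger of the two scan-region coordinates is signalled.
    int squareScanRegionCoordinate(int srX, int srY) {
        return srX > srY ? srX : srY;
    }

    // Decoder side: both sides of the (square) scan region are derived from the single
    // signalled coordinate value.
    void deriveSquareScanRegion(int signalled, int& srX, int& srY) {
        srX = signalled;
        srY = signalled;
    }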

The video encoding apparatus 150 may include an image encoding unit (not shown), which may include the entropy encoding unit 155 and the bitstream generation unit 170. The image encoding unit is described with reference to fig. 1f.

Fig. 1d illustrates a flow diagram of a video encoding method according to various embodiments.

Referring to fig. 1d, the video encoding apparatus 150 may obtain the transform coefficient of the current block in step S150. The current block may be a data unit usable in the process of encoding/decoding an image described with reference to fig. 10 to 23.

The video encoding apparatus 150 may generate a prediction block of the current block by performing inter prediction or intra prediction, and may generate a residual block of the current block based on the original block of the current block and the prediction block of the current block. The video encoding apparatus 150 may perform transformation and quantization on the residual block of the current block to generate a transform coefficient of the current block.

In step S155, the video encoding device 150 may determine a quadrilateral scanning area including all significant transform coefficients within the current block. At this time, all significant transform coefficients within the current block may be included within the quadrangular scanning area, and other areas than the quadrangular scanning area within the current block may include only coefficients having a value of 0 that are not significant transform coefficients. The video encoding device 150 may determine coordinates specifying the quadrangular scanning area and entropy encode information related to the coordinates specifying the quadrangular scanning area. In other words, the video encoding device 150 may perform binarization based on a predetermined binarization method on a syntax element related to coordinates specifying a quadrangular scanning area, to generate a binary string (bin string). Here, the predetermined binarization method may be at least one of a fixed length binarization method or a truncated unary code binarization method. The video encoding apparatus 150 may perform context model-based binary arithmetic encoding on a binary string regarding a syntax element related to coordinates specifying a quadrangular scanning area. At this time, the context model may be determined based on at least one of a size of the current block, a color component of the current block, and a binary index.

In step S160, the video encoding apparatus 150 may scan the information on the transform coefficients included in the quadrangular scan area in a predetermined scan order. The information related to the transform coefficient may include flag information indicating whether the transform coefficient is greater than a predetermined value. At this time, the predetermined value may be at least one of 0, 1, and 2. The information related to the transform coefficient may also include at least one of remaining level absolute value information of the transform coefficient, sign information of the transform coefficient, and binarization parameter information. The binarization parameter may include at least one of a Rice parameter and various other binarization parameters. The predetermined scan order may include a reverse zigzag scan order, a reverse diagonal scan order, a reverse vertical scan order, and a reverse horizontal scan order. However, those skilled in the art should readily appreciate that the predetermined scan order is not limited to the reverse scan orders mentioned above but may include various reverse scan orders.

In step S165, the video encoding apparatus 150 may perform entropy encoding based on the scanned transform coefficient-related information to generate entropy-encoded information. The video encoding apparatus 150 may generate entropy-encoded information by performing at least one of binarization and context model-based binary arithmetic encoding based on the scanned information about the transform coefficients. When the first information related to the transform coefficient is information indicating that the absolute value of the current transform coefficient is greater than a predetermined value, the remaining level absolute value information, and the sign information of the current transform coefficient, and the second information related to the significant transform coefficient is binarization parameter information, the video encoding device 150 may perform binarization based on the second information on the first information related to the significant transform coefficient to generate a binary string (bin string) related to the first information. The video encoding device 150 may generate entropy-encoded information by performing binary arithmetic encoding on a binary string (bin string) related to the first information.

The second information may be determined based on at least one of: information on at least one second transform coefficient, a position and a color component of a first transform coefficient within a current block, information on surrounding transform coefficients on the right or lower side, and a scanning position of the first transform coefficient, which are previously scanned in a predetermined scanning order. The binarization parameter information related to the first transform coefficient may be determined based on the level absolute value of the surrounding significant transform coefficient on the right or lower side of the first transform coefficient. For example, the binarization parameter information related to the first transform coefficient may be determined based on the sum of the level absolute values of the surrounding significant transform coefficients on the right or lower side of the first transform coefficient.

When the information on the transform coefficient is information indicating whether the absolute value of the currently significant transform coefficient is greater than a predetermined value, absolute value information of the remaining level, and sign information of the currently significant transform coefficient, the video encoding device 150 may generate a binary string (bin string) by performing binarization on the information on the transform coefficient. The video encoding device 150 may generate binary arithmetic encoded information by performing binary arithmetic encoding based on a context model related to the significant transform coefficient on a binary string (bin string). Among the context models related to significant transform coefficients, a context model related to a first transform coefficient is determined based on at least one of information related to at least one second transform coefficient previously scanned in a predetermined scanning order, a position and a color component of the first transform coefficient within a current block, information related to surrounding transform coefficients on the right or lower side, and a scanning position of the first transform coefficient.

In step S170, the video encoding apparatus 150 may generate a bitstream including the entropy-encoded information. The video encoding apparatus 150 may generate a bitstream including information that is scanned at the entropy encoding unit 155 and entropy-encoded based on the scanned information about the transform coefficients.

Also, the video encoding apparatus 150 may generate a bitstream including information on coordinates specifying a quadrangular scanning area that is an area for scanning information on transform coefficients.

Fig. 1e illustrates a block diagram of an image decoding unit 6000 according to various embodiments.

The image decoding unit 6000 according to various embodiments performs the operations that image data undergoes when it is decoded by an image decoding unit (not shown) of the video decoding apparatus 100.

Referring to fig. 1e, the entropy decoding unit 6150 parses, from the bitstream 6050, the encoded image data to be decoded and the encoding information necessary for decoding. The encoded image data consists of quantized transform coefficients, and the inverse quantization unit 6200 and the inverse transform unit 6250 restore residual data from the quantized transform coefficients. The entropy decoding unit 6150 of fig. 1e may correspond to the entropy decoding unit 105 of fig. 1a.

The intra prediction unit 6400 performs intra prediction block by block. The inter prediction unit 6350 performs inter prediction block by block using a reference image obtained from the restored picture buffer 6300. The data of the spatial domain for each block of the current image 6050 is restored by adding the residual data and the prediction data of each block generated by the intra prediction unit 6400 or the inter prediction unit 6350, and the deblocking unit 6450 and the SAO performing unit 6500 may perform loop filtering on the restored data of the spatial domain to output a filtered restored image 6600. Also, the restored image stored in the restored picture buffer 6300 may be output as a reference image.

In order to decode image data at the image decoding unit (not shown) of the video decoding apparatus 100, the operations of the respective steps of the image decoding unit 6000 according to various embodiments may be performed block by block.

Fig. 1f illustrates a block diagram of an image encoding unit according to various embodiments.

The image encoding unit 7000 according to various embodiments performs the operations that image data undergoes when it is encoded by an image encoding unit (not shown) of the video encoding apparatus 150.

In other words, the intra prediction unit 7200 performs intra prediction on the current image 7050 by block, and the inter prediction unit 7150 performs inter prediction by block using the current image 7050 and a reference image obtained from the restored picture buffer 7100.

The prediction data for each block output from the intra prediction unit 7200 or the inter prediction unit 7150 is subtracted from the data of the block being encoded in the current image 7050 to generate residual data, and the transformation unit 7250 and the quantization unit 7300 may output quantized transform coefficients block by block by performing transformation and quantization on the residual data. The inverse quantization unit 7450 and the inverse transformation unit 7500 may restore residual data of the spatial domain by performing inverse quantization and inverse transformation on the quantized transform coefficients. The restored residual data of the spatial domain is added to the prediction data for each block output from the intra prediction unit 7200 or the inter prediction unit 7150 to be restored as data of the spatial domain for the block of the current image 7050. The deblocking unit 7550 and the SAO execution unit generate a filtered restored image by performing loop filtering on the restored data of the spatial domain. The generated restored image is stored in the restored picture buffer 7100. The restored images stored in the restored picture buffer 7100 can be used as reference images for inter prediction of other images. The entropy encoding unit 7350 may entropy encode the quantized transform coefficients, which may be output as the bitstream 7400. The entropy encoding unit 7350 of fig. 1f may correspond to the entropy encoding unit 155 of fig. 1c.

In order to apply the image encoding unit 7000 according to various embodiments to the video encoding apparatus 150, the operations of the respective steps of the image encoding unit 7000 may be performed block by block.

Hereinafter, a method of scanning the transform coefficients in the block will be described in detail on the assumption that '0' and '1' shown for the pixels in the current block in fig. 2 and 3a to 3b are values of a significant transform coefficient flag (a flag indicating whether or not the absolute value of the coefficient is greater than 0).

Fig. 2 is a diagram for explaining a method of scanning transform coefficients within a block according to an embodiment.

Referring to fig. 2, a current block 200 is a block including a plurality of transform coefficients. The current block 200 is a data unit that performs an inverse transform operation and may be a block having an M × N (M, N is a positive integer) size. For example, as shown in fig. 2, the current block 200 may be a block having a size of 16 × 16.

The video decoding apparatus 100 may divide the current block 200 into coefficient groups (or sub-blocks) 205 having a predetermined size. The coefficient group may be a block having a size of X × Y (X, Y is a positive integer). For example, as shown in fig. 2, the coefficient group 205 may be a block having a size of 4 × 4.

The video decoding apparatus 100 may obtain, from the bitstream, information on the coordinates of the pixel 210 of the final significant transform coefficient among all the significant transform coefficients scanned when the current block 200 is scanned in the forward scanning order from the transform coefficient at the upper left end to the transform coefficient at the lower right end. The information on the coordinates of the pixel 210 of the final significant transform coefficient may include information on the horizontal direction coordinate of the pixel of the final significant transform coefficient and information on the vertical direction coordinate of the pixel of the final significant transform coefficient.

The video decoding apparatus 100 may obtain the coordinates of the pixel 210 of the final significant transform coefficient based on the information on the coordinates of the pixel 210 of the final significant transform coefficient, and may scan the information on the transform coefficients in a predetermined reverse scan order 215 starting from the final significant transform coefficient 210, based on the coordinates of the pixel 210 of the final significant transform coefficient. For example, as shown in FIG. 2, the predetermined reverse scan order 215 may be a reverse diagonal scan order.

The video decoding apparatus 100 may scan the coefficient groups 205 in a predetermined scanning order, and may scan the transform coefficients within each coefficient group 205 in a predetermined scanning order. For each coefficient group, the video decoding apparatus 100 may scan the information on the coefficient group in the predetermined scanning order. The information about a coefficient group may be obtained from the bitstream or derived. The information related to the coefficient groups may be derived to indicate that at least one significant transform coefficient is included for the first coefficient group and the last coefficient group scanned in the forward (reverse) scan order. In other words, the information about the coefficient groups may be excluded from the bitstream for the first coefficient group and the last coefficient group scanned in the forward (reverse) scanning order.

At this time, the video decoding apparatus 100 may determine the predetermined reverse scan order based on at least one of the size of the block and the prediction mode. For example, when the size of the current block is 4 × 4 and the prediction mode of the current block is the intra prediction mode, the video decoding apparatus 100 may determine one of a reverse horizontal scan order, a vertical scan order, or a diagonal scan order as a predetermined reverse scan order. Also, when the size of the current block is 8 × 8 and the prediction mode of the current block is the intra prediction mode, the video decoding apparatus 100 may determine one of a reverse horizontal scan order, a vertical scan order, or a diagonal scan order as a predetermined reverse scan order.

When the size of the current block is not 4 × 4 or 8 × 8, or when the prediction mode of the current block is not the intra prediction mode, the video decoding apparatus 100 may determine the reverse diagonal scan order as the predetermined reverse scan order.
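
The selection rule described above can be summarized in a short sketch. The following Python code only illustrates the stated conditions; the function name, the string constants, and the intra_scan_choice parameter are hypothetical and are not part of the syntax described here.

```python
def select_reverse_scan_order(width, height, prediction_mode, intra_scan_choice="diagonal"):
    """Select a reverse scan order from the block size and prediction mode.

    intra_scan_choice is a hypothetical parameter standing in for whatever
    criterion (e.g., the intra prediction direction) picks one of the three
    orders for small intra-predicted blocks.
    """
    if (width, height) in ((4, 4), (8, 8)) and prediction_mode == "intra":
        # One of the reverse horizontal, vertical, or diagonal scan orders is used.
        assert intra_scan_choice in ("horizontal", "vertical", "diagonal")
        return "reverse_" + intra_scan_choice
    # Otherwise the reverse diagonal scan order is used.
    return "reverse_diagonal"


print(select_reverse_scan_order(4, 4, "intra", "horizontal"))  # reverse_horizontal
print(select_reverse_scan_order(16, 16, "inter"))              # reverse_diagonal
```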

When the information on the coefficient group 205 indicates that at least one significant coefficient is included in the coefficient group 205, the video decoding apparatus 100 may scan the information on the transform coefficients within the coefficient group 205 in a reverse predetermined scan order.

The video decoding apparatus 100 may determine that a sign (sign) of at least one of the significant transform coefficients included in each coefficient group 205 is concealed.

Fig. 3a is a diagram for explaining a method of scanning transform coefficients within a block, according to another embodiment.

Referring to fig. 3a, a current block 300 is a block including a plurality of transform coefficients. The current block 300 is a data unit performing an inverse transform operation, and may be a block having a size of M × N (M and N are positive integers). For example, as shown in FIG. 3a, the current block 300 may be a block having a size of 16 × 16.

Referring to fig. 3a, the video decoding apparatus 100 may determine a quadrangular scanning area 340 including all significant transform coefficients existing in the current block 300. At this time, the quadrangular scanning area 340 may be determined based on the coordinates of the pixels 330 located at the right lower end of the quadrangular scanning area 340. The horizontal direction coordinate of the pixel 330 located at the right lower end of the scan area 340 may be the horizontal direction coordinate of the pixel 320 of the significant transform coefficient located at the rightmost side among all the significant transform coefficients within the current block 300. The vertical-direction coordinate of the pixel 330 located at the right-hand lower end of the scan area 340 may be a vertical-direction coordinate of the significant transform coefficient pixel 310 located at the lowermost side among all the significant transform coefficients in the current block 300. The information about the pixel 330 may include information about the position of the upper left-side corner of the pixel 330. However, without being limited thereto, the information on the pixel 330 may further include information on the position of the right lower end corner of the pixel 330.

The video decoding apparatus 100 may receive, from the bitstream, information on the coordinates of the pixel 330 specifying the quadrangular scanning area 340, and may determine the quadrangular scanning area 340 based on the received information on the pixel 330. The video decoding apparatus 100 may scan the information on the transform coefficients in a predetermined reverse scan order 305 from the pixel 330 to the pixel 345 at the upper left end of the quadrangular scan area 340. At this time, a coefficient group may be determined for every K (K is a positive integer) coefficients scanned in a predetermined forward scan order from the transform coefficient at the upper left end of the quadrangular scan area. However, without being limited thereto, a coefficient group may also be determined for every K (K is a positive integer) coefficients scanned in a predetermined reverse scan order from the transform coefficient at the lower right end, as will be readily understood by those skilled in the art. Details regarding the coefficient group are described later with reference to fig. 3b.
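
As a rough illustration of how the quadrangular scan region and the coefficient groups could be derived, the sketch below assumes a block stored row by row as a flat list and a scan order given as a list of positions; the helper names and the row-major layout are assumptions, not part of the described syntax.

```python
def scan_region_from_block(block, width):
    """Return (srX, srY): the column of the rightmost and the row of the
    lowermost significant (non-zero) coefficient of a row-major block."""
    srX = srY = 0
    for pos, level in enumerate(block):
        if level != 0:
            srX = max(srX, pos % width)
            srY = max(srY, pos // width)
    return srX, srY


def split_into_coefficient_groups(scan_positions, K=16):
    """Partition the positions of a scan order into coefficient groups of K
    coefficients; the group determined last may hold fewer than K positions."""
    return [scan_positions[i:i + K] for i in range(0, len(scan_positions), K)]


# Example: a 4x4 block whose only non-zero coefficients sit at (1, 0) and (0, 2)
block = [0, 3, 0, 0,
         0, 0, 0, 0,
         -2, 0, 0, 0,
         0, 0, 0, 0]
print(scan_region_from_block(block, 4))                     # (1, 2)
print(split_into_coefficient_groups(list(range(6)), K=4))   # [[0, 1, 2, 3], [4, 5]]
```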

The video decoding apparatus 100 may determine to conceal the Sign (Sign) for at least one significant transform coefficient for each coefficient group, and may restore the concealed Sign for at least one significant transform coefficient. Details regarding the operation of recovering the hidden symbol performed on each coefficient group will be described later with reference to fig. 3 b.

Fig. 3b is a diagram for explaining an operation of determining a coefficient group (sub-block) within a block and an operation performed in accordance with the coefficient group according to another embodiment.

The video decoding apparatus 100 may determine the coefficient group by every K (K is a positive integer) transform coefficient pixels in the predetermined scan order 355 in the forward direction within the current block 350. The video decoding apparatus 100 may determine a coefficient group including K transform coefficient pixels. For example, as shown in fig. 3b, the video decoding apparatus 100 may determine the coefficient group 360 every 16 pixels.

The video decoding apparatus 100 may determine, for each coefficient group 360, that a sign (Sign) of at least one significant transform coefficient is hidden. For example, the video decoding apparatus 100 may determine that the current coefficient group hides a sign of at least one significant transform coefficient based on a distance between previously decoded significant transform coefficients within the current coefficient group. Specifically, the video decoding apparatus 100 may determine that a sign (Sign) of at least one significant transform coefficient is hidden for the current coefficient group based on the distance between the first significant transform coefficient and the last significant transform coefficient scanned in a predetermined reverse scan order. At this time, the distance may be the difference between the positions of the coefficients in the reverse scan order.

For example, when the distance between the first significant transform coefficient and the last significant transform coefficient scanned in a predetermined inverse scan order is greater than a predetermined value, the video decoding apparatus 100 may determine that a Sign (Sign) for at least one significant transform coefficient is hidden for the current coefficient group. At this time, the predetermined value may be various integer values. For example, the predetermined value may be 3.

When the video decoding apparatus 100 determines that a sign (Sign) of at least one significant transform coefficient is hidden for the current coefficient group, the video decoding apparatus 100 restores, for the current coefficient group, the hidden sign of the at least one significant transform coefficient without obtaining information about the sign from the bitstream. For example, depending on whether the parity sum (parity sum) of the levels of the significant transform coefficients within the current coefficient group is odd or even, the video decoding apparatus 100 may determine the sign of the at least one significant transform coefficient as 0 or 1. At this time, the restored sign may be the sign of the last significant transform coefficient scanned in the reverse scan order. However, without being limited thereto, those skilled in the art can easily understand that a sign located at a predetermined position within the coefficient group in a predetermined scan order may be restored instead.
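
The following sketch summarizes the sign hiding behaviour described above for one coefficient group, assuming the distance threshold of 3 used in the example; the convention that an even parity sum corresponds to a positive sign is an assumption for illustration only.

```python
def sign_is_hidden(first_sig_scan_pos, last_sig_scan_pos, threshold=3):
    """Decide sign hiding from the scan-position distance between the first and
    last significant coefficients of the coefficient group."""
    return (last_sig_scan_pos - first_sig_scan_pos) > threshold


def recover_hidden_sign(abs_levels):
    """Recover the hidden sign from the parity of the sum of absolute levels.

    Assumption: an even sum means the hidden sign is positive (+1) and an odd
    sum means it is negative (-1); the description only states that the sign
    is derived from whether the parity sum is odd or even.
    """
    return 1 if sum(abs_levels) % 2 == 0 else -1


print(sign_is_hidden(2, 9))            # True: distance 7 > 3
print(recover_hidden_sign([3, 1, 1]))  # -1: the sum 5 is odd
```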

The video decoding apparatus 100 may obtain information on the corresponding coefficient group from the bitstream in a reverse predetermined scanning order, and when the information on the corresponding coefficient group indicates that at least one transform coefficient is included in the coefficient group, the video decoding apparatus 100 may scan the information on the transform coefficient within the corresponding coefficient group. When the information on the respective coefficient groups indicates that only the transform coefficient having a value of 0 is included in the coefficient groups, the transform coefficients within the respective coefficient groups may each be determined to be 0.

When the number of transform coefficients included in the quadrangular scan area within the current block 300 is not an integer multiple of K, there may be a number of transform coefficient pixels less than K in determining the coefficient group located at the last in the scan order in the forward direction. At this time, the video decoding apparatus 100 may determine the coefficient group 365 including the number of transform coefficient pixels less than K. For example, as shown in fig. 3b, the video decoding apparatus 100 may determine a coefficient group 365 including two transform coefficients.

The video decoding apparatus 100 may obtain information about the coefficient groups 360 and 365 from the bitstream. Here, the information related to the coefficient groups 360 and 365 may be flag information (significant coefficient group flag information) indicating whether at least one of the transform coefficients included in the coefficient groups 360 and 365 is a significant transform coefficient or whether the coefficient groups 360 and 365 include only transform coefficients of 0.

When a coefficient group including only one transform coefficient is determined, the video decoding apparatus 100 may obtain only information related to one transform coefficient from a bitstream, without obtaining information related to the coefficient group (e.g., significant coefficient group flag information).

The video decoding apparatus 100 may determine a context model for binary arithmetic decoding of the information on the current coefficient group based on the information on the coefficient group scanned before the current coefficient group in the predetermined reverse scanning order 355.

Fig. 4 is a diagram for explaining a process of determining a context model for context-based binary arithmetic coding of information related to transform coefficients according to an embodiment.

Referring to fig. 4, the video decoding apparatus 100 may determine a context model of information related to the transform coefficient pixel 405 currently being scanned based on the surrounding transform coefficient pixels 410. In fig. 4, the surrounding transform coefficient pixels 410 may be five pixels existing at predetermined positions at the right side or lower side of the transform coefficient pixel 405 currently being scanned. However, without being limited thereto, those skilled in the art should readily understand that the surrounding transform coefficient pixels 410 may be n (n is a positive integer) pixels located at predetermined positions on the right or lower side.

For example, the video decoding apparatus 100 may determine the context model of the flag information indicating whether the level absolute value of the transform coefficient pixel 405 currently being scanned is greater than 0, based on the number of coefficient pixels whose level absolute value is greater than 0 among the surrounding transform coefficient pixels 410.

Also, the video decoding apparatus 100 may determine a context model of flag information indicating whether the level absolute value of the transform coefficient pixel 405 currently being scanned is greater than 1, based on the number of coefficient pixels whose absolute values are greater than 1 among the surrounding transform coefficient pixels 410.

The video decoding apparatus 100 may determine a context model of flag information indicating whether the level absolute value of the transform coefficient pixel 405 currently being scanned is greater than 2, based on the number of coefficient pixels whose absolute values are greater than 2 among the surrounding transform coefficient pixels 410.

The video decoding apparatus 100 may determine a context model of flag information indicating whether the level absolute value of the transform coefficient pixel 405 currently being scanned is greater than N, based on the number of coefficient pixels whose absolute values are greater than N (N is an integer greater than 2) among the surrounding transform coefficient pixels 410.

The video decoding apparatus 100 may determine a parameter for binarizing the remaining level absolute value of the transform coefficient pixel 405 currently being scanned, based on the sum of the level absolute values of the surrounding transform coefficient pixels 410. At this time, the binarization parameter may be a Rice parameter.
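
A minimal sketch of the neighbourhood-based selection for fig. 4, assuming a five-pixel template to the right of and below the current coefficient, a context index equal to the clipped count of qualifying neighbours, and illustrative thresholds for the Rice parameter; none of these exact mappings are specified by the description above and they are given only to make the idea concrete.

```python
def template_positions(x, y, width, height):
    """Hypothetical five-pixel template: two to the right, two below, one diagonal."""
    candidates = [(x + 1, y), (x + 2, y), (x, y + 1), (x, y + 2), (x + 1, y + 1)]
    return [(cx, cy) for cx, cy in candidates if cx < width and cy < height]


def greater_than_n_context(abs_levels, x, y, width, height, n):
    """Context index for the 'absolute level greater than n' flag: the number of
    template neighbours whose absolute level already exceeds n, clipped to 4."""
    count = sum(1 for cx, cy in template_positions(x, y, width, height)
                if abs_levels[cy * width + cx] > n)
    return min(count, 4)


def rice_parameter(abs_levels, x, y, width, height):
    """Rice parameter for the remaining level, derived from the sum of the
    template neighbours' absolute levels (the thresholds are illustrative)."""
    s = sum(abs_levels[cy * width + cx]
            for cx, cy in template_positions(x, y, width, height))
    for k, threshold in enumerate((3, 9, 21)):
        if s < threshold:
            return k
    return 3
```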

Fig. 5 is a diagram for explaining a process of determining a context model for context-based binary arithmetic coding of information on transform coefficients according to an embodiment.

Referring to fig. 5, the video decoding apparatus 100 may determine a context model of information related to the transform coefficient pixel 505 currently being scanned based on transform coefficient pixels 510 previously scanned in a predetermined reverse scan order 515. As shown in fig. 5, the transform coefficient pixels 510 may be five pixels scanned in the predetermined reverse scan order 515 before the transform coefficient pixel 505 currently being scanned. However, without being limited thereto, those skilled in the art will readily appreciate that the transform coefficient pixels 510 may be n (n is a positive integer) pixels previously scanned in the predetermined reverse scan order 515. As shown in fig. 5, the predetermined reverse scan order 515 may be a reverse zigzag scan order; however, it is not limited thereto, and may include a horizontal scan order, a vertical scan order, a diagonal scan order, etc., as will be readily understood by those skilled in the art. In particular, the predetermined reverse scan order may be determined as one of a plurality of scan orders based on the magnitude of the horizontal-direction coordinate value and the magnitude of the vertical-direction coordinate value specifying the scan region.

For example, the video decoding apparatus 100 may determine the context model of the flag information indicating whether the level absolute value of the transform coefficient pixel 505 currently being scanned is greater than 0 based on the number of coefficient pixels whose absolute values are greater than 0 among the surrounding transform coefficient pixels 510.

Also, the video decoding apparatus 100 may determine a context model of flag information indicating whether the level absolute value of the transform coefficient pixel 505 currently being scanned is greater than 1, based on the number of coefficient pixels whose absolute values are greater than 1 among the transform coefficient pixels 510.

The video decoding apparatus 100 may determine a context model of flag information indicating whether the level absolute value of the transform coefficient pixel 505 currently being scanned is greater than 2, based on the number of coefficient pixels whose absolute values are greater than 2 among the surrounding transform coefficient pixels 510.

The video decoding apparatus 100 may determine a context model of flag information indicating whether the level absolute value of the transform coefficient pixel 505 currently being scanned is greater than N (N is an integer greater than 2), based on the number of coefficient pixels whose absolute values are greater than N among the previously scanned transform coefficient pixels 510.

The video decoding apparatus 100 may determine a parameter for binarizing the remaining level absolute value of the transform coefficient pixel 505 currently being scanned, based on the sum of the level absolute values of the previously scanned transform coefficient pixels 510. At this time, the binarization parameter may be a Rice parameter.

Fig. 6a is a diagram for explaining a zigzag scanning order for scanning information on significant transform coefficients within a block according to an embodiment.

Referring to fig. 6a, the video decoding apparatus 100 may scan pixels from a transform coefficient at the lower right end to the transform coefficient at the upper left end of the current block 600 in a zigzag scanning order 605 to scan information related to the transform coefficient of the current block 600.

Specifically, according to the zigzag scanning order 605, the video decoding apparatus 100 scans the transform coefficient pixel 610 at the lower right end, and then scans the pixel 615 located at the left side of the pixel 610. After scanning the pixel 615, the video decoding apparatus 100 scans the pixel 620 located in the diagonal direction at the upper right end of the pixel 615, and then scans the pixel 625 adjacent to the upper side of the pixel 620. The video decoding apparatus 100 then scans the pixels 630 located in the diagonal direction at the lower left end of the pixel 625, and scans the pixel 635 located at the left side of the pixel, among the pixels 630, that is adjacent to the boundary of the current block 600. In a similar manner, the video decoding apparatus 100 may scan the information about the remaining transform coefficient pixels in the zigzag scan order 605.

In addition, in the zigzag scanning order 605 according to an embodiment, the pixel 615 located immediately to the left in the horizontal direction is scanned after the pixel 610 is scanned, and thus the zigzag scanning order 605 may be referred to as a horizontal-priority zigzag order.

Fig. 6b is a diagram for explaining a zigzag scanning order for scanning information on transform coefficients within a block according to another embodiment.

Referring to fig. 6b, the video decoding apparatus 100 may scan the transform coefficient pixels from the lower right end to the upper left end of the current block 600 in the zigzag scanning order 635 to scan information about the transform coefficients of the current block 600.

Specifically, according to the zigzag scanning order 635, the video decoding apparatus 100 scans the transform coefficient pixel 645 at the lower right end, and then scans the pixel 650 located at the upper side of the pixel 645. After scanning the pixel 650, the video decoding apparatus 100 scans the pixel 655 located in the diagonal direction at the lower left end of the pixel 650, and then scans the pixel 660 adjacent to the left side of the pixel 655. The video decoding apparatus 100 then scans the pixels 665 located in the diagonal direction at the upper right end of the pixel 660, and scans the pixel 670 located at the upper side of the pixel, among the pixels 665, that is adjacent to the boundary of the current block 600. In a similar manner, the video decoding apparatus 100 may scan the information about the remaining transform coefficient pixels in the zigzag scan order 635.

In the zigzag scanning order 635 according to an embodiment, the pixel 650 located immediately above in the vertical direction is scanned after the pixel 645 is scanned, and thus the zigzag scanning order 635 may be referred to as a vertical-priority zigzag order.

Fig. 7a is a diagram for explaining a horizontal scanning order in which information on transform coefficients within a block is scanned, according to an embodiment.

Referring to fig. 7a, the video decoding apparatus 100 may scan, in a horizontal scanning order 705, transform coefficient pixels from a lower right end to an upper left end of the current block 700 to scan information related to the transform coefficients of the current block 700.

Specifically, according to the horizontal scanning order 705, the video decoding apparatus 100 scans the transform coefficient pixel 710 at the lower right end, then sequentially scans the pixels 715 located in the horizontal direction (i.e., the left direction), and, after scanning the pixel among the pixels 715 that is adjacent to the left boundary of the current block 700, scans the rightmost pixel 720 of the row immediately above the pixels 715. The video decoding apparatus 100 scans the pixels 725 located in the horizontal direction (i.e., the left direction) in the same manner as the pixels 715 of the previous row were scanned. In a similar manner, the video decoding apparatus 100 may scan the information related to the remaining transform coefficient pixels in the horizontal scanning order 705.

Fig. 7b is a diagram for explaining a vertical scanning order in which information on transform coefficients within a block is scanned, according to an embodiment.

Referring to fig. 7b, the video decoding apparatus 100 may scan, in a vertical scanning order 730, transform coefficient pixels from a lower right end to an upper left end of the current block 700 to scan information related to the transform coefficients of the current block 700.

Specifically, according to the vertical scanning order 730, the video decoding apparatus 100 sequentially scans the pixels 740 located in the vertical direction (i.e., the upper direction) after scanning the transform coefficient pixels 735 at the lower right end, and scans the lowermost pixel 745 of the column directly to the left of the pixels 740 after scanning the pixels adjacent to the upper boundary of the current block 700. The video decoding apparatus 100 scans the pixel 750 located in the vertical direction (i.e., the upper direction) in the same manner as the pixel 740 of the previous column is scanned. In a similar manner, video decoding apparatus 100 may scan information related to the remaining transform coefficient pixels in vertical scan order 730.

Fig. 8 is a diagram for explaining a diagonal scanning order for scanning information on transform coefficients within a block according to an embodiment.

Referring to fig. 8, the video decoding apparatus 100 may scan, in a diagonal scan order 805, transform coefficient pixels from a lower right end to an upper left end of a current block 800 to scan information related to transform coefficients of the current block 800.

Specifically, according to the diagonal scanning order 805, the video decoding apparatus 100 scans the transform coefficient pixel 810 at the lower right end, then scans the pixel 815 at the upper side of the pixel 810, and then scans the pixel 820 located in the diagonal direction at the lower left end of the pixel 815. The video decoding apparatus 100 scans the pixel 825 adjacent to the upper side of the pixel 815, and then scans the pixel 830 located in the diagonal direction at the lower left end of the pixel 825. In a similar manner, the video decoding apparatus 100 may scan the information associated with the remaining transform coefficient pixels in the diagonal scan order 805.

Various scanning orders for scanning information on significant transform coefficients within a block are described above with reference to fig. 6a to 8. The scan order described with reference to fig. 6a to 8 is a reverse scan order, but is not limited thereto, and a forward scan order in which scanning is performed in an order opposite to the reverse scan order can be easily understood by those skilled in the art.
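
To make the scan orders of figs. 6a to 8 concrete, the sketch below generates forward coordinate lists for a width × height region; the corresponding reverse order is simply the reversed list. The exact traversal direction of each anti-diagonal is an assumption chosen to match the verbal descriptions, not a normative definition.

```python
def horizontal_scan(width, height):
    """Row by row, left to right; its reverse runs from the lower right end."""
    return [(x, y) for y in range(height) for x in range(width)]


def vertical_scan(width, height):
    """Column by column, top to bottom; its reverse runs from the lower right end."""
    return [(x, y) for x in range(width) for y in range(height)]


def diagonal_scan(width, height):
    """Anti-diagonals from the upper left end, each walked from lower left to upper right."""
    order = []
    for d in range(width + height - 1):
        for x in range(d + 1):
            y = d - x
            if x < width and y < height:
                order.append((x, y))
    return order


def zigzag_scan(width, height, horizontal_priority=True):
    """Anti-diagonals whose walking direction alternates; horizontal_priority
    selects which of the two zigzag variants (figs. 6a and 6b) is produced."""
    order = []
    for d in range(width + height - 1):
        diagonal = [(x, d - x) for x in range(d + 1)
                    if x < width and (d - x) < height]
        flip = (d % 2 == 0) if horizontal_priority else (d % 2 == 1)
        order.extend(reversed(diagonal) if flip else diagonal)
    return order


# The reverse diagonal scan order of a 4x4 block starts at (3, 3):
print(list(reversed(diagonal_scan(4, 4)))[:3])  # [(3, 3), (3, 2), (2, 3)]
```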

Fig. 9a to 9c are diagrams for explaining a residual coding syntax structure according to an embodiment.

Referring to figs. 9a to 9c, the video decoding apparatus 100 may scan syntax element information on the transform coefficients, and may decode the scanned syntax element information to restore the transform coefficients.

First, the video decoding apparatus 100 may obtain syntax element information scan _ region _ x and scan _ region _ y indicating coordinates specifying a quadrangular scanning area from a bitstream. At this time, the syntax element information scan _ region _ x may indicate a horizontal direction (x-axis direction) coordinate value of the significant transform coefficient located at the rightmost side within the current block with reference to the coordinates at the upper left side corner of the current block, and the syntax element information scan _ region _ y may indicate a vertical direction (y-axis direction) coordinate value of the significant transform coefficient located at the bottommost side within the current block with reference to the coordinates at the upper left side corner of the current block. The video decoding apparatus 100 may determine the coordinates srX and srY specifying the scan region based on the syntax element information scan _ region _ x and scan _ region _ y.

The video decoding apparatus 100 may determine the ScanOrder arrangement for the scan region of size (srX + 1) × (srY + 1), that is, the scan order of the transform coefficients within the scan region, based on the coordinates srX and srY specifying the scan region.

The video decoding apparatus 100 may determine the index information lastSet indicating the coefficient group scanned last in the forward scanning order, based on the coordinates srX and srY specifying the scan region. In other words, the video decoding apparatus 100 may determine the index information lastSet of the coefficient group scanned last by subtracting 1 from the size value ((srX + 1) × (srY + 1)) of the scan region and shifting the result to the right by 4 (>> 4). The output value of the right shift by 4 is the same as the value determined by dividing by 16. In other words, since the video decoding apparatus 100 determines one coefficient group for every 16 transform coefficients, dividing the total number of transform coefficients in the scan region by 16 yields the total number of coefficient groups (or, equivalently, the index of the coefficient group scanned last in the forward scanning order plus 1).
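
A one-function sketch of the computation just described, assuming coefficient groups of 16 coefficients.

```python
def last_coefficient_group_index(srX, srY, coefficients_per_group=16):
    """Index lastSet of the coefficient group scanned last in the forward order;
    equivalent to ((srX + 1) * (srY + 1) - 1) >> 4 for groups of 16 coefficients."""
    return ((srX + 1) * (srY + 1) - 1) // coefficients_per_group


# Example: a 7x5 scan region (srX = 6, srY = 4) holds 35 coefficients,
# which are split into the groups 0, 1 and 2, so lastSet is 2.
print(last_coefficient_group_index(6, 4))  # 2
```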

The video decoding apparatus 100 may determine the position lastScanPos of the coefficient last scanned in the scanning order in the forward direction based on the coordinates srX and srY specifying the scanning area, and determine the position Pos of the current transform coefficient as the position lastScanPos of the coefficient last scanned in the scanning order in the forward direction.

The video decoding apparatus 100 may initialize i to lastSet and, while i is greater than or equal to 0, may perform the operation included in the repeat statement (for statement) 905 while decreasing i by 1. The video decoding apparatus 100 may perform the operation included in the repeat statement (for statement) 905 until i becomes a value smaller than 0. At this time, i may be an index indicating a coefficient group. In other words, the video decoding apparatus 100 may perform an operation on one coefficient group each time the operation in the repeat statement (for statement) 905 is performed.

The video decoding apparatus 100 may determine setPos by multiplying i by 16 (i << 4). setPos may indicate the index information of the transform coefficient located at the first position of the coefficient group.

When i indicates the last coefficient group (i == lastSet), the video decoding apparatus 100 may determine n as the value obtained by subtracting setPos from lastScanPos (lastScanPos - setPos); otherwise, the video decoding apparatus 100 may determine n as 15. While n is greater than or equal to 0, the video decoding apparatus 100 may perform the operation included in the repeat statement (for statement) 910 (the operation related to sig_flag) while decreasing n by 1, until n becomes a value smaller than 0. At this time, n may be index information indicating the position of the transform coefficient within the coefficient group in the forward scanning order.

For example, when the total number of transform coefficients included in the scan region is not an integer multiple of 16, the value obtained by subtracting setPos from lastScanPos for the coefficient group i scanned last in the forward scanning order has a value greater than or equal to 0 and less than 15, and that value may be determined as n before the repeat statement (for statement) 910 is executed. The video decoding apparatus 100 may determine the position blkpos of the current transform coefficient as the value of n added to the index information setPos, which indicates the position of the first coefficient of the current coefficient group in the ScanOrder arrangement. The ScanOrder arrangement may be an arrangement of the positions of the transform coefficients in the forward scan order.

The video decoding apparatus 100 may determine the value sx of the horizontal-direction (x-axis direction) coordinate of the transform coefficient currently being scanned based on the position blkpos of the current transform coefficient and the width of the scan region. The video decoding apparatus 100 may determine the value sy of the vertical-direction (y-axis direction) coordinate based on the position blkpos of the transform coefficient currently being decoded and the width log2width of the scan region.
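
Assuming that blkpos indexes the scan region row by row and that the region width is a power of two with base-2 logarithm log2width, the coordinates could be recovered as in the sketch below; this mapping is an assumption consistent with the use of log2width above, not a quotation of the syntax.

```python
def coordinates_from_position(blkpos, log2width):
    """Map a row-major position inside a scan region of width 2**log2width to
    its horizontal (sx) and vertical (sy) coordinates."""
    sx = blkpos & ((1 << log2width) - 1)  # blkpos modulo the region width
    sy = blkpos >> log2width              # blkpos divided by the region width
    return sx, sy


# Example: in a region of width 8 (log2width = 3), position 19 lies at (3, 2).
print(coordinates_from_position(19, 3))  # (3, 2)
```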

In the case where sx is 0, sy is srY, and is_last_y is 0 (sx == 0 && sy == srY && is_last_y == 0), or in the case where sy is 0, sx is srX, and is_last_x is 0 (sy == 0 && sx == srX && is_last_x == 0), the video decoding apparatus 100 may not obtain the significant transform coefficient flag sig_flag (sig_flag[blkpos]) related to the current transform coefficient from the bitstream, but may instead determine the value of sig_flag related to the current transform coefficient as 1. Here, is_last_x may be a value indicating whether there is a significant transform coefficient whose absolute value is greater than 0 among the transform coefficients scanned before the current transform coefficient, among the transform coefficients included in the rightmost column of the scan region. is_last_y may be a value indicating whether there is a significant transform coefficient whose absolute value is greater than 0 among the transform coefficients scanned before the current transform coefficient, among the transform coefficients included in the lowermost row of the scan region.

When neither of the above conditions is satisfied, that is, when it is not the case that sx is 0, sy is srY, and is_last_y is 0, and it is not the case that sy is 0, sx is srX, and is_last_x is 0, the video decoding apparatus 100 may obtain the significant transform coefficient flag sig_flag (sig_flag[blkpos]) related to the transform coefficient currently being decoded from the bitstream.

When the significant transform coefficient flag sig_flag of the current transform coefficient is 1, the video decoding apparatus 100 may set is_last_x to 1 if the x-axis direction coordinate value sx of the current transform coefficient is equal to srX (sx == srX), and may set is_last_y to 1 if the y-axis direction coordinate value sy of the current transform coefficient is equal to srY (sy == srY).

Also, when lastSigScanPos is-1 (i.e., is an initial value), the video decoding apparatus 100 may determine lastSigScanPos as n. In other words, since the index n indicating the position where the first significant transform coefficient of the coefficient group in the reverse scan order exists is determined as lastSigScanPos, lastSigScanPos may be index information indicating the position where the last significant transform coefficient of the coefficient group in the forward scan order exists. The video decoding apparatus 100 may determine firstSigScanPos as n. When the significant transform coefficient flag sig _ flag of the current transform coefficient is 1, the firstSigScanPos is continuously updated to index information (i.e., n) indicating the position of the current transform coefficient, and thus, when an operation is finally performed on the transform coefficients included in a specific coefficient group i, the firstSigScanPos may be index information indicating the position where the first significant transform coefficient exists within the coefficient group in the forward scanning order.

The video decoding apparatus 100 increases cnt_nz by 1. In other words, cnt_nz, which indicates the number of coefficients having a value other than 0 within the current block (or scan region), is updated every time the current transform coefficient is a significant transform coefficient. Also, the video decoding apparatus 100 increases cg_nz[i], which indicates the number of coefficients having a value other than 0 within the current coefficient group, by 1. In other words, cg_nz[i] is updated every time the current transform coefficient is a significant transform coefficient.
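
The inference rule for sig_flag described above can be condensed as follows; the sketch models only the condition itself and assumes that the surrounding parsing loop maintains sx, sy, is_last_x and is_last_y as described.

```python
def sig_flag_is_inferred(sx, sy, srX, srY, is_last_x, is_last_y):
    """True when sig_flag can be set to 1 without reading it from the bitstream.

    The lowermost row (sy == srY) and the rightmost column (sx == srX) of the
    scan region each contain at least one significant coefficient by
    construction, so when the last remaining coefficient of that row or column
    is reached and none has been found there yet, it must be significant.
    """
    last_chance_in_bottom_row = (sx == 0 and sy == srY and is_last_y == 0)
    last_chance_in_right_column = (sy == 0 and sx == srX and is_last_x == 0)
    return last_chance_in_bottom_row or last_chance_in_right_column


print(sig_flag_is_inferred(sx=0, sy=4, srX=6, srY=4, is_last_x=1, is_last_y=0))  # True
```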

When i indicates the last coefficient group (i == lastSet), the video decoding apparatus 100 may determine n as the value obtained by subtracting setPos from lastScanPos (lastScanPos - setPos); otherwise, the video decoding apparatus 100 may determine n as 15. While n is greater than or equal to 0, the video decoding apparatus 100 may perform the operations included in the repeat statement (for statement) 915 (the operations related to coeff_abs_level_greater1_flag and coeff_abs_level_greater2_flag) while decreasing n by 1, until n becomes a value smaller than 0.

The video decoding apparatus 100 may determine the position blkpos of the current transform coefficient as the value of n added to the index information setPos, which indicates the position of the first coefficient of the current coefficient group in the ScanOrder arrangement.

The video decoding apparatus 100 may obtain the absolute value abs_coef of the current transform coefficient based on the information gt0_flag, gt1_flag, gt2_flag, and the remaining absolute value level related to the current transform coefficient.

When the significant transform coefficient flag sig_flag[blkpos] of the current transform coefficient is 1, if the sum of cnt_g1 (i.e., the number of transform coefficients having an absolute value greater than 1 among the transform coefficients included in the previously scanned coefficient groups) and c1 (i.e., the number of transform coefficients for which coeff_abs_level_greater1_flag has been obtained from the bitstream among the previously scanned transform coefficients within the current coefficient group) is less than num_gt1 (i.e., the maximum number of transform coefficients for which coeff_abs_level_greater1_flag can be obtained from the bitstream), the video decoding apparatus 100 may obtain coeff_abs_level_greater1_flag for the current transform coefficient from the bitstream. coeff_abs_level_greater1_flag may be a flag indicating whether the absolute value of the current transform coefficient is greater than 1. The video decoding apparatus 100 may determine the flag gt1_flag (gt1_flag[blkpos]) for the current transform coefficient based on coeff_abs_level_greater1_flag obtained from the bitstream.

When the value of the flag gt1_ flag of the current transform coefficient is 1, the video decoding device 100 may increase c1 by 1. In the case where the flag gt1_ flag of the current transform coefficient is 1, the value of c1 is increased by 1 to update c1, and thus, when an operation of a repeat statement (for statement) is performed on the subsequent transform coefficients, c1 may indicate the number of transform coefficients that obtain coeff _ abs _ level _ granularity 1_ flag from the bitstream among the previously scanned transform coefficients within the current coefficient group.

When the significant transform coefficient flag sig_flag[blkpos] of the current transform coefficient is 1, if the sum of cnt_g2 (i.e., the number of transform coefficients whose absolute values are greater than 2 among the transform coefficients within the previously scanned coefficient groups) and c2 (i.e., the number of transform coefficients for which coeff_abs_level_greater2_flag has been obtained from the bitstream among the previously scanned transform coefficients within the current coefficient group) is less than num_gt2 (i.e., the maximum number of transform coefficients for which coeff_abs_level_greater2_flag can be obtained from the bitstream), the video decoding apparatus 100 may obtain coeff_abs_level_greater2_flag for the current transform coefficient from the bitstream. coeff_abs_level_greater2_flag may be a flag indicating whether the absolute value of the current transform coefficient is greater than 2. The video decoding apparatus 100 may determine the flag gt2_flag for the current transform coefficient based on coeff_abs_level_greater2_flag obtained from the bitstream.

When the value of the flag gt2_ flag of the current transform coefficient is 1, the video decoding device 100 may increase the number c2 of the information gt2_ flag of the current transform coefficient by 1. When the flag gt2_ flag of the current transform coefficient is 1, the value of c2 is increased by 1 to update c2, and thus, when an operation of a repeat statement (for statement) is performed on the following transform coefficients, c2 may indicate the number of transform coefficients, of which coeff _ abs _ level _ granularity 1_ flag is obtained from the bitstream, among the previously scanned transform coefficients within the current coefficient group.

Also, the video decoding apparatus 100 may determine escapeDataPresent to be 1. escapeDataPresent may be a value indicating the presence of information (e.g., the remaining level absolute value information coeff_abs_level_remaining) that needs to be additionally obtained in order to determine the values of the transform coefficients within the current coefficient group.

When the significant transform coefficient flag sig_flag[blkpos] of the current transform coefficient is 1, the video decoding apparatus 100 may determine escapeDataPresent to be 1 if the sum of cnt_g2 (i.e., the number of transform coefficients whose absolute values are greater than 2 among the transform coefficients within the previously scanned coefficient groups) and c2 (i.e., the number of transform coefficients for which coeff_abs_level_greater2_flag has been obtained from the bitstream among the previously scanned transform coefficients within the current coefficient group) is not less than num_gt2 (i.e., the maximum number of transform coefficients for which coeff_abs_level_greater2_flag can be obtained from the bitstream).

Likewise, when the sum of cnt_g1 (i.e., the number of transform coefficients whose absolute values are greater than 1 among the transform coefficients included in the previously scanned coefficient groups) and c1 (i.e., the number of transform coefficients for which coeff_abs_level_greater1_flag has been obtained from the bitstream among the previously scanned transform coefficients within the current coefficient group) is not less than num_gt1 (i.e., the maximum number of transform coefficients for which coeff_abs_level_greater1_flag can be obtained from the bitstream), the video decoding apparatus 100 may determine escapeDataPresent to be 1.

When escapeDataPresent is 1, the video decoding apparatus 100 may determine n as the value obtained by subtracting setPos from lastScanPos (lastScanPos - setPos) when i indicates the last coefficient group (i == lastSet); otherwise, the video decoding apparatus 100 may determine n as 15. While n is greater than or equal to 0, the video decoding apparatus 100 may perform the operation included in the repeat statement (for statement) 920 (the operation related to coeff_abs_level_remaining) while decreasing n by 1, until n becomes a value smaller than 0.

The video decoding apparatus 100 may determine the position blkpos of the current transform coefficient as the value of n added to the index information setPos, which indicates the position of the first coefficient of the current coefficient group in the ScanOrder arrangement.

When the absolute value of the current transform coefficient is greater than 0 (sig_flag[blkpos] is 1), the video decoding apparatus 100 may determine the base level. When cnt_gt1 (i.e., the number of transform coefficients whose absolute values are greater than 1 among the transform coefficients scanned before the current transform coefficient within the current coefficient group) is less than num_gt1 (i.e., the maximum number of transform coefficients for which coeff_abs_level_greater1_flag can be obtained from the bitstream), the video decoding apparatus 100 may determine the base level to be 3 if cnt_gt2 (i.e., the number of transform coefficients whose absolute values are greater than 2 among the transform coefficients scanned before the current transform coefficient within the current coefficient group) is less than num_gt2 (i.e., the maximum number of transform coefficients for which coeff_abs_level_greater2_flag can be obtained from the bitstream), and may determine the base level to be 2 if cnt_gt2 is not less than num_gt2. When cnt_gt1 is not less than num_gt1, the video decoding apparatus 100 may determine the base level to be 1.

The video decoding apparatus 100 may determine the absolute value level abs_coef of the current transform coefficient as the value obtained by adding 1 to the sum of gt1_flag and gt2_flag of the current transform coefficient. When the absolute value of the current transform coefficient is equal to the base level (abs_coef[blkpos] == base_level), the video decoding apparatus 100 may obtain the remaining absolute value level coeff_abs_level_remaining for the current transform coefficient from the bitstream. The video decoding apparatus 100 may determine the absolute value level abs_coef of the current transform coefficient by adding the remaining absolute value level coeff_abs_level_remaining of the current transform coefficient to abs_coef.

When the absolute value level abs _ coef for the current transform coefficient is greater than 2, the video decoding apparatus 100 may increase cnt _ gt2 by 1. Also, when the absolute value level abs _ coef for the current transform coefficient is greater than 1, the video decoding apparatus 100 may increase cnt _ gt1 by 1.
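
The base-level rule and the level reconstruction described above can be written, for a single significant coefficient, as in the sketch below; it assumes that gt1_flag and gt2_flag take the values 0 or 1 and that coeff_abs_level_remaining has already been decoded.

```python
def base_level(cnt_gt1, cnt_gt2, num_gt1, num_gt2):
    """Base level of the current significant coefficient as described above."""
    if cnt_gt1 < num_gt1:
        # gt1_flag was coded; the base level is 3 if gt2_flag was also coded, else 2.
        return 3 if cnt_gt2 < num_gt2 else 2
    return 1  # neither flag was coded for this coefficient


def absolute_level(gt1_flag, gt2_flag, base, coeff_abs_level_remaining):
    """Absolute level: 1 + gt1_flag + gt2_flag, plus the remaining level when
    the partially reconstructed value reaches the base level."""
    abs_coef = 1 + gt1_flag + gt2_flag
    if abs_coef == base:
        abs_coef += coeff_abs_level_remaining
    return abs_coef


# Example: gt1_flag = 1, gt2_flag = 1, base level 3, remaining level 4 -> |level| = 7.
print(absolute_level(1, 1, base_level(0, 0, 8, 1), 4))  # 7
```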

If escapeDataPresent is 0, when i indicates the last coefficient group (i == lastSet), n may be determined as the value obtained by subtracting setPos from lastScanPos (lastScanPos - setPos); otherwise, n is determined as 15. While n is greater than or equal to 0, the video decoding apparatus 100 may perform the operation included in the repeat statement (for statement) 925 (the operation related to abs_coef) while decreasing n by 1, until n becomes a value smaller than 0. The position blkpos of the current transform coefficient may be determined as the value of n added to the index information setPos indicating the position of the first coefficient of the current coefficient group in the ScanOrder arrangement.

The video decoding apparatus 100 may obtain the absolute value abs_coef of the current transform coefficient based on the information gt0_flag, gt1_flag, gt2_flag, and the remaining absolute value level related to the current transform coefficient.

When the significant transform coefficient flag for the current transform coefficient is 1, the video decoding apparatus 100 may increase cnt _ gt2 by 1 if the absolute value for the current transform coefficient (i.e., abs _ coef) is greater than or equal to 2. When the significant transform coefficient flag for the current transform coefficient is 1, cnt _ gt1 may be incremented by 1 if the absolute value for the current transform coefficient (i.e., abs _ coef) is greater than or equal to 1.

When the difference between the position lastSigScanPos of the last significant transform coefficient and the position firstSigScanPos of the first significant transform coefficient within the current coefficient group i, scanned in the forward scan order, is greater than 3, the video decoding apparatus 100 may determine the value signHidden[i] to be 1, signHidden[i] indicating that at least one sign is hidden in the current coefficient group i. When the difference between the position lastSigScanPos of the last significant transform coefficient and the position firstSigScanPos of the first significant transform coefficient within the current coefficient group i is not greater than 3, the video decoding apparatus 100 may determine signHidden[i] to be 0.

The video decoding apparatus 100 may initialize i to 0 and, while i is less than or equal to lastSet, may perform the operation included in the repeat statement (for statement) 930 while increasing i by 1. The video decoding apparatus 100 may perform the operation included in the repeat statement (for statement) 930 until i becomes a value larger than lastSet. At this time, i may be an index indicating a coefficient group. In other words, the video decoding apparatus 100 performs an operation on one coefficient group each time the operation included in the repeat statement (for statement) 930 is performed.

If RSP (residual sign prediction) is applied (rsp_apply), when i indicates the last coefficient group (i == lastSet), the video decoding apparatus 100 may determine n as the value obtained by subtracting setPos from lastScanPos (lastScanPos - setPos); otherwise, the video decoding apparatus 100 may determine n as 15. While n is greater than or equal to 0, the video decoding apparatus 100 may perform the operation included in the repeat statement (for statement) 935 while decreasing n by 1, until n becomes a value smaller than 0.

The video decoding apparatus 100 may determine the position blkpos of the transform coefficient currently being decoded as a value obtained by adding the index information setPos indicating the position of the first coefficient of the current coefficient group in the ScanOrder arrangement in the forward scanning order to the current n.

When the position blkpos of the transform coefficient currently being decoded is not rsp _ pos but is the position hidden _ pos of the transform coefficient whose sign is hidden, the video decoding apparatus 100 may determine information sign [ blkpos ] regarding the sign of the transform coefficient currently being decoded as information hidden _ sign regarding the sign of the hidden transform coefficient.

When the position of the transform coefficient currently being decoded is not the position hidden_pos of the transform coefficient whose sign is concealed, the video decoding apparatus 100 may obtain the information sign[blkpos] on the sign of the transform coefficient currently being decoded from the bitstream.

When the position blkpos of the transform coefficient currently being decoded is rsp _ pos, the video decoding apparatus 100 may obtain information sign _ rsp [ blkpos ] related to the rsp symbol of the transform coefficient currently being decoded from the bitstream.

If RSP is not applied, when i indicates the last coefficient group (i == lastSet), the video decoding apparatus 100 may determine n as the value obtained by subtracting setPos from lastScanPos (lastScanPos - setPos); otherwise, the video decoding apparatus 100 may determine n as 15. While n is greater than or equal to 0, the video decoding apparatus 100 may perform the operation included in the repeat statement (for statement) 940 while decreasing n by 1, until n becomes a value smaller than 0.

The video decoding apparatus 100 can determine the position blkpos of the current transform coefficient as a value of index information setPos indicating the position of the first coefficient in the current coefficient group in the ScanOrder arrangement in the forward direction plus the current n.

When the value of the significant transform coefficient flag of the current transform coefficient is 1, the video decoding apparatus 100 may obtain the information sign[blkpos] related to the sign of the current transform coefficient from the bitstream when the value of the flag sign_data_suppressing_enabled_flag indicating whether concealment of sign data is activated is 0, or when no sign is hidden for the current coefficient group (!signHidden[i]), or when the position of the current transform coefficient within the current coefficient group is not the position firstSigScanPos of the first significant transform coefficient in the forward scan order.
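
The condition above boils down to the following sketch for one significant coefficient; the helper name is hypothetical and the boolean arguments stand in for the flags described in this paragraph.

```python
def sign_is_read_from_bitstream(sig_flag, sign_hiding_enabled, sign_hidden_in_group,
                                n, first_sig_scan_pos):
    """A sign bit is parsed for a significant coefficient unless sign hiding is
    enabled, applies to the current coefficient group, and the coefficient is
    the one at firstSigScanPos (whose sign is recovered from the level parity)."""
    if sig_flag != 1:
        return False
    hidden_here = (sign_hiding_enabled and sign_hidden_in_group
                   and n == first_sig_scan_pos)
    return not hidden_here


print(sign_is_read_from_bitstream(1, True, True, n=5, first_sig_scan_pos=5))  # False
```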

Fig. 9d to 9f are diagrams for explaining a residual coding syntax structure according to another embodiment.

Referring to fig. 9d to 9f, the video decoding apparatus 100 may scan syntax element information related to transform coefficients, and decode the transform coefficients based on the scanned syntax element information to restore the transform coefficients.

First, the video decoding apparatus 100 may obtain, from the bitstream, syntax element information last_sig_coeff_x and last_sig_coeff_y indicating coordinates specifying the scan region. At this time, the syntax element information last_sig_coeff_x may indicate the horizontal-direction (x-axis direction) coordinate value of the significant transform coefficient located last in the forward scanning order within the current block, with reference to the coordinates of the upper left corner of the current block, and the syntax element information last_sig_coeff_y may indicate the vertical-direction (y-axis direction) coordinate value of the significant transform coefficient located last in the forward scanning order within the current block, with reference to the coordinates of the upper left corner of the current block. The video decoding apparatus 100 may determine the coordinates LastSigCoeffX and LastSigCoeffY specifying the scan region based on the syntax element information last_sig_coeff_x and last_sig_coeff_y.

The video decoding apparatus 100 may initialize the position lastScanPos of the last significant transform coefficient in the forward scan order within a sub-block to 16. The video decoding apparatus 100 may initialize lastSubBlock, which indicates the sub-block in which the significant transform coefficient located last in the forward scanning order is located. At this time, the initialized value may indicate the last sub-block, in the forward scan order, among the sub-blocks of the current block. In other words, lastSubBlock may be initialized, based on the height and width of the current block, to a value indicating the sub-block located last among the sub-blocks scanned in the forward scanning order.

The video decoding apparatus 100 may perform the operation within the do-while loop statement 950 while the horizontal-direction position xC of the current coefficient is not LastSigCoeffX or the vertical-direction position yC of the current coefficient is not LastSigCoeffY, and may stop performing the operation within the do-while loop statement 950 when xC is LastSigCoeffX and yC is LastSigCoeffY.

If lastScanPos is 0, the video decoding apparatus 100 may set lastScanPos to 16 and decrease lastSubBlock by 1. In other words, when the scanning within one sub-block in the reverse scanning order is finished (i.e., when lastScanPos is 0), lastScanPos is re-initialized to 16 and lastSubBlock is decreased by 1 so that the next sub-block in the reverse scanning order is scanned. The video decoding apparatus 100 may scan the transform coefficients within the current sub-block lastSubBlock in the reverse scan order while decreasing lastScanPos by 1.

The video decoding apparatus 100 may determine the horizontal-direction position xS and the vertical-direction position yS of the current sub-block. The video decoding apparatus 100 may determine the position of the current sub-block based on the ScanOrder arrangement determined in a predetermined forward scan order for the current block. In other words, the video decoding apparatus 100 may determine the position of the current sub-block based on the height log2height and the width log2width of the current block, a scan index scanIdx indicating the predetermined scan order, and the value lastSubBlock indicating the current sub-block. At this time, xS and yS may be the position of the sub-block when each sub-block is regarded as one pixel, not the position of the actual sub-block. In other words, regardless of the width and height of the sub-blocks, the difference in the horizontal-direction position or the vertical-direction position between adjacent sub-blocks may be 1.

Also, the video decoding apparatus 100 may determine the horizontal-direction position xC and the vertical-direction position yC of the current transform coefficient based on the ScanOrder arrangement determined in a predetermined forward scanning order for the current sub-block. In other words, the video decoding apparatus 100 may determine the position (xC, yC) of the current transform coefficient based on the height and width of the current sub-block (log2 of 4, i.e., 2), the scan index scanIdx indicating the predetermined scan order, the position (xS, yS) of the current sub-block, and the value lastScanPos indicating the position of the current transform coefficient within the current sub-block.

When the horizontal-direction position xC of the current transform coefficient is LastSigCoeffX and the vertical-direction position yC of the current transform coefficient is LastSigCoeffY, the video decoding apparatus 100 no longer performs the operation within the do-while loop statement 950. Therefore, lastScanPos determined by the do-while loop statement 950 may be index information indicating the position of the last significant transform coefficient within the sub-block that includes the last significant transform coefficient of the current block in the forward scanning order, and lastSubBlock may be index information indicating the sub-block in which the last significant transform coefficient of the current block is located.
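
In simplified form, the do-while loop statement 950 performs the backwards search sketched below. The callables subblock_pos and coeff_pos stand in for the ScanOrder arrangements of the block and of a 4 × 4 sub-block, and the sketch assumes that the target coordinates (LastSigCoeffX, LastSigCoeffY) really occur in the block.

```python
def find_last_position(last_sig_x, last_sig_y, block_w, block_h, subblock_pos, coeff_pos):
    """Return (lastSubBlock, lastScanPos) for the coefficient located at
    (last_sig_x, last_sig_y), mirroring the do-while search described above.

    subblock_pos(s) -> (xS, yS) and coeff_pos(n, xS, yS) -> (xC, yC) are
    placeholders for the ScanOrder arrangements used in the syntax.
    """
    last_scan_pos = 16
    last_sub_block = (block_w // 4) * (block_h // 4) - 1
    while True:
        if last_scan_pos == 0:
            last_scan_pos = 16
            last_sub_block -= 1
        last_scan_pos -= 1
        xS, yS = subblock_pos(last_sub_block)
        xC, yC = coeff_pos(last_scan_pos, xS, yS)
        if xC == last_sig_x and yC == last_sig_y:
            return last_sub_block, last_scan_pos
```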

The video decoding apparatus 100 may initialize i to lastSubBlock and, while i is greater than or equal to 0, may perform the operation included in the repeat statement (for statement) 960 while decreasing i by 1. The video decoding apparatus 100 may perform the operation included in the repeat statement (for statement) 960 until i becomes a value smaller than 0. At this time, i may be an index indicating a sub-block. In other words, the video decoding apparatus 100 may perform an operation on one sub-block each time the operation in the repeat statement (for statement) 960 is performed.

The video decoding apparatus 100 may determine a position xS in the horizontal direction and a position yS in the vertical direction of the current sub-block.

When i indicating the current sub-block is smaller than the value lastSubBlock indicating the sub-block in which the last significant transform coefficient is located and greater than the value 0 indicating the sub-block including the DC coefficient, the video decoding apparatus 100 may obtain coded_sub_block_flag[xS][yS] for the current sub-block i. coded_sub_block_flag[xS][yS] may be flag information indicating whether at least one significant transform coefficient is included in the current sub-block i.

The video decoding apparatus 100 may initialize n to lastScanPos - 1 when the current sub-block is the sub-block lastSubBlock including the last significant transform coefficient, and otherwise initialize n to 15. While n is greater than or equal to 0, the video decoding apparatus 100 may perform the operation included in the repeat statement (for statement) 960 while decreasing n by 1, until n becomes a value smaller than 0. At this time, n may be an index indicating the position of the current transform coefficient within the sub-block. In other words, the video decoding apparatus 100 may perform an operation on one transform coefficient (an operation related to sig_coeff_flag) each time the operation included in the repeat statement (for statement) 960 is performed.

The video decoding apparatus 100 may determine the positions xC and yC of the current transform coefficient n.

When coded_sub_block_flag[xS][yS] of the current subblock is 1 and either n is greater than 0 or inferSbDcSigCoeffFlag is 0, the video decoding apparatus 100 may obtain a flag sig_coeff_flag[xC][yC] for the current transform coefficient from the bitstream. sig_coeff_flag[xC][yC] may be a flag indicating whether the current transform coefficient is a significant transform coefficient.

If the value of sig_coeff_flag[xC][yC] is 1, inferSbDcSigCoeffFlag may be determined to be 0.
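
The significance pass described above may be sketched as follows; read_flag() is a hypothetical stand-in for context-based binary arithmetic decoding of one bin, and the handling of the inferred DC significance is an assumption modeled on the condition just described, not the normative process.

```python
import random

# Hedged sketch of the significance-flag pass within one sub-block.
def parse_sig_flags(read_flag, coded_sub_block_flag, is_last_sub_block,
                    last_scan_pos, infer_sb_dc_sig_coeff_flag):
    sig = [0] * 16
    start = last_scan_pos - 1 if is_last_sub_block else 15
    for n in range(start, -1, -1):                       # reverse scan order
        if coded_sub_block_flag and (n > 0 or not infer_sb_dc_sig_coeff_flag):
            sig[n] = read_flag()                         # sig_coeff_flag[xC][yC]
            if sig[n]:
                infer_sb_dc_sig_coeff_flag = 0
        elif coded_sub_block_flag and n == 0 and infer_sb_dc_sig_coeff_flag:
            sig[n] = 1                                   # DC significance inferred
    if is_last_sub_block:
        sig[last_scan_pos] = 1                           # last coefficient is significant by definition
    return sig

print(parse_sig_flags(lambda: random.randint(0, 1), 1, False, 0, 1))
```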

The video decoding apparatus 100 may initialize the value of firstSigScanPos, the position of the significant transform coefficient that is scanned first in the forward scanning order within the current subblock, and initialize the value of lastSigScanPos, the position of the significant transform coefficient that is scanned last in the forward scanning order within the current subblock. The video decoding apparatus 100 may also initialize the value of numGreater1Flag, which indicates the number of Greater1Flag values obtained, and the value of lastGreater1ScanPos, the position at which the last Greater1Flag is obtained in the forward scanning order.

The video decoding apparatus 100 may initialize cg_nz[i], which indicates the number of transform coefficients that are not 0 within the sub-block i.

The video decoding apparatus 100 may initialize n to 15, and when n is greater than or equal to 0, may perform an operation within the repeat sentence (for sentence) 965 while decreasing n by 1 until n is less than 0.

The video decoding apparatus 100 may determine the positions xC and yC of the current transform coefficient n.

When the significant transform coefficient flag sig_coeff_flag for the current transform coefficient is 1 and numGreater1Flag is less than 8, the video decoding apparatus 100 may obtain coeff_abs_level_greater1_flag[n] for the current transform coefficient n and increase the value of numGreater1Flag by 1. coeff_abs_level_greater1_flag[n] may be flag information indicating whether the absolute value of the level of the transform coefficient n is greater than 1. The video decoding apparatus 100 may determine Greater1Flag[n] based on coeff_abs_level_greater1_flag[n].

If the value of coeff_abs_level_greater1_flag for the current transform coefficient is 1 and the value of lastGreater1ScanPos is the initial value, the video decoding apparatus 100 may determine lastGreater1ScanPos as the position n of the current transform coefficient within the current sub-block. Thus, lastGreater1ScanPos may be index information indicating the position, last in the forward scanning order within the current subblock, of a coefficient whose coeff_abs_level_greater1_flag has a value of 1. Otherwise (when the value of coeff_abs_level_greater1_flag for the current transform coefficient is 1 and the value of lastGreater1ScanPos is not the initial value), escapeDataPresent may be determined to be 1.

If numGreater1Flag is not less than 8, the video decoding apparatus 100 may determine escapeDataPresent as 1.
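
Combining the three preceding paragraphs, a minimal sketch of this greater-than-1 pass might look as follows; read_flag() is again a hypothetical bin-reading helper, and the use of -1 as the "initial value" of lastGreater1ScanPos is an assumption made for the example.

```python
import random

# Hedged sketch of the greater-than-1 pass: at most 8 greater1 flags are read
# per sub-block; beyond that budget, or for a second coefficient whose level
# exceeds 1, the magnitude must be carried by coeff_abs_level_remaining,
# which is signalled here by escape_data_present.
def parse_greater1(read_flag, sig):
    greater1 = [0] * 16
    num_greater1 = 0
    last_greater1_scan_pos = -1          # assumed "initial value"
    escape_data_present = False
    for n in range(15, -1, -1):          # reverse scan order
        if not sig[n]:
            continue
        if num_greater1 < 8:
            greater1[n] = read_flag()    # coeff_abs_level_greater1_flag[n]
            num_greater1 += 1
            if greater1[n]:
                if last_greater1_scan_pos == -1:
                    last_greater1_scan_pos = n
                else:
                    escape_data_present = True
        else:
            escape_data_present = True
    return greater1, last_greater1_scan_pos, escape_data_present

sig = [1] * 16
print(parse_greater1(lambda: random.randint(0, 1), sig))
```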

When lastSigScanPos indicating the position of the significant transform coefficient located last in the forward scan order within the current sub-block has an initial value, the video decoding apparatus 100 may determine lastSigScanPos as the position n of the transform coefficient within the current sub-block.

The video decoding apparatus 100 may determine firstSigScanPos, indicating the position of the significant transform coefficient located first in the forward scanning order within the current sub-block, as the position n of the transform coefficient within the current sub-block. When the significant transform coefficient flag sig_coeff_flag[xC][yC] of the current transform coefficient is 1, the value of firstSigScanPos is continuously updated to the index information n indicating the position of the current transform coefficient within the current subblock; thus, once the operation has been performed for the last transform coefficient processed in the current subblock i, firstSigScanPos may be the index information indicating the position where the first significant transform coefficient exists within the subblock in the forward scanning order.

When the difference between lastSigScanPos and firstSigScanPos is greater than 3, the video decoding apparatus 100 may determine the signHidden value to be 1, and when the difference between lastSigScanPos and firstSigScanPos is not greater than 3, the video decoding apparatus 100 may determine the signHidden value to be 0. At this time, signHidden may be a value indicating whether the sign of at least one transform coefficient of the current subblock is concealed.
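
A minimal sketch of this eligibility test, assuming only the threshold of 3 scan positions described above:

```python
# Sketch of the sign-hiding eligibility test: the sign of one coefficient may
# be hidden only when the significant coefficients of the sub-block span more
# than 3 positions in the scanning order.
def sign_hidden(first_sig_scan_pos, last_sig_scan_pos):
    return 1 if (last_sig_scan_pos - first_sig_scan_pos) > 3 else 0

print(sign_hidden(2, 9))   # 1: eligible for sign hiding
print(sign_hidden(5, 7))   # 0: not eligible
```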

When lastGreater1ScanPos, the position of the transform coefficient for which coeff_abs_level_greater1_flag (Greater1Flag) was last obtained from the bitstream in the forward scanning order, is not the initial value (i.e., when at least one Greater1Flag is obtained from the bitstream in the current subblock), the video decoding apparatus 100 may obtain a flag coeff_abs_level_greater2_flag for the transform coefficient at the lastGreater1ScanPos position from the bitstream. coeff_abs_level_greater2_flag may be a flag indicating whether the absolute value of a transform coefficient is greater than 2. When the flag coeff_abs_level_greater2_flag for the transform coefficient at the lastGreater1ScanPos position is 1, the video decoding apparatus 100 may determine escapeDataPresent to be 1. The video decoding apparatus 100 may determine Greater2Flag[n] based on coeff_abs_level_greater2_flag[n].

The video decoding apparatus 100 may initialize numSigCoeff. At this time, numSigCoeff may mean the number of significant transform coefficients among the transform coefficients scanned up to the current transform coefficient in the reverse scan order.

The video decoding apparatus 100 may initialize n to 15 and, while n is greater than or equal to 0, perform the operation in the for statement 970 and then reduce n by 1; when n is less than 0, the operation in the for statement 970 is no longer performed. The video decoding apparatus 100 may perform an operation of scanning information on the transform coefficients within the current subblock in the reverse scan order, and the like.

The video decoding apparatus 100 may determine the positions xC and yC of the current transform coefficient based on the position n of the current transform coefficient within the current sub-block.

When the value of the significant transform coefficient flag sig_coeff_flag[xC][yC] related to the current transform coefficient is 1, the video decoding apparatus 100 may determine the base level (baseLevel) as 1 plus the value of the flag coeff_abs_level_greater1_flag related to the current transform coefficient plus the value of the flag coeff_abs_level_greater2_flag of the current transform coefficient.

If numSigCoeff is less than 8, the position n of the current transform coefficient is lastGreater1ScanPos, and the base level (baseLevel) is 3, the video decoding apparatus 100 may obtain the remaining level value information coeff_abs_level_remaining[n] for the current transform coefficient n from the bitstream.

If numSigCoeff is less than 8, the position n of the current transform coefficient is not lastGreater1ScanPos, and the base level is 2, the video decoding apparatus 100 may obtain the remaining level value information coeff_abs_level_remaining[n] for the current transform coefficient n from the bitstream. If numSigCoeff is not less than 8 and the base level is 1, the video decoding apparatus 100 may obtain the remaining level value information coeff_abs_level_remaining[n] for the current transform coefficient n from the bitstream.

The video decoding apparatus 100 may add the base level of the current transform coefficient and the remaining level value of the current transform coefficient to determine the level value of the current transform coefficient within the current block, i.e., TransCoeffLevel[x0][y0][cIdx][xC][yC]. Here, cIdx may be an index indicating a color component.
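
A worked sketch of this level reconstruction for a single coefficient, under the assumption that the greater1/greater2 flags and the remaining value have already been parsed as described above:

```python
# Hedged sketch: absolute level of one coefficient = baseLevel + remaining,
# where baseLevel = 1 + coeff_abs_level_greater1_flag + coeff_abs_level_greater2_flag.
def coeff_abs_level(sig_flag, greater1_flag, greater2_flag, remaining):
    if not sig_flag:
        return 0                     # non-significant coefficients have level 0
    base_level = 1 + greater1_flag + greater2_flag
    return base_level + remaining

print(coeff_abs_level(1, 1, 1, 4))   # -> 7
print(coeff_abs_level(1, 0, 0, 0))   # -> 1
```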

The operation included in the repeat statement (for statement) 975 may be performed while i increases by 1 from 0 up to a value less than or equal to lastSubBlock. At this time, i may be an index indicating the subblock. In other words, the video decoding apparatus 100 may perform an operation on one coefficient group each time the statement in the for statement 975 is executed.

When the flag coded_sub_block_flag[xS][yS] for the current subblock i is 1 and rsp is applied (rsp_apply), the video decoding apparatus 100 may determine n as 15, perform the operation within the for statement 980 when n is greater than or equal to 0, and, after reducing n by 1, perform the operation within the for statement 980 again when n is still greater than or equal to 0. If n is less than 0, the operation in the for statement 980 may no longer be performed.

The video decoding apparatus 100 may determine the horizontal direction position and the vertical direction position xC and yC of the current transform coefficient corresponding to the position n of the current transform coefficient within the current sub-block. The video decoding apparatus 100 can determine the position blkpos of the current transform coefficient based on xC and yC.

When the position blkpos of the current transform coefficient is not rsp_pos but is the position hidden_pos of the transform coefficient whose sign is hidden, the video decoding apparatus 100 may determine the sign of the current transform coefficient as the sign hidden_sign of the hidden transform coefficient.

Otherwise (if blkpos is not the position hidden_pos of the transform coefficient whose sign is hidden), the video decoding apparatus 100 may obtain information sign[xC][yC] about the sign of the current transform coefficient from the bitstream.

When the position of the current transform coefficient is rsp_pos, the video decoding apparatus 100 may obtain information sign_rsp[xC][yC] about the rsp sign of the current transform coefficient from the bitstream.

If rsp is not applied, the video decoding apparatus 100 may determine n as 15, perform the operation in the for statement 985 when n is greater than or equal to 0, perform the operation in the for statement 985 again when n is still greater than or equal to 0 after decreasing n by 1, and may no longer perform the operation in the for statement 985 when n is less than 0.

The video decoding apparatus 100 may determine the position in the horizontal direction and the position in the vertical direction of the current transform coefficient, i.e., xC and yC, corresponding to the position n of the current transform coefficient within the current sub-block. The video decoding apparatus 100 can determine the position blkpos of the current transform coefficient based on xC and yC.

If the value of the significant transform coefficient flag sig_coeff_flag[xC][yC] of the current transform coefficient is 1, and either the value of the flag sign_data_suppressing_enabled_flag indicating whether concealment of sign data is activated is 0, the sign is not concealed for the current subblock (!signHidden[i]), or the position of the current transform coefficient in the forward scanning order within the current subblock is not the position of the first significant transform coefficient (firstSigScanPos), the sign information sign[xC][yC] of the current transform coefficient may be obtained from the bitstream.
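
The sign-reading decision above may be sketched as follows; read_sign() is a hypothetical stand-in for decoding one sign bin, and the reconstruction of the hidden sign itself (e.g., from a parity rule) is outside this sketch.

```python
import random

# Hedged sketch: a sign bin is read for every significant coefficient except
# the one whose sign is hidden (the first significant coefficient in forward
# scan order, when sign hiding is enabled and applied to the sub-block).
def parse_signs(read_sign, sig, sign_data_hiding_enabled, sign_hidden_flag,
                first_sig_scan_pos):
    signs = [0] * 16
    for n in range(15, -1, -1):                  # reverse scan order
        if sig[n] and (not sign_data_hiding_enabled or not sign_hidden_flag
                       or n != first_sig_scan_pos):
            signs[n] = read_sign()               # 0: positive, 1: negative (assumed convention)
        # Otherwise the sign at first_sig_scan_pos is not coded and is
        # reconstructed later as the hidden sign.
    return signs

sig = [0, 1, 0, 1, 1, 0, 0, 1] + [0] * 8
print(parse_signs(lambda: random.randint(0, 1), sig, True, True, 1))
```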

Hereinafter, a method of determining a data unit usable when the video decoding apparatus 100 according to an embodiment decodes an image will be described with reference to fig. 10 to 23. The operation of video encoding device 150 may be various embodiments similar to or the reverse of the operation of video decoding device 100 described below.

Fig. 10 illustrates a process of determining at least one coding unit as the video decoding apparatus 100 divides the current coding unit, according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may determine the shape of the coding unit by using the block shape information, and determine the shape into which the coding unit is divided by using the division shape information. In other words, the partition method of the coding unit indicated by the partition shape information may be determined based on what block shape is indicated by the block shape information used by the video decoding apparatus 100.

According to an embodiment, the video decoding apparatus 100 may use block shape information indicating that the current coding unit has a square shape. For example, the video decoding apparatus 100 may determine, according to the division shape information, whether to not divide the square coding unit, to divide it vertically, to divide it horizontally, or to divide it into four coding units. Referring to fig. 10, when the block shape information of the current coding unit 1000 indicates a square shape, the video decoding apparatus 100 may determine not to divide the coding unit 1010a having the same size as the current coding unit 1000, according to the division shape information indicating non-division, or may determine coding units 1010b, 1010c, 1010d, etc. divided based on the division shape information indicating a predetermined division method.

Referring to fig. 10, according to an embodiment, the video decoding apparatus 100 may determine two coding units 1010b by dividing the current coding unit 1000 in the vertical direction based on the division shape information indicating the division in the vertical direction. The video decoding apparatus 100 may determine two coding units 1010c by dividing the current coding unit 1000 in the horizontal direction based on the division shape information indicating the division in the horizontal direction. The video decoding apparatus 100 may determine the four coding units 1010d by dividing the current coding unit 1000 in the vertical and horizontal directions based on the division shape information indicating the division in the vertical and horizontal directions. However, the division shape into which the square coding unit can be divided is not limited to the above shape, and may include any shape that can be indicated by the division shape information. The predetermined division shape into which the square coding unit is divided will now be described in detail by various embodiments.
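
As an illustration of the square division shapes just described, a small sketch follows; the mode names and the (x, y, width, height) coordinate convention are assumptions made for the example and are not taken from the specification.

```python
# Illustrative sketch: mapping division shape information of a square coding
# unit to sub-unit rectangles (x, y, width, height).
def split_square(x, y, size, split_mode):
    half = size // 2
    if split_mode == "NO_SPLIT":
        return [(x, y, size, size)]
    if split_mode == "VER":          # two non-square units side by side
        return [(x, y, half, size), (x + half, y, half, size)]
    if split_mode == "HOR":          # two non-square units stacked vertically
        return [(x, y, size, half), (x, y + half, size, half)]
    if split_mode == "QUAD":         # four square units
        return [(x, y, half, half), (x + half, y, half, half),
                (x, y + half, half, half), (x + half, y + half, half, half)]
    raise ValueError(split_mode)

print(split_square(0, 0, 64, "VER"))
print(split_square(0, 0, 64, "QUAD"))
```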

Fig. 11 illustrates a process in which the video decoding apparatus 100 divides coding units having a non-square shape to determine at least one coding unit according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may use block shape information indicating that the current coding unit has a non-square shape. The video decoding apparatus 100 may determine, according to the division shape information, whether to not divide the non-square current coding unit or to divide it via a specific method. Referring to fig. 11, when the block shape information of the current coding unit 1100 or 1150 indicates a non-square shape, the video decoding apparatus 100 may determine not to divide the coding unit 1110 or 1160 having the same size as the current coding unit 1100 or 1150, according to the division shape information indicating non-division, or may determine coding units 1120a, 1120b, 1130a, 1130b, 1130c, 1170a, 1170b, 1180a, 1180b, and 1180c divided based on the division shape information indicating a specific division method. A specific partitioning method for partitioning a non-square coding unit will now be described in detail through various embodiments.

According to an embodiment, the video decoding apparatus 100 may determine the shape into which the coding unit is divided by using the division shape information, and in this case, the division shape information may indicate the number of at least one coding unit generated as the coding unit is divided. Referring to fig. 11, when the partition shape information indicates that the current coding unit 1100 or 1150 is divided into two coding units, the video decoding apparatus 100 may determine two coding units 1120a and 1120b or 1170a and 1170b included in the current coding unit 1100 or 1150 by dividing the current coding unit 1100 or 1150 based on the partition shape information.

According to an embodiment, when the video decoding apparatus 100 divides the current coding unit 1100 or 1150 having the non-square shape based on the division shape information, the video decoding apparatus 100 may divide the current coding unit 1100 or 1150 in consideration of the position of the long side of the current coding unit 1100 or 1150 having the non-square shape. For example, the video decoding apparatus 100 may determine a plurality of coding units by dividing the current coding unit 1100 or 1150 in a direction of dividing a long side of the current coding unit 1100 or 1150 in consideration of the shape of the current coding unit 1100 or 1150.

According to an embodiment, when the division shape information indicates that the coding unit is divided into an odd number of blocks, the video decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 1100 or 1150. For example, when the division shape information indicates that the current coding unit 1100 or 1150 is divided into three coding units, the video decoding apparatus 100 may divide the current coding unit 1100 or 1150 into three coding units 1130a, 1130b, and 1130c or 1180a, 1180b, and 1180c. According to an embodiment, the video decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 1100 or 1150, and the sizes of the determined coding units may not all be the same. For example, the size of the coding unit 1130b or 1180b among the odd-numbered coding units 1130a, 1130b, 1130c, 1180a, 1180b, and 1180c may be different from the size of the coding units 1130a, 1130c, 1180a, and 1180c. In other words, the coding units determined by dividing the current coding unit 1100 or 1150 may have various sizes, and the odd number of coding units 1130a, 1130b, 1130c, 1180a, 1180b, and 1180c may have different sizes according to circumstances.
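
A hedged sketch of such an odd division of a non-square unit along its long side follows; the 1:2:1 ratio is an assumption for illustration only, since the specification states merely that the center unit may differ in size.

```python
# Illustrative sketch: dividing a non-square unit into three units along its
# long side, with the center unit twice as large (assumed 1:2:1 ratio).
def ternary_split(x, y, w, h):
    if w >= h:                      # wide unit: split the horizontal side
        q = w // 4
        return [(x, y, q, h), (x + q, y, 2 * q, h), (x + 3 * q, y, q, h)]
    q = h // 4                      # tall unit: split the vertical side
    return [(x, y, w, q), (x, y + q, w, 2 * q), (x, y + 3 * q, w, q)]

print(ternary_split(0, 0, 32, 8))   # widths 8, 16, 8
print(ternary_split(0, 0, 8, 32))   # heights 8, 16, 8
```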

According to an embodiment, when the division shape information indicates that the coding unit is divided into an odd number of blocks, the video decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 1100 or 1150, and in addition, may set a specific limit on at least one coding unit among the odd number of coding units generated through the division. Referring to fig. 11, for the coding units 1130b and 1180b located at the center among the three coding units 1130a, 1130b, and 1130c or 1180a, 1180b, and 1180c generated by dividing the current coding unit 1100 or 1150, the decoding process performed by the video decoding apparatus 100 may be different from the decoding process performed for the other coding units 1130a, 1130c, 1180a, and 1180c. For example, unlike the other coding units 1130a, 1130c, 1180a, and 1180c, the video decoding apparatus 100 may limit the coding units 1130b and 1180b located at the center to be no longer divided, or to be divided only a certain number of times.

Fig. 12 illustrates a process in which the video decoding apparatus 100 divides a coding unit based on at least one of block shape information and division shape information according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may determine that the first coding unit 1200 having a square shape is divided or is not divided into coding units based on at least one of the block shape information and the division shape information. According to an embodiment, when the partition shape information indicates that the first encoding unit 1200 is divided in the horizontal direction, the video decoding apparatus 100 may determine the second encoding unit 1210 by dividing the first encoding unit 1200 in the horizontal direction. The first coding unit, the second coding unit, and the third coding unit used according to an embodiment are terms for indicating a relationship before and after dividing the coding unit. For example, the second coding unit may be determined by dividing the first coding unit, and the third coding unit may be determined by dividing the second coding unit. Hereinafter, it should be understood that the relationship between the first to third encoding units is consistent with the features described above.

According to an embodiment, the video decoding apparatus 100 may determine whether the determined second encoding unit 1210 is divided or is not divided into encoding units based on at least one of block shape information and division shape information. Referring to fig. 12, the video decoding apparatus 100 may divide a second encoding unit 1210, which has a non-square shape and is determined by dividing a first encoding unit 1200, into at least one third encoding unit 1220a, 1220b, 1220c, 1220d, etc., or may not divide the second encoding unit 1210, based on at least one of block shape information and division shape information. The video decoding apparatus 100 may obtain at least one of block shape information and division shape information and obtain a plurality of second coding units (e.g., second coding units 1210) having various shapes by dividing the first coding unit 1200 based on the obtained at least one of block shape information and division shape information, and the second coding units 1210 may be divided according to a method of dividing the first coding unit 1200 based on at least one of block shape information and division shape information. According to an embodiment, when the first encoding unit 1200 is divided into the second encoding units 1210 based on at least one of block shape information and division shape information regarding the first encoding unit 1200, the second encoding units 1210 may also be divided into third encoding units (e.g., 1220a, 1220b, 1220c, 1220d, etc.) based on at least one of block shape information and division shape information regarding the second encoding units 1210. In other words, the coding units may be recursively divided based on at least one of the division shape information and the block shape information about each coding unit. Thus, square coding units may be determined from a non-square coding unit, and such square coding units may be recursively divided to determine non-square coding units. Referring to fig. 12, a predetermined coding unit (e.g., a coding unit located at the center or a square coding unit) among the odd number of third coding units 1220b, 1220c, and 1220d determined by dividing the second coding unit 1210 having a non-square shape may be recursively divided. According to an embodiment, the third encoding unit 1220c having a square shape among the third encoding units 1220b, 1220c, and 1220d may be divided into a plurality of fourth encoding units in the horizontal direction. The fourth encoding unit 1240 having a non-square shape among the plurality of fourth encoding units may again be divided into a plurality of encoding units. For example, the fourth encoding unit 1240 having a non-square shape may be divided into an odd number of encoding units 1250a, 1250b, and 1250c.

Methods that may be used to recursively divide the coding units will be described below by various embodiments.
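
Before those embodiments, the recursion itself can be sketched abstractly as follows; decide_split() is a hypothetical stand-in for reading block shape and division shape information for each unit, and the quad split and the toy policy are assumptions made for the example.

```python
# Hedged sketch of recursive division: every unit produced by a split is
# queried again for its own division decision, until no further split occurs.
def split_recursive(unit, decide_split, split_fn, depth=0, max_depth=4):
    mode = decide_split(unit, depth)
    if mode == "NO_SPLIT" or depth >= max_depth:
        return [unit]
    leaves = []
    for child in split_fn(unit, mode):
        leaves.extend(split_recursive(child, decide_split, split_fn,
                                      depth + 1, max_depth))
    return leaves

def quad(unit, mode):
    x, y, size = unit
    h = size // 2
    return [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]

# Toy policy: split every square unit larger than 16 into four.
leaves = split_recursive((0, 0, 64),
                         lambda u, d: "QUAD" if u[2] > 16 else "NO_SPLIT",
                         quad)
print(len(leaves), leaves[:4])   # 16 leaf units of size 16
```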

According to an embodiment, the video decoding apparatus 100 may divide each of the third coding units 1220a, 1220b, 1220c, 1220d, etc. into coding units or determine not to divide the second coding unit 1210, based on at least one of the block shape information and the division shape information. According to an embodiment, the video decoding apparatus 100 may divide the second encoding unit 1210 having a non-square shape into an odd number of third encoding units 1220b, 1220c, and 1220d. The video decoding apparatus 100 may set a specific restriction on a predetermined third encoding unit among the third encoding units 1220b, 1220c, and 1220d. For example, the video decoding apparatus 100 may limit the third encoding unit 1220c located at the center of the third encoding units 1220b, 1220c, and 1220d from being divided any further, or may limit the number of times it can be divided. Referring to fig. 12, the video decoding apparatus 100 may restrict the third coding unit 1220c located at the center of the third coding units 1220b, 1220c, and 1220d included in the second coding unit 1210 having a non-square shape from being divided, restrict it to being divided into a predetermined divided shape (e.g., into four coding units or into a shape corresponding to the shape into which the second coding unit 1210 is divided), or restrict it to being divided only a predetermined number of times (e.g., only n times, where n > 0). However, such restrictions on the third encoding unit 1220c located at the center are merely examples and should not be construed as being limited to these embodiments, but should be construed to include various restrictions under which the third encoding unit 1220c located at the center can be decoded differently from the other third encoding units 1220b and 1220d.

According to an embodiment, the video decoding apparatus 100 may obtain at least one of block shape information and division shape information for dividing the current coding unit from a predetermined position in the current coding unit.

Fig. 13 illustrates a method in which the video decoding apparatus 100 determines a predetermined coding unit among an odd number of coding units according to an embodiment. Referring to fig. 13, at least one of block shape information and division shape information of the current coding unit 1300 may be obtained from a sample (e.g., a sample 1340 located at the center) at a predetermined position among a plurality of samples included in the current coding unit 1300. However, the predetermined position in the current coding unit 1300 at which at least one of the block shape information and the division shape information is obtained is not limited to the center position shown in fig. 13, but may be any position included in the current coding unit 1300 (e.g., the uppermost position, the lowermost position, the left position, the right position, the upper left end position, the lower left end position, the upper right end position, or the lower right end position, etc.). The video decoding apparatus 100 may determine whether or not to divide the current coding unit into coding units having various shapes and sizes by obtaining at least one of block shape information and division shape information from a predetermined position.

According to an embodiment, the video decoding apparatus 100 may select one coding unit when a current coding unit is divided into a predetermined number of coding units. The method of selecting one of the plurality of coding units may vary and will be described below by various embodiments.

According to an embodiment, the video decoding apparatus 100 may divide a current coding unit into a plurality of coding units and determine a coding unit at a predetermined position.

Fig. 13 illustrates a method of determining a coding unit at a predetermined position from an odd number of coding units by the video decoding apparatus 100 according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may determine the coding unit located at the center from the odd-numbered coding units using information indicating the position of each of the odd-numbered coding units. Referring to fig. 13, the video decoding apparatus 100 may determine odd-numbered coding units 1320a, 1320b, and 1320c by dividing a current coding unit 1300. The video decoding apparatus 100 may determine the center coding unit 1320b by using information on the positions of the odd number of coding units 1320a, 1320b, and 1320 c. For example, the video decoding apparatus 100 may determine the coding unit 1320b located at the center by determining the positions of the coding units 1320a, 1320b, and 1320c based on information indicating the positions of predetermined samples included in the coding units 1320a, 1320b, and 1320 c. In detail, the video decoding apparatus 100 may determine the coding unit 1320b located at the center by determining the positions of the coding units 1320a, 1320b, and 1320c based on information indicating the positions of the samples 1330a, 1330b, and 1330c at the upper left ends of the coding units 1320a, 1320b, and 1320 c.

According to an embodiment, the information indicating the positions of the samples 1330a, 1330b, and 1330c included at the upper left ends in the coding units 1320a, 1320b, and 1320c, respectively, may include information about the positions or coordinates of the coding units 1320a, 1320b, and 1320c in the picture. According to an embodiment, the information indicating the positions of the samples 1330a, 1330b, and 1330c respectively included at the upper left ends in the coding units 1320a, 1320b, and 1320c may include information indicating the widths or heights of the coding units 1320a, 1320b, and 1320c included in the current coding unit 1300, and the widths or heights may correspond to information indicating the differences between the coordinates of the coding units 1320a, 1320b, and 1320c in the picture. In other words, the video decoding apparatus 100 may determine the coding unit 1320b located at the center by directly using information on positions or coordinates of the coding units 1320a, 1320b, and 1320c in a picture or by using information on a width or height of the coding unit corresponding to a difference between the coordinates.

According to an embodiment, the information indicating the position of the sample 1330a at the upper left end of the upper end coding unit 1320a may indicate (xa, ya) coordinates, the information indicating the position of the sample 1330b at the upper left end of the center coding unit 1320b may indicate (xb, yb) coordinates, and the information indicating the position of the sample 1330c at the upper left end of the lower end coding unit 1320c may indicate (xc, yc) coordinates. The video decoding apparatus 100 may determine the center coding unit 1320b by using coordinates of samples 1330a, 1330b, and 1330c at the upper left ends included in the coding units 1320a, 1320b, and 1320c, respectively. For example, when the coordinates of the samples 1330a, 1330b, and 1330c at the upper left are arranged in an ascending or descending order, the coding unit 1320b including the coordinates (xb, yb) of the sample 1330b located at the center may be determined as the coding unit located at the center among the coding units 1320a, 1320b, and 1320c determined by dividing the current coding unit 1300. However, the coordinates indicating the positions of the samples 1330a, 1330b, and 1330c at the upper left end may represent coordinates indicating an absolute position in the screen, and in addition, coordinates (dxb, dyb) (i.e., information indicating the relative position of the sample 1330b at the upper left end of the center coding unit 1320 b) and (dxc, dyc) (i.e., information indicating the relative position of the sample 1330c at the upper left end of the lower end coding unit 1320 c) may be used based on the position of the sample 1330a at the upper left end of the upper end coding unit 1320 a. In addition, a method of determining a coding unit at a predetermined position by using coordinates of a sample as information indicating a position of the sample included in the coding unit should not be construed as being limited to the above-described method, and may be construed as various arithmetic methods capable of using the coordinates of the sample.

According to an embodiment, the video decoding apparatus 100 may divide the current coding unit 1300 into a plurality of coding units 1320a, 1320b, and 1320c, and select a coding unit from among the coding units 1320a, 1320b, and 1320c according to a predetermined criterion. For example, the video decoding apparatus 100 may select the coding unit 1320b, whose size differs from that of the other coding units, from among the coding units 1320a, 1320b, and 1320c.

According to an embodiment, the video decoding apparatus 100 may determine the widths or heights of the coding units 1320a, 1320b, and 1320c by using (xa, ya) coordinates, i.e., information indicating the position of the sample 1330a at the upper left end of the upper end coding unit 1320a, (xb, yb) coordinates, i.e., information indicating the position of the sample 1330b at the upper left end of the center coding unit 1320b, and (xc, yc) coordinates, i.e., information indicating the position of the sample 1330c at the upper left end of the lower end coding unit 1320c, respectively. The video decoding apparatus 100 may determine the sizes of the coding units 1320a, 1320b, and 1320c by using the coordinates (xa, ya), (xb, yb), and (xc, yc) of the information indicating the positions of the coding units 1320a, 1320b, and 1320c, respectively.

According to an embodiment, the video decoding apparatus 100 may determine the width of the upper end coding unit 1320a as xb-xa and the height as yb-ya. According to an embodiment, the video decoding apparatus 100 may determine the width of the center coding unit 1320b as xc-xb and the height as yc-yb. According to an embodiment, the video decoding apparatus 100 may determine the width or height of the lower end coding unit by using the width and height of the current coding unit and the widths and heights of the upper end coding unit 1320a and the center coding unit 1320b. The video decoding apparatus 100 may determine a coding unit having a size different from that of the other coding units based on the determined widths and heights of the coding units 1320a, 1320b, and 1320c. Referring to fig. 13, the video decoding apparatus 100 may determine the center coding unit 1320b, having a size different from the sizes of the upper end and lower end coding units 1320a and 1320c, as the coding unit at the predetermined position. However, the process of the video decoding apparatus 100 determining the coding unit having a size different from that of the other coding units is only one embodiment of determining the coding unit at the predetermined position by using the sizes of coding units determined based on sample coordinates, and thus various processes of determining the coding unit at the predetermined position by comparing the sizes of coding units determined according to predetermined sample coordinates may be used.
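
A hedged sketch of this size comparison for three vertically stacked units follows; it assumes the units share the width of the current coding unit (as in fig. 13), that their heights follow from the differences of the upper-left sample y-coordinates, and that the unit whose size differs from the others is the unit at the predetermined position.

```python
# Illustrative sketch: pick the differently sized (center) unit among three
# vertically stacked units 1320a/1320b/1320c from the y-coordinates of their
# upper-left samples (ya, yb, yc) and the height of the current unit.
def pick_different_unit(cur_height, ya, yb, yc):
    heights = [yb - ya, yc - yb, cur_height - (yc - ya)]   # upper, center, lower
    for idx, h in enumerate(heights):
        if heights.count(h) == 1:        # the size that occurs only once
            return idx, heights
    return 1, heights                    # fall back to the middle unit

print(pick_different_unit(32, 0, 8, 24))   # -> (1, [8, 16, 8]): center unit selected
```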

However, the position of the sample considered to determine the position of the coding unit is not limited to the upper left end as described above, and information on the position of any sample included in the coding unit may be used.

According to an embodiment, the video decoding apparatus 100 may select a coding unit at a predetermined position among an odd number of coding units determined by dividing the current coding unit, in consideration of the shape of the current coding unit. For example, when the current coding unit has a non-square shape whose width is longer than the height, the video decoding apparatus 100 may determine a coding unit at a predetermined position in the horizontal direction. In other words, the video decoding apparatus 100 can determine one of the coding units having different positions in the horizontal direction and set a restriction on the one coding unit. When the current coding unit has a non-square shape having a height longer than a width, the video decoding apparatus 100 may determine a coding unit at a predetermined position in the vertical direction. In other words, the video decoding apparatus 100 can determine one of the coding units having different positions in the vertical direction and set a restriction on the one coding unit.

According to an embodiment, the video decoding apparatus 100 may determine the coding unit at the predetermined position from the even number of coding units using information indicating the position of each of the even number of coding units. The video decoding apparatus 100 may determine an even number of coding units by dividing the current coding unit, and determine a coding unit at a predetermined position by using information on positions of the even number of coding units. The detailed procedures thereof may correspond to those described in fig. 13 for determining a coding unit at a predetermined position (e.g., a center position) from among an odd number of coding units, and thus are omitted.

According to an embodiment, when a current coding unit having a non-square shape is divided into a plurality of coding units, a coding unit at a predetermined position may be determined from the plurality of coding units using predetermined information on the coding unit at the predetermined position during the dividing process. For example, the video decoding apparatus 100 may determine a coding unit located at the center from among a plurality of coding units obtained by dividing the current coding unit using at least one of block shape information and division shape information stored in samples included in the center coding unit during the division process.

Referring to fig. 13, the video decoding apparatus 100 may divide a current coding unit 1300 into a plurality of coding units 1320a, 1320b, and 1320c based on at least one of block shape information and division shape information, and determine a coding unit 1320b located at the center from among the plurality of coding units 1320a, 1320b, and 1320 c. In addition, the video decoding apparatus 100 may determine the coding unit 1320b located at the center in consideration of a position where at least one of the block shape information and the partition shape information is obtained. In other words, at least one of block shape information and division shape information of the current coding unit 1300 may be obtained from a sample 1340 located at the center of the current coding unit 1300, and when the current coding unit 1300 is divided into a plurality of coding units 1320a, 1320b, and 1320c based on at least one of the block shape information and the division shape information, the coding unit 1320b including the sample 1340 may be determined as a coding unit located at the center. However, the information for determining the coding unit located at the center should not be construed as being limited to at least one of the block shape information and the division shape information, but various types of information may be used in determining the coding unit located at the center.

According to an embodiment, predetermined information for identifying a coding unit at a predetermined position may be obtained from a predetermined sample included in a coding unit to be determined. Referring to fig. 13, the video decoding apparatus 100 may determine a coding unit at a predetermined position (e.g., a coding unit at the center among a plurality of coding units) from among a plurality of coding units 1320a, 1320b, and 1320c determined by dividing the current coding unit 1300, using at least one of block shape information and division shape information obtained from a sample at a predetermined position in the current coding unit 1300 (e.g., a sample at the center of the current coding unit 1300). In other words, the video decoding apparatus 100 may determine the samples at the predetermined positions in consideration of the block shape of the current coding unit 1300, and the video decoding apparatus 100 determines a coding unit 1320b including samples from which predetermined information (e.g., at least one of block shape information and division shape information) is available and sets a predetermined limit thereto from among the plurality of coding units 1320a, 1320b, and 1320c determined by dividing the current coding unit 1300. Referring to fig. 13, according to an embodiment, the video decoding apparatus 100 may determine a sample 1340 located at the center of a current coding unit 1300 as a sample at which predetermined information is available, and set a predetermined limit to a coding unit 1320b including such a sample 1340 during a decoding process. However, the position of the sample where the predetermined information is obtainable should not be construed as being limited to the above-described position, and may be construed as a sample at an arbitrary position included in the encoding unit 1320b determined for setting the limitation.

According to an embodiment, the positions of samples where predetermined information can be obtained may be determined according to the shape of the current coding unit 1300. According to an embodiment, the block shape information may determine whether the shape of the current coding unit is square or non-square, and determine the positions of samples where predetermined information can be obtained according to the shape. For example, the video decoding apparatus 100 may determine a sample located on a boundary dividing at least one of the width and the height of the current coding unit into two halves by using at least one of the information on the width and the information on the height of the current coding unit as a sample for which predetermined information is obtainable. As another example, when the block shape information on the current coding unit indicates a non-square shape, the video decoding apparatus 100 may determine one of samples adjacent to a boundary dividing a long side of the current coding unit into two halves as a sample from which predetermined information can be obtained.

According to an embodiment, when a current coding unit is divided into a plurality of coding units, the video decoding apparatus 100 may determine a coding unit at a predetermined position from the plurality of coding units using at least one of block shape information and division shape information. According to an embodiment, the video decoding apparatus 100 may obtain at least one of block shape information and division shape information from samples included at predetermined positions in the coding units, and the video decoding apparatus 100 may divide the plurality of coding units generated as the current coding unit is divided by using at least one of the division shape information and the block shape information obtained from samples included at predetermined positions in each of the plurality of coding units. In other words, the coding units may be recursively divided by using at least one of block shape information and division shape information obtained from samples included at predetermined positions in each coding unit. Since the process of recursively dividing the coding units has been described above with reference to fig. 12, a detailed description thereof is omitted.

According to an embodiment, the video decoding apparatus 100 may determine at least one coding unit by dividing a current coding unit, and determine an order of encoding the at least one coding unit according to a predetermined block (e.g., the current coding unit).

Fig. 14 illustrates an order in which a plurality of coding units are processed when the video decoding apparatus 100 divides a current coding unit to determine the plurality of coding units according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may divide the first encoding unit 1400 in the vertical direction according to the block shape information and the division shape information to determine the second encoding units 1410a and 1410b, divide the first encoding unit 1400 in the horizontal direction to determine the second encoding units 1430a and 1430b, or determine the second encoding units 1450a, 1450b, 1450c, and 1450d by dividing the first encoding unit 1400 in the vertical and horizontal directions.

Referring to fig. 14, the video decoding apparatus 100 may determine an order such that the second encoding units 1410a and 1410b determined by dividing the first encoding unit 1400 in the vertical direction are processed in the horizontal direction 1410c. The video decoding apparatus 100 may determine the processing order of the second encoding units 1430a and 1430b determined by dividing the first encoding unit 1400 in the horizontal direction as the vertical direction 1430c. The video decoding apparatus 100 may determine that the second coding units 1450a, 1450b, 1450c, and 1450d determined by dividing the first coding unit 1400 in the vertical and horizontal directions are processed according to a predetermined order (e.g., a raster scan order or a z-scan order 1450e, etc.) in which the coding units located in one row are processed and then the coding units located in the next row are processed.
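
A small sketch of the row-by-row processing order mentioned above for a quad split; the (x, y, width, height) convention is the same illustrative one used in the earlier sketches and is not taken from the specification.

```python
# Illustrative sketch: raster-style processing order for four units of a quad
# split; units in one row are processed left to right, then the next row.
def processing_order(units):
    # units: list of (x, y, w, h) rectangles
    return sorted(units, key=lambda u: (u[1], u[0]))

units = [(32, 32, 32, 32), (0, 0, 32, 32), (32, 0, 32, 32), (0, 32, 32, 32)]
print(processing_order(units))
# -> top-left, top-right, bottom-left, bottom-right
```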

According to an embodiment, the video decoding apparatus 100 may recursively divide the coding units. Referring to fig. 14, the video decoding apparatus 100 may determine a plurality of second encoding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d by dividing the first encoding unit 1400, and recursively divide each of the determined plurality of second encoding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d. The method of dividing the plurality of second coding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d may correspond to the method of dividing the first coding unit 1400. Accordingly, each of the plurality of second coding units 1410a, 1410b, 1430a, 1430b, 1450a, 1450b, 1450c, and 1450d may be independently divided into a plurality of coding units. Referring to fig. 14, the video decoding apparatus 100 may determine the second encoding units 1410a and 1410b by dividing the first encoding unit 1400 in the vertical direction, and in addition, independently determine whether each of the second encoding units 1410a and 1410b is divided or not divided.

According to an embodiment, the video decoding apparatus 100 may divide the second encoding unit 1410a on the left side into the third encoding units 1420a and 1420b in the horizontal direction, and may not divide the second encoding unit 1410b on the right side.

According to an embodiment, the order in which the coding units are processed may be determined based on a partitioning process of the coding units. In other words, the order of processing the divided coding units may be determined based on the order of processing the coding units before being divided. The video decoding apparatus 100 may determine the order in which the third coding units 1420a and 1420b determined by dividing the left second coding unit 1410a are processed, independently from the right second coding unit 1410 b. Since the third encoding units 1420a and 1420b are determined by dividing the left second encoding unit 1410a in the horizontal direction, the third encoding units 1420a and 1420b may be processed in the vertical direction 1420 c. In addition, since the order in which the left second encoding unit 1410a and the right second encoding unit 1410b are processed corresponds to the horizontal direction 1410c, the right second encoding unit 1410b may be processed after the third encoding units 1420a and 1420b included in the left second encoding unit 1410a are processed in the vertical direction 1420 c. The above description is a related procedure of determining the processing order of the coding units according to the coding units before being divided, but it should not be construed as being limited to the above embodiment, and various methods of independently processing the coding units divided into various shapes in a predetermined order may be used.

Fig. 15 illustrates a process of determining that a current coding unit is to be divided into odd-numbered coding units when the video decoding apparatus 100 cannot process coding units in a predetermined order, according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may determine that the current coding unit is to be divided into an odd number of coding units based on the obtained block shape information and the division shape information. Referring to fig. 15, a first encoding unit 1500 having a square shape may be divided into second encoding units 1510a and 1510b having a non-square shape, and the second encoding units 1510a and 1510b may be independently divided into third encoding units 1520a, 1520b, 1520c, 1520d, and 1520e, respectively. According to an embodiment, the video decoding apparatus 100 may divide the left second encoding unit 1510a of the second encoding units 1510a and 1510b into horizontal directions to determine a plurality of third encoding units 1520a and 1520b, and divide the right second encoding unit 1510b into odd number of third encoding units 1520c, 1520d, and 1520 e.

According to an embodiment, the video decoding apparatus 100 may determine whether there are coding units divided into an odd number by determining whether the third coding units 1520a, 1520b, 1520c, 1520d, and 1520e can be processed in a predetermined order. Referring to fig. 15, the video decoding apparatus 100 may determine the third coding units 1520a, 1520b, 1520c, 1520d, and 1520e by recursively dividing the first coding unit 1500. The video decoding apparatus 100 may determine whether there are coding units divided into an odd number from among the shapes into which the first, second, and third coding units 1500, 1510a and 1510b, or 1520a, 1520b, 1520c, 1520d, and 1520e are divided, based on at least one of the block shape information and the division shape information. For example, the second encoding unit 1510b on the right side among the second encoding units 1510a and 1510b may be divided into odd number of third encoding units 1520c, 1520d, and 1520 e. The order of processing the plurality of coding units included in the first coding unit 1500 may be a predetermined order (e.g., the z-scan order 1530), and the video decoding apparatus 100 may determine whether the third coding units 1520c, 1520d, and 1520e determined that the second coding unit 1510b on the right side is divided into an odd number satisfy a condition that can be processed in the predetermined order.

According to an embodiment, the video decoding apparatus 100 may determine whether the third coding units 1520a, 1520b, 1520c, 1520d, and 1520e included in the first coding unit 1500 satisfy a condition that can be processed according to a predetermined order, wherein the condition relates to whether at least one of the width and the height of each of the second coding units 1510a and 1510b is divided in half according to the boundary of the third coding units 1520a, 1520b, 1520c, 1520d, and 1520 e. For example, the third coding units 1520a and 1520b determined when the height of the left second coding unit 1510a of the non-square shape is divided in half satisfy the condition, but it may be determined that the third coding units 1520c, 1520d, and 1520e do not satisfy the condition because the boundary of the third coding units 1520c, 1520d, and 1520e determined when the right second coding unit 1510b is divided into three coding units cannot divide the width or height of the right second coding unit 1510b in half, and the video decoding apparatus 100 may determine discontinuity of the scanning order (discontinuity) when the condition is not satisfied and determine that the right second coding unit 1510b is to be divided into odd coding units based on the determination result. According to an embodiment, when divided into an odd number of coding units, the video decoding apparatus 100 may set a predetermined restriction on a coding unit at a predetermined position in the divided coding units, and since such a restriction or predetermined position has been described above through various embodiments, a detailed description thereof is omitted.

Fig. 16 illustrates a process in which the video decoding apparatus 100 divides the first coding unit 1600 to determine at least one coding unit according to an embodiment. According to an embodiment, the video decoding apparatus 100 may divide the first encoding unit 1600 based on at least one of the block shape information and the division shape information obtained by the obtainer 110. The first coding unit 1600 having a square shape may be divided into four coding units having a square shape or a plurality of coding units having a non-square shape. For example, referring to fig. 16, when the block shape information indicates that the first encoding unit 1600 is a square and the division shape information indicates division into non-square encoding units, the video decoding apparatus 100 may divide the first encoding unit 1600 into a plurality of non-square encoding units. In detail, when the division shape information indicates that an odd number of coding units are determined by dividing the first coding unit 1600 in the horizontal direction or the vertical direction, the video decoding apparatus 100 may divide the first coding unit 1600 having a square shape into the odd number of coding units, i.e., the second coding units 1610a, 1610b, and 1610c determined by dividing in the vertical direction or the second coding units 1620a, 1620b, and 1620c determined by dividing in the horizontal direction.

According to an embodiment, the video decoding apparatus 100 may determine whether the second coding units 1610a, 1610b, 1610c, 1620a, 1620b and 1620c included in the first coding unit 1600 satisfy a condition that can be processed in a predetermined order by groups, wherein the condition relates to whether at least one of the width and height of the first coding unit 1600 is divided in half according to the boundary of the second coding units 1610a, 1610b, 1610c, 1620a, 1620b and 1620 c. Referring to fig. 16, since the boundaries of the second coding units 1610a, 1610b and 1610c determined when the first coding unit 1600 having a square shape is divided in the vertical direction do not divide the width of the first coding unit 1600 in half, it may be determined that the first coding unit 1600 does not satisfy the condition that can be processed in a predetermined order. In addition, since the boundaries of the second coding units 1620a, 1620b, and 1620c determined by dividing the first coding unit 1600 having a square shape in the horizontal direction do not divide the height of the first coding unit 1600 in half, it may be determined that the first coding unit 1600 does not satisfy the condition that can be processed in a predetermined order. The video decoding apparatus 100 may determine discontinuity of the scan order when such a condition is not satisfied, and determine that the first encoding unit 1600 is to be divided into an odd number of encoding units based on the determination result. According to an embodiment, the video decoding apparatus 100 may set a predetermined restriction on a coding unit at a predetermined position in an odd number of coding units obtained by dividing the coding unit, and since such restriction or predetermined position has been described above by various embodiments, detailed description thereof is omitted.
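
A hedged sketch of this processability check; the rectangle representation is the same illustrative convention as in the earlier sketches, and the rule coded here is only the specific criterion described above (sub-unit boundaries must pass through the half-width or half-height of the parent unit).

```python
# Illustrative sketch: a split is flagged as an odd (potentially restricted)
# split when the internal sub-unit boundaries do not halve the parent's
# width or height.
def boundaries_halve_parent(parent_w, parent_h, children):
    x_bounds = {x for (x, y, w, h) in children if x != 0}
    y_bounds = {y for (x, y, w, h) in children if y != 0}
    vertical_ok = not x_bounds or parent_w // 2 in x_bounds
    horizontal_ok = not y_bounds or parent_h // 2 in y_bounds
    return vertical_ok and horizontal_ok

# Three vertical units of widths 8, 16, 8 in a 32-wide parent: boundaries at
# x = 8 and x = 24 miss x = 16, so the condition is not satisfied.
print(boundaries_halve_parent(32, 32,
                              [(0, 0, 8, 32), (8, 0, 16, 32), (24, 0, 8, 32)]))
# A binary horizontal split at y = 16 satisfies it.
print(boundaries_halve_parent(32, 32, [(0, 0, 32, 16), (0, 16, 32, 16)]))
```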

According to an embodiment, the video decoding apparatus 100 may determine the coding units having various shapes by dividing the first coding unit.

Referring to fig. 16, the video decoding device 100 may divide a first encoding unit 1600 having a square shape and a first encoding unit 1630 or 1650 having a non-square shape into encoding units having various shapes.

Fig. 17 illustrates that shapes into which a second coding unit may be divided are limited when the second coding unit having a non-square shape, determined by the video decoding apparatus 100 dividing the first coding unit 1700, satisfies a predetermined condition, according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may determine to divide the first encoding unit 1700 having a square shape into the second encoding units 1710a, 1710b, 1720a, and 1720b having a non-square shape based on at least one of the block shape information and the division shape information obtained by the obtainer 105. The second encoding units 1710a, 1710b, 1720a, and 1720b may be independently divided. Accordingly, the video decoding apparatus 100 may determine whether or not to divide each of the second encoding units 1710a, 1710b, 1720a, and 1720b into a plurality of coding units, based on at least one of block shape information and division shape information on each of them. According to an embodiment, the video decoding apparatus 100 may determine the third encoding units 1712a and 1712b by dividing, in the horizontal direction, the second encoding unit 1710a having a non-square shape determined by dividing the first encoding unit 1700 in the vertical direction. However, when the left second encoding unit 1710a is divided in the horizontal direction, the video decoding apparatus 100 may set a restriction such that the right second encoding unit 1710b cannot be divided in the same horizontal direction as the direction in which the left second encoding unit 1710a is divided. If the right second coding unit 1710b were divided in the same direction, i.e., the horizontal direction, to determine the third coding units 1714a and 1714b, the left second coding unit 1710a and the right second coding unit 1710b would be independently divided in the horizontal direction to determine the third coding units 1712a, 1712b, 1714a, and 1714b, respectively. However, this is the same result as dividing the first encoding unit 1700 into the four second encoding units 1730a, 1730b, 1730c, and 1730d having a square shape based on at least one of the block shape information and the division shape information, and this may be inefficient in terms of image decoding.

According to an embodiment, the video decoding apparatus 100 may determine the third coding units 1722a, 1722b, 1724a, and 1724b by dividing, in the vertical direction, the second coding unit 1720a or 1720b having a non-square shape that was determined by dividing the first coding unit 1700 in the horizontal direction. However, when one of the second coding units (e.g., the second coding unit 1720a at the upper end) is divided in the vertical direction, the video decoding apparatus 100 may, for the reason described above, set a restriction such that the other second coding unit (e.g., the second coding unit 1720b at the lower end) cannot be divided in the vertical direction, i.e., the same direction in which the second coding unit 1720a at the upper end was divided.
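
The restriction described in the two preceding paragraphs can be pictured with a small, hypothetical helper (the direction labels and function name are assumptions, not the disclosed syntax): once one non-square second coding unit has been divided perpendicular to the parent split, the sibling may not be divided in that same direction, because the combined result would merely reproduce the four-way square split of the first coding unit.

```python
def allowed_split_directions(parent_split_direction, sibling_split_direction):
    """Split directions still allowed for the remaining second coding unit.

    parent_split_direction:  'VER' or 'HOR' - how the first coding unit was divided.
    sibling_split_direction: direction already used by the other second coding unit,
                             or None if it has not been divided further.
    """
    candidates = {'VER', 'HOR'}
    # The redundant case arises when the sibling was divided perpendicular to the
    # parent split (e.g., parent divided vertically, left half divided horizontally).
    if sibling_split_direction is not None and sibling_split_direction != parent_split_direction:
        candidates.discard(sibling_split_direction)
    return candidates


# First coding unit divided vertically; the left half already divided horizontally:
print(allowed_split_directions('VER', 'HOR'))   # {'VER'} - a horizontal split of the right half is restricted
```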

Fig. 18 illustrates a process in which the video decoding apparatus 100 divides a coding unit having a square shape when the division shape information cannot indicate division into four coding units having a square shape, according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may determine the second coding units 1810a, 1810b, 1820a, 1820b, and so on by dividing the first coding unit 1800 based on at least one of the block shape information and the division shape information. The division shape information may include information on various shapes into which a coding unit may be divided, but such information on various shapes may not include information indicating division into four coding units having a square shape. According to such division shape information, the video decoding apparatus 100 cannot divide the first coding unit 1800 having a square shape into the four second coding units 1830a, 1830b, 1830c, and 1830d having a square shape. The video decoding apparatus 100 may instead determine the second coding units 1810a, 1810b, 1820a, 1820b, and so on having a non-square shape based on the division shape information.

According to an embodiment, the video decoding apparatus 100 may independently divide each of the second coding units 1810a, 1810b, 1820a, 1820b, and so on having a non-square shape. Each of the second coding units 1810a, 1810b, 1820a, 1820b, and so on may be divided in a predetermined order via a recursive method, which may be a dividing method corresponding to the method of dividing the first coding unit 1800 based on at least one of the block shape information and the division shape information.

For example, the video decoding apparatus 100 may determine the third coding units 1812a and 1812b having a square shape by dividing the left second coding unit 1810a in the horizontal direction, or determine the third coding units 1814a and 1814b having a square shape by dividing the right second coding unit 1810b in the horizontal direction. In addition, the video decoding apparatus 100 may determine the third coding units 1816a, 1816b, 1816c, and 1816d having a square shape by dividing both the left second coding unit 1810a and the right second coding unit 1810b in the horizontal direction. In this case, the coding units may be determined in the same manner as when the first coding unit 1800 is divided into the four second coding units 1830a, 1830b, 1830c, and 1830d having a square shape.

As yet another example, the video decoding apparatus 100 may determine the third coding units 1822a and 1822b having a square shape by dividing the second coding unit 1820a at the upper end in the vertical direction, and determine the third coding units 1824a and 1824b having a square shape by dividing the second coding unit 1820b at the lower end in the vertical direction. In addition, the video decoding apparatus 100 may determine the third coding units 1822a, 1822b, 1824a, and 1824b having a square shape by dividing both the second coding unit 1820a at the upper end and the second coding unit 1820b at the lower end in the vertical direction. In this case, the coding units may be determined in the same manner as when the first coding unit 1800 is divided into the four second coding units 1830a, 1830b, 1830c, and 1830d having a square shape.

Fig. 19 illustrates that the processing order between a plurality of coding units according to an embodiment may be changed according to the process of dividing the coding units.

According to an embodiment, the video decoding apparatus 100 may divide the first coding unit 1900 based on the block shape information and the division shape information. When the block shape information indicates a square shape and the division shape information indicates that the first coding unit 1900 is divided in at least one of the horizontal direction and the vertical direction, the video decoding apparatus 100 may divide the first coding unit 1900 to determine second coding units (e.g., 1910a, 1910b, 1920a, 1920b, 1930a, 1930b, 1930c, 1930d, etc.). Referring to fig. 19, the second coding units 1910a, 1910b, 1920a, and 1920b having a non-square shape, which are determined by dividing the first coding unit 1900 in the horizontal direction or the vertical direction, may each be independently divided based on their respective block shape information and division shape information. For example, the video decoding apparatus 100 may determine the third coding units 1916a to 1916d by dividing, in the horizontal direction, each of the second coding units 1910a and 1910b generated by dividing the first coding unit 1900 in the vertical direction, or determine the third coding units 1926a, 1926b, 1926c, and 1926d by dividing, in the vertical direction, the second coding units 1920a and 1920b generated by dividing the first coding unit 1900 in the horizontal direction. The process of dividing the second coding units 1910a, 1910b, 1920a, and 1920b has been described above with reference to fig. 17, and thus a detailed description thereof is omitted.

According to an embodiment, the video decoding apparatus 100 may process the coding units according to a predetermined order. The features regarding processing the coding units according to the predetermined order have been described above with reference to fig. 14, and thus a detailed description thereof is omitted. Referring to fig. 19, the video decoding apparatus 100 may determine four third coding units 1916a, 1916b, 1916c, and 1916d, or four third coding units 1926a, 1926b, 1926c, and 1926d, each having a square shape, by dividing the first coding unit 1900 having a square shape. According to an embodiment, the video decoding apparatus 100 may determine the processing order of the third coding units 1916a, 1916b, 1916c, 1916d, 1926a, 1926b, 1926c, and 1926d based on the shape in which the first coding unit 1900 is divided.

According to an embodiment, the video decoding apparatus 100 may determine the third encoding units 1916a, 1916b, 1916c, and 1916d by dividing the second encoding units 1910a and 1910b generated by dividing the first encoding unit 1900 in the vertical direction in the horizontal direction, and the video decoding apparatus 100 processes the third encoding units 1916a and 1916b included in the second encoding unit 1910a on the left side in the vertical direction first, and then processes the third encoding units 1916c and 1916d included in the second encoding unit 1910b on the right side in the vertical direction in order 1917.

According to an embodiment, the video decoding apparatus 100 may determine the third coding units 1926a, 1926b, 1926c, and 1926d by dividing, in the vertical direction, the second coding units 1920a and 1920b generated by dividing the first coding unit 1900 in the horizontal direction, and the video decoding apparatus 100 may process the third coding units 1926a and 1926b included in the second coding unit 1920a at the upper end in the horizontal direction first, and then process the third coding units 1926c and 1926d included in the second coding unit 1920b at the lower end in the horizontal direction, according to the order 1927.

Referring to fig. 19, the second coding units 1910a, 1910b, 1920a, and 1920b are each divided to determine the third coding units 1916a, 1916b, 1916c, 1916d, 1926a, 1926b, 1926c, and 1926d having a square shape. The second coding units 1910a and 1910b determined by division in the vertical direction and the second coding units 1920a and 1920b determined by division in the horizontal direction have different shapes, but according to the third coding units 1916a, 1916b, 1916c, 1916d, 1926a, 1926b, 1926c, and 1926d determined thereafter, the first coding unit 1900 is ultimately divided into coding units of the same shape. Therefore, even when coding units of the same shape are determined by recursively dividing coding units through different processes based on at least one of the block shape information and the division shape information, the video decoding apparatus 100 may process the plurality of coding units determined to have the same shape in different orders.
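
The dependence of the processing order on the first split, illustrated by the orders 1917 and 1927, can be sketched as follows (a hypothetical helper under the assumption of the two-level splits of fig. 19; the (row, column) convention is illustrative): the third coding units inside one second coding unit are processed before those of the other second coding unit.

```python
def processing_order(first_split_direction):
    """(row, col) positions of the four square third coding units in processing order.

    'VER': first coding unit split into left/right halves, each then split
           horizontally -> process the left column top-to-bottom, then the right.
    'HOR': first coding unit split into upper/lower halves, each then split
           vertically -> process the top row left-to-right, then the bottom.
    """
    if first_split_direction == 'VER':
        return [(0, 0), (1, 0), (0, 1), (1, 1)]
    if first_split_direction == 'HOR':
        return [(0, 0), (0, 1), (1, 0), (1, 1)]
    raise ValueError('unknown split direction')


print(processing_order('VER'))   # column-wise traversal, as in the order 1917
print(processing_order('HOR'))   # row-wise traversal, as in the order 1927
```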

Fig. 20 illustrates a process of determining a depth of a coding unit as a shape and size of the coding unit are changed when the coding unit is recursively divided to determine a plurality of coding units according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may determine the depth of a coding unit according to a predetermined criterion. For example, the predetermined criterion may be the length of the long side of the coding unit. When the length of the long side of the current coding unit is 1/2^n times (n > 0) the length of the long side of the coding unit before division, the video decoding apparatus 100 may determine that the depth of the current coding unit is increased by n relative to the depth of the coding unit before division. Hereinafter, a coding unit having an increased depth is referred to as a coding unit of a lower depth.

Referring to fig. 20, according to an embodiment, the video decoding apparatus 100 may determine the second coding unit 2002, the third coding unit 2004, and the like of lower depths by dividing the first coding unit 2000 having a square shape, based on block shape information indicating a square shape (e.g., the block shape information may indicate "0: SQUARE"). When the size of the first coding unit 2000 having a square shape is 2N × 2N, the second coding unit 2002 determined by dividing the width and height of the first coding unit 2000 by 1/2^1 may have a size of N × N. In addition, the third coding unit 2004 determined by dividing the width and height of the second coding unit 2002 by 1/2 may have a size of N/2 × N/2. In this case, the width and height of the third coding unit 2004 correspond to 1/2^2 times those of the first coding unit 2000. When the depth of the first coding unit 2000 is D, the depth of the second coding unit 2002, whose width and height are 1/2^1 times those of the first coding unit 2000, may be D + 1, and the depth of the third coding unit 2004, whose width and height are 1/2^2 times those of the first coding unit 2000, may be D + 2.
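
A minimal sketch of the depth rule stated above, assuming a concrete value 2N = 64 and a base depth D = 0 (both values are illustrative): the depth increases by n whenever the long side shrinks to 1/2^n of the long side before division.

```python
import math


def coding_unit_depth(base_depth, base_long_side, current_long_side):
    """Depth of a coding unit whose long side is 1/2^n times the original long side."""
    ratio = base_long_side // current_long_side            # expected to be 2**n
    n = int(math.log2(ratio))
    assert current_long_side * (2 ** n) == base_long_side, "long side must shrink by a power of two"
    return base_depth + n


# The fig. 20 example with 2N = 64 and base depth D = 0:
print(coding_unit_depth(0, 64, 64))   # 2N x 2N   first coding unit 2000  -> D
print(coding_unit_depth(0, 64, 32))   # N x N     second coding unit 2002 -> D + 1
print(coding_unit_depth(0, 64, 16))   # N/2 x N/2 third coding unit 2004  -> D + 2
```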

According to an embodiment, the video decoding apparatus 100 may determine the second encoding unit 2012 or 2022, the third encoding unit 2014 or 2024, and the like by dividing the first encoding unit 2010 or 2020 having a non-square shape based on block shape information indicating the non-square shape (e.g., the block shape information may indicate "1: NS _ VER" representing a non-square shape having a height longer than a width or "2: NS _ HOR" representing a non-square shape having a width longer than a height).

The video decoding apparatus 100 may determine the second encoding unit (e.g., the second encoding units 2002, 2012, 2022, etc.) by dividing at least one of the width and the height of the first encoding unit 2010 of size N × 2N. In other words, the video decoding apparatus 100 may determine the second encoding unit 2002 of size N × N or the second encoding unit 2022 of size N × N/2 by dividing the first encoding unit 2010 in the horizontal direction, or determine the second encoding unit 2012 of size N/2 × N by dividing the first encoding unit 2010 in the horizontal and vertical directions.

According to an embodiment, the video decoding apparatus 100 may determine the second coding unit (e.g., the second coding units 2002, 2012, 2022, etc.) by dividing at least one of the width and the height of the first coding unit 2020 of size 2N × N. In other words, the video decoding apparatus 100 may determine the second coding unit 2002 of size N × N or the second coding unit 2012 of size N/2 × N by dividing the first coding unit 2020 in the vertical direction, or determine the second coding unit 2022 of size N × N/2 by dividing the first coding unit 2020 in the horizontal and vertical directions.

According to an embodiment, the video decoding apparatus 100 may determine the third coding unit (e.g., the third coding units 2004, 2014, 2024, etc.) by dividing at least one of the width and the height of the second coding unit 2002 of size N × N. In other words, the video decoding apparatus 100 may determine the third coding unit 2004 of size N/2 × N/2, the third coding unit 2014 of size N/2^2 × N/2, or the third coding unit 2024 of size N/2 × N/2^2 by dividing the second coding unit 2002 in the vertical and horizontal directions.

According to an embodiment, the video decoding apparatus 100 may determine the third coding unit (e.g., the third coding units 2004, 2014, 2024, etc.) by dividing at least one of the width and the height of the second coding unit 2012 of size N/2 × N. In other words, the video decoding apparatus 100 may divide the second coding unit 2012 in the horizontal direction to determine the third coding unit 2004 of size N/2 × N/2 or the third coding unit 2024 of size N/2 × N/2^2, or divide the second coding unit 2012 in the vertical and horizontal directions to determine the third coding unit 2014 of size N/2^2 × N/2.

According to an embodiment, the video decoding apparatus 100 may determine the third coding unit (e.g., the third coding units 2004, 2014, 2024, etc.) by dividing at least one of the width and the height of the second coding unit 2022 having a size of N × N/2. In other words, the video decoding apparatus 100 may determine the third coding unit 2004 of size N/2 × N/2 or the third coding unit 2014 of size N/2^2 × N/2 by dividing the second coding unit 2022 in the vertical direction, or determine the third coding unit 2024 of size N/2 × N/2^2 by dividing the second coding unit 2022 in the vertical and horizontal directions.

According to an embodiment, the video decoding apparatus 100 may divide the coding units (e.g., 2000, 2002, 2004) having a square shape in a horizontal or vertical direction. For example, the first encoding unit 2000 having a size of 2N × 2N may be divided in a vertical direction to determine the first encoding unit 2010 having a size of N × 2N, or may be divided in a horizontal direction to determine the first encoding unit 2020 having a size of 2N × N. According to an embodiment, when determining a depth based on the length of the longest side of the coding unit, the depth of the coding unit determined by dividing the first coding unit 2000, 2002, or 2004 having the size of 2N × 2N in the horizontal or vertical direction may be the same as the depth of the first coding unit 2000, 2002, or 2004.

According to an embodiment, the width and height of the third coding unit 2014 or 2024 may be 1/2^2 times the width and height of the first coding unit 2010 or 2020. When the depth of the first coding unit 2010 or 2020 is D, the depth of the second coding unit 2012 or 2022, whose width and height are 1/2 times those of the first coding unit 2010 or 2020, may be D + 1, and the depth of the third coding unit 2014 or 2024, whose width and height are 1/2^2 times those of the first coding unit 2010 or 2020, may be D + 2.

Fig. 21 illustrates a depth that can be determined according to the shape and size of a coding unit, and a part index (hereinafter, PID) for distinguishing the coding units, according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may determine the second coding units having various shapes by dividing the first coding unit 2100 having a square shape. Referring to fig. 21, the video decoding apparatus 100 may determine the second encoding units 2102a, 2102b, 2104a, 2104b, 2106a, 2106b, 2106c, and 2106d by dividing the first encoding unit 2100 in at least one of a vertical direction and a horizontal direction according to the division shape information. In other words, the video decoding apparatus 100 can determine the second encoding units 2102a, 2102b, 2104a, 2104b, 2106a, 2106b, 2106c, and 2106d based on the division shape information of the first encoding unit 2100.

According to an embodiment, the depths of the second coding units 2102a, 2102b, 2104a, 2104b, 2106a, 2106b, 2106c, and 2106d, which are determined according to the division shape information of the first coding unit 2100 having a square shape, may be determined based on the length of their long sides. For example, since the length of one side of the first coding unit 2100 having a square shape is the same as the length of the long side of the second coding units 2102a, 2102b, 2104a, and 2104b having a non-square shape, the depths of the first coding unit 2100 and the second coding units 2102a, 2102b, 2104a, and 2104b having a non-square shape may be the same, namely D. On the other hand, when the video decoding apparatus 100 divides the first coding unit 2100 into the four second coding units 2106a, 2106b, 2106c, and 2106d having a square shape based on the division shape information, the length of one side of the second coding units 2106a, 2106b, 2106c, and 2106d having a square shape is 1/2 of the length of one side of the first coding unit 2100, and thus the depth of the second coding units 2106a, 2106b, 2106c, and 2106d may be one depth deeper than the depth D of the first coding unit 2100, i.e., D + 1.

According to an embodiment, the video decoding apparatus 100 may divide the first encoding unit 2110 having a height longer than a width into a plurality of second encoding units 2112a and 2112b or 2114a, 2114b, and 2114c in the horizontal direction according to the division shape information. According to an embodiment, the video decoding apparatus 100 may divide the first encoding unit 2120, which has a longer width than height, into a plurality of second encoding units 2122a and 2122b or 2124a, 2124b and 2124c in a vertical direction according to the division shape information.

According to an embodiment, the depths of the second coding units 2112a, 2112b, 2114a, 2114b, 2114c, 2122a, 2122b, 2124a, 2124b, and 2124c, which are determined according to the division shape information of the first coding unit 2110 or 2120 having a non-square shape, may be determined based on the length of their long sides. For example, since the length of one side of the second coding units 2112a and 2112b having a square shape is 1/2 of the length of the long side of the first coding unit 2110 having a non-square shape whose height is longer than its width, the depth of the second coding units 2112a and 2112b having a square shape is deeper by one depth, that is, D + 1, than the depth D of the first coding unit 2110 having a non-square shape.

In addition, the video decoding apparatus 100 may divide the first coding unit 2110 having a non-square shape into an odd number of second coding units 2114a, 2114b, and 2114c based on the division shape information. The odd number of second coding units 2114a, 2114b, and 2114c may include the second coding units 2114a and 2114c having a non-square shape and the second coding unit 2114b having a square shape. In this case, since the length of the long sides of the second coding units 2114a and 2114c having a non-square shape and the length of one side of the second coding unit 2114b having a square shape are 1/2 of the length of one side of the first coding unit 2110, the depths of the second coding units 2114a, 2114b, and 2114c may be deeper than the depth D of the first coding unit 2110 by one depth, that is, D + 1. The video decoding apparatus 100 may determine the depth of a coding unit related to the first coding unit 2120 having a non-square shape whose width is longer than its height in a manner corresponding to the above-described manner of determining the depth of a coding unit related to the first coding unit 2110.

According to an embodiment, regarding the determination of an index (PID) for distinguishing coding units, when coding units obtained by dividing into an odd number do not all have the same size, the video decoding apparatus 100 may determine the index based on the size ratio between the coding units. Referring to fig. 21, the second coding unit 2114b located at the center among the second coding units 2114a, 2114b, and 2114c obtained by the odd division may have the same width as the other coding units 2114a and 2114c, but a height twice that of the other coding units 2114a and 2114c. In this case, the second coding unit 2114b located at the center may include two of the other coding units 2114a and 2114c. Therefore, when the index (PID) of the coding unit 2114b located at the center is 1 according to the scanning order, the index of the coding unit 2114c located next in the order may be 3, i.e., increased by 2. In other words, the values of the indexes may be discontinuous. According to an embodiment, the video decoding apparatus 100 may determine whether the coding units obtained by dividing into an odd number have the same size based on the discontinuity of the indexes for distinguishing the coding units.

According to an embodiment, the video decoding apparatus 100 may determine whether to divide into a specific division shape based on a value of an index for distinguishing a plurality of coding units determined by dividing a current coding unit. Referring to fig. 21, the video decoding apparatus 100 may determine even-numbered second encoding units 2112a and 2112b or odd-numbered second encoding units 2114a, 2114b, and 2114c by dividing a first encoding unit 2110 having a rectangular shape with a height longer than a width. The video decoding apparatus 100 may distinguish the plurality of coding units using an index (PID) indicating each coding unit. According to an embodiment, the PID may be obtained from a sample at a predetermined position (e.g., the upper left end sample) of each coding unit.

According to an embodiment, the video decoding apparatus 100 may determine a coding unit at a predetermined position among the coding units determined by division, using an index for distinguishing the coding units. According to an embodiment, when the division shape information on the first coding unit 2110 having a rectangular shape with a height longer than its width indicates that the first coding unit 2110 is divided into three coding units, the video decoding apparatus 100 may divide the first coding unit 2110 into the three coding units 2114a, 2114b, and 2114c. The video decoding apparatus 100 may assign an index to each of the three coding units 2114a, 2114b, and 2114c. The video decoding apparatus 100 may compare the indexes of the odd number of coding units to determine the central coding unit among them. The video decoding apparatus 100 may determine the second coding unit 2114b, whose index corresponds to the center value among the indexes, as the coding unit at the center position among the coding units determined by dividing the first coding unit 2110, based on the indexes of the coding units. According to an embodiment, in determining the indexes for distinguishing the coding units, when the coding units do not all have the same size, the video decoding apparatus 100 may determine the indexes based on the size ratio between the coding units. Referring to fig. 21, the second coding unit 2114b generated when the first coding unit 2110 is divided may have the same width as the other coding units 2114a and 2114c, but a height twice the height of the other coding units 2114a and 2114c. In this case, when the index (PID) of the second coding unit 2114b located at the center is 1, the index of the coding unit 2114c located next in the order may be 3, i.e., increased by 2. Accordingly, when the index does not increase uniformly in this manner, i.e., when the increment of the index changes, the video decoding apparatus 100 may determine that the current coding unit is divided into a plurality of coding units including a coding unit having a size different from the other coding units. According to an embodiment, when the division shape information indicates division into an odd number of coding units, the video decoding apparatus 100 may divide the current coding unit into an odd number of coding units in which the coding unit at a predetermined position (e.g., the center coding unit) has a size different from the other coding units. In this case, the video decoding apparatus 100 may determine the center coding unit having the different size by using the indexes (PIDs) of the coding units. However, the above-described index and the size or position of the coding unit at the predetermined position to be determined are merely examples for describing an embodiment and should not be construed as limiting, and various indexes and various positions and sizes of coding units may be used.
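
The index behaviour just described can be mirrored by a short, hypothetical sketch (the helper names and the basic-row convention are assumptions): indices advance by the number of basic units a coding unit spans, so the double-height center unit of the ternary split makes the PID jump from 1 to 3, and that jump locates the differently sized coding unit.

```python
def assign_pids(heights, unit_height):
    """Assign a PID to each coding unit; a unit spanning k basic rows advances the index by k."""
    pids, next_pid = [], 0
    for h in heights:
        pids.append(next_pid)
        next_pid += h // unit_height
    return pids


def index_of_larger_unit(pids):
    """Position whose PID step to the next unit exceeds 1, i.e., the oversized unit, if any."""
    for i in range(len(pids) - 1):
        if pids[i + 1] - pids[i] > 1:
            return i
    return None


heights = [16, 32, 16]                 # 2114a, 2114b (center, double height), 2114c
pids = assign_pids(heights, 16)
print(pids)                            # [0, 1, 3] - the value 2 is skipped
print(index_of_larger_unit(pids))      # 1 -> the center coding unit 2114b
```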

According to an embodiment, the video decoding apparatus 100 may use a predetermined data unit that starts recursive division of a coding unit.

Fig. 22 illustrates determining a plurality of coding units from a plurality of predetermined data units included in a picture according to an embodiment.

According to an embodiment, the predetermined data unit may be defined as a data unit at which the coding unit starts to be recursively divided using at least one of the block shape information and the division shape information. In other words, the predetermined data unit may correspond to a coding unit of the uppermost depth used when determining a plurality of coding units by dividing the current picture. Hereinafter, for convenience of description, the predetermined data unit is referred to as a reference data unit.

According to an embodiment, the reference data unit may indicate a predetermined size and shape. According to an embodiment, the reference coding unit may include M × N samples. Here, M and N may be equal to each other and may be integers expressed as powers of 2. In other words, the reference data unit may indicate a square shape or a non-square shape, and may subsequently be divided into an integer number of coding units.
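
The statement above can be captured by a trivial check; this is only a hedged illustration (the sample sizes are arbitrary), not a constraint defined elsewhere in the disclosure.

```python
def is_valid_reference_size(m, n):
    """True if both dimensions are powers of two; square and non-square shapes are both allowed."""
    def power_of_two(x):
        return x > 0 and (x & (x - 1)) == 0
    return power_of_two(m) and power_of_two(n)


print(is_valid_reference_size(64, 64))   # True  - square reference coding unit
print(is_valid_reference_size(64, 32))   # True  - non-square reference coding unit
print(is_valid_reference_size(48, 64))   # False - 48 is not a power of two
```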

According to an embodiment, the video decoding apparatus 100 may divide a current picture into a plurality of reference data units. According to an embodiment, the video decoding apparatus 100 may divide a plurality of reference data units of a current picture by using division shape information on each of the plurality of reference data units. Such a division process of the reference data unit may correspond to a division process using a quadtree structure.

According to an embodiment, the video decoding apparatus 100 may determine in advance a minimum size that a reference data unit included in a current picture may have. Accordingly, the video decoding apparatus 100 may determine reference data units having various sizes equal to or greater than a minimum size, and determine at least one coding unit using block shape information and partition shape information based on the determined reference data units.

Referring to fig. 22, the video decoding apparatus 100 may use a reference coding unit 2200 having a square shape, or may use a reference coding unit 2202 having a non-square shape. According to an embodiment, the shape and size of a reference coding unit may be determined according to various data units (e.g., sequence, picture, slice segment, maximum coding unit, etc.) that may include at least one reference coding unit.

According to an embodiment, the obtainer 105 of the video decoding apparatus 100 may obtain at least one of information regarding the shape of the reference coding unit and information regarding the size of the reference coding unit from the bitstream according to the various data units. The process of determining at least one coding unit included in the reference coding unit 2200 having a square shape has been described above through the process of dividing the current coding unit 1000 of fig. 10, and the process of determining at least one coding unit included in the reference coding unit 2202 having a non-square shape has been described above through the process of dividing the current coding unit 1100 or 1150 of fig. 11, and thus a detailed description thereof is omitted.

According to an embodiment, in order to determine the size and shape of the reference coding unit according to some data units predetermined based on a predetermined condition, the video decoding apparatus 100 may use an index for identifying the size and shape of the reference coding unit. In other words, the obtainer 105 may obtain, from the bitstream, only an index for identifying the size and shape of the reference coding unit for each slice, slice segment, and maximum coding unit, which are the data units among the various data units (e.g., a sequence, a picture, a slice, a slice segment, a maximum coding unit, etc.) that satisfy a predetermined condition (e.g., being a data unit having a size less than or equal to a slice). The video decoding apparatus 100 may determine the size and shape of the reference data unit for each data unit satisfying the predetermined condition by using the index. When the information regarding the shape of the reference coding unit and the information regarding the size of the reference coding unit are obtained from the bitstream and used for each data unit having a relatively small size, the usage efficiency of the bitstream may deteriorate; therefore, instead of directly obtaining the information regarding the shape of the reference coding unit and the information regarding the size of the reference coding unit, only the index may be obtained and used. In this case, at least one of the size and the shape of the reference coding unit corresponding to the index indicating the size and shape of the reference coding unit may be predetermined. In other words, the video decoding apparatus 100 may select at least one of the predetermined size and shape of the reference coding unit according to the index, thereby determining at least one of the size and shape of the reference coding unit included in the data unit serving as a criterion for obtaining the index.
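
As a rough illustration of this index mechanism, the sketch below resolves a parsed index into a predetermined reference coding unit size and shape instead of reading the size and shape directly from the bitstream; the table entries are purely illustrative assumptions and are not values defined by this disclosure.

```python
# Hypothetical mapping from a reference-coding-unit index to a predetermined size and shape.
REFERENCE_CU_TABLE = {
    0: {'shape': 'SQUARE',     'size': (64, 64)},
    1: {'shape': 'SQUARE',     'size': (32, 32)},
    2: {'shape': 'NON_SQUARE', 'size': (64, 32)},
    3: {'shape': 'NON_SQUARE', 'size': (32, 64)},
}


def reference_cu_from_index(index):
    """Look up the predetermined reference coding unit for a parsed index."""
    try:
        return REFERENCE_CU_TABLE[index]
    except KeyError:
        raise ValueError(f'undefined reference coding unit index: {index}')


print(reference_cu_from_index(2))   # {'shape': 'NON_SQUARE', 'size': (64, 32)}
```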

According to an embodiment, the video decoding apparatus 100 may use at least one reference coding unit included in one maximum coding unit. In other words, the maximum coding unit of the divided image may include at least one reference coding unit, and the coding unit may be determined by a recursive division process of each reference coding unit. According to an embodiment, at least one of the width and the height of the maximum coding unit may correspond to an integer multiple of at least one of the width and the height of the reference coding unit. According to an embodiment, the size of the reference coding unit may be equal to the size of the maximum coding unit divided n times according to the quadtree structure. In other words, the video decoding apparatus 100 may determine the reference coding unit by dividing the maximum coding unit n times according to the quadtree structure, and according to various embodiments, divide the reference coding unit based on at least one of the block shape information and the division shape information.
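
A one-line sketch of the size relation stated above, under the assumption of an example 128x128 maximum coding unit: each quadtree division halves both the width and the height, so the reference coding unit equals the maximum coding unit divided n times.

```python
def reference_cu_size(max_cu_width, max_cu_height, n):
    """Size of the reference coding unit after n quadtree divisions of the maximum coding unit."""
    return max_cu_width >> n, max_cu_height >> n


# A 128x128 maximum coding unit divided twice according to the quadtree structure:
print(reference_cu_size(128, 128, 2))   # (32, 32)
```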

Fig. 23 illustrates a processing block used as a reference for determining the determination order of the reference coding unit included in the picture 2300, according to an embodiment.

According to an embodiment, the video decoding apparatus 100 may determine at least one processing block that divides a picture. A processing block is a data unit including at least one reference coding unit into which an image is divided, and the at least one reference coding unit included in the processing block may be determined in a predetermined order. In other words, the determination order of the at least one reference coding unit determined in each processing block may correspond to one of various orders for determining reference coding units, and the determination order of the reference coding units determined in each processing block may differ from processing block to processing block. The determination order of the reference coding units determined for each processing block may be one of various orders such as a raster scan order, a Z scan order, an N scan order, an upper-right diagonal scan order, a horizontal scan order, a vertical scan order, and the like, but the determinable orders should not be construed as being limited to the above scan orders.

According to an embodiment, the video decoding apparatus 100 may determine the size of at least one processing block included in the image by obtaining information on the size of the processing block. The video decoding apparatus 100 may obtain information on the size of the processing block from the bitstream to determine the size of at least one processing block included in the image. The size of the processing block may be a predetermined size of the data unit indicated by the information on the size of the processing block.

According to an embodiment, the obtainer 105 of the video decoding apparatus 100 may obtain information on the size of the processing block from the bitstream in a predetermined data unit. For example, information on the size of a processing block may be obtained from a bitstream in data units of images, sequences, pictures, slices, slice segments, and the like. In other words, the obtainer 105 may obtain information on the size of the processing block from the bitstream according to such several data units, and the video decoding apparatus 100 may determine the size of at least one processing block dividing the picture by using the obtained information on the size of the processing block, and the size of such processing block may be an integer multiple of the size of the reference coding unit.

According to an embodiment, the video decoding apparatus 100 may determine the size of the processing blocks 2302 and 2312 included in the picture 2300. For example, the video decoding apparatus 100 may determine the size of the processing block based on information on the size of the processing block obtained from the bitstream. Referring to fig. 23, according to an embodiment, the video decoding apparatus 100 may determine the horizontal size of the processing blocks 2302 and 2312 to be four times the horizontal size of a reference coding unit and determine the vertical size thereof to be four times the vertical size of the reference coding unit. The video decoding apparatus 100 may determine an order in which at least one reference coding unit is determined within at least one processing block.
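
A small sketch of the sizing example above, assuming a 64x64 reference coding unit and illustrative picture dimensions (neither value is specified by the disclosure): the processing block is four reference coding units wide and four tall, and the picture is covered by an integer number of such blocks.

```python
def processing_block_size(ref_cu_width, ref_cu_height, ratio=4):
    """Processing block size as an integer multiple (here 4x) of the reference coding unit size."""
    return ref_cu_width * ratio, ref_cu_height * ratio


def processing_block_count(pic_width, pic_height, block_width, block_height):
    """Number of processing blocks needed to cover the picture (rounding up at the borders)."""
    cols = -(-pic_width // block_width)     # ceiling division
    rows = -(-pic_height // block_height)
    return cols * rows


block_w, block_h = processing_block_size(64, 64)               # (256, 256)
print(processing_block_count(512, 256, block_w, block_h))      # 2 processing blocks, as in fig. 23
```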

According to an embodiment, the video decoding apparatus 100 may determine each of the processing blocks 2302 and 2312 included in the picture 2300 based on the size of the processing blocks, and determine a determination order of at least one reference coding unit included in the processing blocks 2302 and 2312. According to an embodiment, the determining of the reference coding unit may comprise determining a size of the reference coding unit.

According to an embodiment, the video decoding apparatus 100 may obtain information on a determined order of at least one reference coding unit included in at least one processing block from a bitstream, and determine an order in which at least one coding unit is determined based on the obtained information on the determined order. The information on the determined order may be defined as an order or a direction in which the reference coding unit is determined in the processing block. In other words, the order in which the reference coding units are determined can be independently determined by the processing block.

According to an embodiment, the video decoding apparatus 100 may obtain information on the determined order of the reference coding units from the bitstream in predetermined data units. For example, the obtainer 105 may obtain information on the determined order of the reference coding units from the bitstream in units of data (such as images, sequences, pictures, slices, slice segments, processing blocks, etc.). Since the information on the determined order of the reference coding units indicates the order in which the reference coding units are determined within the processing block, the information on the determined order can be obtained in a specific data unit including an integer number of the processing blocks.

According to an embodiment, the video decoding apparatus 100 may determine at least one reference coding unit based on the determined order.

According to an embodiment, the obtainer 105 may obtain information regarding the determined order of the reference coding units from the bitstream as information regarding the processing blocks 2302 and 2312, and the video decoding apparatus 100 may determine an order of determining at least one reference coding unit included in the processing blocks 2302 and 2312 and determine at least one reference coding unit included in the picture 2300 according to the determined order of the coding units. Referring to fig. 23, the video decoding apparatus 100 may determine the determination orders 2304 and 2314 of at least one reference coding unit respectively associated with the processing blocks 2302 and 2312. For example, when information on the determination order of the reference coding units is obtained by the processing blocks, the determination orders of the reference coding units related to the processing blocks 2302 and 2312 may be different from each other. When the determination order 2304 related to the processing block 2302 is a raster scan order, the reference coding unit included in the processing block 2302 may be determined according to the raster scan order. In contrast, when the determination order 2314 related to the processing block 2312 is the reverse order of the raster scan order, the reference coding units included in the processing block 2312 may be determined in the reverse order of the raster scan order.
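
The per-processing-block determination order can be pictured with the following hypothetical helper (the order labels are assumptions): the reference coding units inside one processing block are enumerated either in raster scan order, as for the order 2304 of the processing block 2302, or in the reverse of the raster scan order, as for the order 2314 of the processing block 2312.

```python
def reference_cu_positions(block_cols, block_rows, order='RASTER'):
    """(row, col) positions of the reference coding units inside one processing block."""
    raster = [(r, c) for r in range(block_rows) for c in range(block_cols)]
    if order == 'RASTER':
        return raster
    if order == 'REVERSE_RASTER':
        return list(reversed(raster))
    raise ValueError('unsupported determination order')


# A processing block that is four reference coding units wide and four tall:
print(reference_cu_positions(4, 4, 'RASTER')[:3])           # [(0, 0), (0, 1), (0, 2)]
print(reference_cu_positions(4, 4, 'REVERSE_RASTER')[:3])   # [(3, 3), (3, 2), (3, 1)]
```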

According to an embodiment, the video decoding apparatus 100 may decode the determined at least one reference coding unit. The video decoding apparatus 100 may decode the image based on the reference coding units determined through the above-described embodiments. Examples of the method of decoding the reference coding unit may include various methods of decoding an image.

According to an embodiment, the video decoding apparatus 100 may obtain and use block shape information indicating a shape of a current coding unit or partition shape information indicating a method of partitioning the current coding unit from a bitstream. The block shape information or the division shape information may be included in a bitstream related to various data units. For example, the video decoding apparatus 100 may use block shape information or partition shape information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, and a slice segment header. In addition, the video decoding apparatus 100 may obtain and use syntax corresponding to block shape information or partition shape information from a bitstream according to a maximum coding unit, a reference coding unit, and a processing block.

The present disclosure has been described so far centering on various embodiments. Those of ordinary skill in the technical field to which the present disclosure pertains will appreciate that the present disclosure can be implemented in modified forms without departing from the essential characteristics of the present disclosure. Therefore, the disclosed embodiments should be considered in a descriptive sense only and not for purposes of limitation. The scope of the present disclosure is defined by the claims rather than by the foregoing description, and all differences within the scope of equivalents thereof should be construed as being included in the present disclosure.

Meanwhile, the embodiments of the present disclosure may be written as computer programs and may be implemented with general-purpose digital computers that execute the programs using a computer readable recording medium. Examples of the computer readable recording medium include magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., CD-ROMs, or DVDs), and the like.
