Parameter derivation for inter-prediction

Document No.: 1821948; Publication date: 2021-11-09

Note: This technology, "Parameter derivation for inter-prediction", was designed and created by Zhang Kai, Zhang Li, Liu Hongbin, Xu Jizheng, and Wang Yue on 2020-03-26. Abstract: A method for video processing is provided. The method comprises: for a conversion between a current video block of the video and a codec representation of the video, determining parameters of a codec tool using a linear model based on selected neighboring samples of the current video block and corresponding neighboring samples of a reference block; and performing the conversion based on the determination.

1.A method for video processing, comprising:

for a conversion between a current video block of the video and a codec representation of the video, determining parameters of a codec tool using a linear model based on selected neighboring samples of the current video block and corresponding neighboring samples of a reference block; and

performing a conversion based on the determination.

2. The method of claim 1, wherein the codec tool is a Local Illumination Compensation (LIC) tool that uses a linear model of illumination changes in the current video block during the conversion.

3. The method of claim 2, wherein neighboring samples of the current video block and neighboring samples of the reference block are selected based on a location rule.

4. The method of claim 2, wherein the parameters of the codec tool are determined based on maximum and minimum values of neighboring samples of the current video block and neighboring samples of the reference block.

5. The method of claim 2, wherein the parameters of the codec tool are determined using a parameter table, wherein entries of the parameter table are retrieved from two neighboring samples of the current video block and two neighboring samples of the reference block.

6. The method of claim 2, wherein neighboring samples of the current video block and neighboring samples of the reference block are downsampled to derive the parameters of the codec tool.

7. The method of claim 2, wherein neighboring samples used to derive parameters of the LIC tool exclude samples at a particular location in an upper and/or left column of the current video block.

8. The method of claim 2, wherein the upper left sample of the current video block has coordinates (x0, y0), and the sample having coordinates (x0, y0-1) is not used to derive parameters of the LIC tool.

9. The method of claim 2, wherein the upper left sample of the current video block has coordinates (x0, y0), and the sample having coordinates (x0-1, y0) is not used to derive parameters of the LIC tool.

10. The method of claim 7, wherein the particular location depends on availability of the upper row and/or left column.

11. The method of claim 7, wherein the particular location depends on a block size of the current video block.

12. The method of claim 1, wherein the determining is dependent on availability of an upper row and/or a left column.

13. The method of claim 2, wherein N neighboring samples of the current video block and N neighboring samples of the reference block are used to derive parameters of the LIC tool.

14. The method of claim 13, wherein N is 4.

15. The method of claim 13, wherein the N neighboring samples of the current video block comprise N/2 samples from the upper row of the current video block and N/2 samples from the left column of the current video block.

16. The method of claim 13, wherein N is equal to min(L, T), where T is the total number of available neighboring samples of the current video block and L is an integer.

17. The method of claim 13, wherein the N neighboring samples are selected based on the same rule applicable to selecting samples to derive parameters of a cross-component linear model (CCLM).

18. The method of claim 13, wherein the N neighboring samples are selected based on the same rule applicable to selecting samples to derive parameters of a first mode of the CCLM that uses only the upper neighboring samples.

19. The method of claim 13, wherein the N neighboring samples are selected based on the same rule applicable to selecting samples to derive parameters for the second mode of the CCLM using only left neighboring samples.

20. The method of claim 13, wherein the N neighboring samples of the current video block are selected based on availability of the upper row or left column of the current video block.

21. A method for video processing, comprising:

for a conversion between a current video block of a video and a codec representation of the video, determining parameters of a Local Illumination Compensation (LIC) tool based on N neighboring samples of the current video block and N corresponding neighboring samples of a reference block, wherein the N neighboring samples of the current video block are selected based on locations of the N neighboring samples; and

performing a conversion based on the determination,

wherein the LIC tool uses a linear model of the illumination changes in the current video block during the conversion.

22. The method of claim 21, wherein the N neighboring samples of the current video block are selected based on a width and a height of the current video block.

23. The method of claim 21, wherein the N neighboring samples of the current video block are selected based on availability of neighboring blocks of the current video block.

24. The method of claim 21, wherein N neighboring samples of the current video block are selected using a first position offset value (F) and a step size value (S) that depend on the size of the current video block and the availability of neighboring blocks.

25. The method of any of claims 1-24, wherein the current video block is affine coded.

26. A method for video processing, comprising:

for a conversion between a current video block of video as a chroma block and a codec representation of the video, determining parameters of a cross-component linear model (CCLM) based on chroma samples and corresponding luma samples; and

performing a conversion based on the determination,

wherein some of the chroma samples are obtained by a padding operation, and the chroma samples and corresponding luma samples are grouped into two arrays G0 and G1, each array comprising two chroma samples and corresponding luma samples.

27. The method of claim 26, wherein, in case the sum of cntT and cntL is equal to 2, the following operations are performed in order: i) pSelComp[3] is set equal to pSelComp[0], ii) pSelComp[2] is set equal to pSelComp[1], iii) pSelComp[0] is set equal to pSelComp[1], and iv) pSelComp[1] is set equal to pSelComp[3], wherein cntT and cntL indicate the numbers of samples selected from the upper and left neighboring blocks, respectively, and wherein pSelComp[0] through pSelComp[3] indicate pixel values of color components of the selected corresponding samples.

28. The method of claim 26, wherein determining the parameters comprises initializing values of G0[0], G0[1], G1[0], and G1[1].

29. The method of claim 28, wherein G0[0] = 0, G0[1] = 2, G1[0] = 1, and G1[1] = 3.

30. The method of claim 28, wherein determining the parameters further comprises, after initializing the values, exchanging the chroma samples of G0[0] and their corresponding luma samples with the chroma samples of G0[1] and their corresponding luma samples upon a comparison of the two luma sample values of G0[0] and G0[1].

31. The method of claim 30, wherein, in the event that the luma sample value of G0[0] is greater than the luma sample value of G0[1], the chroma samples of G0[0] and their corresponding luma samples are swapped with the chroma samples of G0[1] and their corresponding luma samples.

32. The method of claim 28, wherein determining the parameters further comprises, after initializing the values, exchanging the chroma samples of G1[0] and their corresponding luma samples with the chroma samples of G1[1] and their corresponding luma samples upon a comparison of the two luma sample values of G1[0] and G1[1].

33. The method of claim 32, wherein, in the event that the luma sample value of G1[0] is greater than the luma sample value of G1[1], the chroma samples of G1[0] and their corresponding luma samples are swapped with the chroma samples of G1[1] and their corresponding luma samples.

34. The method of claim 28, wherein determining the parameters further comprises, after initializing the values, exchanging the chroma samples of G0[0] or G0[1] and their corresponding luma samples with the chroma samples of G1[0] or G1[1] and their corresponding luma samples upon a comparison of the two luma sample values of G0[0] and G1[1].

35. The method of claim 34, wherein the chroma samples of G0[0] or G0[1] and their corresponding luma samples are swapped with the chroma samples of G1[0] or G1[1] and their corresponding luma samples in case the luma sample value of G0[0] is greater than the luma sample value of G1[1].

36. The method of claim 28, wherein determining the parameters further comprises, after initializing the values, exchanging the chroma samples of G0[1] and their corresponding luma samples with the chroma samples of G1[0] and their corresponding luma samples upon a comparison of the two luma sample values of G0[1] and G1[0].

37. The method of claim 36, wherein the chroma samples of G0[1] and their corresponding luma samples are swapped with the chroma samples of G1[0] and their corresponding luma samples in the event that the luma sample value of G0[1] is greater than the luma sample value of G1[0].

38. The method of claim 28, wherein determining the parameters further comprises, after initializing the values, performing the following swapping operations in order upon comparisons of the luma sample values of G0[0], G0[1], G1[0], and G1[1]: i) exchanging the chroma samples of G0[0] and their corresponding luma samples with the chroma samples of G0[1] and their corresponding luma samples, ii) exchanging the chroma samples of G1[0] and their corresponding luma samples with the chroma samples of G1[1] and their corresponding luma samples, iii) exchanging the chroma samples of G0[0] or G0[1] and their corresponding luma samples with the chroma samples of G1[0] or G1[1] and their corresponding luma samples, and iv) exchanging the chroma samples of G0[1] and their corresponding luma samples with the chroma samples of G1[0] and their corresponding luma samples.

39. The method of any of claims 1-38, wherein performing the conversion comprises generating the codec representation from the current video block.

40. The method of any of claims 1-38, wherein performing the conversion comprises generating the current video block from the codec representation.

41. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of claims 1-40.

42. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing the method of any of claims 1-40.

Technical Field

This patent document relates to video processing techniques, devices, and systems.

Background

Despite advances in video compression, digital video still accounts for the largest proportion of bandwidth usage on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, the bandwidth demand for digital video usage is expected to continue to grow.

Disclosure of Invention

Devices, systems, and methods related to digital video processing and, for example, to simplified linear model derivation for the cross-component linear model (CCLM) prediction mode in video codecs are described. The described methods may be applied to existing video codec standards (e.g., High Efficiency Video Coding (HEVC)) and future video codec standards (e.g., Versatile Video Coding (VVC)) or codecs.

In one representative aspect, the disclosed technology can be used to provide a method for video processing. The method comprises the following steps: for a conversion between a current video block of video as a chroma block and a codec representation of the video, determining whether to derive maxima and/or minima of a luma component and a chroma component for deriving parameters of a cross-component linear model (CCLM) based on availability of left and upper neighboring blocks of the current video block; and performing a conversion based on the determination.

In another representative aspect, the disclosed techniques can be used to provide a method for video processing. The method comprises the following steps: determining, for a conversion between a current video block of video as a chroma block and a codec representation of the video, positions at which luma samples are downsampled, wherein the downsampled luma samples are used to determine parameters of a cross-component linear model (CCLM) based on chroma samples and the downsampled luma samples, and wherein the downsampled luma samples are at positions corresponding to the positions of the chroma samples used to derive the parameters of the CCLM; and performing a conversion based on the determination.

In another representative aspect, the disclosed techniques can be used to provide a method for video processing. The method comprises the following steps: for a conversion between a current video block of video as a chroma block and a codec representation of the video, determining a method to derive parameters of a cross-component linear model (CCLM) using chroma samples and luma samples based on codec conditions associated with the current video block; and performing a conversion based on the determination.

In another representative aspect, the disclosed techniques can be used to provide a method for video processing. The method comprises the following steps: for a conversion between a current video block of the video and a codec representation of the video, determining parameters of a codec tool using a linear model based on selected neighboring samples of the current video block and corresponding neighboring samples of a reference block; and performing a conversion based on the determination.

In another representative aspect, the disclosed techniques can be used to provide a method for video processing. The method comprises the following steps: for a conversion between a current video block of a video and a codec representation of the video, determining parameters of a Local Illumination Compensation (LIC) tool based on N neighboring samples of the current video block and N corresponding neighboring samples of a reference block, wherein the N neighboring samples of the current video block are selected based on locations of the N neighboring samples; and performing a conversion based on the determination, wherein the LIC tool uses a linear model of the illumination changes in the current video block during the conversion.

In another representative aspect, the disclosed techniques can be used to provide a method for video processing. The method comprises the following steps: for a conversion between a current video block of video as a chroma block and a codec representation of the video, determining parameters of a cross-component linear model (CCLM) based on chroma samples and corresponding luma samples; and performing a conversion based on the determination, wherein some of the chroma samples are obtained by a padding operation, and the chroma samples and corresponding luma samples are grouped into two arrays G0 and G1, each array including two chroma samples and corresponding luma samples.

In yet another representative aspect, the above-described methods are embodied in the form of processor-executable code and stored in a computer-readable program medium.

In yet another representative aspect, an apparatus configured or operable to perform the above-described method is disclosed. The apparatus may include a processor programmed to implement the method.

In yet another representative aspect, a video decoder device may implement the methods described herein.

The above and other aspects and features of the disclosed technology are described in more detail in the accompanying drawings, the description and the claims.

Drawings

Fig. 1 shows an example of the locations of samples used to derive weights for a linear model for cross-component prediction.

Fig. 2 shows an example of classifying neighboring samples into two groups.

Fig. 3A shows an example of chroma samples and their corresponding luma samples.

Fig. 3B shows an example of downsampling filtering for the cross-component linear model (CCLM) in the Joint Exploration Model (JEM).

Fig. 4A and 4B show examples of top-only and left-only neighboring samples, respectively, for linear model-based prediction.

Fig. 5 shows an example of a straight line between a minimum luminance value and a maximum luminance value as a function of corresponding chrominance samples.

Fig. 6 shows an example of a current chroma block and its neighboring samples.

Fig. 7 shows an example of different parts of a chroma block predicted by a linear model using only left neighboring samples (LM-L) and a linear model using only upper neighboring samples (LM-A).

Fig. 8 shows an example of an upper left neighboring block.

Fig. 9 shows an example of samples to be used for deriving a linear model.

Fig. 10 shows an example of the left column, below-left column, above row, and above-right row relative to the current block.

Fig. 11 shows an example of a current block and its reference samples.

Fig. 12 shows an example of two neighboring samples when both left and upper neighboring reference samples are available.

Fig. 13 shows an example of two neighboring samples when only the upper neighboring reference sample is available.

Fig. 14 shows an example of two neighboring samples when only the left neighboring reference sample is available.

Fig. 15 shows an example of four neighboring samples when both left and upper neighboring reference samples are available.

Fig. 16 shows an example of a lookup table used in LM derivation.

Fig. 17 shows an example of the LM parameter derivation process using 64 entries.

Fig. 18A-18F illustrate flowcharts of example methods for video processing based on some embodiments of the disclosed technology.

Fig. 19A and 19B are block diagrams of examples of hardware platforms for implementing the visual media decoding or visual media encoding techniques described in this document.

Fig. 20A and 20B show an example of the LM parameter derivation process using four entries. Fig. 20A shows an example when both the upper and left neighboring samples are available, and fig. 20B shows an example when only the upper neighboring sample is available and the upper right neighboring sample is not available.

Fig. 21 shows an example of neighboring samples for deriving LIC parameters.

Detailed Description

Due to the increasing demand for higher resolution video, video coding methods and techniques are ubiquitous in modern technology. Video codecs typically include electronic circuits or software that compress or decompress digital video, and are continually being improved to provide higher codec efficiency. Video codecs convert uncompressed video into a compressed format and vice versa. There is a complex relationship between video quality, the amount of data used to represent the video (as determined by the bit rate), the complexity of the encoding and decoding algorithms, susceptibility to data loss and errors, ease of editing, random access, and end-to-end delay (latency). The compressed format typically conforms to a standard video compression specification, such as the High Efficiency Video Codec (HEVC) standard (also referred to as h.265 or MPEG-H part 2), the universal video codec (VVC) standard to be finalized, or other current and/or future video codec standards.

Embodiments of the disclosed techniques may be applied to existing video codec standards (e.g., HEVC, h.265) and future standards to improve runtime performance. Section headings are used in this document to improve readability of the description, and the discussion or embodiments (and/or implementations) are not limited in any way to only the individual sections.

1 Cross-component prediction embodiment

Cross-component prediction is a form of luma-to-chroma prediction that strikes a well-balanced tradeoff between complexity and compression efficiency improvement.

1.1 examples of Cross-component Linear models (CCLM)

In some embodiments and to reduce cross-component redundancy, a cross-component linear model (CCLM) prediction mode (also referred to as LM) is used in JEM for which chroma samples are predicted based on reconstructed luma samples of the same CU by using a linear model as follows:

pred_C(i, j) = α · rec_L′(i, j) + β (1)

Here, pred_C(i, j) represents the predicted chroma samples in the CU, and rec_L′(i, j) denotes the downsampled reconstructed luma samples of the same CU for color formats 4:2:0 and 4:2:2, or the (non-downsampled) reconstructed luma samples of the same CU for color format 4:4:4. The CCLM parameters α and β are derived by minimizing the regression error between the neighboring reconstructed luma and chroma samples around the current block, as follows:

α = ( N·Σ( L(n)·C(n) ) − Σ L(n) · Σ C(n) ) / ( N·Σ( L(n)·L(n) ) − Σ L(n) · Σ L(n) ) (2)

β = ( Σ C(n) − α · Σ L(n) ) / N (3)

Here, L(n) denotes the top and left neighboring reconstructed luma samples, downsampled (for color formats 4:2:0 and 4:2:2) or original (for color format 4:4:4); C(n) denotes the top and left neighboring reconstructed chroma samples; and the value of N is equal to twice the minimum of the width and height of the current chroma codec block.

In some embodiments and for codec blocks with square shapes, the above two equations are applied directly. In other embodiments and for non-square codec blocks, adjacent samples of the longer boundary are first subsampled to have the same number of samples as the shorter boundary. Fig. 1 shows the positions of the left and upper reconstructed samples, as well as the samples of the current block involved in CCLM mode.

In some embodiments, this regression error minimization calculation is performed as part of the decoding process, not just as an encoder search operation, and thus does not use syntax to convey the α and β values.
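For illustration only (not part of any claimed embodiment), the regression of equations (2) and (3) can be sketched in floating point, with `luma` holding the downsampled neighboring luma samples L(n) and `chroma` the neighboring chroma samples C(n); real codecs use integer arithmetic and the simplifications described later:

```python
def derive_cclm_params(luma, chroma):
    """Least-squares fit of chroma = alpha * luma + beta over N neighboring
    sample pairs, following equations (2) and (3)."""
    n = len(luma)
    sum_l, sum_c = sum(luma), sum(chroma)
    sum_lc = sum(l * c for l, c in zip(luma, chroma))
    sum_ll = sum(l * l for l in luma)
    denom = n * sum_ll - sum_l * sum_l
    if denom == 0:
        return 0.0, sum_c / n  # flat luma neighborhood: fall back to the mean
    alpha = (n * sum_lc - sum_l * sum_c) / denom
    beta = (sum_c - alpha * sum_l) / n
    return alpha, beta
```

When the neighbors satisfy an exact linear relation C = 2·L + 5, the sketch recovers α = 2 and β = 5.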

In some embodiments, the CCLM prediction mode also includes prediction between two chroma components, e.g., the Cr (red color difference) component is predicted from the Cb (blue color difference) component. Instead of using the reconstructed sample signals, the CCLM Cb-to-Cr prediction is applied in the residual domain. This is achieved by adding the weighted reconstructed Cb residual to the original Cr intra prediction to form the final Cr prediction:

pred_Cr*(i, j) = pred_Cr(i, j) + α · resi_Cb′(i, j) (4)

Here, resi_Cb′(i, j) represents the reconstructed Cb residual sample at position (i, j).

In some embodiments, the scaling factor α may be derived in a similar manner as in CCLM luma-to-chroma prediction. The only difference is that a regression cost relative to a default α value is added to the error function, so that the derived scaling factor is biased towards the default value of −0.5, as follows:

α = ( N·Σ( Cb(n)·Cr(n) ) − Σ Cb(n) · Σ Cr(n) + λ·(−0.5) ) / ( N·Σ( Cb(n)·Cb(n) ) − Σ Cb(n) · Σ Cb(n) + λ ) (5)

Here, Cb(n) denotes the neighboring reconstructed Cb samples, Cr(n) denotes the neighboring reconstructed Cr samples, and λ is equal to Σ( Cb(n)·Cb(n) ) >> 9.
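A minimal floating-point sketch of this λ-biased regression (the exact integer arithmetic in JEM may differ):

```python
def derive_cb_to_cr_alpha(cb, cr):
    """Scaling factor for CCLM Cb-to-Cr prediction: ordinary least squares
    plus a regression cost lambda that biases alpha towards -0.5."""
    n = len(cb)
    sum_cb, sum_cr = sum(cb), sum(cr)
    sum_cbcr = sum(b * r for b, r in zip(cb, cr))
    sum_cbcb = sum(b * b for b in cb)
    lam = sum_cbcb >> 9          # lambda = sum(Cb(n)*Cb(n)) >> 9
    num = n * sum_cbcr - sum_cb * sum_cr + lam * (-0.5)
    den = n * sum_cbcb - sum_cb * sum_cb + lam
    return num / den
```

Note that when the neighbors satisfy Cr = −0.5·Cb exactly, the bias term leaves the result unchanged and α comes out as exactly −0.5.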

In some embodiments, the CCLM luma to chroma prediction modes are added as an additional chroma intra prediction mode. At the encoder side, an additional RD cost check of one chroma component is added for selecting the chroma intra prediction mode. When intra prediction modes other than the CCLM luma-to-chroma prediction mode are used for the chroma components of the CU, the CCLM Cb-to-Cr prediction is used for the Cr component prediction.

1.2 example of a Multi-model CCLM

In JEM, there are two CCLM modes: single model CCLM mode and multi-model CCLM mode (MMLM). As the name implies, the single model CCLM mode uses one linear model for predicting chroma samples from luma samples of the entire CU, whereas in MMLM there may be two models.

In MMLM, neighboring luma samples and neighboring chroma samples of a current block are classified into two groups, each of which is used as a training set to derive a linear model (i.e., specific alpha and beta are derived for a specific group). In addition, samples of the current luminance block are also classified based on the same rules used to classify neighboring luminance samples.

Fig. 2 shows an example of classifying neighboring samples into two groups. The threshold is calculated as the average value of the neighboring reconstructed luma samples. Neighboring samples with Rec′_L[x, y] <= Threshold are classified into group 1, while neighboring samples with Rec′_L[x, y] > Threshold are classified into group 2.
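For illustration, the two-group classification can be sketched as follows (integer neighboring samples assumed; an integer mean stands in for the averaging):

```python
def classify_mmlm_groups(neigh_luma, neigh_chroma):
    """Split neighboring (luma, chroma) pairs into the two MMLM training
    groups using the mean neighboring luma value as the threshold."""
    threshold = sum(neigh_luma) // len(neigh_luma)
    group1 = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l <= threshold]
    group2 = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l > threshold]
    return threshold, group1, group2
```

Each returned group then serves as the training set for its own (α, β) pair, as described above.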

1.3 example of downsampling Filter in CCLM

In some embodiments and to perform cross-component prediction, for a 4:2:0 chroma format (where 4 luma samples correspond to 1 chroma sample), the reconstructed luma block needs to be downsampled to match the size of the chroma signal. The default downsampling filter used in CCLM mode is as follows:

Rec′_L[x, y] = ( 2×Rec_L[2x, 2y] + 2×Rec_L[2x, 2y+1] + Rec_L[2x−1, 2y] + Rec_L[2x+1, 2y] + Rec_L[2x−1, 2y+1] + Rec_L[2x+1, 2y+1] + 4 ) >> 3 (7)

Here, with respect to the position of the chroma samples relative to the position of the luma samples, the downsampling assumes the "type 0" phase relationship shown in Fig. 3A, i.e., collocated sampling horizontally and interstitial sampling vertically.

The exemplary 6-tap downsampling filter defined in equation (7) is used as the default filter for both the single-model CCLM mode and the multi-model CCLM mode.
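For illustration, equation (7) maps directly to code; `rec` is indexed as rec[x][y] to mirror the equation, and boundary clipping at x = 0 is omitted in this sketch:

```python
def downsample_luma_6tap(rec, x, y):
    """Default 6-tap CCLM downsampling filter of equation (7), producing the
    downsampled luma sample Rec'_L[x, y] from the reconstructed luma grid."""
    return (2 * rec[2 * x][2 * y] + 2 * rec[2 * x][2 * y + 1]
            + rec[2 * x - 1][2 * y] + rec[2 * x + 1][2 * y]
            + rec[2 * x - 1][2 * y + 1] + rec[2 * x + 1][2 * y + 1]
            + 4) >> 3
```

On a flat luma region the filter reproduces the input value, which is a quick sanity check on the tap weights (2+2+1+1+1+1 = 8) and the rounding offset.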

In some embodiments and for MMLM mode, the encoder may alternatively select one of four additional luma downsampling filters to apply to the prediction in the CU, and send a filter index to indicate which of these filters to use. As shown in fig. 3B, four alternative luminance downsampling filters in MMLM mode are as follows:

Rec′_L[x, y] = ( Rec_L[2x, 2y] + Rec_L[2x+1, 2y] + 1 ) >> 1 (8)

Rec′_L[x, y] = ( Rec_L[2x+1, 2y] + Rec_L[2x+1, 2y+1] + 1 ) >> 1 (9)

Rec′_L[x, y] = ( Rec_L[2x, 2y+1] + Rec_L[2x+1, 2y+1] + 1 ) >> 1 (10)

Rec′_L[x, y] = ( Rec_L[2x, 2y] + Rec_L[2x, 2y+1] + Rec_L[2x+1, 2y] + Rec_L[2x+1, 2y+1] + 2 ) >> 2 (11)
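The four alternative filters of equations (8)-(11) can be sketched together; `filter_idx` is an assumed name standing in for the signaled filter index:

```python
def mmlm_downsample(rec, x, y, filter_idx):
    """Alternative MMLM luma downsampling filters, equations (8)-(11)."""
    a = rec[2 * x][2 * y]          # Rec_L[2x, 2y]
    b = rec[2 * x + 1][2 * y]      # Rec_L[2x+1, 2y]
    c = rec[2 * x][2 * y + 1]      # Rec_L[2x, 2y+1]
    d = rec[2 * x + 1][2 * y + 1]  # Rec_L[2x+1, 2y+1]
    if filter_idx == 0:
        return (a + b + 1) >> 1            # equation (8)
    if filter_idx == 1:
        return (b + d + 1) >> 1            # equation (9)
    if filter_idx == 2:
        return (c + d + 1) >> 1            # equation (10)
    return (a + b + c + d + 2) >> 2        # equation (11)
```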

1.4 multidirectional LM (MDLM)

This prior embodiment proposes multi-directional LM (MDLM). In MDLM, two additional CCLM modes are proposed: LM-A, where the linear model parameters are derived based only on the above neighboring samples, as shown in Fig. 4A; and LM-L, where the linear model parameters are derived based only on the left neighboring samples, as shown in Fig. 4B.

1.5 Cross-component Linear model simplification

This prior embodiment proposes replacing the LMS algorithm for deriving the linear model parameters α and β with a straight-line equation, the so-called two-point method. The two points (luma and chroma pairs) (A, B) are the minimum and maximum values within the set of neighboring luma samples, as depicted in Fig. 5.

Herein, the linear model parameters α and β are obtained as α = (y_B − y_A) / (x_B − x_A) and β = y_A − α·x_A, where (x_A, y_A) and (x_B, y_B) denote the luma and chroma values of points A and B, respectively.

in some embodiments, the division operation required in deriving α is avoided and replaced by multiplication and shifting as follows:

Herein, S is set equal to iShift, α is set equal to a, and β is set equal to b. In addition, g_aiLMDivTableLow and g_aiLMDivTableHigh are two tables, each having 512 entries, where each entry stores a 16-bit integer.

To derive the chroma prediction value, for the current VTM implementation, the multiplication is replaced by an integer operation as follows:

pred_C(i, j) = ( (α · rec′_L(i, j)) >> S ) + β

This embodiment is also simpler than the current VTM implementation, since the shift S always has the same value.
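A minimal sketch of the two-point derivation with the division replaced by a multiply-and-shift. The 512-entry tables g_aiLMDivTableLow/High are not reproduced here; a plain reciprocal scaled by 2^shift stands in for them, so this is an approximation of the VTM scheme rather than a bit-exact implementation:

```python
def two_point_lm(luma, chroma, shift=16):
    """Derive (alpha, beta) from the neighboring (luma, chroma) pairs at the
    minimum and maximum luma values, avoiding division at prediction time."""
    i_min = min(range(len(luma)), key=lambda i: luma[i])
    i_max = max(range(len(luma)), key=lambda i: luma[i])
    x_a, y_a = luma[i_min], chroma[i_min]
    x_b, y_b = luma[i_max], chroma[i_max]
    if x_b == x_a:
        return 0, y_a                      # degenerate case: flat luma
    # reciprocal of (x_b - x_a) scaled to 'shift' fractional bits
    alpha = (y_b - y_a) * ((1 << shift) // (x_b - x_a))
    beta = y_a - ((alpha * x_a) >> shift)
    return alpha, beta

def predict_chroma(alpha, beta, rec_luma, shift=16):
    """Integer chroma prediction: pred_C = ((alpha * rec_L) >> S) + beta."""
    return ((alpha * rec_luma) >> shift) + beta
```

With the two points (10, 20) and (50, 100), the slope is 2 and a mid-range luma sample of 30 predicts a chroma value of 60.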

1.6 Example of CCLM in VVC

The CCLM from JEM was adopted in VTM-2.0, but the multi-model CCLM (MMLM) was not. MDLM and the simplified CCLM have been adopted in VTM-3.0.

1.7 Example of local illumination compensation in JEM

Local Illumination Compensation (LIC) is based on a linear model of illumination changes, using a scaling factor a and an offset b. It is adaptively enabled or disabled for each inter-mode Codec Unit (CU).

When LIC is applied to a CU, a least-squares method is employed to derive the parameters a and b by using neighboring samples of the current CU and their corresponding reference samples. More specifically, as shown in Fig. 21, subsampled (2:1 subsampling) neighboring samples of the CU and the corresponding pixels (identified by motion information of the current CU or sub-CU) in the reference picture are used. The IC parameters are derived and applied separately for each prediction direction.
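A floating-point sketch of the least-squares LIC derivation; `cur_neigh` and `ref_neigh` are assumed flat lists of the neighboring samples of the current CU and of the reference block, respectively:

```python
def derive_lic_params(cur_neigh, ref_neigh):
    """Least-squares fit cur ≈ a * ref + b over 2:1 subsampled neighbor
    pairs, yielding the LIC scaling factor a and offset b."""
    cur = cur_neigh[::2]   # 2:1 subsampling of the neighboring samples
    ref = ref_neigh[::2]
    n = len(cur)
    sum_r, sum_c = sum(ref), sum(cur)
    sum_rc = sum(r * c for r, c in zip(ref, cur))
    sum_rr = sum(r * r for r in ref)
    denom = n * sum_rr - sum_r * sum_r
    if denom == 0:
        return 1.0, (sum_c - sum_r) / n  # flat reference: pure offset model
    a = (n * sum_rc - sum_r * sum_c) / denom
    b = (sum_c - a * sum_r) / n
    return a, b
```

When the current neighbors are an exact linear transform of the reference neighbors (cur = 2·ref + 3), the fit recovers a = 2 and b = 3.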

When a CU is coded in 2Nx2N Merge mode, the LIC flag is copied from neighboring blocks in a similar way to the motion information copy in Merge mode; otherwise, the LIC flag is signaled to the CU to indicate whether LIC is applicable.

When LIC is enabled for a picture, an additional CU level RD check is needed to determine if LIC is applicable to the CU. When LIC is enabled for a CU, the mean removed sum of absolute differences (MR-SAD) and the mean removed sum of absolute hadamard transform differences (MR-SATD) are used for integer pixel motion search and fractional pixel motion search, respectively, instead of SAD and SATD.

In order to reduce the coding complexity, the following coding scheme is applied in JEM:

when there is no significant illumination change between the current picture and its reference picture, LIC is disabled for the entire picture. To identify this situation, a histogram of the current picture and each reference picture of the current picture are computed at the encoder. Disabling LIC for the current picture if the histogram difference between the current picture and each reference image of the current picture is less than a given threshold; otherwise, LIC is enabled for the current picture.

2 example of the disadvantages in the prior embodiment

The current implementation introduces the two-point method to replace the LMS method of the LM mode in JEM. Although the new method reduces the number of additions and multiplications in CCLM, it also introduces the following problems:

1) Comparisons are introduced to find the minimum and maximum luma values, which is not friendly to Single Instruction Multiple Data (SIMD) software design.

2) Two lookup tables with a total of 1024 entries, each storing a 16-bit number, were introduced; the resulting 2K ROM memory requirement is undesirable in a hardware design.

3 Exemplary methods for cross-component prediction in video codecs

Embodiments of the presently disclosed technology overcome the disadvantages of the prior implementations to provide video codecs with higher codec efficiency and lower computational complexity. Based on the disclosed techniques, simplified linear model derivation for cross-component prediction may enhance existing and future video codec standards, as will be set forth in the examples described below for various embodiments. The examples of the disclosed technology provided below illustrate the general concept and are not meant to be construed as limiting. In examples, various features described in these examples may be combined unless explicitly indicated to the contrary.

In the following examples and methods, the term "LM method" includes, but is not limited to, the LM mode in JEM or VTM, the MMLM mode in JEM, the left LM mode that uses only left neighboring samples to derive the linear model, the above LM mode that uses only above neighboring samples to derive the linear model, and other kinds of methods that utilize luma reconstruction samples to derive chroma prediction blocks. All LM modes other than LM-L and LM-A are called normal LM modes.

In the following examples and methods, Shift(x, s) is defined as Shift(x, s) = (x + off) >> s, and SignShift(x, s) is defined as

SignShift(x, s) = (x + off) >> s when x >= 0, and -((-x + off) >> s) when x < 0.

Herein, off is an integer, for example 0 or 2^(s-1).
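A minimal Python sketch of these two rounding shifts (assuming off = 2^(s-1) when not given, one of the choices named above; the function names are illustrative):

```python
def shift(x, s, off=None):
    # Shift(x, s) = (x + off) >> s; off defaults to 2^(s-1) (it may also be 0).
    if off is None:
        off = 1 << (s - 1) if s > 0 else 0
    return (x + off) >> s

def sign_shift(x, s):
    # SignShift applies the same rounding shift to the magnitude and restores
    # the sign, so negative inputs round symmetrically with positive ones.
    off = 1 << (s - 1) if s > 0 else 0
    return (x + off) >> s if x >= 0 else -((-x + off) >> s)

print(shift(5, 1))       # (5 + 1) >> 1 = 3
print(sign_shift(-5, 1)) # -((5 + 1) >> 1) = -3
```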

The height and width of the current chroma block are denoted as H and W, respectively.

Fig. 6 shows an example of neighboring samples of a current chroma block. Let the coordinates of the top left sample point of the current chroma block be denoted (x, y). Then, the neighboring chroma samples (as shown in fig. 6) are represented as:

A: top sample at left: [x-1, y],

B: top middle sample at left: [x-1, y + H/2 - 1],

C: bottom middle sample at left: [x-1, y + H/2],

D: bottom sample at left: [x-1, y + H - 1],

E: top sample of the extended bottom at left: [x-1, y + H],

F: top middle sample of the extended bottom at left: [x-1, y + H + H/2 - 1],

G: bottom middle sample of the extended bottom at left: [x-1, y + H + H/2],

I: bottom sample of the extended bottom at left: [x-1, y + H + H - 1],

J: left sample at top: [x, y-1],

K: left middle sample at top: [x + W/2 - 1, y-1],

L: right middle sample at top: [x + W/2, y-1],

M: right sample at top: [x + W - 1, y-1],

N: left sample of the extended top: [x + W, y-1],

O: left middle sample of the extended top: [x + W + W/2 - 1, y-1],

P: right middle sample of the extended top: [x + W + W/2, y-1], and

Q: right sample of the extended top: [x + W + W - 1, y-1].
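As a concrete illustration, the labeled positions can be computed from (x, y), W, and H; a sketch (the dictionary layout is illustrative, not part of the design):

```python
def neighbor_positions(x, y, w, h):
    # Coordinates of the labeled neighboring chroma samples (fig. 6),
    # relative to the top-left sample (x, y) of a W x H chroma block.
    left = {
        'A': (x - 1, y),              'B': (x - 1, y + h // 2 - 1),
        'C': (x - 1, y + h // 2),     'D': (x - 1, y + h - 1),
        'E': (x - 1, y + h),          'F': (x - 1, y + h + h // 2 - 1),
        'G': (x - 1, y + h + h // 2), 'I': (x - 1, y + 2 * h - 1),
    }
    top = {
        'J': (x, y - 1),              'K': (x + w // 2 - 1, y - 1),
        'L': (x + w // 2, y - 1),     'M': (x + w - 1, y - 1),
        'N': (x + w, y - 1),          'O': (x + w + w // 2 - 1, y - 1),
        'P': (x + w + w // 2, y - 1), 'Q': (x + 2 * w - 1, y - 1),
    }
    return {**left, **top}

pos = neighbor_positions(0, 0, 8, 4)
print(pos['A'], pos['D'], pos['M'], pos['Q'])  # (-1, 0) (-1, 3) (7, -1) (15, -1)
```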

Example 1. The parameters α and β in the LM method are derived from chroma samples at two or more specific positions.

a. The derivation may also depend on the corresponding downsampled luma samples of the selected chroma samples. Alternatively, the derivation may depend on the corresponding luma samples of the selected chroma samples directly, such as in the 4:4:4 color format.

b. For example, the parameters α and β in CCLM are derived from chroma samples at 2^S (e.g., S = 2 or 3) positions, such as:

i. positions {A, D, J, M};

ii. positions {A, B, C, D, J, K, L, M};

iii. positions {A, I, J, Q};

iv. positions {A, B, D, I, J, K, M, Q};

v. positions {A, B, D, F, J, K, M, O};

vi. positions {A, B, F, I, J, K, O, Q};

vii. positions {A, C, E, I, J, L, N, Q};

viii. positions {A, C, G, I, J, L, P, Q};

ix. positions {A, C, E, G, J, L, N, P};

x. positions {A, B, C, D};

xi. positions {A, B, D, I};

xii. positions {A, B, D, F};

xiii. positions {A, C, E, I};

xiv. positions {A, C, G, I};

xv. positions {A, C, E, G};

xvi. positions {J, K, L, M};

xvii. positions {J, K, M, Q};

xviii. positions {J, K, M, O};

xix. positions {J, K, O, Q};

xx. positions {J, L, N, Q};

xxi. positions {J, L, P, Q};

xxii. positions {J, L, N, P};

xxiii. positions {A, B, C, D, E, F, G, I};

xxiv. positions {J, K, L, M, N, O, P, Q};

c. For example, the parameters α and β in CCLM are derived from chroma samples at:

i. any combination of one position from {A, B, C, D, E, F, G, I} and one position from {J, K, L, M, N, O, P, Q}, such as

(a) Positions A and J;

(b) positions B and K;

(c) positions C and L;

(d) positions D and M;

(e) positions E and N;

(f) positions F and O;

(g) positions G and P;

(h) positions I and Q;

ii. any two different positions extracted from {A, B, C, D, E, F, G, I}, such as

(a) Positions A and B;

(b) positions A and C;

(c) positions A and D;

(d) positions A and E;

(e) positions A and F;

(f) positions A and G;

(g) positions A and I;

(h) positions D and B;

(i) positions D and C;

(j) positions E and B;

(k) positions E and C;

(l) Positions I and B;

(m) positions I and C;

(n) positions I and D;

(o) positions I and E;

(p) positions I and F;

(q) positions I and G;

iii. any two different positions extracted from {J, K, L, M, N, O, P, Q}, such as

(a) Positions J and K;

(b) positions J and L;

(c) positions J and M;

(d) positions J and N;

(e) positions J and O;

(f) positions J and P;

(g) positions J and Q;

(h) positions M and K;

(i) positions M and L;

(j) positions N and K;

(k) positions N and L;

(l) Positions Q and K;

(m) positions Q and L;

(n) positions Q and M;

(o) positions Q and N;

(p) positions Q and O;

(q) positions Q and P;

iv. In one example, if the two selected positions have the same luma value, more positions may be further examined.

d. For example, in the two-point method, not all available chroma samples are searched for the minimum and maximum luma values to derive the parameters α and β in CCLM.

i. One out of every K chroma samples (and its corresponding downsampled luma sample) is included in the search set. K may be 2, 4, 6, or 8.

(a) For example, if Rec[x, y] is an above neighboring sample, it is included in the search set only if x % K == 0. If Rec[x, y] is a left neighboring sample, it is included in the search set only if y % K == 0.

ii. The search set includes only chroma samples at specific positions, such as the positions defined in 1.b.i-1.b.xxiv.
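The K-subsampled search set of item d.i(a) can be sketched as follows (the coordinate lists are illustrative):

```python
def search_set(above, left, k):
    # Keep only every K-th neighboring sample: an above sample Rec[x, y]
    # enters the search set when x % K == 0, a left sample when y % K == 0.
    kept = [(x, y) for (x, y) in above if x % k == 0]
    kept += [(x, y) for (x, y) in left if y % k == 0]
    return kept

above = [(x, -1) for x in range(8)]   # above neighboring row (illustrative)
left = [(-1, y) for y in range(8)]    # left neighboring column (illustrative)
print(search_set(above, left, 4))     # [(0, -1), (4, -1), (-1, 0), (-1, 4)]
```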

e. For the LM-L mode, all selected samples must be left neighboring samples.

f. For the LM-A mode, all selected samples must be above neighboring samples.

g. The selected locations may be fixed, or they may be adaptive.

i. In one example, which positions to select may depend on the width and height of the current chroma block;

in one example, which locations to select may be signaled from the encoder to the decoder, such as in VPS/SPS/PPS/slice header/slice/CTU/CU/PU.

h. The selected chroma samples are used to derive the parameters α and β using the least mean square method shown by Eq (2) and Eq (3). In Eq (2) and Eq (3), N is set to the number of selected samples.

i. A pair of selected chroma samples is used to derive the parameters α and β with the two-point method.

j. In one example, how to select the samples may depend on the availability of neighboring blocks.

i. For example, if both the left and above neighboring blocks are available, positions A, D, J, and M are selected; if only the left neighboring block is available, positions A and D are selected; and if only the above neighboring block is available, positions J and M are selected.

Example 2. Multiple sets of parameters in the CCLM mode can first be derived and then combined to form the final linear model parameters used for coding one block. Suppose α1 and β1 are derived from a group of chroma samples at specific positions denoted as group 1, α2 and β2 are derived from a group of chroma samples at specific positions denoted as group 2, ..., and αN and βN are derived from a group of chroma samples at specific positions denoted as group N. Then the final α and β may be derived from (α1, β1), ..., (αN, βN).

a. In one example, α is calculated as the average of α1, ..., αN, and β is calculated as the average of β1, ..., βN.

i. In one example, α = SignShift(α1 + α2, 1), β = SignShift(β1 + β2, 1).

ii. In one example, α = Shift(α1 + α2, 1), β = Shift(β1 + β2, 1).

iii. (α1, β1) and (α2, β2) may have different precision. For example, to obtain the chroma prediction CP from its corresponding downsampled luma sample LR, CP is calculated with (α1, β1) as CP = SignShift(α1 × LR + β1, Sh1), but with (α2, β2) as CP = SignShift(α2 × LR + β2, Sh2), where Sh1 is not equal to Sh2. In that case, the parameters need to be shifted before combining. Suppose Sh1 > Sh2; before combining, the parameters should be shifted as:

(a) α1 = SignShift(α1, Sh1 - Sh2), β1 = SignShift(β1, Sh1 - Sh2). The final precision is then that of (α2, β2).

(b) α1 = Shift(α1, Sh1 - Sh2), β1 = Shift(β1, Sh1 - Sh2). The final precision is then that of (α2, β2).

(c) α2 = α2 << (Sh1 - Sh2), β2 = β2 << (Sh1 - Sh2). The final precision is then that of (α1, β1).
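The combination in this example, pairing the averaging of item a.i with the precision alignment of item a.iii(a), can be sketched as follows (the numeric parameter values in the demo call are made up for illustration):

```python
def sign_shift(x, s):
    # Rounding right shift that treats negative values symmetrically.
    off = 1 << (s - 1) if s > 0 else 0
    return (x + off) >> s if x >= 0 else -((-x + off) >> s)

def combine_params(a1, b1, sh1, a2, b2, sh2):
    # Align (a1, b1) down to the lower precision sh2 (item a.iii(a)),
    # then average the two parameter sets (item a.i). The combined
    # parameters carry precision sh2.
    if sh1 > sh2:
        a1 = sign_shift(a1, sh1 - sh2)
        b1 = sign_shift(b1, sh1 - sh2)
    a = sign_shift(a1 + a2, 1)  # average of the two alphas
    b = sign_shift(b1 + b2, 1)  # average of the two betas
    return a, b, sh2

print(combine_params(40, 8, 5, 12, 3, 3))  # (11, 3, 3)
```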

b. Some examples of locations in groups 1 and 2:

i. Group 1: positions A and D; group 2: positions J and M.

ii. Group 1: positions A and I; group 2: positions J and Q.

iii. Group 1: positions A and D; group 2: positions E and I, where the two groups are used for the LM-L mode.

iv. Group 1: positions J and M; group 2: positions N and Q, where the two groups are used for the LM-A mode.

v. Group 1: positions A and B; group 2: positions C and D, where the two groups are used for the LM-L mode.

vi. Group 1: positions J and K; group 2: positions L and M, where the two groups are used for the LM-A mode.

Example 3. Suppose two chroma sample values denoted C0 and C1, and their corresponding luma sample values denoted L0 and L1 (with L0 < L1), are the inputs. The two-point method may use the inputs to derive α and β as

α = (C1 - C0) / (L1 - L0), and β = C0 - α × L0.

The bit depths of the luma and chroma samples are denoted BL and BC, respectively. One or more simplifications of this embodiment include:

a. If L1 is equal to L0, α is output as 0. Alternatively, when L1 is equal to L0, some intra prediction mode (e.g., DM mode, DC, or planar) is used to derive the prediction block instead of the CCLM mode.

b. The division operation is replaced by another operation without a look-up table. The log2 operation may be performed by examining the position of the most significant bit.

i. α = Shift(C1 - C0, Floor(log2(L1 - L0))) or α = SignShift(C1 - C0, Floor(log2(L1 - L0)))

ii. α = Shift(C1 - C0, Ceiling(log2(L1 - L0))) or α = SignShift(C1 - C0, Ceiling(log2(L1 - L0)))

iii. Example i or example ii may be selected based on the value of L1 - L0.

(a) For example, if L1 - L0 < T, example i is used; otherwise example ii is used. For example, T may be (Floor(log2(L1 - L0)) + Ceiling(log2(L1 - L0)))/2.

(b) For example, if [a first condition on L1 - L0 holds], example i is used; otherwise example ii is used.

(c) For example, if [a second condition on L1 - L0 holds], example i is used; otherwise example ii is used.
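The division-free derivation of items b.i-b.ii can be sketched as follows; here Floor(log2(L1 - L0)) is read off the most-significant-bit position, and the resulting α is integer-valued (the actual derivation keeps α at fixed-point precision, which this sketch omits):

```python
def shift(x, s):
    # Rounding right shift with off = 2^(s-1).
    off = 1 << (s - 1) if s > 0 else 0
    return (x + off) >> s

def alpha_no_division(c0, c1, l0, l1):
    # Replace (C1 - C0) / (L1 - L0) by a right shift whose amount is
    # Floor(log2(L1 - L0)), found from the most significant bit.
    if l1 == l0:
        return 0  # Example 3.a: alpha is output as 0
    s = (l1 - l0).bit_length() - 1  # Floor(log2(L1 - L0))
    return shift(c1 - c0, s)

print(alpha_no_division(20, 52, 10, 18))  # (52 - 20) >> 3 = 4
```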

c. The division operation is replaced by a look-up table denoted M[k].

i. The size of the look-up table, denoted V, is less than 2^P, where P is an integer such as 5, 6, or 7.

ii. Each entry of the look-up table stores an F-bit integer, e.g., F = 8 or 16.

(a) In one example, M[k - Z] = ((1 << S) + Off)/k, where S is an integer defining the precision, e.g., S = F; Off is an offset, e.g., Off = (k + Z) >> 1; and Z defines the starting value of the table, e.g., Z = 1, Z = 8, or Z = 32. A valid key k of the look-up table must satisfy k >= Z.

iii. k = Shift(L1 - L0, W) is used as the key to query the look-up table.

(a) In one example, W depends on BL, V, and Z.

(b) In one example, W also depends on the values of L1-L0.

iv. If k is not a valid key of the look-up table (k - Z < 0 or k - Z >= V), α is output as 0.

v. For example,

α = Shift((C1 - C0) × M[k - Z], D), or

α = SignShift((C1 - C0) × M[k - Z], D)

To derive the chroma prediction CP from its corresponding (e.g., downsampled for 4:2:0) luma sample LR, it is calculated as

CP = SignShift(α × LR + β, Sh), or

CP = Shift(α × LR + β, Sh)

Sh may be a fixed number, or it may depend on the values of C0, C1, L0, and L1 used to calculate α and β.

(a) Sh may depend on BL, BC, V, S, and D.

(b) D may depend on Sh.

The size of the look-up table, denoted V, equals 2^P, where P is an integer such as 5, 6, 7, or 8. Alternatively, V is set to 2^P - M (e.g., with M equal to 0).

Suppose α = P/Q (e.g., Q = L1 - L0 and P = C1 - C0, or they are derived otherwise); then α is calculated with the look-up table as α = Shift(P × M[k - Z], D) or α = SignShift(P × M[k - Z], D), where k is the key (index) used to query an entry of the look-up table.

(a) In one example, k is derived from Q using a function k = f(Q).

(b) In one example, k is derived from Q and P using a function k = f(Q, P).

(c) In one example, k is valid within a certain range [kMin, kMax]. For example, kMin = Z and kMax = V + Z.

(d) In one example, k = Shift(Q, W).

a. W may depend on BL, V, and Z.

b. W may depend on the value of Q.

c. In one example, when k is calculated as Shift(Q, W), α is calculated using the look-up table as

α = (Shift(P × M[k - Z], D)) << W or α = (SignShift(P × M[k - Z], D)) << W.

(e) In one example, k is derived in different ways with different values of Q.

a. For example, k equals Q when Q <= kMax, and k equals Shift(Q, W) when Q > kMax. For example, W is chosen as the smallest positive integer that makes Shift(Q, W) no greater than kMax.

b. For example, k = Min(kMax, Q).

c. For example, k = Max(kMin, Min(kMax, Q)).

(f) In one example, when Q < 0, Q is replaced by -Q in the calculation, and -α is then output.

(g) In one example, when Q is equal to 0, then α is set to a default value, such as 0 or 1.

(h) In one example, when Q is equal to 2^E with E > 0, α = Shift(P, E) or α = SignShift(P, E).

d. All operations to derive the LM parameters must be within K bits, where K may be 8, 10, 12, 16, 24, or 32.

i. If an intermediate variable can exceed the range represented by the constrained bits, it should be clipped or right-shifted to fit within the constrained bits.
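The look-up-table replacement of the division in item c can be sketched as follows, assuming V = 32, Z = 1, S = 8, Off = (k + Z) >> 1, and D = 0 (all values permitted above); the computed α stays scaled by 1 << S:

```python
def build_lut(v=32, s=8, z=1):
    # M[k - Z] = ((1 << S) + Off) / k with Off = (k + Z) >> 1, for
    # k = Z .. Z + V - 1, so each entry approximates (1 << S) / k as an
    # S-bit fixed-point reciprocal.
    return [((1 << s) + ((k + z) >> 1)) // k for k in range(z, z + v)]

def alpha_from_lut(c0, c1, l0, l1, lut, z=1):
    k = l1 - l0                  # key; a Shift(L1 - L0, W) step could reduce it
    if k - z < 0 or k - z >= len(lut):
        return 0                 # invalid key: alpha is output as 0
    # alpha = (C1 - C0) * M[k - Z], still scaled by 1 << S (D = 0 here)
    return (c1 - c0) * lut[k - z]

lut = build_lut()
a = alpha_from_lut(20, 52, 10, 18, lut)
print(a, a >> 8)  # 1024 4 : alpha scaled by 256, and its integer part
```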

Example 4. Multiple linear models may be used for a single chroma block, and the selection among the multiple linear models depends on the positions of the chroma samples within the chroma block.

a. In one example, the LM-L and LM-A modes may be combined in a single chroma block.

b. In one example, some samples are predicted by the LM-L mode and other samples are predicted by the LM-A mode.

i. Fig. 7 shows an example. Suppose the top-left sample is at position (0, 0). Samples at position (x, y) with x > y (or x >= y) are predicted by LM-A, and the other samples are predicted by LM-L.

c. Assuming that the predictions for samples at location (x, y) with LM-L and LM-A are denoted as P1(x, y) and P2(x, y), respectively, the final prediction P (x, y) is calculated as a weighted sum of P1(x, y) and P2(x, y).

i. P(x, y) = w1 × P1(x, y) + w2 × P2(x, y)

(a) w1 + w2 = 1.

ii. P(x, y) = (w1 × P1(x, y) + w2 × P2(x, y) + Offset) >> shift, where Offset may be 0 or 1 << (shift - 1), and shift is an integer such as 1, 2, 3, .... (a) w1 + w2 = 1 << shift.

iii. P(x, y) = (w1 × P1(x, y) + ((1 << shift) - w1) × P2(x, y) + Offset) >> shift, where Offset may be 0 or 1 << (shift - 1), and shift is an integer such as 1, 2, 3, ....

iv. w1 and w2 may depend on the position (x, y).

(a) For example, if x < y, then w1 > w2 (e.g., w1 = 3, w2 = 1).

(b) For example, if x > y, then w1 < w2 (e.g., w1 = 1, w2 = 3).

(c) For example, if x == y, then w1 = w2 (e.g., w1 = 2, w2 = 2).

(d) For example, when x < y, w1 - w2 increases as y - x increases.

(e) For example, when x > y, w2 - w1 increases as x - y increases.
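The position-dependent weighting of item c can be sketched as follows, using the example weights above (w1 = 3, 2, or 1) with w1 + w2 = 1 << shift and shift = 2:

```python
def combined_prediction(p1, p2, w, h, sh=2):
    # Weighted sum of the LM-L prediction p1 and the LM-A prediction p2
    # (Example 4.c.iii): w1 > w2 when x < y, w1 < w2 when x > y, and
    # equal weights on the diagonal; w1 + w2 == 1 << sh.
    offset = 1 << (sh - 1)
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            w1 = 3 if x < y else (1 if x > y else 2)
            w2 = (1 << sh) - w1
            out[y][x] = (w1 * p1[y][x] + w2 * p2[y][x] + offset) >> sh
    return out

p1 = [[100] * 4 for _ in range(4)]  # constant LM-L prediction (illustrative)
p2 = [[60] * 4 for _ in range(4)]   # constant LM-A prediction (illustrative)
print(combined_prediction(p1, p2, 4, 4)[0])  # [80, 70, 70, 70]
```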

Example 5. It is proposed to divide the neighboring samples, including the chroma samples and their corresponding luma samples (which may be downsampled), into N groups. The maximum and minimum luma values of the k-th group (k = 0, 1, ..., N-1) are denoted MaxL_k and MinL_k, and their corresponding chroma values are denoted MaxC_k and MinC_k, respectively.

a. In one example, MaxL is calculated as MaxL = f1(MaxL_S0, MaxL_S1, ..., MaxL_Sm); MaxC is calculated as MaxC = f2(MaxC_S0, MaxC_S1, ..., MaxC_Sm); MinL is calculated as MinL = f3(MinL_S0, MinL_S1, ..., MinL_Sm); and MinC is calculated as MinC = f4(MinC_S0, MinC_S1, ..., MinC_Sm), where f1, f2, f3, and f4 are functions. The two-point method then uses these values to derive α and β as:

α = (MaxC - MinC) / (MaxL - MinL), and β = MinC - α × MinL

i. In one example, f1, f2, f3, and f4 all represent the averaging function.

ii. S0, S1, ..., Sm are the indices of the groups selected to compute α and β.

(1) For example, all groups are used, e.g., S0 = 0, S1 = 1, ..., Sm = N-1.

(2) For example, two groups are used, e.g., m = 1, S0 = 0, S1 = N-1.

(3) For example, not all groups are used, e.g., m < N-1, S0 = 0, S1 = 2, S2 = 4, ....
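Item a with f1-f4 as averaging functions and two groups (e.g., the left column and the above row, as in item b) can be sketched as follows; real division is used for α to keep the sketch short, whereas an implementation would use the integer techniques of Example 3:

```python
def two_point_from_groups(groups):
    # groups: list of (luma, chroma) sample lists. Per group, take the
    # samples with the maximum and minimum luma; average those extremes
    # across the selected groups, then apply the two-point rule.
    n = len(groups)
    max_l = sum(max(g, key=lambda s: s[0])[0] for g in groups) // n
    max_c = sum(max(g, key=lambda s: s[0])[1] for g in groups) // n
    min_l = sum(min(g, key=lambda s: s[0])[0] for g in groups) // n
    min_c = sum(min(g, key=lambda s: s[0])[1] for g in groups) // n
    if max_l == min_l:
        return 0, min_c          # degenerate case: flat luma
    alpha = (max_c - min_c) / (max_l - min_l)
    beta = min_c - alpha * min_l
    return alpha, beta

left = [(10, 20), (50, 60)]   # (luma, chroma) pairs from the left column
above = [(30, 40), (70, 80)]  # (luma, chroma) pairs from the above row
print(two_point_from_groups([left, above]))  # (1.0, 10.0)
```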

b. In one example, the samples located in the above row (or their downsampled samples) may be classified into one group, and the samples located in the left column of the block (or their downsampled samples) may be classified into another group.

c. In one example, the samples (or downsampled samples) are classified based on their positions or coordinates.

i. For example, samples may be classified into two groups.

(1) Samples with coordinates (x, y) in the above row are classified into group S0 if x % P == Q, where P and Q are integers, e.g., P = 2, Q = 1; P = 2, Q = 0; or P = 4, Q = 0; otherwise, they are classified into group S1.

(2) Samples with coordinates (x, y) in the left column are classified into group S0 if y % P == Q, where P and Q are integers, e.g., P = 2, Q = 1; P = 2, Q = 0; or P = 4, Q = 0; otherwise, they are classified into group S1.

(3) Only samples in one group (such as S0) are used to find MaxC and MaxL. For example, MaxL = MaxL_S0 and MaxC = MaxC_S0.

d. In one example, only a portion of the neighboring samples (or downsampled samples) are used to be divided into N groups.

e. The number of groups (e.g., N) and/or the selected group index and/or function (f1/f2/f3/f4) may be predefined or signaled in the SPS/VPS/PPS/picture header/slice group header/LCUs/CUs.

f. In one example, how to select samples for each group may depend on the availability of neighboring blocks.

i. For example, when both the left and above neighboring blocks are available, MaxL0/MaxC0 and MinL0/MinC0 are found from positions A and D, and MaxL1/MaxC1 and MinL1/MinC1 are found from positions J and M; then MaxL = (MaxL0 + MaxL1)/2, MaxC = (MaxC0 + MaxC1)/2, MinL = (MinL0 + MinL1)/2, and MinC = (MinC0 + MinC1)/2.

ii. For example, when only the left neighboring block is available, MaxL/MaxC and MinL/MinC are found directly from positions A and D.

(1) Alternatively, if the above neighboring block is not available, α and β are set equal to default values, e.g., α = 0 and β = 1 << (bitDepth - 1), where bitDepth is the bit depth of the chroma samples.

iii. For example, when only the above neighboring block is available, MaxL/MaxC and MinL/MinC are found directly from positions J and M.

(1) Alternatively, if the left neighboring block is not available, α and β are set equal to default values, e.g., α = 0 and β = 1 << (bitDepth - 1), where bitDepth is the bit depth of the chroma samples.

g. In one example, how the samples are selected for each group may depend on the width and height of the block.

h. In one example, how the samples are selected for each group may depend on the values of the samples.

i. In one example, the two samples having the maximum luma value and the minimum luma value are placed in the first group, and all other samples are in the second group.

Example 6. Whether and how the LM-L and LM-A modes are applied may depend on the width (W) and height (H) of the current block.

(a) For example, LM-L cannot be applied if W > K × H (e.g., K = 2).

(b) For example, LM-A cannot be applied if H > K × W (e.g., K = 2).

(c) If one of LM-L and LM-A cannot be applied, the flag indicating whether LM-L or LM-A is used should not be signaled.

Example 7. A flag is signaled to indicate whether the CCLM mode is applied. The context used for coding the flag in arithmetic coding may depend on whether the top-left neighboring block applies the CCLM mode, as shown in fig. 8.

(a) In one example, if the upper left neighboring block applies CCLM mode, then the first context is used; and if the upper left neighboring block does not apply the CCLM mode, using the second context.

(b) In one example, if the upper left neighboring block is not available, it is considered to not apply the CCLM mode.

(c) In one example, if the upper left neighboring block is not available, it is considered to apply the CCLM mode.

(d) In one example, if the top-left neighboring block is not intra-coded, it is considered not to apply the CCLM mode.

(e) In one example, if the top-left neighboring block is not intra-coded, it is considered to apply the CCLM mode.

Example 8. The indications or codewords of the DM and LM modes may be coded in different orders from sequence to sequence, picture to picture, slice to slice, or block to block.

(a) The coding order of the LM and DM indications (e.g., whether the LM mode indication is coded first and, if the mode is not LM, the DM mode indication follows; or whether the DM mode indication is coded first and, if the mode is not DM, the LM mode indication follows) may depend on the mode information of one or more neighboring blocks.

(b) In one example, when the top-left block of the current block is available and is coded in LM mode, then the indication of LM mode is coded first.

(c) Alternatively, when the top-left block of the current block is available and is coded in DM mode, then the indication of DM mode is coded first.

(d) Alternatively, when the top-left block of the current block is available and is coded in a non-LM (e.g., DM mode or other intra prediction mode other than LM), then the indication of DM mode is coded first.

(e) In one example, the indication of order may be signaled in SPS/VPS/PPS/picture header/slice group header/LCUs/CUs.

Example 9. In the above examples, the samples (or downsampled samples) may be located outside the range of the 2 × W above neighboring samples or the 2 × H left neighboring samples shown in fig. 6.

(a) In the LM mode or the LM-L mode, it is possible to use a neighboring sample RecC[x - 1, y + d], where d is in the range [T, S]. T may be less than 0 and S may be greater than 2H - 1. For example, T = -4 and S = 3H. In another example, T = 0 and S = max(2H, W + H). In yet another example, T = 0 and S = 4H.

(b) In the LM mode or the LM-A mode, it is possible to use a neighboring sample RecC[x + d, y], where d is in the range [T, S]. T may be less than 0 and S may be greater than 2W - 1. For example, T = -4 and S = 3W. In another example, T = 0 and S = max(2W, W + H). In yet another example, T = 0 and S = 4W.

Example 10. In one example, the chroma neighboring samples and their corresponding luma samples (which may be downsampled) are downsampled before deriving the linear model parameters α and β as disclosed in examples 1-7. Suppose the width and height of the current chroma block are W and H.

(a) In one example, whether and how downsampling is performed may depend on W and H.

(b) In one example, after the downsampling process, the number of neighboring samples used to derive the parameters on the left side of the current block and the number of neighboring samples used to derive the parameters above the current block should be the same.

(c) In one example, if W is equal to H, chroma neighboring samples and their corresponding luma samples (which may be downsampled) are not downsampled.

(d) In one example, if W < H, chroma neighboring samples and their corresponding luma samples (which may be downsampled) to the left of the current block are downsampled.

(i) In one example, one chroma sample out of every H/W chroma samples is picked for deriving α and β; the other chroma samples are discarded. For example, assuming R[0, 0] represents the top-left sample of the current block, R[-1, K × H/W] (K from 0 to W-1) are picked to derive α and β.

(e) In one example, if W > H, chroma neighboring samples above the current block and their corresponding luma samples (which may be downsampled) are downsampled.

(i) In one example, one chroma sample out of every W/H chroma samples is picked for deriving α and β; the other chroma samples are discarded. For example, assuming R[0, 0] represents the top-left sample of the current block, R[K × W/H, -1] (K from 0 to H-1) are picked to derive α and β.

(ii) Fig. 9 shows examples of the samples picked when positions D and M in fig. 6 are used to derive α and β, and of the downsampling performed when W > H.
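The equal-count subsampling of this example can be sketched as follows (only the selected coordinates are computed; R[0, 0] is the top-left sample of the current block, and keeping the shorter side whole is an assumption consistent with item (b)):

```python
def pick_neighbors(w, h):
    # When W != H, subsample the longer neighboring side so both sides
    # contribute the same number of samples (Example 10.b):
    # the left column picks R[-1, K*H/W], the above row picks R[K*W/H, -1].
    if w <= h:
        left = [(-1, k * (h // w)) for k in range(w)]
        above = [(k, -1) for k in range(w)]
    else:
        left = [(-1, k) for k in range(h)]
        above = [(k * (w // h), -1) for k in range(h)]
    return left, above

left, above = pick_neighbors(8, 4)  # W > H: subsample the above row
print(above)  # [(0, -1), (2, -1), (4, -1), (6, -1)]
print(left)   # [(-1, 0), (-1, 1), (-1, 2), (-1, 3)]
```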

Example 11.The neighboring downsampled/originally reconstructed samples and/or downsampled/originally reconstructed samples may be further refined before being used in a linear model prediction process or a cross-color component prediction process.

(a) "to be refined" may refer to a filtering process.

(b) "to be refined" may refer to any non-linear processing.

(c) It is proposed to pick several neighboring samples (including chroma samples and their corresponding luma samples, which may be downsampled) to compute C1, C0, L1, and L0 in order to derive α and β, e.g., as α = (C1 - C0)/(L1 - L0) and β = C0 - α × L0.

(d) In one example, S neighboring luma samples (which may be downsampled) denoted as Lx1, Lx2, … …, LxS and their corresponding chroma samples denoted as Cx1, Cx2, … …, CxS are used to derive C0 and L0, and T neighboring luma samples (which may be downsampled) denoted as Ly1, Ly2, … …, LyT and their corresponding chroma samples denoted as Cy1, Cy2, … …, CyT are used to derive C1 and L1, as follows:

(i) C0 = f0(Cx1, Cx2, ..., CxS), L0 = f1(Lx1, Lx2, ..., LxS), C1 = f2(Cy1, Cy2, ..., CyT), L1 = f3(Ly1, Ly2, ..., LyT), where f0, f1, f2, and f3 are any functions.

(ii) In one example, f0 is the same as f1.

(iii) In one example, f2 is the same as f3.

(iv) In one example, f0, f1, f2, f3 are the same.

1. For example, they are all averaging functions.

(v) In one example, S is equal to T.

1. In one example, the set { x1, x2, … xS } is the same as the set { y1, y2, …, yT }.

(vi) In one example, Lx1, Lx2, … …, LxS are selected as the smallest S luminance samples in the group of luminance samples.

1. For example, the set of luminance samples includes all the neighboring samples used in VTM-3.0 to derive the CCLM linearity parameters.

2. For example, the set of luminance samples includes some of the neighboring samples used in VTM-3.0 to derive the CCLM linearity parameters.

a. For example, the brightness sample group includes four samples, as shown in fig. 2-5.

(vii) In one example, Ly1, Ly2, ..., LyT are selected as the largest T luma samples in the group of luma samples.

1. For example, the set of luminance samples includes all the neighboring samples used in VTM-3.0 to derive the CCLM linearity parameters.

2. For example, the set of luminance samples includes some of the neighboring samples used in VTM-3.0 to derive the CCLM linearity parameters.

a. For example, the brightness sample group includes four samples, as shown in fig. 2-5.

Example 12. It is proposed to select the other neighboring samples (or downsampled neighboring samples) based on the largest sample among a given set of neighboring samples (or downsampled neighboring samples).

(a) In one example, the largest neighboring sample (or downsampled neighboring sample) is located at position (x0, y0). Then the samples in the regions (x0 - d1, y0), (x0, y0 - d2), (x0 + d3, y0), (x0, y0 + d4) may be used to select the other samples. The integers d1, d2, d3, d4 may depend on the position (x0, y0). For example, if (x0, y0) is to the left of the current block, d1 = d3 = 1 and d2 = d4 = 0; if (x0, y0) is above the current block, d1 = d3 = 0 and d2 = d4 = 1.

(b) In one example, the smallest neighboring sample (or downsampled neighboring sample) is located at position (x1, y1). Then the samples in the regions (x1 - d1, y1), (x1, y1 - d2), (x1 + d3, y1), (x1, y1 + d4) may be used to select the other samples. The integers d1, d2, d3, d4 may depend on the position (x1, y1). For example, if (x1, y1) is to the left of the current block, d1 = d3 = 1 and d2 = d4 = 0; if (x1, y1) is above the current block, d1 = d3 = 0 and d2 = d4 = 1.

(c) In one example, the above samples represent samples of one color component (e.g., the luma component). The samples used in the CCLM/cross-color-component process may then be derived from the corresponding coordinates of the second color component.

(d) Similar methods can be used to derive the smallest samples.

Example 13. In the above examples, luma and chroma may be switched. Alternatively, the luma color component may be replaced by the primary color component (e.g., G), and the chroma color component may be replaced by a secondary color component (e.g., B or R).

Example 14. The choice of the positions of the chroma samples (and/or the corresponding luma samples) may depend on the coded mode information.

(a) Alternatively, it may further depend on the availability of neighboring samples, such as whether the left column, the above row, the above-right row, or the below-left column is available. Fig. 10 depicts the concept of the left column, above row, above-right row, and below-left column with respect to a block.

(b) Alternatively, it may further depend on the availability of samples located at particular positions, such as whether the first above-right sample and/or the first below-left sample is available.

(c) Alternatively, it may further depend on the block size.

(i) For example, it may depend on the ratio between the width and the height of the current chroma (and/or luma) block.

(ii) For example, it may depend on whether the width and/or height is equal to K (e.g., K = 2).

(d) In one example, when the current mode is the normal LM mode, the following may be applied to select chroma samples (and/or downsampled or non-downsampled luma samples):

(i) If both the left column and the above row are available, two samples in the left column and two samples in the above row may be selected. They may be located at (assuming the top-left coordinate of the current block is (x, y)):

(x-1, y), (x, y-1), (x-1, y + H - 1), and (x + W - 1, y-1);

(x-1, y), (x, y-1), (x-1, y + H - H/W - 1), and (x + W - 1, y-1), for example when H is greater than W;

(x-1, y), (x, y-1), (x-1, y + H - 1), and (x + W - W/H - 1, y-1), for example when H is less than W;

(x-1, y), (x, y-1), (x-1, y + H - max(1, H/W)), and (x + W - max(1, W/H), y-1).
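The last variant above, with the max(1, H/W) and max(1, W/H) terms, can be sketched as follows (integer division stands in for H/W and W/H):

```python
def normal_lm_positions(x, y, w, h):
    # The four selected positions when both the left column and the above
    # row are available: two left samples and two above samples, with the
    # max(1, H/W) and max(1, W/H) terms keeping indices inside short sides.
    return [
        (x - 1, y),
        (x, y - 1),
        (x - 1, y + h - max(1, h // w)),
        (x + w - max(1, w // h), y - 1),
    ]

print(normal_lm_positions(0, 0, 8, 4))  # [(-1, 0), (0, -1), (-1, 3), (6, -1)]
```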

(ii) If only the above row is available, samples are selected only from the above row.

1. For example, four samples in the upper row may be selected.

2. For example, two samples may be selected.

3. How the samples are selected may depend on the width/height. For example, four samples are selected when W > 2, and two samples are selected when W = 2.

4. The selected samples may be located (assuming (x, y) coordinates of the current block at the top left):

a. (x, y-1), (x + W/4, y-1), (x + 2*W/4, y-1), (x + 3*W/4, y-1)

b. (x, y-1), (x + W/4, y-1), (x + 3*W/4, y-1), (x + W - 1, y-1)

c. (x, y-1), (x + (2W)/4, y-1), (x + 2*(2W)/4, y-1), (x + 3*(2W)/4, y-1). For example, when the above-right row is available, or when the first above-right sample is available.

d. (x, y-1), (x + (2W)/4, y-1), (x + 3*(2W)/4, y-1), (x + (2W) - 1, y-1). For example, when the above-right row is available, or when the first above-right sample is available.

(iii) Samples are selected from the left column only if only the left column is available.

1. For example, four samples in the left column may be selected;

2. for example, two samples in the left column may be selected;

3. How the samples are selected may depend on the width/height. For example, four samples are selected when H > 2, and two samples are selected when H = 2.

4. The selected samples may be located at:

a. (x-1, y), (x-1, y + H/4), (x-1, y + 2*H/4), (x-1, y + 3*H/4)

b. (x-1, y), (x-1, y + 2*H/4), (x-1, y + 3*H/4), (x-1, y + H - 1)

c. (x-1, y), (x-1, y + (2H)/4), (x-1, y + 2*(2H)/4), (x-1, y + 3*(2H)/4). For example, when the below-left column is available, or when the first below-left sample is available.

d. (x-1, y), (x-1, y + 2*(2H)/4), (x-1, y + 3*(2H)/4), (x-1, y + (2H) - 1). For example, when the below-left column is available, or when the first below-left sample is available.

(iv) For the above example, only two of the four samples may be selected.

(e) In one example, when the current mode is the LM-A mode, samples may be selected according to item (d)(ii) above.

(f) In one example, when the current mode is the LM-L mode, samples may be selected according to item (d)(iii) above.

(g) The selected luma samples (e.g., according to the selected chroma positions) may be grouped into 2 groups, one group having the maximum and minimum values of all selected samples, and the other group having all remaining samples.

(i) The two maxima of the two groups are averaged as the maximum value in the two-point method, and the two minima of the two groups are averaged as the minimum value in the two-point method, to derive the LM parameters.

(ii) When only 4 samples are selected, the two larger sample values are averaged, the two smaller sample values are averaged, and the averaged values are used as the inputs to the two-point method to derive the LM parameters.

Example 15.In the above example, the luminance and chrominance may be switched. Alternatively, the luminance color component may be replaced by a primary color component (e.g., G) and the chrominance color component may be replaced by a secondary color component (e.g., B or R).

Example 16. It is proposed to select the above neighboring chroma samples (and/or their corresponding luma samples, which may be downsampled) based on a first position offset value (denoted F) and a step value (denoted S). Suppose the width of the available above neighboring samples to be used is W.

a. In one example, W may be set to the width of the current block.

b. In one example, W may be set to (L × width of the current block), where L is an integer value.

c. In one example, when both the upper block and the left block are available, W may be set to the width of the current block.

i. Alternatively, when the left block is not available, W may be set to (L × width of the current block), where L is an integer value.

ii. In one example, L may depend on the availability of the above-right block. Alternatively, L may depend on the availability of one above-left sample.

d. In one example, W may depend on the codec mode.

i. In one example, if the current block is coded in the LM mode, W may be set to the width of the current block;

ii. In one example, if the current block is coded in the LM-A mode, W may be set to (L × width of the current block), where L is an integer value.

(a) L may depend on the availability of the above-right block. Alternatively, L may depend on the availability of one above-left sample.

e. Assuming that the top-left coordinate of the current block is (x0, y0), the above neighboring samples at positions (x0 + F + K × S, y0 - 1), with K = 0, 1, 2, …, kMax, are selected.

f. In one example, F = W/P, where P is an integer.

i. For example, P = 2^i, where i is an integer such as 1 or 2.

ii. Alternatively, F = W/P + offset.

g. In one example, S = W/Q, where Q is an integer.

i. For example, Q = 2^j, where j is an integer such as 1 or 2.

h. In one example, F = S/R, where R is an integer.

i. For example, R = 2^m, where m is an integer such as 1 or 2.

i. In one example, S = F/Z, where Z is an integer.

i. For example, Z = 2^n, where n is an integer such as 1 or 2.

j. kMax and/or F and/or S and/or offset may depend on the prediction mode of the current block (such as LM, LM-A or LM-L).

k. kMax and/or F and/or S and/or offset may depend on the width and/or height of the current block.

l. kMax and/or F and/or S and/or offset may depend on the availability of neighboring samples.

m. kMax and/or F and/or S and/or offset may depend on W.

n. For example, kMax = 1, F = W/4, S = W/2, and offset = 0. Further alternatively, these settings are made if the current block is LM coded, both left and above neighboring samples are available, and W >= 4.

o. For example, kMax = 3, F = W/8, S = W/4, and offset = 0. Further alternatively, these settings are made if the current block is LM coded, only the above neighboring samples are available, and W >= 4.

p. For example, kMax = 3, F = W/8, S = W/4, and offset = 0. Further alternatively, these settings are made if the current block is LM-A coded and W >= 4.

q. For example, kMax = 1, F = 0, S = 1, and offset = 0. Further alternatively, these settings are made if W is equal to 2.
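The offset/step selection of items n–q can be sketched as follows (illustrative Python, not normative; the function names and the specific settings shown are taken from items n and p above as assumptions):

```python
def above_sample_positions(x0, k_max, F, S):
    """x-coordinates of the selected above samples, i.e. the points
    (x0 + F + K*S, y0 - 1) for K = 0..kMax (sketch)."""
    return [x0 + F + k * S for k in range(k_max + 1)]


def lm_above_positions(x0, W):
    """Item n: kMax = 1, F = W/4, S = W/2 for the normal LM mode
    when both neighbors are available and W >= 4."""
    return above_sample_positions(x0, k_max=1, F=W // 4, S=W // 2)


def lm_a_above_positions(x0, W):
    """Item p: kMax = 3, F = W/8, S = W/4 for the LM-A mode."""
    return above_sample_positions(x0, k_max=3, F=W // 8, S=W // 4)
```

For W = 8 and x0 = 0 this picks columns 2 and 6 in the normal LM mode, and columns 1, 3, 5, 7 in LM-A mode, i.e. the samples are spread evenly over the available row.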

Example 17. It is proposed to select the left neighboring chroma samples (and/or their corresponding luma samples, which may be downsampled) based on a first position offset value (denoted F) and a step value (denoted S). Assume that the height of the available left neighboring samples to be used is H.

a. In one example, H may be set to the height of the current block.

b. In one example, H may be set to (L x height of the current block), where L is an integer value.

c. In one example, H may be set to the height of the current block when both the upper block and the left block are available.

i. Alternatively, when the upper block is not available, H may be set to (L x the height of the current block), where L is an integer value.

ii. In one example, L may depend on the availability of the below-left block. Alternatively, L may depend on the availability of one below-left sample.

iii. Alternatively, H may be set to (height of the current block + width of the current block) if the required above-right neighboring block is available.

(a) In one example, when the left neighboring sample is not available, the same H upper neighboring samples are chosen for LM-A mode and LM mode.

d. In one example, H may depend on the codec mode.

i. In one example, if the current block is coded in the LM mode, H may be set to the height of the current block;

ii. In one example, if the current block is coded in the LM-L mode, H may be set to (L × height of the current block), where L is an integer value.

(a) L may depend on the availability of the below-left block. Alternatively, L may depend on the availability of one above-left sample.

(b) Alternatively, H may be set to (height of the current block + width of the current block) if the required below-left neighboring block is available.

(c) In one example, when the above neighboring samples are not available, the same H left neighboring samples are chosen for LM-L mode and LM mode.

e. Assuming that the top-left coordinate of the current block is (x0, y0), the left neighboring samples at positions (x0 - 1, y0 + F + K × S), with K = 0, 1, 2, …, kMax, are selected.

f. In one example, F = H/P, where P is an integer.

i. For example, P = 2^i, where i is an integer such as 1 or 2.

ii. Alternatively, F = H/P + offset.

g. In one example, S = H/Q, where Q is an integer.

i. For example, Q = 2^j, where j is an integer such as 1 or 2.

h. In one example, F = S/R, where R is an integer.

i. For example, R = 2^m, where m is an integer such as 1 or 2.

i. In one example, S = F/Z, where Z is an integer.

i. For example, Z = 2^n, where n is an integer such as 1 or 2.

j. kMax and/or F and/or S and/or offset may depend on the prediction mode of the current block (such as LM, LM-A or LM-L).

k. kMax and/or F and/or S and/or offset may depend on the width and/or height of the current block.

l. kMax and/or F and/or S and/or offset may depend on H.

m. kMax and/or F and/or S and/or offset may depend on the availability of neighboring samples.

n. For example, kMax = 1, F = H/4, S = H/2, and offset = 0. Further alternatively, these settings are made if the current block is LM coded, both left and above neighboring samples are available, and H >= 4.

o. For example, kMax = 3, F = H/8, S = H/4, and offset = 0. Further alternatively, these settings are made if the current block is LM coded, only the left neighboring samples are available, and H >= 4.

p. For example, kMax = 3, F = H/8, S = H/4, and offset = 0. Further alternatively, these settings are made if the current block is LM-L coded and H >= 4.

q. For example, kMax = 1, F = 0, S = 1, and offset = 0. Further alternatively, these settings are made if H is equal to 2.

Example 18: it is proposed to select two or four neighboring chroma samples (and/or their corresponding luma samples, which may be downsampled) to derive linear model parameters.

a. In one example, maxY/maxC and minY/minC are derived from two or four adjacent chroma samples (and/or their corresponding luma samples, which may be downsampled) and then used to derive linear model parameters using a two-point approach.

b. In one example, if two neighboring chroma samples (and/or their corresponding luma samples, which may be downsampled) are selected to derive maxY/maxC and minY/minC, then minY is set to the smaller luma sample value and minC to its corresponding chroma sample value; maxY is set to the larger luma sample value and maxC to its corresponding chroma sample value.

c. In one example, if four neighboring chroma samples (and/or their corresponding luma samples, which may be downsampled) are selected to derive maxY/maxC and minY/minC, the luma samples and their corresponding chroma samples are divided into two arrays G0 and G1, each containing two luma samples and their corresponding chroma samples.

i. Assuming that four luminance samples and their corresponding chrominance samples are denoted as S0, S1, S2, S3, they may be divided into two groups in any order. For example:

(a)G0={S0,S1},G1={S2,S3};

(b)G0={S1,S0},G1={S3,S2};

(c)G0={S0,S2},G1={S1,S3};

(d)G0={S2,S0},G1={S3,S1};

(e)G0={S1,S2},G1={S0,S3};

(f)G0={S2,S1},G1={S3,S0};

(g)G0={S0,S3},G1={S1,S2};

(h)G0={S3,S0},G1={S2,S1};

(i)G0={S1,S3},G1={S0,S2};

(j)G0={S3,S1},G1={S2,S0};

(k)G0={S3,S2},G1={S0,S1};

(l)G0={S2,S3},G1={S1,S0};

(m) G0 and G1 may be interchanged.

in one example, the luminance sample values of G0[0] and G0[1] are compared, and if the luminance sample value of G0[0] is greater than the luminance sample value of G0[1], then the luminance sample value of G0[0] and its corresponding chroma sample point are swapped with the luminance sample value of G0[1] and its corresponding chroma sample point.

(a) Alternatively, if the luma sample value of G0[0] is greater than or equal to the luma sample value of G0[1], then the luma samples of G0[0] and their corresponding chroma samples are exchanged with the luma samples of G0[1] and their corresponding chroma samples.

(b) Alternatively, if the luminance sample value of G0[0] is less than the luminance sample value of G0[1], then the luminance sample and its corresponding chroma sample of G0[0] are exchanged with the luminance sample and its corresponding chroma sample of G0[1 ].

(c) Alternatively, if the luma sample value of G0[0] is less than or equal to the luma sample value of G0[1], then the luma samples of G0[0] and their corresponding chroma samples are exchanged with the luma samples of G0[1] and their corresponding chroma samples.

in one example, the luminance sample values of G1[0] and G1[1] are compared, and if the luminance sample value of G1[0] is greater than the luminance sample value of G1[1], then the luminance sample value of G1[0] and its corresponding chroma sample point are swapped with the luminance sample value of G1[1] and its corresponding chroma sample point.

(a) Alternatively, if the luma sample value of G1[0] is greater than or equal to the luma sample value of G1[1], then the luma samples of G1[0] and their corresponding chroma samples are exchanged with the luma samples of G1[1] and their corresponding chroma samples.

(b) Alternatively, if the luminance sample value of G1[0] is less than the luminance sample value of G1[1], then the luminance sample and its corresponding chroma sample of G1[0] are exchanged with the luminance sample and its corresponding chroma sample of G1[1 ].

(c) Alternatively, if the luma sample value of G1[0] is less than or equal to the luma sample value of G1[1], then the luma samples of G1[0] and their corresponding chroma samples are exchanged with the luma samples of G1[1] and their corresponding chroma samples.

in one example, the luminance sample values of G0[0] and G1[1] are compared, and G0 and G1 are swapped if the luminance sample value of G0[0] is greater than (or less than, or not greater than, or not less than) the luminance sample value of G1[1 ].

(a) In one example, the luminance sample values of G0[0] and G1[0] are compared, and G0 and G1 are swapped if the luminance sample value of G0[0] is greater than (or less than, or not greater than, or not less than) the luminance sample value of G1[0 ].

(b) In one example, the luminance sample values of G0[1] and G1[0] are compared, and G0 and G1 are exchanged if the luminance sample value of G0[1] is greater than (or less than, or not greater than, or not less than) the luminance sample value of G1[0 ].

(c) In one example, the luminance sample values of G0[1] and G1[1] are compared, and G0 and G1 are exchanged if the luminance sample value of G0[1] is greater than (or less than, or not greater than, or not less than) the luminance sample value of G1[1 ].

v. in one example, the luminance sample values of G0[0] and G1[1] are compared, and G0[0] and G1[1] are swapped if the luminance sample value of G0[0] is greater than (or less than, or not greater than, or not less than) the luminance sample value of G1[1 ].

(a) In one example, the luminance sample values of G0[0] and G1[0] are compared, and G0[0] and G1[0] are swapped if the luminance sample value of G0[0] is greater than (or less than, or not greater than, or not less than) the luminance sample value of G1[0 ].

(b) In one example, the luminance sample values of G0[1] and G1[0] are compared, and G0[1] and G1[0] are swapped if the luminance sample value of G0[1] is greater than (or less than, or not greater than, or not less than) the luminance sample value of G1[0 ].

(c) In one example, the luminance sample values of G0[1] and G1[1] are compared, and G0[1] and G1[1] are swapped if the luminance sample value of G0[1] is greater than (or less than, or not greater than, or not less than) the luminance sample value of G1[1 ].

vi. In one example, maxY is calculated as the average of the luminance sample values of G0[0] and G0[1], and maxC is calculated as the average of the chrominance sample values of G0[0] and G0[1]. (a) Alternatively, maxY is calculated as the average of the luminance sample values of G1[0] and G1[1], and maxC is calculated as the average of the chrominance sample values of G1[0] and G1[1].

vii. In one example, minY is calculated as the average of the luminance sample values of G0[0] and G0[1], and minC is calculated as the average of the chrominance sample values of G0[0] and G0[1]. Alternatively, minY is calculated as the average of the luminance sample values of G1[0] and G1[1], and minC is calculated as the average of the chrominance sample values of G1[0] and G1[1].
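One possible combination of the grouping and compare-and-swap orderings described in example 18(c) can be sketched as follows (an illustrative Python sketch, assuming grouping (c) G0 = {S0, S2}, G1 = {S1, S3} and one of the listed swap orders; the function name is mine):

```python
def group_minmax(samples):
    """Split four (luma, chroma) pairs into G0/G1, order each group by
    luma with a compare-and-swap, then swap across groups so that G0
    holds the two smaller luma values. Returns the averaged
    (minY, minC, maxY, maxC) used by the two-point method (sketch)."""
    s = list(samples)                       # four (Y, C) pairs: S0..S3
    g0, g1 = [s[0], s[2]], [s[1], s[3]]     # grouping (c): G0={S0,S2}, G1={S1,S3}
    if g0[0][0] > g0[1][0]:
        g0[0], g0[1] = g0[1], g0[0]         # order G0 by luma
    if g1[0][0] > g1[1][0]:
        g1[0], g1[1] = g1[1], g1[0]         # order G1 by luma
    if g0[0][0] > g1[1][0]:
        g0, g1 = g1, g0                     # whole-group swap (cf. item iv)
    if g0[1][0] > g1[0][0]:
        g0[1], g1[0] = g1[0], g0[1]         # element swap across groups (cf. item v)
    min_y = (g0[0][0] + g0[1][0]) // 2      # averages per item vi/vii
    min_c = (g0[0][1] + g0[1][1]) // 2
    max_y = (g1[0][0] + g1[1][0]) // 2
    max_c = (g1[0][1] + g1[1][1]) // 2
    return min_y, min_c, max_y, max_c
```

After the swaps, G0 contains the two pairs with the smaller luma values and G1 the two with the larger ones, whatever the input order, so the averages are stable under permutation of S0..S3.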

d. In one example, if only two neighboring chroma samples (and/or their corresponding luma samples, which may be downsampled) are available, they are first padded to four chroma samples (and/or their corresponding luma samples), and then the four chroma samples (and/or their corresponding luma samples) are used to derive the CCLM parameters.

i. In one example, the two padded chroma samples (and/or their corresponding luma samples) are copied from the two available neighboring chroma samples (and/or their corresponding luma samples, which may be downsampled).

Example 19: in all of the above examples, the selected chroma samples should be located in the upper row (i.e., with W samples) and/or left column (i.e., with H samples) as depicted in fig. 10, where W and H are the width and height of the current block.

a. Alternatively, the above restriction may be applied when the current block is coded in the normal LM mode.

b. Alternatively, the selected chroma samples should be located in the upper row (i.e., with W samples) and the upper right row with H samples.

i. Further, alternatively, the above restriction may be applied when the current block is coded in the LM-a mode.

ii. Additionally, alternatively, the above restriction may be applied when the current block is coded in the LM-A mode or the normal LM mode, where the above row is available but the left column is not.

c. Alternatively, the selected chroma samples should be located in the left column (i.e., with H samples) and the below-left column with W samples.

i. Further, alternatively, the above restriction may be applied when the current block is coded in the LM-L mode.

ii. Additionally, alternatively, the above restriction may be applied when the current block is coded in the LM-L mode or the normal LM mode, where the above row is not available but the left column is available.

Example 20

In one example, only the neighboring luma samples at the locations whose corresponding chroma samples are needed to derive the CCLM parameters need to be downsampled.

Example 21

How the methods disclosed in this document are performed may depend on the color format (such as 4:2:0 or 4:4: 4).

a. Alternatively, how to perform the methods disclosed in this document may depend on the bit depth (such as 8 bits or 10 bits).

b. Alternatively, how the methods disclosed in this document are performed may depend on the color representation method (e.g., RGB or YCbCr).

c. Alternatively, how to perform the methods disclosed in this document may depend on the chroma downsampling position.

Example 22

Whether to derive the maximum/minimum values for the luma and chroma components used to derive the CCLM parameters may depend on the availability of the left and upper neighbors. For example, if both left-neighboring and upper-neighboring blocks are not available, the maximum/minimum values of the luma and chroma components used to derive the CCLM parameters may not be derived.

a. Whether to derive the maximum/minimum values of the luma and chroma components used to derive the CCLM parameters may depend on the number of available neighboring samples. For example, if numSampL == 0 and numSampT == 0, the maximum/minimum values of the luma and chroma components used to derive the CCLM parameters may not be derived. In another example, if numSampL + numSampT == 0, the maximum/minimum values of the luma and chroma components used to derive the CCLM parameters may not be derived. In both examples, numSampL and numSampT are the numbers of available neighboring samples in the left and above neighboring blocks.

b. Whether to derive the maximum/minimum values of the luma and chroma components used to derive the CCLM parameters may depend on the number of chosen samples used to derive the parameters. For example, if cntL == 0 and cntT == 0, the maximum/minimum values of the luma and chroma components used to derive the CCLM parameters may not be derived. In another example, if cntL + cntT == 0, the maximum/minimum values of the luma and chroma components used to derive the CCLM parameters may not be derived. In both examples, cntL and cntT are the numbers of chosen samples in the left and above neighboring blocks.

Example 23

In one example, the proposed method of deriving parameters for use in CCLM can be used to derive parameters for use in LIC or other codec tools that rely on linear models.

a. The examples disclosed above may be applied to LIC, such as by replacing "chroma neighboring samples" with "neighboring samples of the current block" and replacing "corresponding luma samples" with "neighboring samples of the reference block".

b. In one example, the samples used for LIC parameter derivation may exclude samples at a particular location in the upper row and/or left column.

i. In one example, the samples used for LIC parameter derivation may exclude the first sample in the above row.

(a) Assuming the coordinates of the top-left sample are (x0, y0), it is proposed to exclude (x0, y0 - 1) from the derivation of the LIC parameters.

ii. In one example, the samples used for LIC parameter derivation may exclude the first sample in the left column.

(a) Assuming the coordinates of the top-left sample are (x0, y0), it is proposed to exclude (x0 - 1, y0) from the derivation of the LIC parameters.

iii. Whether the above method is applied and/or how the particular location is defined may depend on the availability of the left column and/or the above row.

iv. Whether the above method is applied and/or how the particular location is defined may depend on the block size.

c. In one example, N neighboring samples of the current block (which may be downsampled) and N corresponding neighboring samples of the reference block (which may be downsampled accordingly) may be used to derive the parameters for LIC.

i. For example, N is 4.

ii. In one example, the N neighboring samples may be defined as N/2 samples from the above row and N/2 samples from the left column.

(a) Alternatively, N neighboring samples may be defined as N samples from the upper or left column.

in another example, N is equal to min (L, T), where T is the total number of available neighboring samples (which may be downsampled) for the current block.

(a) In one example, L is set to 4.

in one example, the selection of the coordinates of the N samples may follow the rules used to select the N samples in the CCLM process.

v. in one example, the selection of the coordinates of the N samples may follow the rules for selecting the N samples in the LM-a process.

In one example, the selection of the coordinates of the N samples may follow the rules for selecting the N samples in the LM-L process.

In one example, how to select the N samples may depend on the availability of the up/left column.

d. In one example, N neighboring samples of the current block (which may be downsampled) and N corresponding neighboring samples of the reference block (which may be downsampled accordingly) are used to derive parameters used in LIC, which may be chosen based on sample position.

i. The selection method may depend on the width and height of the current block.

The picking method may depend on the availability of neighboring blocks.

iii. For example, if both above and left neighboring samples are available, K1 neighboring samples may be chosen from the left neighboring samples and K2 neighboring samples may be chosen from the above neighboring samples. For example, K1 = K2 = 2.

iv. For example, if only left neighboring samples are available, K1 neighboring samples may be chosen from the left neighboring samples. For example, K1 = 4.

v. For example, if only above neighboring samples are available, K2 neighboring samples may be chosen from the above neighboring samples. For example, K2 = 4.

vi. For example, the above samples may be chosen with a first position offset value (denoted F) and a step value (denoted S), which may depend on the size of the current block and the availability of neighboring blocks. (a) For example, the method disclosed in example 16 may be applied to derive F and S.

vii. For example, the left samples may be chosen with a first position offset value (denoted F) and a step value (denoted S), which may depend on the size of the current block and the availability of neighboring blocks. (a) For example, the method disclosed in example 17 may be applied to derive F and S.

e. In one example, the proposed method of deriving parameters used in CCLM can also be used to derive parameters used in LIC when the current block is affine coded.

f. The above method can be used to derive parameters used in other codec tools that rely on linear models.

In another example, a cross-component prediction mode is proposed in which chroma samples are predicted from the corresponding reconstructed luma samples according to the prediction model shown in equation 12. In equation 12, predC(x, y) represents a predicted chroma sample, α and β are the two model parameters, and Rec′L(x, y) is a downsampled luma sample.

predC(x,y) = α × Rec′L(x,y) + β    (12)

The luma downsampling process for block A in fig. 11 uses the six-tap filter shown in equation 13.

Rec′L(x,y) = (2×RecL(2x,2y) + 2×RecL(2x,2y+1) + RecL(2x-1,2y) + RecL(2x+1,2y) + RecL(2x-1,2y+1) + RecL(2x+1,2y+1) + 4) >> 3    (13)

The above neighboring luma reference samples with shading in fig. 11 are downsampled using the 3-tap filter shown in equation 14. The left neighboring luma reference samples are downsampled according to equation 15. If the left or above samples are not available, the 2-tap filters defined in equations 16 and 17 are used instead.

Rec′L(x,y) = (2×RecL(2x,2y) + RecL(2x-1,2y) + RecL(2x+1,2y)) >> 2    (14)

Rec′L(x,y) = (2×RecL(2x,2y) + RecL(2x,2y+1) + RecL(2x,2y-1)) >> 2    (15)

Rec′L(x,y) = (3×RecL(2x,2y) + RecL(2x+1,2y) + 2) >> 2    (16)

Rec′L(x,y) = (3×RecL(2x,2y) + RecL(2x,2y+1) + 2) >> 2    (17)
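The downsampling filters of equations 13, 14 and 16 can be sketched as follows (illustrative Python, not part of the specification text; `rec` is assumed to be indexed as rec[row][col], so RecL(2x, 2y) maps to rec[2*y][2*x]):

```python
def downsample_luma_center(rec, x, y):
    """Six-tap luma downsampling (equation 13, sketch)."""
    return (2 * rec[2*y][2*x] + 2 * rec[2*y+1][2*x]
            + rec[2*y][2*x-1] + rec[2*y][2*x+1]
            + rec[2*y+1][2*x-1] + rec[2*y+1][2*x+1] + 4) >> 3


def downsample_luma_above(rec, x, y, left_available=True):
    """3-tap above-row filter (equation 14), falling back to the
    2-tap filter of equation 16 when the left sample is unavailable."""
    if left_available:
        return (2 * rec[2*y][2*x] + rec[2*y][2*x-1] + rec[2*y][2*x+1]) >> 2
    return (3 * rec[2*y][2*x] + rec[2*y][2*x+1] + 2) >> 2
```

On a flat region (all samples equal) every filter reproduces the input value, since the tap weights sum to the divisor.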

In particular, the neighboring luma reference samples are downsampled to the same size as the chroma reference samples. The sizes are denoted width and height. Only two or four neighboring samples are involved in deriving α and β. A lookup table is applied to avoid the division operation when deriving α and β. The derivation methods are described below.

3.1 exemplary method with Up to two samples

(1) The ratio r of width and height is calculated as shown in equation 18.

(2) If both the above and left blocks are available, the two samples at position posA in the first above line and position posL in the first left line are selected. For simplicity of description, the width is assumed to be the longer side. The derivation of posA and posL is shown in equation 19 (the position index starts from 0). Fig. 12 shows some examples for different width-to-height ratios (1, 2, 4 and 8, respectively). The selected samples are shaded.

posA = width - r

posL = height    (19)

(3) If the above block is available and the left block is not, the first point of the above line and the point at posA are selected, as shown in fig. 13.

(4) If the left block is available and the above block is not, the first point of the left line and the point at posL are selected, as shown in fig. 14.

(5) A chroma prediction model is derived from the luma and chroma values of the selected samples.

(6) If neither the left block nor the upper block is available, a default prediction model is used, where α equals 0 and β equals 1< < (BitDepth-1), where BitDepth represents the bit depth of the chroma samples.

3.2 exemplary method with Up to four samples

(1) The ratio r of width and height is calculated as equation 18.

(2) If both the above and left blocks are available, the 4 samples at the first position and posA in the first above line, and at the first position and posL in the first left line, are selected. The derivation of posA and posL is shown in equation 19. Fig. 15 shows some examples for different width-to-height ratios (1, 2, 4 and 8, respectively). The selected samples are shaded.

(3) If the above block is available and the left block is not, the first point of the above line and the point at posA are selected, as shown in fig. 13.

(4) If the left block is available and the above block is not, the first point of the left line and the point at posL are selected, as shown in fig. 14.

(5) If neither the left block nor the upper block is available, a default prediction model is used, where α equals 0 and β equals 1< < (BitDepth-1), where BitDepth represents the bit depth of the chroma samples.

3.3 exemplary method of Using lookup tables in LM derivation

Fig. 16 shows examples of lookup tables with 128, 64 and 32 entries, where each entry is represented by 16 bits. The two-point LM derivation process is simplified as shown in table 1 and fig. 17 using the 64-entry table. It should be noted that the first entry may not be stored in the table.

It should also be noted that although each entry in the exemplary table is designed to have 16 bits, it can be easily converted to a number having fewer bits (such as 8 bits or 12 bits). For example, a table with entries of 8 bits may be obtained as:

g_aiLMDivTableHighSimp_64_8[i] = (g_aiLMDivTableHighSimp_64[i] + 128) >> 8.

for example, a table with 12-bit entries may be obtained as:

g_aiLMDivTableHighSimp_64_12[i] = (g_aiLMDivTableHighSimp_64[i] + 8) >> 4.
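Both conversions follow the same rounding-shift pattern, which can be sketched generically (illustrative Python; the function name is mine):

```python
def narrow_table(table16, target_bits):
    """Convert 16-bit LM division-table entries to fewer bits with
    rounding, generalizing the 8-bit and 12-bit conversions above."""
    shift = 16 - target_bits          # e.g. 8 for 8-bit, 4 for 12-bit targets
    round_off = 1 << (shift - 1)      # e.g. +128 for 8-bit, +8 for 12-bit
    return [(v + round_off) >> shift for v in table16]
```

With target_bits = 8 this reduces to (v + 128) >> 8, and with target_bits = 12 to (v + 8) >> 4, matching the two formulas given above.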

table 1: simplified LM derivation process

It should be noted that maxLuma and minLuma may indicate the maximum and minimum luma sample values of the selected positions. Alternatively, they may indicate a function, such as the average, of the maximum and minimum luma sample values of the selected positions. When only 4 positions are selected, they may also indicate the average of the two larger luma values and the average of the two smaller luma values. It is further noted that, in fig. 17, maxChroma and minChroma denote the chroma values corresponding to maxLuma and minLuma.

3.4 Method #4 with up to four samples

Assume that the block width and block height of the current chroma block are W and H, respectively, and that the top-left coordinate of the current chroma block is (0, 0).

If both the above and left blocks are available and the current mode is the normal LM mode (excluding LM-A and LM-L), 2 chroma samples in the above row and 2 chroma samples in the left column are selected.

The coordinates of the two above samples are [floor(W/4), -1] and [floor(3*W/4), -1].

The coordinates of the two left samples are [-1, floor(H/4)] and [-1, floor(3*H/4)].

As depicted in fig. 20A, the selected samples are colored red.

Subsequently, the 4 samples are sorted according to the luma sample values and classified into 2 groups. The two larger samples and the two smaller samples are averaged, respectively. The cross-component prediction model is derived from the 2 averaged points. Alternatively, the maximum and minimum of the four samples are used to derive the LM parameters.
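The sample positions picked in the normal LM mode above can be sketched as follows (illustrative Python; the function name is mine):

```python
def lm_sample_positions(W, H):
    """Chroma positions chosen in the normal LM mode when both
    neighbors are available (method #4, sketch): two samples in the
    above row at y = -1 and two in the left column at x = -1."""
    above = [(W // 4, -1), (3 * W // 4, -1)]
    left = [(-1, H // 4), (-1, 3 * H // 4)]
    return above + left
```

For an 8x8 chroma block this selects columns 2 and 6 of the above row and rows 2 and 6 of the left column, i.e. the quarter and three-quarter positions on each side.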

If the above block is available and the left block is not, four chroma samples are selected from the above block when W > 2, and 2 chroma samples are selected when W = 2.

The coordinates of the four selected above samples are [W/8, -1], [W/8 + W/4, -1], [W/8 + 2*W/4, -1] and [W/8 + 3*W/4, -1].

As depicted in fig. 20B, the selected samples are colored red.

If the left block is available and the above block is not, four chroma samples are selected from the left block when H > 2, and 2 chroma samples are selected when H = 2.

The coordinates of the four selected left samples are [-1, H/8], [-1, H/8 + H/4], [-1, H/8 + 2*H/4] and [-1, H/8 + 3*H/4].

If neither the left block nor the upper block is available, default prediction is used, where α equals 0 and β equals 1< < (BitDepth-1), where BitDepth denotes the bit depth of the chroma samples.

If the current mode is the LM-A mode, four chroma samples are selected from the above block when W′ > 2, and 2 chroma samples are selected when W′ = 2. W′ is the number of available above neighboring samples, which may be 2×W.

The coordinates of the four selected above samples are [W′/8, -1], [W′/8 + W′/4, -1], [W′/8 + 2*W′/4, -1] and [W′/8 + 3*W′/4, -1].

If the current mode is the LM-L mode, four chroma samples are selected from the left block when H′ > 2, and 2 chroma samples are selected when H′ = 2. H′ is the number of available left neighboring samples, which may be 2×H.

The coordinates of the four selected left samples are [-1, H′/8], [-1, H′/8 + H′/4], [-1, H′/8 + 2*H′/4] and [-1, H′/8 + 3*H′/4].

3.5 Example embodiment of modifying the current VVC standard to use CCLM prediction

8.3.4.2.8 Specification of intra prediction modes INTRA_LT_CCLM, INTRA_L_CCLM and INTRA_T_CCLM

Equations are described in this section using equation numbers corresponding to those in the current draft of the VVC standard.

The inputs to this process are:

-an intra prediction mode predModeIntra,

sample position (xTbC, yTbC) of an upper left sample of the current transform block relative to an upper left sample of the current picture,

a variable nTbW specifying the transform block width,

a variable nTbH specifying the transform block height,

- the chroma neighboring samples p[x][y], with x = -1, y = 0..2*nTbH - 1 and x = 0..2*nTbW - 1, y = -1.

The output of this process is the predicted samples predSamples[x][y], with x = 0..nTbW - 1, y = 0..nTbH - 1.

The current luminance position (xTbY, yTbY) is derived as follows:

(xTbY,yTbY)=(xTbC<<1,yTbC<<1) (8-155)

The variables availL, availT and availTL are derived as follows:

……

- If predModeIntra is equal to INTRA_LT_CCLM, the following applies:

numSampT = availT ? nTbW : 0    (8-156)

numSampL = availL ? nTbH : 0    (8-157)

otherwise, the following applies:

numSampT=(availT&&predModeIntra==INTRA_T_CCLM)?(nTbW+numTopRight):0 (8-158)

numSampL=(availL&&predModeIntra==INTRA_L_CCLM)?(nTbH+numLeftBelow):0 (8-159)

the variable bCTUboundary is derived as follows:

bCTUboundary=(yTbC&(1<<(CtbLog2SizeY-1)-1)==0)?TRUE:FALSE. (8-160)

the predicted sample point predSamples [ x ] [ y ] (where x ═ 0.. nTbW-1, y ═ 0.. nTbH-1) is derived as follows:

-if both numSampL and numSampT are equal to 0, then the following applies:

predSamples[x][y]=1<<(BitDepthC-1) (8-161)

otherwise, the following ordered steps apply:

1. … [no change to the current specification]

2. …

3. …

4. …

5. …

6. … [no change to the current specification]

7. The variables minY, maxY, minC, and maxC are derived as follows:

- The variable minY is set equal to 1 << (BitDepthY) + 1 and the variable maxY is set equal to -1.

- If availL is equal to TRUE and predModeIntra is equal to INTRA_LT_CCLM, the variable aboveIs4 is set equal to 0; otherwise, it is set equal to 1.

- If availT is equal to TRUE and predModeIntra is equal to INTRA_LT_CCLM, the variable leftIs4 is set equal to 0; otherwise, it is set equal to 1.

The variable arrays startPos [ ] and pickStep [ ] are derived as follows:

–startPos[0]=actualTopTemplateSampNum>>(2+aboveIs4);

–pickStep[0]=std::max(1,actualTopTemplateSampNum>>(1+aboveIs4));

–startPos[1]=actualLeftTemplateSampNum>>(2+leftIs4);

–pickStep[1]=std::max(1,actualLeftTemplateSampNum>>(1+leftIs4));

-the variable cnt is set equal to 0.

If predModeIntra is equal to INTRA_LT_CCLM, the variable nSX is set equal to nTbW and nSY is set equal to nTbH; otherwise, nSX is set equal to numSampT and nSY is set equal to numSampL.

If availT is equal to TRUE and predModeIntra is not equal to INTRA_L_CCLM, the variables selectLumaPix and selectChromaPix are derived as follows:

- When startPos[ 0 ] + cnt * pickStep[ 0 ] < nSX and cnt < 4, the following applies:

–selectLumaPix[cnt]=pTopDsY[startPos[0]+cnt*pickStep[0]];

–selectChromaPix[cnt]=p[startPos[0]+cnt*pickStep[0]][-1];

–cnt++;

If availL is equal to TRUE and predModeIntra is not equal to INTRA_T_CCLM, the variables selectLumaPix and selectChromaPix are derived as follows:

- When startPos[ 1 ] + cnt * pickStep[ 1 ] < nSY and cnt < 4, the following applies:

–selectLumaPix[cnt]=pLeftDsY[startPos[1]+cnt*pickStep[1]];

–selectChromaPix[cnt]=p[-1][startPos[1]+cnt*pickStep[1]];

–cnt++;

if cnt is equal to 2, the following applies:

- If selectLumaPix[ 0 ] > selectLumaPix[ 1 ], minY is set equal to selectLumaPix[ 1 ], minC is set equal to selectChromaPix[ 1 ], maxY is set equal to selectLumaPix[ 0 ] and maxC is set equal to selectChromaPix[ 0 ]; otherwise, maxY is set equal to selectLumaPix[ 1 ], maxC is set equal to selectChromaPix[ 1 ], minY is set equal to selectLumaPix[ 0 ] and minC is set equal to selectChromaPix[ 0 ].

Otherwise, if cnt is equal to 4, the following applies:

the variable arrays minGrpIdx and maxGrpIdx are initialized to:

–minGrpIdx[0]=0,minGrpIdx[1]=1,maxGrpIdx[0]=2,maxGrpIdx[1]=3;

the following apply

-exchanging minGrpIdx [0] and minGrpIdx [1] if selectLumaPix [ minGrpIdx [0] ] > selectLumaPix [ minGrpIdx [1] ];

-swapping maxGrpIdx [0] and maxGrpIdx [1] if selectLumaPix [ maxGrpIdx [0] ] > selectLumaPix [ maxGrpIdx [1] ];

-if selectLumaPix [ minGrpIdx [0] ] > selectLumaPix [ maxGrpIdx [1] ], swapping minGrpIdx and maxGrpIdx;

-if selectLumaPix [ minGrpIdx [1] ] > selectLumaPix [ maxGrpIdx [0] ], swapping minGrpIdx [1] and maxGrpIdx [0 ];

maxY, maxC, minY and minC are derived as follows:

–maxY=(selectLumaPix[maxGrpIdx[0]]+selectLumaPix[maxGrpIdx[1]]+1)>>1;

–maxC=(selectChromaPix[maxGrpIdx[0]]+selectChromaPix[maxGrpIdx[1]]+1)>>1;

–minY=(selectLumaPix[minGrpIdx[0]]+selectLumaPix[minGrpIdx[1]]+1)>>1;

–minC=(selectChromaPix[minGrpIdx[0]]+selectChromaPix[minGrpIdx[1]]+1)>>1;
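The four-sample grouping of step 7 (the four conditional swaps followed by pairwise averaging) can be sketched as follows; this is an illustrative Python rendering, with all function and variable names invented for the example:

```python
def min_max_groups(luma, chroma):
    # luma/chroma: the four selected sample pairs (selectLumaPix/selectChromaPix).
    mn, mx = [0, 1], [2, 3]            # candidate min / max index pairs
    if luma[mn[0]] > luma[mn[1]]:
        mn[0], mn[1] = mn[1], mn[0]
    if luma[mx[0]] > luma[mx[1]]:
        mx[0], mx[1] = mx[1], mx[0]
    if luma[mn[0]] > luma[mx[1]]:
        mn, mx = mx, mn                # swap the whole groups
    if luma[mn[1]] > luma[mx[0]]:
        mn[1], mx[0] = mx[0], mn[1]
    max_y = (luma[mx[0]] + luma[mx[1]] + 1) >> 1
    max_c = (chroma[mx[0]] + chroma[mx[1]] + 1) >> 1
    min_y = (luma[mn[0]] + luma[mn[1]] + 1) >> 1
    min_c = (chroma[mn[0]] + chroma[mn[1]] + 1) >> 1
    return min_y, min_c, max_y, max_c

print(min_max_groups([40, 10, 30, 20], [4, 1, 3, 2]))  # (15, 2, 35, 4)
```

Averaging the two smallest and the two largest pairs, instead of taking the single extremes, makes the derived line less sensitive to outlier samples.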

8. the variables a, b and k are derived as follows:

[ end of Change ]

3.6 Another exemplary working draft on proposed CCLM predictions

In this section, another exemplary embodiment is described that illustrates modifications that may be made to the current working draft of the VVC standard. Here, the equation number refers to a corresponding equation number in the VVC standard.

Specification of INTRA prediction modes INTRA_LT_CCLM, INTRA_L_CCLM and INTRA_T_CCLM.

[ addition to Current VVC working draft as follows ]

The number of available neighboring chroma samples numSampT at the top and top-right and the number of available neighboring chroma samples numSampL at the left and below-left are derived as follows:

if predModeIntra is equal to INTRA _ LT _ CCLM, then the following applies:

numSampT=availT?nTbW:0 (8-157)

numSampL=availL?nTbH:0 (8-158)

otherwise, the following applies:

numSampT=(availT&&predModeIntra==INTRA_T_CCLM)?(nTbW+Min(numTopRight,nTbH)):0 (8-159)

numSampL=(availL&&predModeIntra==INTRA_L_CCLM)?(nTbH+Min(numLeftBelow,nTbW)):0 (8-160)

the variable bCTUboundary is derived as follows:

bCTUboundary=(yTbC&(1<<(CtbLog2SizeY-1)-1)==0)?TRUE:FALSE. (8-161)

the variable cntN and the array pickPosN [ ] (where N is replaced by L and T) are derived as follows:

The variable numIs4N is set equal to ( ( availN && predModeIntra == INTRA_LT_CCLM ) ? 0 : 1 ).

The variable startPosN is set equal to numSampN >> ( 2 + numIs4N ).

The variable pickStepN is set equal to Max( 1, numSampN >> ( 1 + numIs4N ) ).

- If availN is equal to TRUE and predModeIntra is equal to INTRA_LT_CCLM or INTRA_N_CCLM, cntN is set equal to ( 1 + numIs4N ) << 1, and pickPosN[ pos ] is set equal to ( startPosN + pos * pickStepN ), with pos = 0..( cntN - 1 ).

Else, cntN is set equal to 0.
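The derivation of cntN and pickPosN[ ] above amounts to choosing up to four evenly spaced positions along one side. A Python sketch (the names are hypothetical; this illustrates the arithmetic only, without the availN/predModeIntra gating):

```python
def pick_positions(num_samp_n: int, num_is4_n: int):
    # num_is4_n == 1 -> four samples from this side (single-side mode);
    # num_is4_n == 0 -> two samples from this side (two-sided LT mode).
    start_pos_n = num_samp_n >> (2 + num_is4_n)
    pick_step_n = max(1, num_samp_n >> (1 + num_is4_n))
    cnt_n = (1 + num_is4_n) << 1
    return cnt_n, [start_pos_n + pos * pick_step_n for pos in range(cnt_n)]

print(pick_positions(8, 1))  # (4, [1, 3, 5, 7])  four picks from one side
print(pick_positions(8, 0))  # (2, [2, 6])        two picks per side
```

The start offset centers the picked positions within the available template, so both ends of the neighboring row or column contribute.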

The predicted samples predSamples[ x ][ y ] (with x = 0..nTbW - 1, y = 0..nTbH - 1) are derived as follows:

-if both numSampL and numSampT are equal to 0, then the following applies:

predSamples[x][y]=1<<(BitDepthC-1) (8-162)

otherwise, the following ordered steps apply:

1. The collocated luma samples pY[ x ][ y ] (with x = 0..2 * nTbW - 1, y = 0..2 * nTbH - 1) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

2. The neighboring luminance samples pY [ x ] [ y ] are derived as follows:

- When numSampL is greater than 0, the neighboring left luma samples pY[ x ][ y ] (with x = -1..-3, y = 0..2 * numSampL - 1) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

- When numSampT is greater than 0, the neighboring top luma samples pY[ x ][ y ] (with x = 0..2 * numSampT - 1, y = -1, -2) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

- When availTL is equal to TRUE, the neighboring top-left luma samples pY[ x ][ y ] (with x = -1, y = -1, -2) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

3. Downsampled collocated luminance samples pDsY [ x ] [ y ] (where x is 0.. nTbW-1, y is 0.. nTbH-1) are derived as follows:

if sps _ cclm _ colocated _ chroma _ flag is equal to 1, then the following applies:

-pDsY [ x ] [ y ] (where x 1.. nTbW-1, y 1.. nTbH-1) is derived as follows:

pDsY[x][y]=(pY[2*x][2*y-1]+pY[2*x-1][2*y]+4*pY[2*x][2*y]+pY[2*x+1][2*y]+pY[2*x][2*y+1]+4)>>3 (8-163)

- If availL is equal to TRUE, pDsY[ 0 ][ y ] (with y = 1..nTbH - 1) is derived as follows:

pDsY[0][y]=(pY[0][2*y-1]+pY[-1][2*y]+4*pY[0][2*y]+pY[1][2*y]+pY[0][2*y+1]+4)>>3 (8-164)

else pDsY [0] [ y ] (where y 1.. nTbH-1) is derived as follows:

pDsY[0][y]=(pY[0][2*y-1]+2*pY[0][2*y]+pY[0][2*y+1]+2)>>2 (8-165)

-if availT equals TRUE, pDsY [ x ] [0] (where x ═ 1.. nTbW-1) is derived as follows:

pDsY[x][0]=(pY[2*x][-1]+pY[2*x-1][0]+4*pY[2*x][0]+pY[2*x+1][0]+pY[2*x][1]+4)>>3 (8-166)

else pDsY [ x ] [0] (where x ═ 1.. nTbW-1) is derived as follows:

pDsY[x][0]=(pY[2*x-1][0]+2*pY[2*x][0]+pY[2*x+1][0]+2)>>2 (8-167)

- If availL is equal to TRUE and availT is equal to TRUE, pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=(pY[0][-1]+pY[-1][0]+4*pY[0][0]+pY[1][0]+pY[0][1]+4)>>3 (8-168)

- Otherwise, if availL is equal to TRUE and availT is equal to FALSE, pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=(pY[-1][0]+2*pY[0][0]+pY[1][0]+2)>>2 (8-169)

- Otherwise, if availL is equal to FALSE and availT is equal to TRUE, pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=(pY[0][-1]+2*pY[0][0]+pY[0][1]+2)>>2 (8-170)

- Otherwise (availL is equal to FALSE and availT is equal to FALSE), pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=pY[0][0] (8-171)

otherwise, the following applies:

-pDsY [ x ] [ y ] (where x 1.. nTbW-1, y 0.. nTbH-1) is derived as follows:

pDsY[x][y]=(pY[2*x-1][2*y]+pY[2*x-1][2*y+1]+2*pY[2*x][2*y]+2*pY[2*x][2*y+1]+pY[2*x+1][2*y]+pY[2*x+1][2*y+1]+4)>>3 (8-172)

- If availL is equal to TRUE, pDsY[ 0 ][ y ] (with y = 0..nTbH - 1) is derived as follows:

pDsY[0][y]=(pY[-1][2*y]+pY[-1][2*y+1]+2*pY[0][2*y]+2*pY[0][2*y+1]+pY[1][2*y]+pY[1][2*y+1]+4)>>3 (8-173)

else pDsY [0] [ y ] (where y ═ 0.. nTbH-1) is derived as follows:

pDsY[0][y]=(pY[0][2*y]+pY[0][2*y+1]+1)>>1 (8-174)
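The two interior down-sampling filters of step 3 can be sketched as follows (a simplified illustration for interior positions only; boundary handling and the negative indices used for neighboring samples are omitted, and pY is assumed to be an ordinary 2-D array indexed as pY[x][y]):

```python
def downsample_collocated(pY, x, y):
    # 5-tap cross-shaped filter used when sps_cclm_colocated_chroma_flag == 1
    # (cf. eq. 8-163): the chroma sample is collocated with luma (2x, 2y).
    return (pY[2 * x][2 * y - 1] + pY[2 * x - 1][2 * y] + 4 * pY[2 * x][2 * y]
            + pY[2 * x + 1][2 * y] + pY[2 * x][2 * y + 1] + 4) >> 3

def downsample_midpoint(pY, x, y):
    # 6-tap filter used otherwise (cf. eq. 8-172): the chroma sample sits
    # vertically between luma rows 2y and 2y + 1.
    return (pY[2 * x - 1][2 * y] + pY[2 * x - 1][2 * y + 1]
            + 2 * pY[2 * x][2 * y] + 2 * pY[2 * x][2 * y + 1]
            + pY[2 * x + 1][2 * y] + pY[2 * x + 1][2 * y + 1] + 4) >> 3

# A flat luma plane passes through both filters unchanged.
flat = [[100] * 6 for _ in range(6)]
print(downsample_collocated(flat, 1, 1))  # 100
print(downsample_midpoint(flat, 1, 1))    # 100
```

Both filters have coefficients summing to 8 with a rounding offset of 4, so they are unity-gain integer low-pass filters.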

4. when numSampL is greater than 0, the selected neighboring left chrominance samples pSelC [ idx ] are set equal to p [ -1] [ pickPosL [ idx ] ] (where idx is 0. (cntL-1)), and the selected down-sampled neighboring left luminance samples pSelDsY [ idx ] (where idx is 0. (cntL-1)) are derived as follows:

the variable y is set equal to pickPosL [ idx ].

If sps _ cclm _ colocated _ chroma _ flag is equal to 1, then the following applies:

- If y > 0 || availTL == TRUE, the following applies:

pSelDsY[idx]=(pY[-2][2*y-1]+pY[-3][2*y]+4*pY[-2][2*y]+pY[-1][2*y]+pY[-2][2*y+1]+4)>>3 (8-175)

- Otherwise, the following applies:

pSelDsY[idx]=(pY[-3][0]+2*pY[-2][0]+pY[-1][0]+2)>>2 (8-177)

otherwise, the following applies:

pSelDsY[idx]=(pY[-1][2*y]+pY[-1][2*y+1]+2*pY[-2][2*y]+2*pY[-2][2*y+1]+pY[-3][2*y]+pY[-3][2*y+1]+4)>>3 (8-178)

5. When numSampT is greater than 0, the selected neighboring top chroma samples pSelC[ idx ] are set equal to p[ pickPosT[ idx - cntL ] ][ -1 ] (with idx = cntL..( cntL + cntT - 1 )), and the selected down-sampled neighboring top luma samples pSelDsY[ idx ] (with idx = cntL..( cntL + cntT - 1 )) are specified as follows:

variable x is set equal to pickPosT [ idx-cntL ].

If sps _ cclm _ colocated _ chroma _ flag is equal to 1, then the following applies:

-if x > 0:

-if bCTUboundary equals FALSE, the following applies:

pSelDsY[idx]=(pY[2*x][-3]+pY[2*x-1][-2]+4*pY[2*x][-2]+pY[2*x+1][-2]+pY[2*x][-1]+4)>>3 (8-179)

else (bCTUboundary equals TRUE), the following applies:

pSelDsY[idx]=(pY[2*x-1][-1]+2*pY[2*x][-1]+pY[2*x+1][-1]+2)>>2 (8-180)

-otherwise:

if availTL equals TRUE and bCTUboundary equals FALSE, then the following applies:

pSelDsY[idx]=(pY[0][-3]+pY[-1][-2]+4*pY[0][-2]+pY[1][-2]+pY[0][-1]+4)>>3 (8-181)

otherwise, if availTL equals TRUE and bCTUboundary equals TRUE, then the following applies:

pSelDsY[idx]=(pY[-1][-1]+2*pY[0][-1]+pY[1][-1]+2)>>2 (8-182)

otherwise, if availTL is equal to FALSE and bCTUboundary is equal to FALSE, then the following applies:

pSelDsY[idx]=(pY[0][-3]+2*pY[0][-2]+pY[0][-1]+2)>>2 (8-183)

else (availTL equal FALSE and bCTUboundary equal TRUE), then the following applies:

pSelDsY[idx]=pY[0][-1] (8-184)

otherwise, the following applies:

-if x > 0:

-if bCTUboundary equals FALSE, the following applies:

pSelDsY[idx]=(pY[2*x-1][-2]+pY[2*x-1][-1]+2*pY[2*x][-2]+2*pY[2*x][-1]+pY[2*x+1][-2]+pY[2*x+1][-1]+4)>>3 (8-185)

else (bCTUboundary equals TRUE), the following applies:

pSelDsY[idx]=(pY[2*x-1][-1]+2*pY[2*x][-1]+pY[2*x+1][-1]+2)>>2 (8-186)

-otherwise:

if availTL equals TRUE and bCTUboundary equals FALSE, then the following applies:

pSelDsY[idx]=(pY[-1][-2]+pY[-1][-1]+2*pY[0][-2]+2*pY[0][-1]+pY[1][-2]+pY[1][-1]+4)>>3 (8-187)

otherwise, if availTL equals TRUE and bCTUboundary equals TRUE, then the following applies:

pSelDsY[idx]=(pY[-1][-1]+2*pY[0][-1]+pY[1][-1]+2)>>2 (8-188)

otherwise, if availTL is equal to FALSE and bCTUboundary is equal to FALSE, then the following applies:

pSelDsY[idx]=(pY[0][-2]+pY[0][-1]+1)>>1 (8-189)

else (availTL equal FALSE and bCTUboundary equal TRUE), the following applies:

pSelDsY[idx]=pY[0][-1] (8-190)

6. the variables minY, maxY, minC, and maxC are derived as follows:

- When cntT + cntL is equal to 2, pSelC[ idx + 2 ] is set equal to pSelC[ idx ] and pSelDsY[ idx + 2 ] is set equal to pSelDsY[ idx ], for idx = 0 and 1.

The arrays minGrpIdx[ ] and maxGrpIdx[ ] are set as: minGrpIdx[ 0 ] = 0, minGrpIdx[ 1 ] = 1, maxGrpIdx[ 0 ] = 2, maxGrpIdx[ 1 ] = 3.

-Swap (minGrpIdx [0], minGrpIdx [1]) if pSelDsY [ minGrpIdx [0] ] > pSelDsY [ minGrpIdx [1] ].

-Swap (maxGrpIdx [0], maxGrpIdx [1]) if pSelDsY [ maxGrpIdx [0] ] > pSelDsY [ maxGrpIdx [1] ].

-Swap (minGrpIdx, maxGrpIdx) if pSelDsY [ minGrpIdx [0] ] > pSelDsY [ maxGrpIdx [1] ].

-Swap (minGrpIdx [1], maxGrpIdx [0]) if pSelDsY [ minGrpIdx [1] ] > pSelDsY [ maxGrpIdx [0] ].

–maxY=(pSelDsY[maxGrpIdx[0]]+pSelDsY[maxGrpIdx[1]]+1)>>1.

–maxC=(pSelC[maxGrpIdx[0]]+pSelC[maxGrpIdx[1]]+1)>>1.

–minY=(pSelDsY[minGrpIdx[0]]+pSelDsY[minGrpIdx[1]]+1)>>1.

–minC=(pSelC[minGrpIdx[0]]+pSelC[minGrpIdx[1]]+1)>>1.

7. The variables a, b and k are derived as follows:

-if numSampL equals 0 and numSampT equals 0, then the following applies:

k=0 (8-208)

a=0 (8-209)

b=1<<(BitDepthC-1) (8-210)

otherwise, the following applies:

diff=maxY-minY (8-211)

if diff is not equal to 0, the following applies:

diffC=maxC-minC (8-212)

x=Floor(Log2(diff)) (8-213)

normDiff=((diff<<4)>>x)&15 (8-214)

x+=(normDiff!=0)?1:0 (8-215)

y=Floor(Log2(Abs(diffC)))+1 (8-216)

a=(diffC*(divSigTable[normDiff]|8)+(1<<(y-1)))>>y (8-217)

k=((3+x-y)<1)?1:3+x-y (8-218)

a=((3+x-y)<1)?Sign(a)*15:a (8-219)

b=minC-((a*minY)>>k) (8-220)

where divSigTable [ ] is specified as follows:

divSigTable[]={0,7,6,5,5,4,4,3,3,2,2,1,1,1,1,0} (8-221)

else (diff equals 0), the following applies:

k=0 (8-222)

a=0 (8-223)

b=minC (8-224)
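Step 7's division-free slope derivation (equations 8-211 through 8-221) can be sketched in Python as follows. The helper names are invented for the example, and the diffC == 0 guard is an assumption made to keep the sketch total (the spec expression Floor( Log2( Abs( diffC ) ) ) would otherwise be undefined at 0):

```python
DIV_SIG_TABLE = [0, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1, 1, 1, 0]

def floor_log2(v: int) -> int:
    return v.bit_length() - 1

def derive_a_b_k(min_y, min_c, max_y, max_c):
    diff = max_y - min_y
    if diff == 0:                              # eqs. 8-222..8-224
        return 0, min_c, 0                     # (a, b, k)
    diff_c = max_c - min_c
    x = floor_log2(diff)
    norm_diff = ((diff << 4) >> x) & 15        # top fractional bits of diff
    x += 1 if norm_diff != 0 else 0
    y = (floor_log2(abs(diff_c)) + 1) if diff_c != 0 else 1  # guard is an assumption
    a = (diff_c * (DIV_SIG_TABLE[norm_diff] | 8) + (1 << (y - 1))) >> y
    k = 1 if (3 + x - y) < 1 else 3 + x - y
    if (3 + x - y) < 1:
        a = 15 if a > 0 else (-15 if a < 0 else 0)   # Sign(a) * 15
    b = min_c - ((a * min_y) >> k)
    return a, b, k

# Slope 16/32 = 0.5 is represented exactly as a / 2**k = 4 / 8.
print(derive_a_b_k(20, 10, 52, 26))  # (4, 0, 3)
```

The lookup table replaces the division diffC / diff with a multiplication by a 4-bit reciprocal approximation, which is the point of the design: no divider is needed in hardware.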

8. The predicted samples predSamples[ x ][ y ] (with x = 0..nTbW - 1, y = 0..nTbH - 1) are derived as follows:

predSamples[x][y]=Clip1C(((pDsY[x][y]*a)>>k)+b) (8-225)
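Step 8 then applies the linear model to each down-sampled luma sample and clips the result to the chroma sample range. A minimal sketch (the 10-bit chroma depth and all names are assumptions of the example):

```python
def clip1_c(v: int, bit_depth_c: int = 10) -> int:
    # Clip to the valid chroma sample range [0, 2**bit_depth_c - 1].
    return min(max(v, 0), (1 << bit_depth_c) - 1)

def predict_chroma(pDsY, a, b, k):
    # predSamples[x][y] = Clip1C(((pDsY[x][y] * a) >> k) + b)  (cf. eq. 8-225)
    return [[clip1_c(((y_val * a) >> k) + b) for y_val in row] for row in pDsY]

# With a = 4, k = 3, b = 0 the model halves each luma value.
print(predict_chroma([[20, 52]], 4, 0, 3))  # [[10, 26]]
```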

[ End of example embodiment ]

3.7 Another exemplary working draft on proposed CCLM predictions

In this section, another exemplary embodiment is described that illustrates modifications that may be made to the current working draft of the VVC standard. Here, the equation number refers to a corresponding equation number in the VVC standard.

Specification of INTRA prediction modes INTRA_LT_CCLM, INTRA_L_CCLM and INTRA_T_CCLM

……

The number of available neighboring chroma samples numSampT at the top and top-right and the number of available neighboring chroma samples numSampL at the left and below-left are derived as follows:

if predModeIntra is equal to INTRA _ LT _ CCLM, then the following applies:

numSampT=availT?nTbW:0 (8-157)

numSampL=availL?nTbH:0 (8-158)

otherwise, the following applies:

numSampT=(availT&&predModeIntra==INTRA_T_CCLM)?(nTbW+Min(numTopRight,nTbH)):0 (8-159)

numSampL=(availL&&predModeIntra==INTRA_L_CCLM)?(nTbH+Min(numLeftBelow,nTbW)):0 (8-160)

the variable bCTUboundary is derived as follows:

bCTUboundary=(yTbC&(1<<(CtbLog2SizeY-1)-1)==0)?TRUE:FALSE. (8-161)

the variable cntN and the array pickPosN [ ] (where N is replaced by L and T) are derived as follows:

The variable numIs4N is set equal to ( ( availN && predModeIntra == INTRA_LT_CCLM ) ? 0 : 1 ).

The variable startPosN is set equal to numSampN >> ( 2 + numIs4N ).

The variable pickStepN is set equal to Max( 1, numSampN >> ( 1 + numIs4N ) ).

- If availN is equal to TRUE and predModeIntra is equal to INTRA_LT_CCLM or INTRA_N_CCLM, cntN is set equal to Min( numSampN, ( 1 + numIs4N ) << 1 ), and pickPosN[ pos ] is set equal to ( startPosN + pos * pickStepN ), with pos = 0..( cntN - 1 ).

Else, cntN is set equal to 0.

The predicted samples predSamples[ x ][ y ] (with x = 0..nTbW - 1, y = 0..nTbH - 1) are derived as follows:

-if both numSampL and numSampT are equal to 0, then the following applies:

predSamples[x][y]=1<<(BitDepthC-1) (8-162)

otherwise, the following ordered steps apply:

1. The collocated luma samples pY[ x ][ y ] (with x = 0..2 * nTbW - 1, y = 0..2 * nTbH - 1) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

2. The neighboring luminance samples pY [ x ] [ y ] are derived as follows:

- When numSampL is greater than 0, the neighboring left luma samples pY[ x ][ y ] (with x = -1..-3, y = 0..2 * numSampL - 1) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

- When numSampT is greater than 0, the neighboring top luma samples pY[ x ][ y ] (with x = 0..2 * numSampT - 1, y = -1, -2) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

- When availTL is equal to TRUE, the neighboring top-left luma samples pY[ x ][ y ] (with x = -1, y = -1, -2) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

3. Downsampled collocated luminance samples pDsY [ x ] [ y ] (where x is 0.. nTbW-1, y is 0.. nTbH-1) are derived as follows:

if sps _ cclm _ colocated _ chroma _ flag is equal to 1, then the following applies:

-pDsY [ x ] [ y ] (where x 1.. nTbW-1, y 1.. nTbH-1) is derived as follows:

pDsY[x][y]=(pY[2*x][2*y-1]+pY[2*x-1][2*y]+4*pY[2*x][2*y]+pY[2*x+1][2*y]+pY[2*x][2*y+1]+4)>>3 (8-163)

- If availL is equal to TRUE, pDsY[ 0 ][ y ] (with y = 1..nTbH - 1) is derived as follows:

pDsY[0][y]=(pY[0][2*y-1]+pY[-1][2*y]+4*pY[0][2*y]+pY[1][2*y]+pY[0][2*y+1]+4)>>3 (8-164)

else pDsY [0] [ y ] (where y 1.. nTbH-1) is derived as follows:

pDsY[0][y]=(pY[0][2*y-1]+2*pY[0][2*y]+pY[0][2*y+1]+2)>>2 (8-165)

-if availT equals TRUE, pDsY [ x ] [0] (where x ═ 1.. nTbW-1) is derived as follows:

pDsY[x][0]=(pY[2*x][-1]+pY[2*x-1][0]+4*pY[2*x][0]+pY[2*x+1][0]+pY[2*x][1]+4)>>3 (8-166)

else pDsY [ x ] [0] (where x ═ 1.. nTbW-1) is derived as follows:

pDsY[x][0]=(pY[2*x-1][0]+2*pY[2*x][0]+pY[2*x+1][0]+2)>>2 (8-167)

- If availL is equal to TRUE and availT is equal to TRUE, pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=(pY[0][-1]+pY[-1][0]+4*pY[0][0]+pY[1][0]+pY[0][1]+4)>>3 (8-168)

- Otherwise, if availL is equal to TRUE and availT is equal to FALSE, pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=(pY[-1][0]+2*pY[0][0]+pY[1][0]+2)>>2 (8-169)

- Otherwise, if availL is equal to FALSE and availT is equal to TRUE, pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=(pY[0][-1]+2*pY[0][0]+pY[0][1]+2)>>2 (8-170)

- Otherwise (availL is equal to FALSE and availT is equal to FALSE), pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=pY[0][0] (8-171)

otherwise, the following applies:

-pDsY [ x ] [ y ] (where x 1.. nTbW-1, y 0.. nTbH-1) is derived as follows:

pDsY[x][y]=(pY[2*x-1][2*y]+pY[2*x-1][2*y+1]+2*pY[2*x][2*y]+2*pY[2*x][2*y+1]+pY[2*x+1][2*y]+pY[2*x+1][2*y+1]+4)>>3 (8-172)

- If availL is equal to TRUE, pDsY[ 0 ][ y ] (with y = 0..nTbH - 1) is derived as follows:

pDsY[0][y]=(pY[-1][2*y]+pY[-1][2*y+1]+2*pY[0][2*y]+2*pY[0][2*y+1]+pY[1][2*y]+pY[1][2*y+1]+4)>>3 (8-173)

else pDsY [0] [ y ] (where y ═ 0.. nTbH-1) is derived as follows:

pDsY[0][y]=(pY[0][2*y]+pY[0][2*y+1]+1)>>1 (8-174)

4. when numSampL is greater than 0, the selected neighboring left chrominance samples pSelC [ idx ] are set equal to p [ -1] [ pickPosL [ idx ] ] (where idx is 0. (cntL-1)), and the selected down-sampled neighboring left luminance samples pSelDsY [ idx ] (where idx is 0. (cntL-1)) are derived as follows:

the variable y is set equal to pickPosL [ idx ].

If sps _ cclm _ colocated _ chroma _ flag is equal to 1, then the following applies:

- If y > 0 || availTL == TRUE, the following applies:

pSelDsY[idx]=(pY[-2][2*y-1]+pY[-3][2*y]+4*pY[-2][2*y]+pY[-1][2*y]+pY[-2][2*y+1]+4)>>3 (8-175)

- Otherwise, the following applies:

pSelDsY[idx]=(pY[-3][0]+2*pY[-2][0]+pY[-1][0]+2)>>2 (8-177)

otherwise, the following applies:

pSelDsY[idx]=(pY[-1][2*y]+pY[-1][2*y+1]+2*pY[-2][2*y]+2*pY[-2][2*y+1]+pY[-3][2*y]+pY[-3][2*y+1]+4)>>3 (8-178)

5. When numSampT is greater than 0, the selected neighboring top chroma samples pSelC[ idx ] are set equal to p[ pickPosT[ idx - cntL ] ][ -1 ] (with idx = cntL..( cntL + cntT - 1 )), and the selected down-sampled neighboring top luma samples pSelDsY[ idx ] (with idx = cntL..( cntL + cntT - 1 )) are specified as follows:

variable x is set equal to pickPosT [ idx-cntL ].

If sps _ cclm _ colocated _ chroma _ flag is equal to 1, then the following applies:

-if x > 0:

-if bCTUboundary equals FALSE, the following applies:

pSelDsY[idx]=(pY[2*x][-3]+pY[2*x-1][-2]+4*pY[2*x][-2]+pY[2*x+1][-2]+pY[2*x][-1]+4)>>3 (8-179)

else (bCTUboundary equals TRUE), the following applies:

pSelDsY[idx]=(pY[2*x-1][-1]+2*pY[2*x][-1]+pY[2*x+1][-1]+2)>>2 (8-180)

-otherwise:

if availTL equals TRUE and bCTUboundary equals FALSE, then the following applies:

pSelDsY[idx]=(pY[0][-3]+pY[-1][-2]+4*pY[0][-2]+pY[1][-2]+pY[0][-1]+4)>>3 (8-181)

otherwise, if availTL equals TRUE and bCTUboundary equals TRUE, then the following applies:

pSelDsY[idx]=(pY[-1][-1]+2*pY[0][-1]+pY[1][-1]+2)>>2 (8-182)

otherwise, if availTL is equal to FALSE and bCTUboundary is equal to FALSE, then the following applies:

pSelDsY[idx]=(pY[0][-3]+2*pY[0][-2]+pY[0][-1]+2)>>2 (8-183)

else (availTL equal FALSE and bCTUboundary equal TRUE), then the following applies:

pSelDsY[idx]=pY[0][-1] (8-184)

otherwise, the following applies:

-if x > 0:

-if bCTUboundary equals FALSE, the following applies:

pSelDsY[idx]=(pY[2*x-1][-2]+pY[2*x-1][-1]+2*pY[2*x][-2]+2*pY[2*x][-1]+pY[2*x+1][-2]+pY[2*x+1][-1]+4)>>3 (8-185)

else (bCTUboundary equals TRUE), the following applies:

pSelDsY[idx]=(pY[2*x-1][-1]+2*pY[2*x][-1]+pY[2*x+1][-1]+2)>>2 (8-186)

-otherwise:

if availTL equals TRUE and bCTUboundary equals FALSE, then the following applies:

pSelDsY[idx]=(pY[-1][-2]+pY[-1][-1]+2*pY[0][-2]+2*pY[0][-1]+pY[1][-2]+pY[1][-1]+4)>>3 (8-187)

otherwise, if availTL equals TRUE and bCTUboundary equals TRUE, then the following applies:

pSelDsY[idx]=(pY[-1][-1]+2*pY[0][-1]+pY[1][-1]+2)>>2 (8-188)

otherwise, if availTL is equal to FALSE and bCTUboundary is equal to FALSE, then the following applies:

pSelDsY[idx]=(pY[0][-2]+pY[0][-1]+1)>>1 (8-189)

else (availTL equal FALSE and bCTUboundary equal TRUE), the following applies:

pSelDsY[idx]=pY[0][-1] (8-190)

6. when cntT + cntL is not equal to 0, the variables minY, maxY, minC, and maxC are derived as follows:

-when cntT + cntL is equal to 2, setting pSelComp [3] equal to pSelComp [0], pSelComp [2] equal to pSelComp [1], pSelComp [0] equal to pSelComp [1], and pSelComp [1] equal to pSelComp [3], where Comp is replaced by DsY and C.

The arrays minGrpIdx[ ] and maxGrpIdx[ ] are set as: minGrpIdx[ 0 ] = 0, minGrpIdx[ 1 ] = 1, maxGrpIdx[ 0 ] = 2, maxGrpIdx[ 1 ] = 3.

-Swap (minGrpIdx [0], minGrpIdx [1]) if pSelDsY [ minGrpIdx [0] ] > pSelDsY [ minGrpIdx [1] ].

-Swap (maxGrpIdx [0], maxGrpIdx [1]) if pSelDsY [ maxGrpIdx [0] ] > pSelDsY [ maxGrpIdx [1] ].

-Swap (minGrpIdx, maxGrpIdx) if pSelDsY [ minGrpIdx [0] ] > pSelDsY [ maxGrpIdx [1] ].

-Swap (minGrpIdx [1], maxGrpIdx [0]) if pSelDsY [ minGrpIdx [1] ] > pSelDsY [ maxGrpIdx [0] ].

–maxY=(pSelDsY[maxGrpIdx[0]]+pSelDsY[maxGrpIdx[1]]+1)>>1.

–maxC=(pSelC[maxGrpIdx[0]]+pSelC[maxGrpIdx[1]]+1)>>1.

–minY=(pSelDsY[minGrpIdx[0]]+pSelDsY[minGrpIdx[1]]+1)>>1.

–minC=(pSelC[minGrpIdx[0]]+pSelC[minGrpIdx[1]]+1)>>1.
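The two-sample padding at the start of step 6 (the four pSelComp assignments executed in the stated order) expands a two-entry selection [A, B] into [B, A, B, A], so that the four-entry min/max grouping can be reused unchanged. A Python sketch with hypothetical names:

```python
def pad_two_to_four(sel):
    # sel holds the two available entries [A, B]; grow it to length 4 and
    # apply the four assignments in the order given in step 6.
    sel = sel + [0, 0]
    sel[3] = sel[0]    # [A, B, _, A]
    sel[2] = sel[1]    # [A, B, B, A]
    sel[0] = sel[1]    # [B, B, B, A]
    sel[1] = sel[3]    # [B, A, B, A]
    return sel

print(pad_two_to_four([11, 22]))  # [22, 11, 22, 11]
```

Because each original entry then appears in both the min group and the max group, the subsequent pairwise averages reduce to the two original values, as intended.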

7. The variables a, b and k are derived as follows:

-if numSampL equals 0 and numSampT equals 0, then the following applies:

k=0 (8-208)

a=0 (8-209)

b=1<<(BitDepthC-1) (8-210)

otherwise, the following applies:

diff=maxY-minY (8-211)

if diff is not equal to 0, the following applies:

diffC=maxC-minC (8-212)

x=Floor(Log2(diff)) (8-213)

normDiff=((diff<<4)>>x)&15 (8-214)

x+=(normDiff!=0)?1:0 (8-215)

y=Floor(Log2(Abs(diffC)))+1 (8-216)

a=(diffC*(divSigTable[normDiff]|8)+(1<<(y-1)))>>y (8-217)

k=((3+x-y)<1)?1:3+x-y (8-218)

a=((3+x-y)<1)?Sign(a)*15:a (8-219)

b=minC-((a*minY)>>k) (8-220)

where divSigTable [ ] is specified as follows:

divSigTable[]={0,7,6,5,5,4,4,3,3,2,2,1,1,1,1,0} (8-221)

else (diff equals 0), the following applies:

k=0 (8-222)

a=0 (8-223)

b=minC (8-224)

8. The predicted samples predSamples[ x ][ y ] (with x = 0..nTbW - 1, y = 0..nTbH - 1) are derived as follows:

predSamples[x][y]=Clip1C(((pDsY[x][y]*a)>>k)+b) (8-225)

3.8 alternative working draft on proposed CCLM prediction

In this section, alternative exemplary embodiments are described that show another modification that can be made to the current working draft of the VVC standard. Here, the equation number refers to a corresponding equation number in the VVC standard.

Specification of INTRA prediction modes INTRA_LT_CCLM, INTRA_L_CCLM and INTRA_T_CCLM.

……

The number of available neighboring chroma samples numSampT at the top and top-right and the number of available neighboring chroma samples numSampL at the left and below-left are derived as follows:

if predModeIntra is equal to INTRA _ LT _ CCLM, then the following applies:

numSampT=availT?nTbW:0 (8-157)

numSampL=availL?nTbH:0 (8-158)

otherwise, the following applies:

numSampT=(availT&&predModeIntra==INTRA_T_CCLM)?(nTbW+Min(numTopRight,nTbH)):0 (8-159)

numSampL=(availL&&predModeIntra==INTRA_L_CCLM)?(nTbH+Min(numLeftBelow,nTbW)):0 (8-160)

the variable bCTUboundary is derived as follows:

bCTUboundary=(yTbC&(1<<(CtbLog2SizeY-1)-1)==0)?TRUE:FALSE. (8-161)

the variable cntN and the array pickPosN [ ] (where N is replaced by L and T) are derived as follows:

The variable numIs4N is set equal to ( ( availT && availL && predModeIntra == INTRA_LT_CCLM ) ? 0 : 1 ).

The variable startPosN is set equal to numSampN >> ( 2 + numIs4N ).

The variable pickStepN is set equal to Max( 1, numSampN >> ( 1 + numIs4N ) ).

- If availN is equal to TRUE and predModeIntra is equal to INTRA_LT_CCLM or INTRA_N_CCLM, cntN is set equal to Min( numSampN, ( 1 + numIs4N ) << 1 ), and pickPosN[ pos ] is set equal to ( startPosN + pos * pickStepN ), with pos = 0..( cntN - 1 ).

Else, cntN is set equal to 0.

The predicted samples predSamples[ x ][ y ] (with x = 0..nTbW - 1, y = 0..nTbH - 1) are derived as follows:

-if both numSampL and numSampT are equal to 0, then the following applies:

predSamples[x][y]=1<<(BitDepthC-1) (8-162)

otherwise, the following ordered steps apply:

1. The collocated luma samples pY[ x ][ y ] (with x = 0..2 * nTbW - 1, y = 0..2 * nTbH - 1) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

2. The neighboring luminance samples pY [ x ] [ y ] are derived as follows:

- When numSampL is greater than 0, the neighboring left luma samples pY[ x ][ y ] (with x = -1..-3, y = 0..2 * numSampL - 1) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

- When numSampT is greater than 0, the neighboring top luma samples pY[ x ][ y ] (with x = 0..2 * numSampT - 1, y = -1, -2) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

- When availTL is equal to TRUE, the neighboring top-left luma samples pY[ x ][ y ] (with x = -1, y = -1, -2) are set equal to the reconstructed luma samples prior to the deblocking filter process at the locations ( xTbY + x, yTbY + y ).

3. Downsampled collocated luminance samples pDsY [ x ] [ y ] (where x is 0.. nTbW-1, y is 0.. nTbH-1) are derived as follows:

if sps _ cclm _ colocated _ chroma _ flag is equal to 1, then the following applies:

-pDsY [ x ] [ y ] (where x 1.. nTbW-1, y 1.. nTbH-1) is derived as follows:

pDsY[x][y]=(pY[2*x][2*y-1]+pY[2*x-1][2*y]+4*pY[2*x][2*y]+pY[2*x+1][2*y]+pY[2*x][2*y+1]+4)>>3 (8-163)

- If availL is equal to TRUE, pDsY[ 0 ][ y ] (with y = 1..nTbH - 1) is derived as follows:

pDsY[0][y]=(pY[0][2*y-1]+pY[-1][2*y]+4*pY[0][2*y]+pY[1][2*y]+pY[0][2*y+1]+4)>>3 (8-164)

else pDsY [0] [ y ] (where y 1.. nTbH-1) is derived as follows:

pDsY[0][y]=(pY[0][2*y-1]+2*pY[0][2*y]+pY[0][2*y+1]+2)>>2 (8-165)

-if availT equals TRUE, pDsY [ x ] [0] (where x ═ 1.. nTbW-1) is derived as follows:

pDsY[x][0]=(pY[2*x][-1]+pY[2*x-1][0]+4*pY[2*x][0]+pY[2*x+1][0]+pY[2*x][1]+4)>>3 (8-166)

else pDsY [ x ] [0] (where x ═ 1.. nTbW-1) is derived as follows:

pDsY[x][0]=(pY[2*x-1][0]+2*pY[2*x][0]+pY[2*x+1][0]+2)>>2 (8-167)

- If availL is equal to TRUE and availT is equal to TRUE, pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=(pY[0][-1]+pY[-1][0]+4*pY[0][0]+pY[1][0]+pY[0][1]+4)>>3 (8-168)

- Otherwise, if availL is equal to TRUE and availT is equal to FALSE, pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=(pY[-1][0]+2*pY[0][0]+pY[1][0]+2)>>2 (8-169)

- Otherwise, if availL is equal to FALSE and availT is equal to TRUE, pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=(pY[0][-1]+2*pY[0][0]+pY[0][1]+2)>>2 (8-170)

- Otherwise (availL is equal to FALSE and availT is equal to FALSE), pDsY[ 0 ][ 0 ] is derived as follows:

pDsY[0][0]=pY[0][0] (8-171)

otherwise, the following applies:

-pDsY [ x ] [ y ] (where x 1.. nTbW-1, y 0.. nTbH-1) is derived as follows:

pDsY[x][y]=(pY[2*x-1][2*y]+pY[2*x-1][2*y+1]+2*pY[2*x][2*y]+2*pY[2*x][2*y+1]+pY[2*x+1][2*y]+pY[2*x+1][2*y+1]+4)>>3 (8-172)

- If availL is equal to TRUE, pDsY[ 0 ][ y ] (with y = 0..nTbH - 1) is derived as follows:

pDsY[0][y]=(pY[-1][2*y]+pY[-1][2*y+1]+2*pY[0][2*y]+2*pY[0][2*y+1]+pY[1][2*y]+pY[1][2*y+1]+4)>>3 (8-173)

else pDsY [0] [ y ] (where y ═ 0.. nTbH-1) is derived as follows:

pDsY[0][y]=(pY[0][2*y]+pY[0][2*y+1]+1)>>1 (8-174)

4. when numSampL is greater than 0, the selected neighboring left chrominance samples pSelC [ idx ] are set equal to p [ -1] [ pickPosL [ idx ] ] (where idx is 0. (cntL-1)), and the selected down-sampled neighboring left luminance samples pSelDsY [ idx ] (where idx is 0. (cntL-1)) are derived as follows:

the variable y is set equal to pickPosL [ idx ].

If sps _ cclm _ colocated _ chroma _ flag is equal to 1, then the following applies:

- If y > 0 || availTL == TRUE, the following applies:

pSelDsY[idx]=(pY[-2][2*y-1]+pY[-3][2*y]+4*pY[-2][2*y]+pY[-1][2*y]+pY[-2][2*y+1]+4)>>3 (8-175)

- Otherwise, the following applies:

pSelDsY[idx]=(pY[-3][0]+2*pY[-2][0]+pY[-1][0]+2)>>2 (8-177)

otherwise, the following applies:

pSelDsY[idx]=(pY[-1][2*y]+pY[-1][2*y+1]+2*pY[-2][2*y]+2*pY[-2][2*y+1]+pY[-3][2*y]+pY[-3][2*y+1]+4)>>3 (8-178)

5. When numSampT is greater than 0, the selected neighboring top chroma samples pSelC[ idx ] are set equal to p[ pickPosT[ idx - cntL ] ][ -1 ] (with idx = cntL..( cntL + cntT - 1 )), and the selected down-sampled neighboring top luma samples pSelDsY[ idx ] (with idx = cntL..( cntL + cntT - 1 )) are specified as follows:

variable x is set equal to pickPosT [ idx-cntL ].

If sps _ cclm _ colocated _ chroma _ flag is equal to 1, then the following applies:

-if x > 0:

-if bCTUboundary equals FALSE, the following applies:

pSelDsY[idx]=(pY[2*x][-3]+pY[2*x-1][-2]+4*pY[2*x][-2]+pY[2*x+1][-2]+pY[2*x][-1]+4)>>3 (8-179)

else (bCTUboundary equals TRUE), the following applies:

pSelDsY[idx]=(pY[2*x-1][-1]+2*pY[2*x][-1]+pY[2*x+1][-1]+2)>>2 (8-180)

-otherwise:

if availTL equals TRUE and bCTUboundary equals FALSE, then the following applies:

pSelDsY[idx]=(pY[0][-3]+pY[-1][-2]+4*pY[0][-2]+pY[1][-2]+pY[0][-1]+4)>>3 (8-181)

otherwise, if availTL equals TRUE and bCTUboundary equals TRUE, then the following applies:

pSelDsY[idx]=(pY[-1][-1]+2*pY[0][-1]+pY[1][-1]+2)>>2 (8-182)

otherwise, if availTL is equal to FALSE and bCTUboundary is equal to FALSE, then the following applies:

pSelDsY[idx]=(pY[0][-3]+2*pY[0][-2]+pY[0][-1]+2)>>2 (8-183)

else (availTL equal FALSE and bCTUboundary equal TRUE), then the following applies:

pSelDsY[idx]=pY[0][-1] (8-184)

otherwise, the following applies:

-if x > 0:

-if bCTUboundary equals FALSE, the following applies:

pSelDsY[idx]=(pY[2*x-1][-2]+pY[2*x-1][-1]+2*pY[2*x][-2]+2*pY[2*x][-1]+pY[2*x+1][-2]+pY[2*x+1][-1]+4)>>3 (8-185)

else (bCTUboundary equals TRUE), the following applies:

pSelDsY[idx]=(pY[2*x-1][-1]+2*pY[2*x][-1]+pY[2*x+1][-1]+2)>>2 (8-186)

-otherwise:

if availTL equals TRUE and bCTUboundary equals FALSE, then the following applies:

pSelDsY[idx]=(pY[-1][-2]+pY[-1][-1]+2*pY[0][-2]+2*pY[0][-1]+pY[1][-2]+pY[1][-1]+4)>>3 (8-187)

otherwise, if availTL equals TRUE and bCTUboundary equals TRUE, then the following applies:

pSelDsY[idx]=(pY[-1][-1]+2*pY[0][-1]+pY[1][-1]+2)>>2 (8-188)

otherwise, if availTL is equal to FALSE and bCTUboundary is equal to FALSE, then the following applies:

pSelDsY[idx]=(pY[0][-2]+pY[0][-1]+1)>>1 (8-189)

else (availTL equal FALSE and bCTUboundary equal TRUE), the following applies:

pSelDsY[idx]=pY[0][-1] (8-190)

6. when cntT + cntL is not equal to 0, the variables minY, maxY, minC, and maxC are derived as follows:

-when cntT + cntL is equal to 2, setting pSelComp [3] equal to pSelComp [0], pSelComp [2] equal to pSelComp [1], pSelComp [0] equal to pSelComp [1], and pSelComp [1] equal to pSelComp [3], where Comp is replaced by DsY and C.

The arrays minGrpIdx[ ] and maxGrpIdx[ ] are set as: minGrpIdx[ 0 ] = 0, minGrpIdx[ 1 ] = 2, maxGrpIdx[ 0 ] = 1, maxGrpIdx[ 1 ] = 3.

-Swap(minGrpIdx[0], minGrpIdx[1]) if pSelDsY[minGrpIdx[0]] > pSelDsY[minGrpIdx[1]].

-Swap(maxGrpIdx[0], maxGrpIdx[1]) if pSelDsY[maxGrpIdx[0]] > pSelDsY[maxGrpIdx[1]].

-Swap(minGrpIdx, maxGrpIdx) if pSelDsY[minGrpIdx[0]] > pSelDsY[maxGrpIdx[1]].

-Swap(minGrpIdx[1], maxGrpIdx[0]) if pSelDsY[minGrpIdx[1]] > pSelDsY[maxGrpIdx[0]].

–maxY=(pSelDsY[maxGrpIdx[0]]+pSelDsY[maxGrpIdx[1]]+1)>>1.

–maxC=(pSelC[maxGrpIdx[0]]+pSelC[maxGrpIdx[1]]+1)>>1.

–minY=(pSelDsY[minGrpIdx[0]]+pSelDsY[minGrpIdx[1]]+1)>>1.

–minC=(pSelC[minGrpIdx[0]]+pSelC[minGrpIdx[1]]+1)>>1.
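Step 6 can be sketched as a small sorting network followed by rounding averages. The Python below is a hedged illustration (the function name and list-based arrays are assumptions); it assumes the cntT + cntL == 2 padding above has already been applied, so four entries exist.

```python
def derive_min_max(pSelDsY, pSelC):
    """Step 6: split four selected (luma, chroma) pairs into a 'min' group
    and a 'max' group with the fixed swap network, then average each group."""
    minGrpIdx, maxGrpIdx = [0, 2], [1, 3]
    if pSelDsY[minGrpIdx[0]] > pSelDsY[minGrpIdx[1]]:
        minGrpIdx[0], minGrpIdx[1] = minGrpIdx[1], minGrpIdx[0]
    if pSelDsY[maxGrpIdx[0]] > pSelDsY[maxGrpIdx[1]]:
        maxGrpIdx[0], maxGrpIdx[1] = maxGrpIdx[1], maxGrpIdx[0]
    if pSelDsY[minGrpIdx[0]] > pSelDsY[maxGrpIdx[1]]:
        minGrpIdx, maxGrpIdx = maxGrpIdx, minGrpIdx
    if pSelDsY[minGrpIdx[1]] > pSelDsY[maxGrpIdx[0]]:
        minGrpIdx[1], maxGrpIdx[0] = maxGrpIdx[0], minGrpIdx[1]
    # Rounded averages of each two-sample group.
    maxY = (pSelDsY[maxGrpIdx[0]] + pSelDsY[maxGrpIdx[1]] + 1) >> 1
    maxC = (pSelC[maxGrpIdx[0]] + pSelC[maxGrpIdx[1]] + 1) >> 1
    minY = (pSelDsY[minGrpIdx[0]] + pSelDsY[minGrpIdx[1]] + 1) >> 1
    minC = (pSelC[minGrpIdx[0]] + pSelC[minGrpIdx[1]] + 1) >> 1
    return minY, maxY, minC, maxC
```

Averaging the two smallest and the two largest points, rather than taking the single extremes, makes the fit more robust to one outlier sample.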

7. The variables a, b and k are derived as follows:

-if numSampL equals 0 and numSampT equals 0, then the following applies:

k=0 (8-208)

a=0 (8-209)

b=1<<(BitDepthC-1) (8-210)

otherwise, the following applies:

diff=maxY-minY (8-211)

if diff is not equal to 0, the following applies:

diffC=maxC-minC (8-212)

x=Floor(Log2(diff)) (8-213)

normDiff=((diff<<4)>>x)&15 (8-214)

x+=(normDiff!=0)?1:0 (8-215)

y=Floor(Log2(Abs(diffC)))+1 (8-216)

a=(diffC*(divSigTable[normDiff]|8)+(1<<(y-1)))>>y (8-217)

k=((3+x-y)<1)?1:3+x-y (8-218)

a=((3+x-y)<1)?Sign(a)*15:a (8-219)

b=minC-((a*minY)>>k) (8-220)

where divSigTable[] is specified as follows:

divSigTable[]={0,7,6,5,5,4,4,3,3,2,2,1,1,1,1,0} (8-221)

else (diff equals 0), the following applies:

k=0 (8-222)

a=0 (8-223)

b=minC (8-224)
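Step 7 replaces the division diffC/diff by a 16-entry significand table and shifts. The sketch below is an illustrative Python transcription (function and argument names are assumptions; a guard for diffC == 0 is added, since Log2(0) in equation 8-216 is otherwise undefined).

```python
import math

DIV_SIG_TABLE = [0, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1, 1, 1, 0]  # eq. 8-221

def sign(v):
    return (v > 0) - (v < 0)

def derive_a_b_k(minY, maxY, minC, maxC, bit_depth_c=10,
                 num_samp_l=4, num_samp_t=4):
    """Derive (a, b, k) per step 7; division is replaced by a table lookup."""
    if num_samp_l == 0 and num_samp_t == 0:
        return 0, 1 << (bit_depth_c - 1), 0      # eqs. 8-208..8-210
    diff = maxY - minY                           # eq. 8-211
    if diff == 0:
        return 0, minC, 0                        # eqs. 8-222..8-224
    diffC = maxC - minC                          # eq. 8-212
    x = math.floor(math.log2(diff))              # eq. 8-213
    normDiff = ((diff << 4) >> x) & 15           # eq. 8-214
    x += 1 if normDiff != 0 else 0               # eq. 8-215
    # Added guard: treat diffC == 0 as y = 1 (eq. 8-216 assumes diffC != 0).
    y = (math.floor(math.log2(abs(diffC))) + 1) if diffC else 1
    a = (diffC * (DIV_SIG_TABLE[normDiff] | 8) + (1 << (y - 1))) >> y  # 8-217
    k = 1 if (3 + x - y) < 1 else 3 + x - y      # eq. 8-218
    if (3 + x - y) < 1:
        a = sign(a) * 15                         # eq. 8-219
    b = minC - ((a * minY) >> k)                 # eq. 8-220
    return a, b, k
```

For example, (minY, maxY, minC, maxC) = (50, 150, 25, 70) yields a = 7 and k = 4, i.e. a fixed-point slope of 7/16 ≈ 0.44, close to the ideal 45/100 = 0.45.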

8. The predicted sample points predSamples[x][y] (where x = 0..nTbW - 1, y = 0..nTbH - 1) are derived as follows:

predSamples[x][y]=Clip1C(((pDsY[x][y]*a)>>k)+b) (8-225)
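Equation 8-225 applies the model to each downsampled luma sample and clips the result to the valid chroma range. A minimal sketch (the function name and row-major list layout are assumptions; Clip1C is taken as clipping to [0, 2^BitDepthC - 1]):

```python
def predict_chroma(pDsY, a, b, k, bit_depth_c=10):
    """Eq. 8-225: predSamples[x][y] = Clip1C(((pDsY[x][y] * a) >> k) + b)."""
    clip1c = lambda v: min((1 << bit_depth_c) - 1, max(0, v))
    return [[clip1c(((luma * a) >> k) + b) for luma in row] for row in pDsY]
```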

The examples described above may be incorporated in the context of the methods described below (e.g., methods 2010, 2020, 2030, 2910, 2920, 2930), which may be implemented at a video encoder and/or decoder.

Fig. 18A shows a flow diagram of an exemplary method for video processing. Method 2030 includes, at step 2032, for a conversion between a current video block of video as a chroma block and a codec representation of the video, determining whether to derive maxima and/or minima of a luma component and a chroma component for deriving parameters of a cross-component linear model (CCLM) based on availabilities of left and upper neighboring blocks of the current video block. The method 2030 further includes, at step 2034, performing the conversion based on the determination.

Fig. 18B shows a flow diagram of an exemplary method for video processing. Method 2010 includes, at step 2012, determining, for a transition between a current video block of the video as a chroma block and a codec representation of the video, a location at which a luma sample point is downsampled, wherein the downsampled luma sample point is used to determine parameters of a cross-component linear model (CCLM) based on the chroma sample point and the downsampled luma sample point, wherein the downsampled luma sample point is at a location corresponding to a location of the chroma sample point used to derive the parameters of the CCLM. The method 2010 further includes, at step 2014, performing a conversion based on the determination.

Fig. 18C shows a flow diagram of an exemplary method for video processing. Method 2020 includes, at step 2022, for a transition between a current video block of video as a chroma block and a codec representation of the video, determining a method to derive parameters of a cross-component linear model (CCLM) using chroma samples and luma samples based on codec conditions associated with the current video block. The method 2020 further includes, at step 2024, performing the conversion based on the determination.

Fig. 18D shows a flow diagram of an exemplary method for video processing. Method 2910 includes, at step 2912, determining, for a transition between a current video block of the video and a codec representation of the video, parameters of a codec tool using a linear model based on the selected neighboring samples of the current video block and corresponding neighboring samples of the reference block. Method 2910 also includes, at step 2914, performing a conversion based on the determination.

Fig. 18E shows a flow diagram of an exemplary method for video processing. Method 2920 includes, at step 2922, for a transition between a current video block of a video and a codec representation of the video, determining parameters of a Local Illumination Compensation (LIC) tool based on N neighboring samples of the current video block and N corresponding neighboring samples of a reference block, wherein the N neighboring samples of the current video block are selected based on locations of the N neighboring samples. The method 2920 further includes, at step 2924, performing a transition based on the determination. The LIC tool uses a linear model of the illumination changes in the current video block during the conversion.

Fig. 18F shows a flow diagram of an exemplary method for video processing. Method 2930 includes, at step 2932, for a transition between a current video block of video as a chroma block and a codec representation of the video, determining parameters of a cross-component linear model (CCLM) based on the chroma samples and corresponding luma samples. The method 2930 further includes, at step 2934, performing a transition based on the determination. In an example, some of the chroma samples are obtained by a padding operation, and the chroma samples and corresponding luma samples are grouped into two arrays G0 and G1, each array including two chroma samples and corresponding luma samples.

4. Example implementation of the disclosed technology

Fig. 19A is a block diagram of the video processing apparatus 3000. The apparatus 3000 may be used to implement one or more of the methods described herein. The apparatus 3000 may be implemented in a smartphone, tablet, computer, Internet of Things (IoT) receiver, or the like. The apparatus 3000 may include one or more processors 3002, one or more memories 3004 and video processing hardware 3006. The processor(s) 3002 may be configured to implement one or more methods described in this document (including, but not limited to, the methods as shown in fig. 18-29C). The memory(s) 3004 may be used to store data and code for implementing the methods and techniques described herein. The video processing hardware 3006 may be used to implement some of the techniques described in this document in hardware circuitry.

Fig. 19B is a block diagram illustrating an example video processing system 3100 in which various techniques disclosed herein may be implemented. Various embodiments may include some or all of the components of system 3100. System 3100 can include an input 3102 for receiving video content. The video content may be received in a raw or uncompressed format, e.g., 8 or 10 bit multi-component pixel values, or may be in a compressed or encoded format. Input 3102 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interfaces include wired interfaces (such as Ethernet, Passive Optical Network (PON), etc.) and wireless interfaces (such as Wi-Fi or cellular interfaces).

System 3100 can include a codec component 3104 that can implement the various codec or encoding methods described in this document. The codec component 3104 may reduce the average bitrate of the video from the input 3102 to the output of the codec component 3104 to produce a codec representation of the video. Thus, codec techniques are sometimes referred to as video compression or video transcoding techniques. The output of the codec component 3104 may be stored or transmitted via a connected communication, as represented by component 3106. The stored or communicated bitstream (or codec) representation of the video received at input 3102 may be used by component 3108 to generate pixel values or displayable video that is sent to display interface 3110. The process of generating user-viewable video from the bitstream representation is sometimes referred to as video decompression. Additionally, while certain video processing operations are referred to as "codec" operations or tools, it should be understood that the codec tools or operations are used at the encoder, and corresponding decoding tools or operations that reverse the results of the codec will be performed by the decoder.

Examples of a peripheral bus interface or a display interface may include Universal Serial Bus (USB), High Definition Multimedia Interface (HDMI), DisplayPort, and the like. Examples of storage interfaces include SATA (Serial Advanced Technology Attachment), PCI, IDE interfaces, and the like. The techniques described in this document may be implemented in various electronic devices, such as mobile phones, laptops, smartphones, or other devices capable of performing digital data processing and/or video display.

In some embodiments, the video codec method may be implemented using an apparatus implemented on a hardware platform as described with reference to fig. 19A or 19B.

Various techniques preferably incorporated within some embodiments may be described using the following clause-based format.

The first set of clauses describes certain features and aspects of the disclosed technology listed in the previous section.

1.A method for video processing, comprising: determining, for a current video block comprising a chroma block and based on two or more chroma samples selected from a group of neighboring chroma samples of the chroma block, a set of values of parameters of a linear model; and reconstructing the current video block based on the linear model.

2. The method of clause 1, wherein the upper-left sample of the chroma block is (x, y), wherein the width and height of the chroma block are W and H, respectively, and wherein the group of adjacent chroma samples comprises:

a sample point A having coordinates (x-1, y),

a sample point D having coordinates (x-1, y + H-1),

a sample point J having coordinates (x, y-1), and

a sample point M having coordinates (x + W-1, y-1).

3. The method of clause 2, wherein a left neighboring block and an upper neighboring block of the current video block are available, and wherein the two or more chroma samples comprise samples A, D, J, and M.

4. The method of clause 2, wherein a left neighboring block of the current video block is available, and wherein the two or more chroma samples comprise samples A and D.

5. The method of clause 2, wherein an upper neighboring block of the current video block is available, and wherein the two or more chroma samples comprise samples J and M.

6. A method for video processing, comprising: for a current video block comprising a chroma block, generating a plurality of groups of chroma samples and luma samples of neighboring blocks comprising the current video block; determining maximum and minimum values of the chrominance and luminance samples based on the plurality of groups; determining a set of values of parameters of the linear model based on the maximum and minimum values; and reconstructing the current video block based on the linear model.

7. The method of clause 6, wherein generating the plurality of groups is based on availability of neighboring blocks to the current video block.

8. The method of clause 6, wherein the plurality of groups comprises S0, S1, …, Sm, wherein the maximum luminance value is calculated as maxL = f1(maxLS0, maxLS1, …, maxLSm), where f1 is a first function and maxLSi is a maximum luminance value of a group Si of the plurality of groups, wherein the maximum chrominance value is calculated as maxC = f2(maxCS0, maxCS1, …, maxCSm), where f2 is a second function and maxCSi is a maximum chrominance value of the group Si, wherein the minimum luminance value is calculated as minL = f3(minLS0, minLS1, …, minLSm), where f3 is a third function and minLSi is a minimum luminance value of the group Si, wherein the minimum chrominance value is calculated as minC = f4(minCS0, minCS1, …, minCSm), where f4 is a fourth function and minCSi is a minimum chrominance value of the group Si, and wherein the parameters of the linear model comprise α and β, which are calculated as

α = (maxC - minC)/(maxL - minL) and β = minC - α × minL.

9. The method of clause 8, wherein the upper-left sample of the chroma block is (x, y), wherein the width and height of the chroma block are W and H, respectively, and wherein the group of adjacent chroma samples comprises:

a sample point A having coordinates (x-1, y),

a sample point D having coordinates (x-1, y + H-1),

a sample point J having coordinates (x, y-1), and

a sample point M having coordinates (x + W-1, y-1).

10. The method of clause 9, wherein a left neighboring block and an upper neighboring block of the current video block are available, wherein the maximum and minimum luminance and chrominance values of group S0 (maxLS0, maxCS0, minLS0, and minCS0, respectively) are based on samples A and D, wherein the maximum and minimum luminance and chrominance values of group S1 (maxLS1, maxCS1, minLS1, and minCS1, respectively) are based on samples J and M, and wherein

maxL = (maxLS0 + maxLS1)/2, maxC = (maxCS0 + maxCS1)/2,

minL = (minLS0 + minLS1)/2 and minC = (minCS0 + minCS1)/2.

11. The method of clause 9, wherein a left neighboring block of the current video block is available, and wherein maxL, maxC, minL, and minC are based on samples A and D.

12. The method of clause 9, wherein an upper neighboring block to the current video block is available, and wherein maxL, maxC, minL, and minC are based on samples J and M.

13. The method of clause 6, wherein the parameters of the linear model include α and β, which are calculated as

α = 0 and β = 1 << (bitDepth - 1),

where bitDepth is the bit depth of the chroma samples.

14. The method of clause 6, wherein generating the plurality of groups is based on a height or width of the current video block.

15. A method for video processing, comprising: generating downsampled chroma samples and luma samples by downsampling chroma samples and luma samples of neighboring blocks of a current video block having a height (H) and a width (W); determining a set of values of parameters of a linear model of the current video block based on the downsampled chrominance sample points and the luminance sample points; and reconstructing the current video block based on the linear model.

16. The method of clause 15, wherein the downsampling is based on height or width.

17. The method of clause 16, wherein W < H.

18. The method of clause 16, wherein W > H.

19. The method of clause 15, wherein the upper-left sample of the current video block is R[0,0], wherein the downsampled chroma samples comprise samples R[-1, K × H/W], and wherein K is a non-negative integer ranging from 0 to W-1.

20. The method of clause 15, wherein the upper-left sample of the current video block is R[0,0], wherein the downsampled chroma samples comprise samples R[K × H/W, -1], and wherein K is a non-negative integer ranging from 0 to H-1.

21. The method of clause 15, wherein a refinement process is performed on the downsampled chroma samples and luma samples prior to determining the set of values of the parameters of the linear model for the current video block.

22. The method of clause 21, wherein the refinement process comprises a filtering process.

23. The method of clause 21, wherein the refinement process comprises a nonlinear process.

24. The method of clause 15, wherein the parameters of the linear model are α and β, wherein α = (C1 - C0)/(L1 - L0) and β = C0 - α × L0, wherein C0 and C1 are chroma samples, and wherein L0 and L1 are luma samples.

25. The method of clause 24, wherein C0 and L0 are based on S downsampled chroma and luma samples denoted {Cx1, Cx2, …, CxS} and {Lx1, Lx2, …, LxS}, respectively, and wherein C1 and L1 are based on T downsampled chroma and luma samples denoted {Cy1, Cy2, …, CyT} and {Ly1, Ly2, …, LyT}, respectively,

wherein C0 = f0(Cx1, Cx2, …, CxS), L0 = f1(Lx1, Lx2, …, LxS), C1 = f2(Cy1, Cy2, …, CyT), and L1 = f3(Ly1, Ly2, …, LyT), and

wherein f0, f1, f2, and f3 are functions.
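Clauses 24 through 29 admit a simple realization when f0 through f3 are all chosen as averaging functions (the option of clause 29). The sketch below works under that assumption; the function and argument names are illustrative, and floating-point arithmetic is used for clarity rather than the normative integer form.

```python
def two_point_model(low_lumas, low_chromas, high_lumas, high_chromas):
    """Two-point linear model: (L0, C0) averages the S smallest samples,
    (L1, C1) averages the T largest; then alpha = (C1-C0)/(L1-L0) and
    beta = C0 - alpha * L0."""
    avg = lambda values: sum(values) / len(values)
    L0, C0 = avg(low_lumas), avg(low_chromas)
    L1, C1 = avg(high_lumas), avg(high_chromas)
    alpha = (C1 - C0) / (L1 - L0)
    beta = C0 - alpha * L0
    return alpha, beta
```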

26. The method of clause 25, wherein f0 and f1 are first functions.

27. The method of clause 25, wherein f2 and f3 are second functions.

28. The method of clause 25, wherein f0, f1, f2, and f3 are third functions.

29. The method of clause 28, wherein the third function is an averaging function.

30. The method of clause 25, wherein S = T.

31. The method of clause 25, wherein {Lx1, Lx2, …, LxS} are the smallest samples in the set of luma samples.

32. The method of clause 25, wherein {Lx1, Lx2, …, LxS} are the largest samples in the set of luma samples.

33. The method of clause 31 or 32, wherein the set of luminance samples comprises all neighboring samples used in VTM-3.0 to derive parameters of the linear model.

34. The method of clause 31 or 32, wherein the set of luminance samples comprises a subset of neighboring samples used in VTM-3.0 to derive parameters of the linear model, and wherein the subset excludes all neighboring samples.

35. The method of clause 1, wherein the two or more chroma samples are selected from one or more of a left column, an above row, an above-right row, or a below-left column relative to the current video block.

36. The method of clause 1, wherein the two or more chroma samples are selected based on a ratio of a height of the current video block to a width of the current video block.

37. The method of clause 1, wherein the two or more chroma samples are selected based on a coding mode of the current video block.

38. The method of clause 37, wherein the codec mode of the current video block is a first linear mode different from a second linear mode that uses only left neighboring samples and a third linear mode that uses only above neighboring samples, wherein the coordinates of the top-left sample of the current video block are (x, y), and wherein the width and height of the current video block are W and H, respectively.

39. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x-1, y), (x, y-1), (x-1, y + H-1), and (x + W-1, y-1).

40. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x-1, y), (x, y-1), (x-1, y + H-H/W-1), and (x + W-1, y-1), and wherein H > W.

41. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x-1, y), (x, y-1), (x-1, y + H-1), and (x + W-W/H-1, y-1), and wherein H < W.

42. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x-1, y), (x, y-1), (x-1, y + H-max (1, H/W)) and (x + W-max (1, W/H), y-1).

43. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x, y-1), (x + W/4, y-1), (x + 2*W/4, y-1), and (x + 3*W/4, y-1).

44. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x, y-1), (x + W/4, y-1), (x + 3*W/4, y-1), and (x + W-1, y-1).

45. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x, y-1), (x + (2W)/4, y-1), (x + 2*(2W)/4, y-1), and (x + 3*(2W)/4, y-1).

46. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x, y-1), (x + (2W)/4, y-1), (x + 3*(2W)/4, y-1), and (x + (2W)-1, y-1).

47. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x-1, y), (x-1, y + H/4), (x-1, y + 2*H/4), and (x-1, y + 3*H/4).

48. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x-1, y), (x-1, y + 2*H/4), (x-1, y + 3*H/4), and (x-1, y + H-1).

49. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x-1, y), (x-1, y + (2H)/4), (x-1, y + 2*(2H)/4), and (x-1, y + 3*(2H)/4).

50. The method of clause 38, wherein the two or more chroma samples comprise samples having coordinates (x-1, y), (x-1, y + 2*(2H)/4), (x-1, y + 3*(2H)/4), and (x-1, y + (2H)-1).

51. The method of any of clauses 39-50, wherein exactly two of the four samples are selected to determine a set of values of parameters of the linear model.

52. A video decoding apparatus comprising a processor configured to implement the method according to one or more of clauses 1 to 51.

53. A video encoding apparatus comprising a processor configured to implement the method according to one or more of clauses 1 to 51.

54. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of clauses 1-51.

55. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing the method according to any of clauses 1 to 51.

A second set of clauses describes certain features and aspects of the disclosed technology listed in the previous section, including, for example, examples 20, 21, 22.

1.A method for video processing, comprising: for a transition between a current video block of video as a chroma block and a codec representation of the video, determining whether to derive maxima and/or minima of a luma component and a chroma component for deriving parameters of a cross-component linear model (CCLM) based on availability of left and upper neighboring blocks of the current video block; and performing a conversion based on the determination.

2. The method of clause 1, wherein the maximum and/or minimum values are not derived if left and upper neighboring blocks are unavailable.

3. The method of clause 1, wherein the determining is based on a number of available neighboring samples for the current video block, and wherein the available neighboring samples are used to derive parameters of a cross-component linear model.

4. The method of clause 3, wherein the maximum and/or minimum values are not derived in a case where numSampL = 0 and numSampT = 0, where numSampL and numSampT indicate the number of available neighboring samples from the left neighboring block and the number of available neighboring samples from the upper neighboring block, respectively, and wherein the available neighboring samples from the left neighboring block and the available neighboring samples from the upper neighboring block are used to derive the parameters of the cross-component linear model.

5. The method of clause 3, wherein the maximum and/or minimum values are not derived in a case where numSampL + numSampT = 0, where numSampL and numSampT indicate the number of available neighboring samples from the left neighboring block and the number of available neighboring samples from the upper neighboring block, respectively, and wherein the available neighboring samples from the left neighboring block and the available neighboring samples from the upper neighboring block are used to derive the parameters of the cross-component linear model.

6. The method of clause 1, wherein the determining is based on a number of selected samples used to derive the parameters of the cross-component linear model.

7. The method of clause 6, wherein the maximum and/or minimum values are not derived in a case where cntL = 0 and cntT = 0, where cntL and cntT indicate the number of selected samples from the left neighboring block and the number of selected samples from the upper neighboring block, respectively.

8. The method of clause 6, wherein the maximum and/or minimum values are not derived in a case where cntL + cntT = 0, where cntL and cntT indicate the number of selected samples from the left neighboring block and the number of selected samples from the upper neighboring block, respectively.

9. A method for video processing, comprising: determining, for a transition between a current video block of video as a chroma block and a codec representation of the video, a location at which a luma sample point is downsampled, wherein the downsampled luma sample point is used to determine parameters of a cross-component linear model (CCLM) based on the chroma sample point and the downsampled luma sample point, wherein the downsampled luma sample point is at a location corresponding to a location of the chroma sample point used to derive the parameters of the CCLM; and performing a conversion based on the determination.

10. The method of clause 9, wherein luma samples at locations outside the current video block are not downsampled and are not used to determine the parameters of the CCLM.

11. A method for video processing, comprising: for a transition between a current video block of video as a chroma block and a codec representation of the video, determining a method to derive parameters of a cross-component linear model (CCLM) using chroma samples and luma samples based on codec conditions associated with the current video block; and performing a conversion based on the determination.

12. The method of clause 11, wherein the codec condition corresponds to a color format of the current video block.

13. The method of clause 12, wherein the color format is 4:2:0 or 4:4:4.

14. The method of clause 11, wherein the codec condition corresponds to a color representation method of the current video block.

15. The method of clause 14, wherein the color representation method is RGB or YCbCr.

16. The method of clause 11, wherein the chroma sampling points are downsampled, and the determining is dependent on positions of the downsampled chroma sampling points.

17. The method of clause 11, wherein the method to derive the parameters comprises determining the parameters of the CCLM based on chroma samples and luma samples selected from a group of neighboring chroma samples based on a position rule.

18. The method of clause 11, wherein the method to derive the parameters comprises determining the parameters of the CCLM based on the maximum and minimum values of the chroma and luma samples.

19. The method of clause 11, wherein the method for deriving the parameters comprises determining parameters of a CCLM, wherein the parameters of the CCLM are fully determinable from two chroma samples and corresponding two luma samples.

20. The method of clause 11, wherein the method to derive the parameters comprises determining the parameters of the CCLM using a parameter table, wherein entries of the parameter table are retrieved from two chroma sample values and two luma sample values.

21. The method of any of clauses 1-20, wherein performing the transformation comprises generating a codec representation from the current block.

22. The method of any of clauses 1-20, wherein performing the transformation comprises generating the current block from a codec representation.

23. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of clauses 1-22.

24. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing the method according to any of clauses 1 to 22.

A third set describes certain features and aspects of the disclosed techniques listed in the previous section, including, for example, example 23.

1.A method for video processing, comprising: for a transition between a current video block of the video and a codec representation of the video, determining parameters of a codec tool using a linear model based on selected neighboring samples of the current video block and corresponding neighboring samples of a reference block; and performing a conversion based on the determination.

2. The method of clause 1, wherein the codec tool is a Local Illumination Compensation (LIC) tool that includes a linear model that uses illumination changes in the current video block during the conversion.

3. The method of clause 2, wherein the neighboring samples of the current video block and the neighboring samples of the reference block are selected based on a location rule.

4. The method of clause 2, wherein the parameters of the coding tool are determined based on maximum and minimum values of neighboring samples of the current video block and neighboring samples of the reference block.

5. The method of clause 2, wherein the parameters of the codec tool are determined using a parameter table, wherein entries of the parameter table are retrieved from two neighboring samples of the current video block and two neighboring samples of the reference block.

6. The method of clause 2, wherein neighboring samples of the current video block and neighboring samples of the reference block are downsampled to derive parameters of a coding tool.
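As one hedged illustration of clauses 2 through 4, LIC parameters can be fit from the extrema of the selected neighboring samples, analogously to the CCLM two-point derivation. The function below is a floating-point sketch under that assumption, not the normative integer procedure, and its name and signature are illustrative.

```python
def lic_params_minmax(cur_neigh, ref_neigh):
    """Fit scale a and offset b so that the reference block's neighbour
    range maps onto the current block's neighbour range (min/max fit)."""
    c_max, c_min = max(cur_neigh), min(cur_neigh)
    r_max, r_min = max(ref_neigh), min(ref_neigh)
    a = (c_max - c_min) / (r_max - r_min)
    b = c_min - a * r_min
    return a, b
```

The compensated prediction is then a * ref_sample + b for each sample of the reference block.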

7. The method of clause 2, wherein the neighboring samples used to derive the parameters of the LIC tool exclude samples at particular locations in the upper row and/or left column of the current video block.

8. The method of clause 2, wherein the upper left sample of the current video block has coordinates (x0, y0), and the sample having coordinates (x0, y0-1) is not used to derive parameters of the LIC tool.

9. The method of clause 2, wherein the upper left sample of the current video block has coordinates (x0, y0), and the sample having coordinates (x0-1, y0) is not used to derive parameters of the LIC tool.

10. The method of clause 7, wherein the particular locations depend on the availability of the upper row and/or the left column.

11. The method of clause 7, wherein the particular location depends on a block size of the current video block.

12. The method of clause 1, wherein the determining depends on the availability of the upper row and/or the left column.

13. The method of clause 2, wherein N neighboring samples of the current video block and N neighboring samples of the reference block are used to derive parameters of the LIC tool.

14. The method of clause 13, wherein N is 4.

15. The method of clause 13, wherein the N adjacent samples of the current video block comprise N/2 samples from an upper row of the current video block and N/2 samples from a left column of the current video block.

16. The method of clause 13, wherein N is equal to min(L, T), T is the total number of available neighboring samples for the current video block, and L is an integer.

17. The method of clause 13, wherein the N neighboring samples are selected based on the same rules applicable for selecting samples to derive parameters of the CCLM.

18. The method of clause 13, wherein the N neighboring samples are selected based on the same rules applicable to selecting samples to derive parameters for the first mode of the CCLM that use only the upper neighboring samples.

19. The method of clause 13, wherein the N neighboring samples are selected based on the same rules applicable to selecting samples to derive parameters for the second mode of the CCLM that use only left neighboring samples.

20. The method of clause 13, wherein the N neighboring samples of the current video block are selected based on availability of an upper or left column of the current video block.

21. A method for video processing, comprising: for a transition between a current video block of a video and a codec representation of the video, determining parameters of a Local Illumination Compensation (LIC) tool based on N neighboring samples of the current video block and N corresponding neighboring samples of a reference block, wherein the N neighboring samples of the current video block are selected based on locations of the N neighboring samples; and performing a transformation based on the determination, wherein the LIC tool uses a linear model of the illumination changes in the current video block during the transformation.

22. The method of clause 21, wherein the N neighboring samples of the current video block are selected based on the width and height of the current video block.

23. The method of clause 21, wherein the N neighboring samples of the current video block are selected based on availability of neighboring blocks of the current video block.

24. The method of clause 21, wherein the N neighboring samples of the current video block are selected using a first position offset value (F) and a step size value (S) that depend on the size of the current video block and the availability of neighboring blocks.
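
Clause 24 only states that F and S depend on the block size and on neighbor availability; the concrete formulas in the sketch below (step = available samples / N, offset = half a step) are assumptions chosen to spread the picks evenly:

```python
def pick_with_offset_and_step(row, N):
    """Clause 24 sketch: select N samples from a neighboring row or
    column using a first position offset F and a step size S.  The
    formulas for F and S here are illustrative assumptions."""
    S = max(1, len(row) // N)   # assumed step size
    F = S // 2                  # assumed first position offset
    return [row[F + i * S] for i in range(N)]
```

For example, with 8 available samples and N = 2, this picks positions 2 and 6; with 4 samples and N = 4, it picks all of them.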

25. The method of any of clauses 1-24, wherein the current video block is affine coded.

26. A method for video processing, comprising: for a transition between a current video block of a video that is a chroma block and a codec representation of the video, determining parameters of a cross-component linear model (CCLM) based on chroma samples and corresponding luma samples; and performing a conversion based on the determination, wherein some of the chroma samples are obtained by a padding operation, and the chroma samples and corresponding luma samples are grouped into two arrays G0 and G1, each array including two chroma samples and their corresponding luma samples.

27. The method of clause 26, wherein, when the sum of cntT and cntL is equal to 2, the following operations are performed in order: i) pSelComp[3] is set equal to pSelComp[0], ii) pSelComp[2] is set equal to pSelComp[1], iii) pSelComp[0] is set equal to pSelComp[1], and iv) pSelComp[1] is set equal to pSelComp[3], wherein cntT and cntL indicate the number of samples selected from the upper and left neighboring blocks, respectively, and wherein pSelComp[0] through pSelComp[3] indicate pixel values of color components of the selected corresponding samples.
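
The four ordered copies of clause 27 can be sketched directly; note that step i must run first so that pSelComp[0] is preserved in pSelComp[3] before step iii overwrites it:

```python
def pad_selected_samples(pSelComp):
    """Clause 27: when cntT + cntL == 2, only pSelComp[0] and
    pSelComp[1] hold valid samples; pad the 4-entry array by the
    four ordered copies i)-iv), in place."""
    pSelComp[3] = pSelComp[0]   # i
    pSelComp[2] = pSelComp[1]   # ii
    pSelComp[0] = pSelComp[1]   # iii
    pSelComp[1] = pSelComp[3]   # iv
    return pSelComp
```

The net effect is that the two valid samples [a, b] are duplicated in alternating order, [b, a, b, a], so the later four-sample grouping into G0 and G1 can proceed unchanged.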

28. The method of clause 26, wherein determining parameters comprises initializing values of G0[0], G0[1], G1[0], and G1[1].

29. The method of clause 28, wherein G0[0] = 0, G0[1] = 2, G1[0] = 1, and G1[1] = 3.

30. The method of clause 28, wherein determining the parameters further comprises, after initializing the values, exchanging the chroma samples of G0[0] and their corresponding luma samples with the chroma samples of G0[1] and their corresponding luma samples after comparing the two luma sample values of G0[0] and G0[1].

31. The method of clause 30, wherein in the event that the luma sample value of G0[0] is greater than the luma sample value of G0[1], the chroma samples of G0[0] and their corresponding luma samples are exchanged with the chroma samples of G0[1] and their corresponding luma samples.

32. The method of clause 28, wherein determining the parameters further comprises, after initializing the values, exchanging the chroma samples of G1[0] and their corresponding luma samples with the chroma samples of G1[1] and their corresponding luma samples after comparing the two luma sample values of G1[0] and G1[1].

33. The method of clause 32, wherein in the event that the luma sample value of G1[0] is greater than the luma sample value of G1[1], the chroma samples of G1[0] and their corresponding luma samples are exchanged with the chroma samples of G1[1] and their corresponding luma samples.

34. The method of clause 28, wherein determining the parameters further comprises, after initializing the values, exchanging the chroma samples of G0[0] or G0[1] and their corresponding luma samples with the chroma samples of G1[0] or G1[1] and their corresponding luma samples after comparing the two luma sample values of G0[0] and G1[1].

35. The method of clause 34, wherein chroma samples of G0[0] or G0[1] and their corresponding luma samples are swapped with chroma samples of G1[0] or G1[1] and their corresponding luma samples in the event that the luma sample value of G0[0] is greater than the luma sample value of G1[1].

36. The method of clause 28, wherein determining the parameters further comprises, after initializing the values, exchanging the chroma samples of G0[1] and their corresponding luma samples with the chroma samples of G1[0] and their corresponding luma samples after comparing the two luma sample values of G0[1] and G1[0].

37. The method of clause 36, wherein in the event that the luma sample values of G0[1] are greater than the luma sample values of G1[0], the chroma samples of G0[1] and their corresponding luma samples are exchanged with the chroma samples of G1[0] and their corresponding luma samples.

38. The method of clause 28, wherein determining the parameters further comprises, after initializing the values, performing the following swapping operations in order after comparing the luma sample values of G0[0], G0[1], G1[0], and G1[1]: i) exchanging the chroma samples of G0[0] and their corresponding luma samples with the chroma samples of G0[1] and their corresponding luma samples, ii) exchanging the chroma samples of G1[0] and their corresponding luma samples with the chroma samples of G1[1] and their corresponding luma samples, iii) exchanging the chroma samples of G0[0] or G0[1] and their corresponding luma samples with the chroma samples of G1[0] or G1[1] and their corresponding luma samples, and iv) exchanging the chroma samples of G0[1] and their corresponding luma samples with the chroma samples of G1[0] and their corresponding luma samples.
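
Taken together, the initialization of clause 29 and the conditional swaps of clauses 30 to 38 partition the four sample pairs so that G0 holds the two pairs with the smaller luma values and G1 the two with the larger ones. The sketch below models each exchange of "a chroma sample and its corresponding luma sample" as a swap of sample indices, and interprets swap iii as exchanging the two groups wholesale (an assumed reading of the "G0[0] or G0[1]" wording):

```python
def partition_min_max(luma):
    """Return index groups G0 (two smaller luma values) and G1 (two
    larger luma values) via the conditional swaps of clauses 29-38."""
    G0, G1 = [0, 2], [1, 3]            # clause 29: initialization
    if luma[G0[0]] > luma[G0[1]]:      # clauses 30-31: swap i
        G0[0], G0[1] = G0[1], G0[0]
    if luma[G1[0]] > luma[G1[1]]:      # clauses 32-33: swap ii
        G1[0], G1[1] = G1[1], G1[0]
    if luma[G0[0]] > luma[G1[1]]:      # clauses 34-35: swap iii
        G0, G1 = G1, G0
    if luma[G0[1]] > luma[G1[0]]:      # clauses 36-37: swap iv
        G0[1], G1[0] = G1[0], G0[1]
    return G0, G1
```

Averaging the luma and chroma values within each group then yields the two points from which the slope and offset of the cross-component linear model are derived.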

39. The method of any of clauses 1-38, wherein performing the transformation comprises generating a codec representation from the current block.

40. The method of any of clauses 1-38, wherein performing the transformation comprises generating the current block from a codec representation.

41. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement the method of any of clauses 1-40.

42. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing the method according to any of clauses 1 to 40.

From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited, except as by the appended claims.

Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing unit" or "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

The specification and drawings are, accordingly, to be regarded in an illustrative sense only and are intended to be exemplary. As used herein, the use of "or" is intended to include "and/or" unless the context clearly indicates otherwise.

Although this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.

Only a few embodiments and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
