Adaptive loop filtering for video coding and decoding

Document No.: 1943007 Publication date: 2021-12-07

Note: This technology, "Adaptive loop filtering for video coding and decoding", was created by Hongbin Liu, Li Zhang, Kai Zhang, Hsiao Chiang Chuang, and Zhipin Deng on 2020-04-16. Abstract: Apparatus, systems, and methods for adaptive loop filtering are described. In an exemplary aspect, a method for video processing includes: performing a filtering process on a current video block of the video, wherein the filtering process uses filter coefficients and includes two or more operations with at least one intermediate result; applying a clipping operation to the at least one intermediate result; and performing a conversion between the current video block and a bitstream representation of the video based on the at least one intermediate result, wherein the at least one intermediate result is based on a weighted sum of the filter coefficients and differences between a current sample of the current video block and neighboring samples of the current sample.

1. A method for video processing, comprising:

performing a filtering process on a current video block of a video, wherein the filtering process uses filter coefficients and comprises two or more operations with at least one intermediate result;

applying a clipping operation to the at least one intermediate result; and

performing a conversion between the current video block and a bitstream representation of the video based on the at least one intermediate result,

wherein the at least one intermediate result is based on a weighted sum of the filter coefficients and differences between a current sample of the current video block and neighboring samples of the current sample.

2. The method of claim 1, further comprising:

classifying, for the current sample, neighboring samples of the current sample into a plurality of groups, wherein the clipping operation is applied to intermediate results in each of the plurality of groups using different parameters.

3. The method of claim 2, wherein the at least one intermediate result comprises a weighted average of differences between the current sample and neighboring samples in each of the plurality of groups.

4. The method of claim 1, wherein a plurality of neighboring samples of the current video block share filter coefficients, and wherein the clipping operation is applied once to each of the plurality of neighboring samples.

5. The method of claim 4, wherein at least two of the plurality of neighboring samples are symmetrically located with respect to a sample of the current video block.

6. The method of claim 4 or 5, wherein the filter shape associated with the filtering process is a symmetric pattern.

7. The method of any of claims 4 to 6, wherein one or more parameters of the clipping operation are signaled in the bitstream representation.

8. The method of claim 1, wherein a sample of the current video block has N neighboring samples, wherein the clipping operation is applied once to M1 of the N neighboring samples, and wherein M1 and N are positive integers with M1 ≤ N.

9. The method of claim 1, further comprising:

classifying, for a sample of the current video block, N neighboring samples of the sample into M2 groups, wherein the clipping operation is applied once to each of the M2 groups, and wherein M2 and N are positive integers.

10. The method of claim 1, wherein the clipping operation is applied to a luma component associated with the current video block.

11. The method of claim 1, wherein the clipping operation is applied to a Cb component or a Cr component associated with the current video block.

12. The method of any of claims 1-11, wherein the clipping operation is defined as K(min, max, input), where input is an input to the clipping operation, min is a nominal minimum value of an output of the clipping operation, and max is a nominal maximum value of the output of the clipping operation.

13. The method of claim 12, wherein an actual maximum value of the output of the clipping operation is less than the nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is greater than the nominal minimum value.

14. The method of claim 12, wherein an actual maximum value of the output of the clipping operation is equal to the nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is greater than the nominal minimum value.

15. The method of claim 12, wherein an actual maximum value of the output of the clipping operation is less than the nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is equal to the nominal minimum value.

16. The method of claim 12, wherein an actual maximum value of the output of the clipping operation is equal to the nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is equal to the nominal minimum value.

17. The method of claim 1, wherein the filtering process comprises an Adaptive Loop Filtering (ALF) process configured with multiple ALF filter coefficient sets.

18. The method of claim 17, wherein at least one parameter for the clipping operation is predefined for one or more of the plurality of sets of ALF filter coefficients.

19. The method of claim 17, wherein at least one parameter for the clipping operation is signaled in the bitstream representation for a slice group, slice, or picture that comprises the current video block.

20. The method of claim 19, wherein the at least one parameter is signaled only for one or more color components associated with the current video block.

21. The method of claim 17, wherein at least one ALF filter coefficient set of the plurality of ALF filter coefficient sets and one or more parameters for the clipping operation are stored in a same memory location, and wherein the at least one ALF filter coefficient set of the plurality of ALF filter coefficient sets or the one or more parameters are inherited by a Coding Tree Unit (CTU), a Coding Unit (CU), a slice group, a slice, or a picture that comprises the current video block.

22. The method of claim 21, wherein the clipping operation is configured to use one or more parameters corresponding to a temporal ALF coefficient set of the plurality of ALF filter coefficient sets upon determining that the temporal ALF coefficient set is used in a filtering process for a CTU, CU, slice group, slice, or picture that comprises the current video block.

23. The method of claim 22, wherein the one or more parameters corresponding to the set of temporal ALF coefficients are for only one or more color components associated with the current video block.

24. The method of claim 21, wherein the one or more parameters corresponding to a temporal ALF coefficient set of the plurality of ALF filter coefficient sets are signaled in the bitstream representation when determining that the temporal ALF coefficient set is used in a filtering process for a CTU, CU, slice group, slice, or picture that comprises the current video block.

25. The method of claim 24, wherein the one or more parameters corresponding to the set of temporal ALF coefficients are signaled only for one or more color components associated with the current video block.

26. The method of claim 21, wherein a first set of one or more parameters of a first color component associated with the current video block is signaled, and wherein a second set of one or more parameters of a second color component associated with the current video block is inherited.

27. The method of any of claims 1-26, wherein the converting generates the current video block from the bitstream representation.

28. The method of any of claims 1-26, wherein the converting generates the bitstream representation from the current video block.

29. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement a method according to one or more of claims 1-28.

30. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing a method according to one or more of claims 1 to 28.

Technical Field

This patent document relates to video encoding and decoding techniques, devices and systems.

Background

Despite advances in video compression technology, digital video still accounts for the largest share of bandwidth use on the internet and other digital communication networks. As the number of networked user devices capable of receiving and displaying video increases, the demand for bandwidth for digital video usage is expected to continue to grow.

Disclosure of Invention

Devices, systems, and methods related to digital video coding, and more particularly to adaptive loop filtering for video coding, are described. The described methods may be applied to existing video coding standards (e.g., High Efficiency Video Coding (HEVC)) and future video coding standards (e.g., Versatile Video Coding (VVC)) or codecs.

Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video coding standards have been based on a hybrid video coding structure in which temporal prediction plus transform coding is utilized. To explore future video coding technologies beyond HEVC, VCEG and MPEG jointly founded the Joint Video Exploration Team (JVET) in 2015. Since then, JVET has adopted many new methods and put them into a reference software named the Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard, targeting a 50% bitrate reduction compared to HEVC.

In one representative aspect, the disclosed technology can be used to provide a method for video processing. The method comprises the following steps: performing a filtering process on a current video block of the video, wherein the filtering process uses filter coefficients and includes two or more operations with at least one intermediate result; applying a clipping operation to the at least one intermediate result; and performing a conversion between the current video block and a bitstream representation of the video based on at least one intermediate result, wherein the at least one intermediate result is based on a weighted sum of filter coefficients and differences between a current sample of the current video block and neighboring samples of the current sample.

In another representative aspect, the disclosed technology can be used to provide a method for video processing. The method comprises the following steps: encoding a current video block of a video into a bitstream representation of the video, wherein the current video block is coded using an Adaptive Loop Filter (ALF); and selectively including, in the bitstream representation, an indication of a set of temporal adaptive filters within one or more sets of temporal adaptive filters based on availability or use of the one or more sets of temporal adaptive filters.

In yet another representative aspect, the disclosed technology can be used to provide a method for video processing. The method comprises the following steps: determining availability or use of one or more sets of temporal adaptive filters based on an indication of the sets of temporal adaptive filters in a bitstream representation of the video, wherein the one or more sets of temporal adaptive filters comprise sets of temporal adaptive filters applicable to a current video block of the video coded with an Adaptive Loop Filter (ALF); and generating a decoded current video block from the bitstream representation by selectively applying a set of temporal adaptive filters based on the determination.

In yet another representative aspect, the disclosed technology can be used to provide a method for video processing. The method comprises the following steps: determining, based on a set of available temporal Adaptive Loop Filter (ALF) coefficients that has been encoded or decoded prior to the determining, a plurality of sets of temporal ALF coefficients for a current video block coded with an adaptive loop filter, wherein the plurality of sets of ALF coefficients are for a slice group, slice, picture, Coding Tree Block (CTB), or video unit that includes the current video block; and performing a conversion between the current video block and a bitstream representation of the current video block based on the plurality of sets of temporal ALF coefficients.

In yet another representative aspect, the disclosed technology can be used to provide a method for video processing. The method comprises the following steps: determining, for a conversion between a current video block of a video and a bitstream representation of the video, that an indication of Adaptive Loop Filtering (ALF) in a header of a video region of the video is equal to an indication of ALF in an Adaptive Parameter Set (APS) Network Abstraction Layer (NAL) unit associated with the bitstream representation; and performing the conversion.

In yet another representative aspect, the disclosed technology can be used to provide a method for video processing. The method comprises the following steps: selectively enabling a non-linear Adaptive Loop Filtering (ALF) operation for a conversion between a current video block of the video and a bitstream representation of the video based on a type of adaptive loop filter used by a video region of the video; and performing the conversion after the selectively enabling.

In yet another representative aspect, the above-described methods are embodied in the form of processor executable code and stored in a computer readable program medium.

In yet another representative aspect, an apparatus configured or operable to perform the above-described method is disclosed. The apparatus may include a processor programmed to implement the method.

In yet another representative aspect, a video decoder device may implement a method as described herein.

The above and other aspects and features of the disclosed technology are described in more detail in the accompanying drawings, the description and the claims.

Drawings

Fig. 1 shows an example of an encoder block diagram for video codec.

Figs. 2A, 2B, and 2C show examples of geometry transformation-based adaptive loop filter (GALF) shapes.

Figure 3 shows an example of a flow chart of GALF encoder decisions.

Fig. 4A-4D illustrate example sub-sampled laplacian calculations for Adaptive Loop Filter (ALF) classification.

Fig. 5 shows an example of a luminance filter shape.

Fig. 6 shows an example of region division of a Wide Video Graphics Array (WVGA) sequence.

Fig. 7 shows an exemplary flow diagram of a decoding procedure with reshaping.

FIG. 8 shows an example of an optical flow trace used by the bi-directional optical flow (BIO) algorithm.

FIGS. 9A and 9B show example snapshots using a bi-directional optical flow (BIO) algorithm without block expansion.

Fig. 10 shows an example of prediction refinement with optical flow (PROF).

Figs. 11A-11F illustrate flow diagrams of example methods for adaptive loop filtering in accordance with the disclosed technology.

Fig. 12 is a block diagram of an example of a hardware platform for implementing the visual media decoding or visual media encoding techniques described in this document.

FIG. 13 is a block diagram of an example video processing system in which the disclosed techniques may be implemented.

Detailed Description

Due to the growing demand for higher-resolution video, video coding methods and techniques are ubiquitous in modern technology. Video codecs typically include electronic circuits or software that compress or decompress digital video, and they are continually being improved to provide higher coding efficiency. A video codec converts uncompressed video into a compressed format, and vice versa. There are complex relationships between video quality, the amount of data used to represent the video (determined by the bit rate), the complexity of the encoding and decoding algorithms, sensitivity to data loss and errors, ease of editing, random access, and end-to-end delay (latency). The compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video coding standards.

In some embodiments, reference software known as the Joint Exploration Model (JEM) is used to explore future video coding techniques. In JEM, sub-block based prediction is adopted in several coding tools, such as affine prediction, alternative temporal motion vector prediction (ATMVP), spatial-temporal motion vector prediction (STMVP), bi-directional optical flow (BIO), frame-rate up-conversion (FRUC), locally adaptive motion vector resolution (LAMVR), overlapped block motion compensation (OBMC), local illumination compensation (LIC), and decoder-side motion vector refinement (DMVR).

Embodiments of the disclosed techniques may be applied to existing video coding standards (e.g., HEVC, H.265) and future standards to improve runtime performance. Section headings are used in this document to improve readability of the description, and the discussion or embodiments (and/or implementations) are not limited in any way to the respective sections only.

1 example of color space and chroma sub-sampling

A color space, also called a color model (or color system), is an abstract mathematical model that simply describes a range of colors as tuples of numbers, typically as 3 or 4 values or color components (e.g., RGB). Fundamentally, a color space is a specification of a coordinate system and a subspace.

For video compression, the most common color spaces are YCbCr and RGB.

YCbCr, Y'CbCr, or Y Pb/Cb Pr/Cr, also written as YCBCR or Y'CBCR, is a family of color spaces used as part of the color image pipeline in video and digital photography systems. Y' is the luma component, and CB and CR are the blue-difference and red-difference chroma components. Y' (with prime) is distinguished from Y, which is luminance; the prime indicates that light intensity is nonlinearly encoded based on gamma-corrected RGB primaries.

Chroma subsampling is the practice of encoding images by implementing a lower resolution for chroma information than for luma information, taking advantage of the lower acuity of the human visual system to chroma than to luma.

1.1 4:4:4 color format

Each of the three Y'CbCr components has the same sampling rate, and therefore there is no chroma subsampling. This scheme is sometimes used in high-end film scanners and in cinematic post-production.

1.2 4:2:2 color format

The two chroma components are sampled at half the luma sampling rate, i.e., the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little visual difference.

1.3 4:2:0 color format

In 4:2:0, the horizontal sampling is doubled compared to 4:1:1, but since the Cb and Cr channels are only sampled on each alternate line in this scheme, the vertical resolution is halved. The data rate is therefore the same. Cb and Cr are each subsampled by a factor of 2 both horizontally and vertically. There are three variants of the 4:2:0 scheme, with different horizontal and vertical siting.

In MPEG-2, Cb and Cr are co-sited horizontally. Cb and Cr are sited between pixels in the vertical direction (sited interstitially).

In JPEG/JFIF, H.261, and MPEG-1, Cb and Cr are sited interstitially, halfway between alternate luma samples.

In 4:2:0 DV, Cb and Cr are co-sited in the horizontal direction. In the vertical direction, they are co-sited on alternating lines.

2 Example coding flow of a typical video codec

Fig. 1 shows an example of an encoder block diagram for VVC, which contains three in-loop filter blocks: deblocking filter (DF), sample adaptive offset (SAO), and ALF. Unlike DF, which uses predefined filters, SAO and ALF utilize the original samples of the current picture, with coded side information signaling the offsets and filter coefficients; they reduce the mean square error between the original samples and the reconstructed samples by adding an offset and by applying a finite impulse response (FIR) filter, respectively. ALF is located at the last processing stage of each picture and can be regarded as a tool trying to catch and fix artifacts created by the previous stages.

Example of adaptive Loop Filter based on geometric transformation in 3 JEM

In JEM, a geometry transformation-based adaptive loop filter (GALF) with block-based filter adaptation [3] is applied. For the luma component, one of 25 filters is selected for each 2x2 block based on the direction and activity of local gradients.

3.1 examples of Filter shapes

In JEM, up to three diamond filter shapes (5x5, 7x7, and 9x9 diamonds, as shown in Figs. 2A, 2B, and 2C, respectively) can be selected for the luma component. An index is signaled at the picture level to indicate the filter shape used for the luma component. For the chroma components in a picture, the 5x5 diamond shape is always used.

3.1.1 Block Classification

Each 2x2 block is classified into one of 25 classes. The classification index C is derived from its directionality D and a quantized value of activity Â, as follows:

C = 5D + Â

To calculate D and Â, gradients in the horizontal, vertical, and two diagonal directions are first calculated using 1-D Laplacians:

g_v = Σ_{k=i−2..i+3} Σ_{l=j−2..j+3} V_{k,l},  V_{k,l} = |2R(k,l) − R(k,l−1) − R(k,l+1)|
g_h = Σ_{k=i−2..i+3} Σ_{l=j−2..j+3} H_{k,l},  H_{k,l} = |2R(k,l) − R(k−1,l) − R(k+1,l)|
g_d1 = Σ_{k=i−2..i+3} Σ_{l=j−2..j+3} D1_{k,l},  D1_{k,l} = |2R(k,l) − R(k−1,l−1) − R(k+1,l+1)|
g_d2 = Σ_{k=i−2..i+3} Σ_{l=j−2..j+3} D2_{k,l},  D2_{k,l} = |2R(k,l) − R(k−1,l+1) − R(k+1,l−1)|

The indices i and j represent the coordinates of the top-left sample in the 2x2 block, and R(i,j) represents the reconstructed sample at coordinate (i,j).

Then the maximum and minimum values of D for the gradients in the horizontal and vertical directions are set as:

and the maximum and minimum values of the gradients in the two diagonal directions are set as:

g_d_max = max(g_d1, g_d2),  g_d_min = min(g_d1, g_d2)

To derive the value of the directionality D, these values are compared against each other and against two thresholds t1 and t2:

Step 1. If both g_hv_max ≤ t1 · g_hv_min and g_d_max ≤ t1 · g_d_min are true, D is set to 0.

Step 2, ifContinuing from step 3; otherwise, continue from step 4.

Step 3, ifD is set to 2; otherwise, D is set to 1.

Step 4, ifD is set to 4; otherwise, D is set to 3.

The activity value a is calculated as:

A is further quantized to the range of 0 to 4, inclusive, and the quantized value is denoted Â.
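The classification above is compact enough to sketch in code. The following Python sketch illustrates the gradient, directionality, and activity computation for one 2x2 block; the thresholds t1 = 2 and t2 = 9 and the proportional activity quantizer are assumptions for illustration (the actual quantization of A is a table lookup), and the function name is hypothetical.

```python
def classify_2x2_block(R, i, j, t1=2, t2=9):
    """Classify the 2x2 block whose top-left sample is R[i][j].

    R is a reconstructed-sample array with at least 3 samples of padding
    on every side. Returns the class index C = 5*D + A_hat.
    """
    gv = gh = gd1 = gd2 = act = 0
    for k in range(i - 2, i + 4):          # 6x6 window around the block
        for l in range(j - 2, j + 4):
            c = 2 * R[k][l]
            V = abs(c - R[k][l - 1] - R[k][l + 1])
            H = abs(c - R[k - 1][l] - R[k + 1][l])
            gv += V
            gh += H
            gd1 += abs(c - R[k - 1][l - 1] - R[k + 1][l + 1])
            gd2 += abs(c - R[k - 1][l + 1] - R[k + 1][l - 1])
            act += V + H

    hv_max, hv_min = max(gh, gv), min(gh, gv)
    d_max, d_min = max(gd1, gd2), min(gd1, gd2)

    if hv_max <= t1 * hv_min and d_max <= t1 * d_min:   # Step 1
        D = 0
    elif hv_max * d_min > d_max * hv_min:   # Step 2, cross-multiplied to
        D = 2 if hv_max > t2 * hv_min else 1  # avoid dividing by zero (Step 3)
    else:
        D = 4 if d_max > t2 * d_min else 3    # Step 4

    A_hat = min(4, act >> 13)  # placeholder quantizer to 0..4
    return 5 * D + A_hat
```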

For the two chroma components in a picture, no classification method is applied, i.e., a single set of ALF coefficients is applied to each chroma component.

3.1.2 geometric transformation of Filter coefficients

Before each 2x2 block is filtered, a geometric transformation such as rotation or diagonal and vertical flipping is applied to the filter coefficients f(k,l), depending on the gradient values calculated for that block. This is equivalent to applying these transformations to the samples in the filter support region. The idea is to make the different blocks to which ALF is applied more similar by aligning their directionality.

Three geometric transformations were introduced, including diagonal, vertical flip, and rotation:

Diagonal: f_D(k,l) = f(l,k),

Vertical flip: f_V(k,l) = f(k, K−l−1),   (9)

Rotation: f_R(k,l) = f(K−l−1, k).

Here, K is the size of the filter, and 0 ≤ k, l ≤ K−1 are coefficient coordinates, such that position (0,0) is at the upper-left corner and position (K−1, K−1) is at the lower-right corner. A transform is applied to the filter coefficients f(k,l) based on the gradient values calculated for the block. Table 1 summarizes the relationship between the transformations and the four gradients of the four directions.

Table 1: Mapping of the gradients computed for a block to the transformations

Gradient values                 Transformation
g_d2 < g_d1 and g_h < g_v       No transformation
g_d2 < g_d1 and g_v < g_h       Diagonal
g_d1 < g_d2 and g_h < g_v       Vertical flip
g_d1 < g_d2 and g_v < g_h       Rotation
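As a worked illustration, the three transforms of equation (9) map directly onto array transposes and flips. A minimal sketch with f as a KxK NumPy array (diamond taps laid out on a square grid) and the Table 1 conditions spelled out; the function name is illustrative only:

```python
import numpy as np

def transform_coefficients(f, gh, gv, gd1, gd2):
    """Apply the Table 1 transform to a KxK coefficient array f."""
    if gd2 < gd1 and gh < gv:
        return f                  # no transformation
    if gd2 < gd1 and gv < gh:
        return f.T                # diagonal: fD(k,l) = f(l,k)
    if gd1 < gd2 and gh < gv:
        return f[:, ::-1]         # vertical flip: fV(k,l) = f(k, K-l-1)
    return f.T[:, ::-1]           # rotation: fR(k,l) = f(K-l-1, k)
```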

3.1.3 Signaling of Filter parameters

In JEM, the GALF filter parameters are signaled for the first CTU, i.e., after the slice header and before the SAO parameters of the first CTU. Up to 25 sets of luma filter coefficients may be signaled. To reduce the bit overhead, filter coefficients of different classes may be merged. Also, the GALF coefficients of reference pictures are stored and allowed to be reused as GALF coefficients of the current picture. The current picture may choose to use the GALF coefficients stored for a reference picture and bypass the GALF coefficient signaling. In this case, only the index of one of the reference pictures is signaled, and the stored GALF coefficients of the indicated reference picture are inherited for the current picture.

To support GALF temporal prediction, a candidate list of GALF filter sets is maintained. At the beginning of decoding a new sequence, the candidate list is empty. After decoding one picture, the corresponding filter set may be added to the candidate list. Once the size of the candidate list reaches the maximum allowed value (i.e., 6 in the current JEM), a new filter set overwrites the oldest set in decoding order, i.e., a first-in-first-out (FIFO) rule is applied to update the candidate list. To avoid duplication, a set can only be added to the list if the corresponding picture does not use GALF temporal prediction. To support temporal scalability, there are multiple candidate lists of filter sets, and each candidate list is associated with a temporal layer. More specifically, each array indexed by a temporal layer index (TempIdx) may contain filter sets of previously decoded pictures whose TempIdx is lower than or equal to that of the array. For example, the k-th array is assigned to be associated with TempIdx equal to k, and it contains only filter sets from pictures with TempIdx smaller than or equal to k. After coding a certain picture, the filter sets associated with the picture are used to update those arrays associated with equal or higher TempIdx.
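A compact sketch of this update rule follows; the class and method names are illustrative, not taken from the JEM source, and the maximum list size of 6 is the JEM value mentioned above.

```python
from collections import deque

MAX_SETS = 6  # maximum candidate list size in the JEM design above

class GalfTemporalBuffer:
    """Per-temporal-layer FIFO candidate lists of GALF filter sets."""
    def __init__(self, num_layers):
        # deque(maxlen=...) drops the oldest entry on overflow (FIFO).
        self.lists = [deque(maxlen=MAX_SETS) for _ in range(num_layers)]

    def add_decoded_picture(self, filter_set, temp_idx, used_temporal_pred):
        # Only pictures that did not use temporal prediction contribute,
        # and they update the lists of equal and higher TempIdx.
        if used_temporal_pred:
            return
        for k in range(temp_idx, len(self.lists)):
            self.lists[k].append(filter_set)
```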

Temporal prediction of GALF coefficients is used for inter-coded frames to minimize signaling overhead. For intra frames, temporal prediction is not available, and a set of 16 fixed filters is assigned to each class. To indicate the usage of a fixed filter, a flag for each class is signaled and, if required, the index of the selected fixed filter. Even when a fixed filter is selected for a given class, the coefficients of the adaptive filter f(k,l) can still be sent for that class, in which case the coefficients of the filter to be applied to the reconstructed image are the sum of both sets of coefficients.

The filtering process of the luma component can be controlled at the CU level. A flag is signaled to indicate whether GALF is applied to the luma component of a CU. For the chroma components, whether GALF is applied or not is indicated at the picture level only.

3.1.4 Filter Process

At the decoder side, when GALF is enabled for a block, each sample R(i,j) within the block is filtered, resulting in a sample value R′(i,j) as shown below, where L denotes the filter length and f(k,l) denotes the decoded filter coefficients:

R′(i,j) = Σ_{k=−L/2..L/2} Σ_{l=−L/2..L/2} f(k,l) · R(i+k, j+l)
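A direct, unoptimized rendering of this filtering equation for one sample is sketched below. The 8-bit fixed-point normalization (rounding offset 64, right shift by 7) is an assumption borrowed from the integer formulation discussed in section 6, not something stated here.

```python
def filter_sample(R, i, j, f, L):
    """Filter one sample; f is an L x L coefficient array, with taps
    outside the diamond shape set to zero."""
    half = L // 2
    acc = 0
    for k in range(-half, half + 1):
        for l in range(-half, half + 1):
            acc += f[k + half][l + half] * R[i + k][j + l]
    # Assumed fixed-point normalization: weights sum to 128 (see section 6).
    return (acc + 64) >> 7
```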

3.1.5 encoder-side Filter parameter determination procedure

The overall encoder decision process for GALF is shown in Fig. 3. For the luma samples of each CU, the encoder decides whether GALF is applied, and the appropriate signaling flag is included in the slice header. For chroma samples, the decision to apply the filter is made at the picture level rather than the CU level. Furthermore, chroma GALF for a picture is checked only when luma GALF is enabled for that picture.

Example of adaptive loop Filter based on geometric transformations in 4 VVC

The current design of GALF in VVC has the following major changes compared to that in JEM:

1) The adaptive filter shape is removed. Only the 7x7 filter shape is allowed for the luma component, and the 5x5 filter shape for the chroma components.

2) Both the temporal prediction of the ALF parameters and the prediction from the fixed filter are removed.

3) For each CTU, a one-bit flag is signaled to indicate whether ALF is enabled or disabled.

4) The calculation of the class index is performed at the 4x4 level instead of the 2x2 level. In addition, as proposed in JVET-L0147, a sub-sampled Laplacian calculation method is utilized for ALF classification. More specifically, there is no need to calculate the horizontal/vertical/45-degree/135-degree gradients for every sample within a block. Instead, 1:2 sub-sampling is utilized.

Example of region-based adaptive Loop Filter in 5 AVS2

ALF is the last stage of in-loop filtering. There are two stages in this process. The first stage is filter coefficient derivation. To train the filter coefficients, the encoder classifies the reconstructed pixels of the luma component into 16 regions and trains one set of filter coefficients for each class using the Wiener-Hopf equations to minimize the mean square error between the original frame and the reconstructed frame. To reduce the redundancy among these 16 sets of filter coefficients, the encoder adaptively merges them based on rate-distortion performance. At most, 16 different filter sets can be assigned to the luma component, and only one filter set can be assigned to the chroma components. The second stage is the filtering decision, which includes both the frame level and the LCU level. First, the encoder decides whether frame-level adaptive loop filtering is performed. If frame-level ALF is on, the encoder further decides whether LCU-level ALF is performed.

5.1 Filter shape

The filter shape used in AVS-2 is a 7x7 cross with a 3x3 square superimposed, for both luma and chroma components, as shown in Fig. 5. Each square in Fig. 5 corresponds to a sample. Therefore, a total of 17 samples are used to derive a filtered value for the sample at position C8. Considering the overhead of sending the coefficients, a point-symmetric filter is utilized with only nine coefficients left, {C0, C1, ..., C8}, which reduces the number of filter coefficients and the number of multiplications in filtering by half. The point-symmetric filter also halves the computation for one filtered sample, e.g., only 9 multiplications and 14 additions per filtered sample.

5.2 region-based adaptive merging

To adapt to different coding errors, AVS-2 adopts region-based multiple adaptive loop filters for the luma component. The luma component is divided into 16 roughly equal-sized basic regions, each aligned with Largest Coding Unit (LCU) boundaries, as shown in Fig. 6, and one Wiener filter is derived for each region. The more filters are used, the more distortion is reduced, but the bits used to encode the coefficients increase with the number of filters. To achieve the best rate-distortion performance, the regions can be merged into fewer, larger regions that share the same filter coefficients. To simplify the merging process, each region is assigned an index according to a modified Hilbert order based on image prior correlations. Two regions with consecutive indices can be merged based on rate-distortion cost.

The merge mapping information between regions is signaled to the decoder. In AVS-2, the number of basic regions is used to represent the merge results, and the filter coefficients are coded sequentially according to the region order. For example, when the basic regions {0,1}, {2,3,4}, and {5,6,7,8,9} are each merged into one region, only three integers are coded to represent the merge map, i.e., 2, 3, 5 (the number of basic regions in each merged region).

5.3 Signaling of side information

Multiple switch flags are also used. The sequence switch flag adaptive_loop_filter_enable is used to control whether the adaptive loop filter is applied to the whole sequence. The picture switch flag picture_alf_enable[i] controls whether ALF is applied to the corresponding i-th picture color component. The corresponding LCU-level flags and filter coefficients for that color component are sent only when picture_alf_enable[i] is enabled. The LCU-level flag lcu_alf_enable[k] controls whether ALF is enabled for the corresponding k-th LCU and is interleaved into the slice data. The decisions for the switch flags at the different levels are all based on rate-distortion cost. The high flexibility further enables ALF to improve coding efficiency much more significantly.

In some embodiments, there may be up to 16 sets of filter coefficients for the luminance component.

In some embodiments, one set of filter coefficients may be sent for each chroma component (Cb and Cr).

6 GALF in VTM-4

In VTM4.0, the filtering process of the adaptive loop filter is performed as follows:

O(x,y) = Σ_{(i,j)} w(i,j) · I(x+i, y+j)   (11)

where samples I(x+i, y+j) are input samples, O(x,y) is the filtered output sample (i.e., the filter result), and w(i,j) denotes the filter coefficients. In practice, in VTM4.0, it is implemented using integer arithmetic for fixed-point precision computation:

O(x,y) = ( Σ_{i=−L/2..L/2} Σ_{j=−L/2..L/2} w(i,j) · I(x+i, y+j) + 64 ) >> 7   (12)

where L denotes the filter length and w(i,j) are the filter coefficients in fixed-point precision.

7 Nonlinear Adaptive Loop Filter (ALF)

7.1 Filter reformulation

Equation (11) can be reformulated into the following expression without affecting the codec efficiency:

O(x,y) = I(x,y) + Σ_{(i,j)≠(0,0)} w(i,j) · (I(x+i, y+j) − I(x,y))   (13)

Here, w(i,j) are the same filter coefficients as in equation (11), except for w(0,0), which is equal to 1 in equation (13) whereas it is equal to 1 − Σ_{(i,j)≠(0,0)} w(i,j) in equation (11).
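A tiny numeric check makes the equivalence concrete: with w(0,0) chosen as 1 minus the sum of the remaining weights, both forms produce the same output. The toy weights and samples below are made up purely for illustration.

```python
# Equivalence check of equations (11) and (13) on a 3x3 toy block.
w = {(-1, 0): 0.1, (1, 0): 0.2, (0, -1): 0.05, (0, 1): 0.15}
I = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
x, y = 1, 1

o11 = (1 - sum(w.values())) * I[x][y] + sum(
    c * I[x + i][y + j] for (i, j), c in w.items())
o13 = I[x][y] + sum(
    c * (I[x + i][y + j] - I[x][y]) for (i, j), c in w.items())
assert abs(o11 - o13) < 1e-9  # both evaluate to 54.0
```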

7.2 Modified filter

Using the above filter formula (13), nonlinearity can easily be introduced to make ALF more efficient, by using a simple clipping function to reduce the impact of neighbor sample values I(x+i, y+j) when they differ too much from the current sample value I(x,y) being filtered.

In this proposal, the ALF filter is modified as follows:

O′(x,y) = I(x,y) + Σ_{(i,j)≠(0,0)} w(i,j) · K(I(x+i, y+j) − I(x,y), k(i,j))   (14)

Here, K(d, b) = min(b, max(−b, d)) is the clipping function, and k(i,j) are clipping parameters, which depend on the (i,j) filter coefficient. The encoder performs an optimization to find the best k(i,j).
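In code, the clipping function and the modified filter of equation (14) are a small change on top of the linear filter. The sketch below uses hypothetical container types (dicts of weights and clipping parameters keyed by tap offset) purely for illustration; rounding and output clipping of a real codec are omitted.

```python
def K(d, b):
    """Clipping function of equation (14): K(d, b) = min(b, max(-b, d))."""
    return min(b, max(-b, d))

def nonlinear_alf_sample(I, x, y, w, k, taps):
    """Evaluate equation (14) for one sample; taps lists the (i, j)
    offsets of the filter shape excluding (0, 0)."""
    out = I[x][y]
    for (i, j) in taps:
        out += w[(i, j)] * K(I[x + i][y + j] - I[x][y], k[(i, j)])
    return out
```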

In the JVET-N0242 implementation, clipping parameters k(i,j) are specified for each ALF filter, and one clipping value is signaled per filter coefficient. This means that at most 12 clipping values can be signaled in the bitstream for each luma filter, and at most 6 clipping values for the chroma filter.

To limit the signaling overhead and encoder complexity, we limit the evaluation of clipping values to a small set of possible values. In the proposal, we only use 4 fixed values, which are the same for inter and intra slice groups.

Since the variance of the local differences is usually higher for luma than for chroma, we use two different sets for the luma and chroma filters. We also include the maximum sample value in each set (here, 1024 for a bit depth of 10 bits), so clipping can be disabled if it is not necessary.

The sets of clipping values used in the JVET-N0242 tests are provided in Table 2. The 4 values have been selected by roughly equally splitting, in the logarithmic domain, the full range of the sample values for luma (coded on 10 bits) and the range from 4 to 1024 for chroma.

More precisely, the luma table of clipping values is obtained by the following formula:

AlfClip_L = { round( M^((N−n+1)/N) ) : n = 1..N }, with M = 2^10 and N = 4.

Similarly, the chroma table of clipping values is obtained according to the following formula:

AlfClip_C = { round( A · (M/A)^((N−n)/(N−1)) ) : n = 1..N }, with M = 2^10, N = 4, and A = 4.

Table 2: Allowed clipping values

           INTRA/INTER slice group
LUMA       { 1024, 181, 32, 6 }
CHROMA     { 1024, 161, 25, 4 }

The selected clipping values are coded in the "alf_data" syntax element using a Golomb coding scheme corresponding to the index of the clipping value in Table 2 above. The coding scheme is the same as the one used for the filter index.
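The two tables can be reproduced with a few lines of Python; note that the chroma formula used here is inferred from the resulting values, so treat it as an assumption rather than the normative derivation.

```python
M, N, A = 2 ** 10, 4, 4  # 10-bit range, 4 clipping values, chroma offset

# Luma: the full sample range split roughly equally in the log domain.
alf_clip_luma = [round(M ** ((N - n + 1) / N)) for n in range(1, N + 1)]
# Chroma: same idea over the range [A, M] (formula assumed).
alf_clip_chroma = [round(A * (M / A) ** ((N - n) / (N - 1)))
                   for n in range(1, N + 1)]

print(alf_clip_luma)    # [1024, 181, 32, 6]
print(alf_clip_chroma)  # [1024, 161, 25, 4]
```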

8 CTU-based ALF in JVET-N0415

Slice-level temporal filter. The Adaptive Parameter Set (APS) was adopted in VTM4. Each APS contains one set of signaled ALF filters; up to 32 APSs are supported. In this proposal, a slice-level temporal filter is tested. A slice group can reuse the ALF information from an APS to reduce the overhead. The APSs are updated as a first-in-first-out (FIFO) buffer.

CTB-based ALF. For the luma component, when ALF is applied to a luma CTB, a choice among 16 fixed, 5 temporal, or 1 signaled filter set is indicated; only the filter set index is signaled. For one slice, only one new set of 25 filters can be signaled. If a new set is signaled for a slice, all luma CTBs in the same slice share that set. The fixed filter sets can be used to predict a new slice-level filter set and can also be used as candidate filter sets for a luma CTB. The total number of filters is 64.

For the chroma components, when ALF is applied to a chroma CTB, the CTB uses a new filter if one is signaled for the slice; otherwise, the most recent temporal chroma filter satisfying the temporal scalability constraint is applied.

As with the slice-level temporal filter, the APSs are updated as a first-in-first-out (FIFO) buffer.

Specification

The following text modifications are based on JVET-K1001-v6, with additions for the fixed filter marked by double curly braces ({{...}}), for the temporal filter marked by double square brackets ([[...]]), and for the CTB-based filter index marked by double parentheses ((...)).

7.3.3.2 adaptive loop filter data syntax

7.3.4.2 Coding tree unit syntax

7.4.4.2 adaptive loop filter data semantics

((alf_signal_new_filter_luma)) equal to 1 specifies that a new luma filter set is signaled. alf_signal_new_filter_luma equal to 0 specifies that no new luma filter set is signaled. It is 0 when not present.

{{alf_luma_use_fixed_filter_flag}} equal to 1 specifies that a fixed filter set is used to signal the adaptive loop filter. alf_luma_use_fixed_filter_flag equal to 0 specifies that a fixed filter set is not used to signal the adaptive loop filter.

{{alf_luma_fixed_filter_set_index}} specifies the fixed filter set index. It may be 0..15.

{{alf_luma_fixed_filter_usage_pattern}} equal to 0 specifies that all new filters use a fixed filter. alf_luma_fixed_filter_usage_pattern equal to 1 specifies that some of the new filters use a fixed filter and others do not.

{{alf_luma_fixed_filter_usage[i]}} equal to 1 specifies that the i-th filter uses a fixed filter. alf_luma_fixed_filter_usage[i] equal to 0 specifies that the i-th filter does not use a fixed filter. When not present, it is inferred to be 1.

((alf_signal_new_filter_chroma)) equal to 1 specifies that a new chroma filter is signaled. alf_signal_new_filter_chroma equal to 0 specifies that no new chroma filter is signaled.

((alf_num_available_temporal_filter_sets_luma)) specifies the number of available temporal filter sets that can be used for the current slice; it may be 0..5. It is 0 when not present.

The variable alf_num_available_filter_sets is derived as 16 + alf_signal_new_filter_luma + alf_num_available_temporal_filter_sets_luma.

((If alf_signal_new_filter_luma is 1, the following process is performed:))

The variable filterCoefficients[sigFiltIdx][j] (with sigFiltIdx = 0..alf_luma_num_filters_signalled_minus1 and j = 0..11) is initialized as follows:

filterCoefficients[sigFiltIdx][j] = alf_luma_coeff_delta_abs[sigFiltIdx][j] * (1 − 2 * alf_luma_coeff_delta_sign[sigFiltIdx][j])   (7-50)

When alf_luma_coeff_delta_prediction_flag is equal to 1, filterCoefficients[sigFiltIdx][j] (with sigFiltIdx = 1..alf_luma_num_filters_signalled_minus1 and j = 0..11) is modified as follows:

filterCoefficients[sigFiltIdx][j]+=filterCoefficients[sigFiltIdx-1][j] (7-51)

The luma filter coefficients AlfCoeffL (with elements AlfCoeffL[filtIdx][j], where filtIdx = 0..NumAlfFilters − 1 and j = 0..11) are derived as follows:

AlfCoeffL[filtIdx][j]=filterCoefficients[alf_luma_coeff_delta_idx[filtIdx]][j] (7-52)

{{If alf_luma_use_fixed_filter_flag is 1 and alf_luma_fixed_filter_usage[filtIdx] is 1, the following applies:

AlfCoeffL[filtIdx][j] = AlfCoeffL[filtIdx][j] + AlfFixedFilterCoeff[AlfClassToFilterMapping[alf_luma_fixed_filter_index][filtIdx]][j] }}

The final filter coefficients AlfCoeffL[filtIdx][12], with filtIdx = 0..NumAlfFilters − 1, are derived as follows:

AlfCoeffL[filtIdx][12] = 128 − Σ_k (AlfCoeffL[filtIdx][k] << 1), with k = 0..11   (7-53)

It is a requirement of bitstream conformance that the values of AlfCoeffL[filtIdx][j] (with filtIdx = 0..NumAlfFilters − 1 and j = 0..11) shall be in the range of −2^7 to 2^7 − 1, inclusive, and that the values of AlfCoeffL[filtIdx][12] shall be in the range of 0 to 2^8 − 1, inclusive.

((The luma filter coefficients AlfCoeffLumaAll (with elements AlfCoeffLumaAll[filtSetIdx][filtIdx][j], where filtSetIdx = 0..15, filtIdx = 0..NumAlfFilters − 1, and j = 0..12) are derived as follows:))

AlfCoeffLumaAll[filtSetIdx][filtIdx][j] = {{AlfFixedFilterCoeff[AlfClassToFilterMapping[filtSetIdx][filtIdx]][j]}}

((The luma filter coefficients AlfCoeffLumaAll (with elements AlfCoeffLumaAll[filtSetIdx][filtIdx][j], where filtSetIdx = 16, filtIdx = 0..NumAlfFilters − 1, and j = 0..12) are derived as follows:))

The variable closest_temporal_index is initialized to −1. Tid is the temporal layer index of the current slice.

((If alf_signal_new_filter_luma is 1:))

AlfCoeffLumaAll[16][filtIdx][j] = AlfCoeffL[filtIdx][j]

((Otherwise, the following process is invoked:))

AlfCoeffLumaAll[16][filtIdx][j] = TempL[closest_temporal_index][filtIdx][j]

((The luma filter coefficients AlfCoeffLumaAll (with elements AlfCoeffLumaAll[filtSetIdx][filtIdx][j], where filtSetIdx = 17..alf_num_available_filter_sets − 1, filtIdx = 0..NumAlfFilters − 1, and j = 0..12) are derived as follows:))

((If alf_signal_new_filter_chroma is 1, the following process is performed:))

The chroma filter coefficients AlfCoeffC[j], with j = 0..5, are derived as follows:

AlfCoeffC[j]=alf_chroma_coeff_abs[j]*(1-2*alf_chroma_coeff_sign[j]) (7-57)

The final filter coefficient for j = 6 is derived as follows:

AlfCoeffC[6] = 128 − Σ_k (AlfCoeffC[k] << 1), with k = 0..5   (7-58)

It is a requirement of bitstream conformance that the values of AlfCoeffC[j] (with j = 0..5) shall be in the range of −2^7 − 1 to 2^7 − 1, inclusive, and that the values of AlfCoeffC[6] shall be in the range of 0 to 2^8 − 1, inclusive.

((Otherwise (alf_signal_new_filter_chroma is 0), the following is invoked:))

The chroma filter coefficients AlfCoeffC[j], with j = 0..6, are derived as follows:

AlfCoeffC[j]=TempC[closest_temporal_index][j]

7.4.5.2 Coding tree unit semantics

((alf_luma_ctb_filter_set_index[xCtb >> Log2CtbSize][yCtb >> Log2CtbSize])) specifies the filter set index for the luma CTB at location (xCtb, yCtb).

((alf_use_new_filter)) equal to 1 specifies that alf_luma_ctb_filter_set_index[xCtb >> Log2CtbSize][yCtb >> Log2CtbSize] is 16. alf_use_new_filter equal to 0 specifies that alf_luma_ctb_filter_set_index[xCtb >> Log2CtbSize][yCtb >> Log2CtbSize] is not 16.

((alf_use_fixed_filter)) equal to 1 specifies that one of the fixed filter sets is used. alf_use_fixed_filter equal to 0 specifies that the current luma CTB does not use any fixed filter set.

((alf_fixed_filter_index)) specifies the fixed filter set index, which may be from 0 to 15.

((alf_temporal_index)) specifies the temporal filter set index, which may be from 0 to alf_num_available_temporal_filter_sets_luma − 1.

[[8.5.1 General]]

1. When sps_alf_enabled_flag is equal to 1, the following applies:

- [[The temporal filter refresh process specified in clause 8.5.4.5 is invoked.]]

- The adaptive loop filter process specified in clause 8.5.4.1 is invoked with the reconstructed picture sample arrays S_L, S_Cb, and S_Cr as inputs, and the modified reconstructed picture sample arrays S′_L, S′_Cb, and S′_Cr after sample adaptive offset as outputs.

- The arrays S′_L, S′_Cb, and S′_Cr are assigned to the arrays S_L, S_Cb, and S_Cr (which represent the decoded picture), respectively.

- [[The temporal filter update process specified in clause 8.5.4.6 is invoked.]]

((8.5.4.2 Coding tree block filtering process for luma samples))

- The array of luma filter coefficients f[j] corresponding to the filter specified by filtIdx[x][y] is derived as follows, with j = 0..12:

f[j] = ((AlfCoeffLumaAll))[alf_luma_ctb_filter_set_index[xCtb >> Log2CtbSize][yCtb >> Log2CtbSize]][filtIdx[x][y]][j]   (8-732)

[[8.5.4.5 Temporal filter refresh]]

If any of the following conditions is true:

- the current picture is an IDR picture,

- the current picture is a BLA picture,

- the current picture is the first picture in decoding order whose POC is larger than the POC of the last decoded IRAP picture, i.e., it follows the leading pictures and precedes the trailing pictures,

then temp_size_L and temp_size_C are set to 0.

[[8.5.4.6 Temporal filter update]]

If slice_alf_enabled_flag is 1 and alf_signal_new_filter_luma is 1, the following applies.

If the luma temporal filter buffer size temp_size_L < 5, then temp_size_L = temp_size_L + 1.

TempL[i][j][k] (with i = temp_size_L − 1 .. 1, j = 0..NumAlfFilters − 1, and k = 0..12) is updated as:

TempL[i][j][k] = TempL[i − 1][j][k]

TempL[0][j][k] (with j = 0..NumAlfFilters − 1 and k = 0..12) is updated as:

TempL[0][j][k] = AlfCoeffL[j][k]

TempTid_L[i] (with i = temp_size_L − 1 .. 1) is updated as:

TempTid_L[i] = TempTid_L[i − 1]

TempTid_L[0] is set to the temporal layer index Tid of the current slice.

If alf_chroma_idx is not 0 and alf_signal_new_filter_chroma is 1, the following applies.

TempC[i][j] (with i = temp_size_C − 1 .. 1 and j = 0..6) is updated as:

TempC[i][j] = TempC[i − 1][j]

TempC[0][j] (with j = 0..6) is updated as:

TempC[0][j] = AlfCoeffC[j]

TempTid_C[i] (with i = temp_size_C − 1 .. 1) is updated as:

TempTid_C[i] = TempTid_C[i − 1]

TempTid_C[0] is set to the Tid of the current slice.

Table 9-4 - Syntax elements and associated binarizations

Table 9-10 - Assignment of ctxInc to syntax elements with context-coded bins

9 In-loop reshaping (ILR) in JVET-M0427

The basic idea of in-loop reshaping (ILR) is to convert the original signal (the prediction/reconstruction signal) from a first domain into a second domain (the reshaped domain).

The in-loop luma reshaper is implemented as a pair of look-up tables (LUTs), but only one of the two LUTs needs to be signaled, since the other can be computed from the signaled one. Each LUT is a one-dimensional, 10-bit, 1024-entry mapping table (1D-LUT). One LUT is a forward LUT, FwdLUT, which maps an input luma code value Y_i to an altered value Y_r: Y_r = FwdLUT[Y_i]. The other LUT is an inverse LUT, InvLUT, which maps an altered code value Y_r to Ŷ_i (Ŷ_i represents the reconstructed value of Y_i).

9.1 PWL model

Conceptually, the piecewise linear (PWL) model is implemented in the following way:

Suppose x1 and x2 are two input pivot points, and y1 and y2 are their corresponding output pivot points for one segment. The output value y for any input value x between x1 and x2 can be interpolated by the following equation:

y=((y2-y1)/(x2-x1))*(x-x1)+y1

In a fixed-point implementation, the equation can be rewritten as:

y = ((m * x + 2^(FP_PREC − 1)) >> FP_PREC) + c

Here, m is the scalar (slope), c is the offset, and FP_PREC is a constant value specifying the precision.

Note that in the CE-12 software, the PWL model is used to precompute the 1024-entry FwdLUT and InvLUT mapping tables; the PWL model also allows implementations to compute identical mapping values on the fly without precomputing the LUTs.
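A fixed-point evaluation of one PWL segment might be sketched as follows; the derivation of m and c from the pivot points, and the choice FP_PREC = 11, are assumptions for illustration rather than the normative forms.

```python
FP_PREC = 11  # assumed precision constant

def pwl_segment_eval(x, x1, x2, y1, y2):
    """Evaluate y for input x on the segment (x1, y1)-(x2, y2) using the
    fixed-point form above."""
    m = ((y2 - y1) << FP_PREC) // (x2 - x1)      # slope scalar
    c = y1 - ((m * x1) >> FP_PREC)               # offset
    return ((m * x + (1 << (FP_PREC - 1))) >> FP_PREC) + c

# Example: segment (0, 0)-(64, 128) maps x = 32 to y = 64.
assert pwl_segment_eval(32, 0, 64, 0, 128) == 64
```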

9.2 testing CE12-2 in the fourth VVC conference

9.2.1 Luma reshaping

Test 2 of in-loop luma reshaping (i.e., CE12-2 in the proposal) provides a lower complexity pipeline, which also eliminates the decoding delay of block intra prediction in inter slice reconstruction. Intra prediction is performed in the reshaped domain of inter and intra slices.

Intra prediction is always performed in the reshaped domain regardless of the slice type. With such an arrangement, intra prediction can start immediately after the previous TU reconstruction is done. Such an arrangement can also provide a unified process for intra mode instead of being slice dependent. Fig. 7 shows the block diagram of the mode-based CE12-2 decoding process.

CE12-2 also tested a 16-segment piecewise linear (PWL) model for luma and chroma residual scaling, rather than the 32-segment PWL model of CE12-1.

(Figure: inter-slice reconstruction with the in-loop luma reshaper in CE12-2; lighter-shaded blocks indicate signals in the reshaped domain: the luma residual, the predicted intra luma, and the reconstructed intra luma.)

9.2.2 luma-dependent chroma residual scaling

Luma-dependent chroma residual scaling is a multiplicative process implemented with fixed-point integer arithmetic. Chroma residual scaling compensates for the interaction of the luma signal with the chroma signal. Chroma residual scaling is applied at the TU level. More specifically, the average of the corresponding luma prediction block is computed.

The average is used to identify an index in the PWL model. The index identifies a scaling factor cScaleInv. The chroma residual is multiplied by that number.

Note that the chroma scaling factor is calculated from the forward mapped predicted luma values instead of the reconstructed luma values.

9.2.3 Signaling of ILR side information

The parameters are (currently) sent in the slice header (similar to ALF). These reportedly take 40-100 bits.

The following specification is based on version 9 of JVET-L1001. The added syntax is marked below with double curly braces ({{...}}).

In the 7.3.2.1 sequence parameter set RBSP syntax

In 7.3.3.1 general slice header syntax

A new syntax table, tile group reshaper model, is added:

{{In the general sequence parameter set RBSP semantics, the following semantics are added:}}

sps_reshaper_enabled_flag equal to 1 specifies that the reshaper is used in the coded video sequence (CVS). sps_reshaper_enabled_flag equal to 0 specifies that the reshaper is not used in the CVS.

{{In the slice header syntax, the following semantics are added:}}

tile_group_reshaper_model_present_flag equal to 1 specifies that tile_group_reshaper_model() is present in the tile group header. tile_group_reshaper_model_present_flag equal to 0 specifies that tile_group_reshaper_model() is not present in the tile group header. When tile_group_reshaper_model_present_flag is not present, it is inferred to be equal to 0.

tile_group_reshaper_enable_flag equal to 1 specifies that the reshaper is enabled for the current tile group. tile_group_reshaper_enable_flag equal to 0 specifies that the reshaper is not enabled for the current tile group. When tile_group_reshaper_enable_flag is not present, it is inferred to be equal to 0.

tile_group_reshaper_chroma_residual_scale_flag equal to 1 specifies that chroma residual scaling is enabled for the current tile group. tile_group_reshaper_chroma_residual_scale_flag equal to 0 specifies that chroma residual scaling is not enabled for the current tile group. When tile_group_reshaper_chroma_residual_scale_flag is not present, it is inferred to be equal to 0.

{{Add the tile_group_reshaper_model() syntax:}}

reshape_model_min_bin_idx specifies the minimum bin (or segment) index to be used in the reshaper construction process. The value of reshape_model_min_bin_idx shall be in the range of 0 to MaxBinIdx, inclusive. The value of MaxBinIdx shall be equal to 15.

reshape_model_delta_max_bin_idx specifies the maximum allowed bin (or segment) index MaxBinIdx minus the maximum bin index to be used in the reshaper construction process. The value of reshape_model_max_bin_idx is set equal to MaxBinIdx − reshape_model_delta_max_bin_idx.

reshaper_model_bin_delta_abs_cw_prec_minus1 plus 1 specifies the number of bits used for the representation of the syntax element reshape_model_bin_delta_abs_CW[i].

reshape_model_bin_delta_abs_CW[i] specifies the absolute delta codeword value for the i-th bin.

reshaper_model_bin_delta_sign_CW_flag[i] specifies the sign of reshape_model_bin_delta_abs_CW[i] as follows:

- If reshape_model_bin_delta_sign_CW_flag[i] is equal to 0, the corresponding variable RspDeltaCW[i] is a positive value.

- Otherwise (reshape_model_bin_delta_sign_CW_flag[i] is not equal to 0), the corresponding variable RspDeltaCW[i] is a negative value.

When reshape_model_bin_delta_sign_CW_flag[i] is not present, it is inferred to be equal to 0.

The variable RspDeltaCW[i] = (1 − 2 * reshape_model_bin_delta_sign_CW_flag[i]) * reshape_model_bin_delta_abs_CW[i].

The variable RspCW[i] is derived as follows:

The variable OrgCW is set equal to (1 << BitDepth_Y) / (MaxBinIdx + 1).

- If reshaper_model_min_bin_idx <= i <= reshaper_model_max_bin_idx, then RspCW[i] = OrgCW + RspDeltaCW[i].

- Otherwise, RspCW[i] = 0.

If BitDepth_Y is equal to 10, the value of RspCW[i] shall be in the range of 32 to 2 * OrgCW − 1.

The variable InputPivot[i] (with i in the range of 0 to MaxBinIdx + 1, inclusive) is derived as follows:

InputPivot[i]=i*OrgCW

The variables ReshapePivot[i] (with i in the range of 0 to MaxBinIdx + 1, inclusive) and the variables ScaleCoef[i] and InvScaleCoeff[i] (with i in the range of 0 to MaxBinIdx, inclusive) are derived as follows:

The variable ChromaScaleCoef[i] (with i in the range of 0 to MaxBinIdx, inclusive) is derived as follows:
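The pivot/scale derivations themselves are not reproduced in the text above. The following is a hedged sketch of the typical construction (cumulative pivots, fixed-point slopes) under the assumption FP_PREC = 11; the exact normative fixed-point form may differ.

```python
BIT_DEPTH_Y = 10
MAX_BIN_IDX = 15
FP_PREC = 11  # assumed fixed-point precision

def build_reshaper_tables(rsp_delta_cw, min_bin, max_bin):
    """Sketch only: derive RspCW, ReshapePivot, and per-bin scales
    from the signaled deltas."""
    org_cw = (1 << BIT_DEPTH_Y) // (MAX_BIN_IDX + 1)
    rsp_cw = [org_cw + rsp_delta_cw[i] if min_bin <= i <= max_bin else 0
              for i in range(MAX_BIN_IDX + 1)]
    pivot = [0] * (MAX_BIN_IDX + 2)
    scale, inv_scale = [], []
    for i, cw in enumerate(rsp_cw):
        pivot[i + 1] = pivot[i] + cw                  # FwdLUT pivots
        scale.append((cw << FP_PREC) // org_cw)       # forward slope
        inv_scale.append((org_cw << FP_PREC) // cw if cw else 0)
    return rsp_cw, pivot, scale, inv_scale
```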

9.2.4 use of ILR

At the encoder side, each picture (or slice group) is first converted to the reshaped domain, and all coding processes are performed in the reshaped domain. For intra prediction, the neighboring blocks are in the reshaped domain; for inter prediction, the reference blocks (generated from the original domain of the decoded picture buffer) are first converted to the reshaped domain. Then the residual is generated and coded into the bitstream.

After the entire picture (or slice group) is encoded/decoded, samples in the reshaped domain are converted to the original domain, and then the deblocking filter and other filters are applied.

Forward reshaping of the prediction signal is disabled for the following cases:

- The current block is intra-coded.

- The current block is coded as CPR (current picture referencing, also known as intra block copy, IBC).

- The current block is coded as combined inter-intra mode (CIIP), and forward reshaping is disabled for the intra prediction block.

10 Bi-directional optical flow (BDOF)

10.1 Overview and analysis of BIO

In BIO, motion compensation is first performed to generate a first prediction of the current block (in each prediction direction). The first prediction is used to derive the spatial gradient, temporal gradient, and optical flow for each sub-block or pixel within the block, which is then used to generate a second prediction, e.g., a final prediction of the sub-block or pixel. The details are described below.

The bi-directional optical flow (BIO) method is a sample-wise motion refinement performed on the basis of bi-directionally predicted block-wise motion compensation. In some embodiments, the sample level motion refinement does not use signaling.

Let I^(k) be the luma value from reference k (k = 0, 1) after block motion compensation, and let ∂I^(k)/∂x and ∂I^(k)/∂y denote the horizontal and vertical components of the gradient of I^(k), respectively. Assuming the optical flow is valid, the motion vector field (v_x, v_y) is given by:

∂I^(k)/∂t + v_x · ∂I^(k)/∂x + v_y · ∂I^(k)/∂y = 0

Combining this optical flow equation with Hermite interpolation of the motion trajectory of each sample yields a unique third-order polynomial that matches both the function values I^(k) and the derivatives ∂I^(k)/∂x, ∂I^(k)/∂y at the ends. The value of this polynomial at t = 0 is the BIO prediction:

pred_BIO = 1/2 · (I^(0) + I^(1) + v_x/2 · (τ_1 ∂I^(1)/∂x − τ_0 ∂I^(0)/∂x) + v_y/2 · (τ_1 ∂I^(1)/∂y − τ_0 ∂I^(0)/∂y))

FIG. 8 illustrates an example optical flow trajectory in the bi-directional optical flow (BIO) method. Here, τ_0 and τ_1 denote the distances to the reference frames, calculated based on the POCs of Ref_0 and Ref_1: τ_0 = POC(current) − POC(Ref_0), τ_1 = POC(Ref_1) − POC(current). If both predictions come from the same temporal direction (both from the past or both from the future), the signs are different (i.e., τ_0 · τ_1 < 0). In this case, BIO is applied only if the predictions are not from the same time instant (i.e., τ_0 ≠ τ_1).

The motion vector field (v_x, v_y) is determined by minimizing the difference Δ between the values at points A and B. FIG. 8 shows an example of the intersection of the motion trajectory with the reference frame planes. The model uses only the first linear term of the local Taylor expansion of Δ:

Δ = I^(0) − I^(1) + v_x · (τ_1 ∂I^(1)/∂x + τ_0 ∂I^(0)/∂x) + v_y · (τ_1 ∂I^(1)/∂y + τ_0 ∂I^(0)/∂y)

All values in the above equation depend on the sample position, denoted (i′, j′). Assuming the motion is consistent in the local surrounding area, Δ can be minimized inside a (2M+1) × (2M+1) square window Ω centered on the current predicted point (i, j), where M equals 2:

(v_x, v_y) = argmin over (v_x, v_y) of Σ_{[i′,j′]∈Ω} Δ²[i′, j′]

For this optimization problem, JEM uses a simplified approach that first minimizes in the vertical direction and then in the horizontal direction. This results in the following formulas:

v_x = (s_1 + r) > m ? clip3(−thBIO, thBIO, −s_3 / (s_1 + r)) : 0   (15)

v_y = (s_5 + r) > m ? clip3(−thBIO, thBIO, −(s_6 − v_x · s_2 / 2) / (s_5 + r)) : 0   (16)

where

s_1 = Σ_{[i′,j′]∈Ω} (τ_1 ∂I^(1)/∂x + τ_0 ∂I^(0)/∂x)²,
s_2 = Σ_{[i′,j′]∈Ω} (τ_1 ∂I^(1)/∂x + τ_0 ∂I^(0)/∂x) · (τ_1 ∂I^(1)/∂y + τ_0 ∂I^(0)/∂y),
s_3 = Σ_{[i′,j′]∈Ω} (I^(0) − I^(1)) · (τ_1 ∂I^(1)/∂x + τ_0 ∂I^(0)/∂x),
s_5 = Σ_{[i′,j′]∈Ω} (τ_1 ∂I^(1)/∂y + τ_0 ∂I^(0)/∂y)²,
s_6 = Σ_{[i′,j′]∈Ω} (I^(0) − I^(1)) · (τ_1 ∂I^(1)/∂y + τ_0 ∂I^(0)/∂y)   (17)

to avoid division by zero or by very small values, the regularization parameters r and m can be introduced into equations (15) and (16), where:

r = 500 · 4^(d−8)   (18)

m = 700 · 4^(d−8)   (19)

Here, d is the bit depth of the video samples.

To keep the memory access for BIO the same as for conventional bi-predictive motion compensation, all prediction and gradient values I^(k), ∂I^(k)/∂x, ∂I^(k)/∂y are calculated only for positions inside the current block. FIG. 9A shows an example of access positions outside of block 900. As shown in FIG. 9A, in equation (17), a (2M+1) × (2M+1) square window Ω centered on a current prediction point on the boundary of the predicted block needs to access positions outside the block. In JEM, the values of I^(k), ∂I^(k)/∂x, ∂I^(k)/∂y outside the block are set equal to the nearest available value inside the block. This may be implemented, for example, as padding of area 901, as shown in FIG. 9B.

With BIO, it is possible to refine the motion field for each sample. To reduce computational complexity, a block-based design of BIO is used in JEM, in which the motion refinement is calculated based on 4×4 blocks. In block-based BIO, the values s_n in equation (17) are first aggregated over all samples in a 4×4 block, and the aggregated s_n is then used to derive the BIO motion vector offset for that 4×4 block. More specifically, for block-based BIO derivation, each aggregated value s_{n,b_k} sums the summand of s_n in equation (17) over all samples (x, y) in b_k and over the window Ω(x, y) around each of those samples.

Here, b_k denotes the set of samples belonging to the k-th 4×4 block of the prediction block. s_n in equations (15) and (16) is replaced by ((s_{n,b_k}) >> 4) to derive the associated motion vector offset.

In some scenarios, the MV refinement of BIO may be unreliable due to noise or irregular motion. Therefore, in BIO the magnitude of the MV refinement is clipped to a threshold. The threshold is determined based on whether the reference pictures of the current picture are all from one direction. For example, if all reference pictures of the current picture are from one direction, the threshold is set to 12 × 2^(14−d); otherwise, it is set to 12 × 2^(13−d).
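The refinement just described can be summarized in the following sketch, which assumes the window sums s_1, ..., s_6 of equation (17) are already available and follows equations (15), (16), (18), and (19); it is illustrative, not a bit-exact decoder implementation.

```python
# Sketch of the BIO motion refinement per equations (15), (16), (18), (19).
# s1..s6 are assumed to be the window-aggregated sums of equation (17);
# d is the sample bit depth.

def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def bio_motion_refinement(s1, s2, s3, s5, s6, d, all_refs_one_direction):
    r = 500 * 4 ** (d - 8)                                 # equation (18)
    m = 700 * 4 ** (d - 8)                                 # equation (19)
    th_bio = 12 * 2 ** ((14 if all_refs_one_direction else 13) - d)

    vx = clip3(-th_bio, th_bio, -s3 / (s1 + r)) if (s1 + r) > m else 0.0
    vy = clip3(-th_bio, th_bio, -(s6 - vx * s2 / 2) / (s5 + r)) if (s5 + r) > m else 0.0
    return vx, vy
```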

The gradients for BIO can be calculated at the same time as motion-compensated interpolation, using operations consistent with the HEVC motion compensation process (e.g., a 2D separable finite impulse response (FIR) filter). In some embodiments, the input to this 2D separable FIR is the same reference frame samples as for the motion compensation process, together with the fractional position (fracX, fracY) given by the fractional part of the block motion vector. For the horizontal gradient ∂I/∂x, the signal is first interpolated vertically using BIOfilterS corresponding to the fractional position fracY with de-scaling shift d − 8; the gradient filter BIOfilterG is then applied in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18 − d. For the vertical gradient ∂I/∂y, the gradient filter BIOfilterG is first applied vertically corresponding to the fractional position fracY with de-scaling shift d − 8; the signal displacement is then performed with BIOfilterS in the horizontal direction corresponding to the fractional position fracX with de-scaling shift 18 − d. The lengths of the interpolation filter BIOfilterG for gradient calculation and the interpolation filter BIOfilterS for signal displacement may be kept short (e.g., 6 taps) in order to maintain reasonable complexity. Table 2 shows example filters that can be used for gradient calculation for different fractional positions of the block motion vector in BIO. Table 3 shows example interpolation filters that may be used for prediction signal generation in BIO.

Table 2: Example filters for gradient calculation in BIO

Table 3: Example interpolation filters for prediction signal generation in BIO

Fractional pel position    Interpolation filter for prediction signal (BIOfilterS)
0 {0,0,64,0,0,0}
1/16 {1,-3,64,4,-2,0}
1/8 {1,-6,62,9,-3,1}
3/16 {2,-8,60,14,-5,1}
1/4 {2,-9,57,19,-7,2}
5/16 {3,-10,53,24,-8,2}
3/8 {3,-11,50,29,-9,2}
7/16 {3,-11,44,35,-10,3}
1/2 {3,-10,35,44,-11,3}
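To illustrate how a row of Table 3 would be used, the sketch below applies one 6-tap BIOfilterS row at a fractional position given in 1/16-pel units. The 6-bit rounding shift is an assumption based on the filter gain of 64; the normative de-scaling shifts (d − 8 and 18 − d) depend on the pipeline stage described above.

```python
# Sketch: apply one 6-tap BIOfilterS row from Table 3.

BIO_FILTER_S = [
    (0, 0, 64, 0, 0, 0),      # 0
    (1, -3, 64, 4, -2, 0),    # 1/16
    (1, -6, 62, 9, -3, 1),    # 1/8
    (2, -8, 60, 14, -5, 1),   # 3/16
    (2, -9, 57, 19, -7, 2),   # 1/4
    (3, -10, 53, 24, -8, 2),  # 5/16
    (3, -11, 50, 29, -9, 2),  # 3/8
    (3, -11, 44, 35, -10, 3), # 7/16
    (3, -10, 35, 44, -11, 3), # 1/2
]

def interp_sample(ref_line, pos, frac_idx):
    """Interpolate at position pos + frac_idx/16 along ref_line (needs
    two samples to the left of pos and three to the right)."""
    taps = BIO_FILTER_S[frac_idx]
    acc = sum(c * ref_line[pos - 2 + k] for k, c in enumerate(taps))
    return (acc + 32) >> 6    # assumed rounding by the filter gain of 64
```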

In JEM, when the two predictions are from different reference pictures, the BIO may be applied to all bi-predicted blocks. When Local Illumination Compensation (LIC) is enabled for a CU, the BIO may be disabled.

In some embodiments, OBMC is applied to a block after a normal MC procedure. To reduce computational complexity, BIO may not be applied during the OBMC process. This means that the BIO is applied to the MC procedure of a block when the MV of the block itself is used, and is not applied to the MC procedure when the MV of an adjacent block is used during the OBMC procedure.

11 Prediction refinement with optical flow (PROF) in JVET-N0236

This document proposes a method for refining sub-block based affine motion compensated prediction using optical flow. After performing sub-block based affine motion compensation, the predicted samples are refined by adding the differences derived from the optical flow equations, which is referred to as prediction refinement with optical flow (PROF). The proposed method can achieve inter-frame prediction at pixel level granularity without increasing memory access bandwidth.

To achieve a finer granularity of motion compensation, after sub-block based affine motion compensation is performed, the luma prediction samples are refined by adding the differences derived from the optical flow equation. The proposed PROF is described in the following four steps.

Step 1) Sub-block based affine motion compensation is performed to generate the sub-block prediction I(i, j).

Step 2) The spatial gradients g_x(i, j) and g_y(i, j) of the sub-block prediction are calculated at each sample position using a 3-tap filter [−1, 0, 1]:

g_x(i, j) = I(i+1, j) − I(i−1, j)

g_y(i, j) = I(i, j+1) − I(i, j−1)

For gradient calculations, the sub-block prediction is extended by one pixel on each side. To reduce memory bandwidth and complexity, pixels on the extended boundary are copied from the nearest integer pixel position in the reference picture. Thus, additional interpolation of the fill area is avoided.

Step 3) The luma prediction refinement is calculated by the optical flow equation:

ΔI(i, j) = g_x(i, j) · Δv_x(i, j) + g_y(i, j) · Δv_y(i, j)

Here, Δv(i, j) is the difference between the pixel MV computed for sample position (i, j), denoted v(i, j), and the sub-block MV of the sub-block to which pixel (i, j) belongs, as shown in FIG. 10.

Since the affine model parameters and the pixel positions relative to the sub-block center do not change from sub-block to sub-block, Δv(i, j) can be calculated for the first sub-block and reused for the other sub-blocks in the same CU. Let x and y be the horizontal and vertical offsets from a pixel position to the center of the sub-block; then Δv(x, y) can be derived by the following equations:

Δv_x(x, y) = c · x + d · y
Δv_y(x, y) = e · x + f · y

for a 4-parameter affine model,

For a 6-parameter affine model:

c = (v_1x − v_0x) / w
d = (v_2x − v_0x) / h
e = (v_1y − v_0y) / w
f = (v_2y − v_0y) / h

Here, (v_0x, v_0y), (v_1x, v_1y), and (v_2x, v_2y) are the top-left, top-right, and bottom-left control point motion vectors, and w and h are the width and height of the CU.

Step 4) Finally, the luma prediction refinement is added to the sub-block prediction I(i, j). The final prediction I′ is generated as follows:

I′(i, j) = I(i, j) + ΔI(i, j)
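The four steps can be put together in a short sketch for one luma sub-block; the padded-prediction layout, the sub-block center position, and the floating-point arithmetic are illustrative assumptions.

```python
# Sketch of the four PROF steps for one W x H luma sub-block. `pred` is
# the sub-block prediction extended by one pixel on each side (the
# padding of step 2), and (c, d, e, f) are the affine parameters above.

def prof_refine(pred, W, H, c, d, e, f):
    out = [[0.0] * W for _ in range(H)]
    for j in range(H):
        for i in range(W):
            # Step 2: 3-tap [-1, 0, 1] gradients on the padded prediction.
            gx = pred[j + 1][i + 2] - pred[j + 1][i]
            gy = pred[j + 2][i + 1] - pred[j][i + 1]
            # Offsets from the assumed sub-block center ((W-1)/2, (H-1)/2).
            x, y = i - (W - 1) / 2.0, j - (H - 1) / 2.0
            dvx, dvy = c * x + d * y, e * x + f * y
            # Steps 3-4: optical-flow refinement added to the prediction.
            out[j][i] = pred[j + 1][i + 1] + gx * dvx + gy * dvy
    return out
```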

12 disadvantages of existing implementations

The non-linear ALF (NLALF) design in JVET-N0242 has the following problems:

(1) many clipping operations are required in NLALF.

(2) In CTU-based ALF, when alf_num_available_temporal_filter_sets_luma is equal to 0, no temporal luma filter is available. However, alf_temporal_index may still be signaled.

(3) In CTU-based ALF, when alf_signal_new_filter_chroma is equal to 0, no new filter is signaled for the chroma component, and a temporal chroma filter is assumed to be used. However, there is no guarantee that a temporal chroma filter is available.

(4) In CTU-based ALF, alf_num_available_temporal_filter_sets_luma may be larger than the number of available temporal filter sets.

Exemplary methods of adaptive loop filtering for video coding

Embodiments of the presently disclosed technology overcome the drawbacks of existing implementations to provide video coding with higher coding efficiency. Based on the disclosed techniques, adaptive loop filtering may enhance both existing and future video coding standards, as set forth in the following examples described for various embodiments. The examples of the disclosed technology provided below illustrate general concepts and are not meant to be construed as limiting. In the examples, unless explicitly stated to the contrary, the various features described may be combined.

1. Instead of clipping the sample point differences, it is proposed to apply a clipping operation to intermediate results during the filtering process. Assume that the neighboring samples (adjacent or non-adjacent) of the current sample utilized in the filtering process may be classified into N (N >= 1) groups.

a. In one example, one or more intermediate results are computed for a group, and clipping may be performed on the one or more intermediate results.

i. For example, for one group, the difference between each neighboring pixel and the current pixel may first be calculated, and the differences may then be weighted-averaged using the corresponding ALF coefficients (the result denoted wAvgDiff). One clipping may be performed on each group's wAvgDiff.

b. Different clipping parameters may be used for different groups.

c. In one example, clipping is applied to the final weighted sum of filter coefficients multiplied by the sample point differences.

i. For example, with N = 1, clipping may be performed as follows, where K(d, b) = min(b, max(−b, d)) is the clipping function and k is the clipping parameter (see the sketch following this item):

O(x, y) = I(x, y) + K(Σ_{(i,j)≠(0,0)} w(i, j) · (I(x+i, y+j) − I(x, y)), k)

1) Additionally or alternatively, the weighted sum Σ_{(i,j)≠(0,0)} w(i, j) · (I(x+i, y+j) − I(x, y)) may also be rounded to an integer value, e.g., via shifting with or without rounding.
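A minimal sketch of bullet 1.c with N = 1: one clipping applied to the final weighted sum of coefficients times sample differences. The coefficient layout and the rounding shift (coefficients assumed scaled by 2^shift) are illustrative assumptions.

```python
# Sketch of bullet 1.c: a single clipping on the final weighted sum.

def K(d, b):
    """Clipping function K(d, b) = min(b, max(-b, d))."""
    return min(b, max(-b, d))

def filter_sample_clipped_sum(I, x, y, weights, k, shift=7):
    """weights maps offsets (i, j) != (0, 0) to ALF coefficients w(i, j)."""
    acc = sum(w * (I[y + j][x + i] - I[y][x]) for (i, j), w in weights.items())
    acc = (acc + (1 << (shift - 1))) >> shift   # round the weighted sum
    return I[y][x] + K(acc, k)
```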

2. When filtering one sample, if N (N > 1) neighboring samples share one filter coefficient, the N neighboring pixels may be clipped together once, instead of once each as required by non-linear ALF (see the sketch following this item's sub-bullets).

a. For example, if I(x+i1, y+j1) and I(x+i2, y+j2) share a filter coefficient w(i1, j1) (or/and a clipping parameter k(i1, j1)), the clipping may be performed once as follows: clipValue(i1, j1) = K(I(x+i1, y+j1) + I(x+i2, y+j2) − 2 · I(x, y), k(i1, j1)), and w(i1, j1) · clipValue(i1, j1) may be used to replace w(i1, j1) · K(I(x+i1, y+j1) − I(x, y), k(i1, j1)) + w(i2, j2) · K(I(x+i2, y+j2) − I(x, y), k(i2, j2)) in equation (14).

i. In one example, i1 and i2 may be at symmetric positions; likewise, j1 and j2 may be at symmetric positions.

1. In one example, i1 equals −i2, and j1 equals −j2.

ii. In one example, the distance between (x+i1, y+j1) and (x, y) and the distance between (x+i2, y+j2) and (x, y) may be the same.

The disclosed method in clause 2 is enabled when the filter shape is a symmetric pattern.

Further, alternatively, the clipping parameter associated with I(x+i1, y+j1) may be signaled in/derived from the bitstream, denoted ClipParam, and the k(i1, j1) mentioned above is derived from the signaled clipping parameter, e.g., as 2 × ClipParam.

b. For example, if all (i, j) ∈ C share one filter coefficient w1 (or/and one clipping parameter k1), and C contains N elements, the clipping may be performed once as follows:

clipValue = K((Σ_{(i,j)∈C} I(x+i, y+j)) − N · I(x, y), k1)

where k1 is the clipping parameter associated with C, and w1 · clipValue may be used to replace the following term in equation (14):

Σ_{(i,j)∈C} w1 · K(I(x+i, y+j) − I(x, y), k1)

i. Further, alternatively, the clipping parameter associated with I(x+i, y+j) may be signaled in/derived from the bitstream, denoted ClipParam, and k1 is derived from the signaled clipping parameter (e.g., as N × ClipParam).

ii. Alternatively, Σ_{(i,j)∈C} I(x+i, y+j) or (Σ_{(i,j)∈C} I(x+i, y+j)) − N × I(x, y) may be right-shifted before clipping.

c. In one example, one clipping may be performed for M1 (M1 <= N) of the N neighboring samples.

d. In one example, N neighboring samples may be classified into M2 groups, and each group may be clipped once.

e. In one example, the method may be applied to some or all of the color components.

i. For example, it may be applied to the luminance component.

For example, it may be applied to Cb or/and Cr components.
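A minimal sketch of bullet 2.a for a symmetric pair of neighbors sharing one coefficient; the array layout and names are illustrative assumptions.

```python
# Sketch of bullet 2.a: two symmetrically located neighbors sharing one
# coefficient are clipped together once.

def shared_clip_contribution(I, x, y, i1, j1, w, k):
    """One clipping for the symmetric pair (i1, j1) and (-i1, -j1)."""
    pair_diff = I[y + j1][x + i1] + I[y - j1][x - i1] - 2 * I[y][x]
    clip_value = min(k, max(-k, pair_diff))     # K(pair_diff, k)
    return w * clip_value   # replaces two per-sample clippings in eq. (14)
```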

3. A clipping function K(min, max, input), which clips input to the range [min, max] including both min and max, may be used in this disclosure. Variants are listed below; a short sketch follows them.

a. In one example, a clipping function K(min, max, input) that clips input to the range (min, max), excluding both min and max, may be used in the bullets above.

b. In one example, a clipping function K(min, max, input) that clips input to the range (min, max], including max but not min, may be used in the bullets above.

c. In one example, a clipping function K(min, max, input) that clips input to the range [min, max), including min but not max, may be used in the bullets above.
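A short sketch of the four range variants for integer samples; realizing an excluded bound by tightening it by one is one plausible reading, not a normative definition.

```python
# Sketch of the clipping range variants in bullet 3 for integer samples.

def clip_incl(lo, hi, v):        # [min, max]
    return max(lo, min(hi, v))

def clip_excl(lo, hi, v):        # (min, max)
    return max(lo + 1, min(hi - 1, v))

def clip_excl_min(lo, hi, v):    # (min, max]
    return max(lo + 1, min(hi, v))

def clip_excl_max(lo, hi, v):    # [min, max)
    return max(lo, min(hi - 1, v))
```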

4. When a time-domain ALF coefficient set is not available (e.g., no ALF coefficients have been previously encoded/decoded, or the encoded/decoded ALF coefficients are marked as "unavailable"), signaling of an indication of which time-domain ALF coefficient set to use may be skipped.

a. In one example, when the temporal ALF coefficient set is not available, if neither new ALF coefficients nor fixed ALF coefficients are used for a CTB/block/slice/picture, ALF is inferred to be disallowed for the CTB/block/slice/picture.

i. Further, alternatively, in this case, even if it is indicated that ALF is applied to the CTB/block/slice/picture (e.g., alf_ctb_flag is true for the CTU/block), ALF may ultimately be inferred to be disallowed for the CTB/block/slice/picture.

b. In one example, when a temporal ALF coefficient set is not available, only new ALF coefficients or fixed ALF coefficients, etc., may be indicated for a CTB/block/slice group/slice/picture in a conformant bitstream.

i. For example, alf _ use _ new _ filter or alf _ use _ fixed _ filter should be true.

c. In one example, a bitstream is considered non-conformant if the following condition is met: when the temporal ALF coefficient set is not available, neither new ALF coefficients nor fixed ALF coefficients are indicated for a CTB/block/slice group/slice/picture for which ALF is indicated to be applied.

i. For example, a bitstream in which alf_use_new_filter and alf_use_fixed_filter are both false is considered non-conformant.

d. In one example, the alf _ temporal _ index may not be signaled when alf _ num _ available _ temporal _ filter _ sets _ luma is equal to 0.

e. The proposed method can be applied differently to different color components.

5. How many temporal ALF coefficient sets can be used for a slice group/slice/picture/CTB/block/video unit may depend on the number of available temporal ALF coefficient sets (denoted ALF_avai), e.g., sets of previously encoded/decoded ALF coefficients marked as "available".

a. In one example, no more than ALF_avai temporal ALF coefficient sets may be used for a slice group/slice/picture/CTB/block.

b. No more than min(N, ALF_avai) temporal ALF coefficient sets may be used for a slice group/slice/picture/CTB/block, where N > 0; for example, N = 5.

6. New ALF coefficient sets may be marked as "available" after they are encoded/decoded. Meanwhile, all "available" ALF coefficient sets may be marked as "unavailable" when an IRAP (intra random access point) access unit or/and IRAP picture or/and IDR (instantaneous decoding refresh) access unit or/and IDR picture is encountered. A sketch of this bookkeeping follows this item's sub-bullets.

an "available" ALF coefficient set may be used as a temporal ALF coefficient set for later coded pictures/slices/slice groups/slices/CTBs/blocks.

b. The "available" ALF coefficient sets may be maintained in a list of ALF coefficient sets whose maximum size is equal to N (N > 0).

i. The list of ALF coefficient sets may be maintained in a first-in-first-out order.

c. When marked as "unavailable," the associated ALF APS information is removed from the bitstream or replaced by other ALF APS information.
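A sketch of the availability bookkeeping in bullet 6, assuming a FIFO list of maximum size N that is flushed on IRAP/IDR; the class shape is an illustrative assumption, not a normative data structure.

```python
# Sketch of bullet 6: "available" ALF coefficient sets kept in a FIFO
# list of maximum size N and flushed on IRAP/IDR.

from collections import deque

class AlfSetList:
    def __init__(self, max_size=5):
        self.sets = deque(maxlen=max_size)   # oldest set dropped first

    def on_alf_set_coded(self, alf_coeff_set):
        # A newly encoded/decoded set becomes "available" as a temporal set.
        self.sets.append(alf_coeff_set)

    def on_irap_or_idr(self):
        # All "available" sets become "unavailable": empty the list.
        self.sets.clear()

    def num_available(self):
        return len(self.sets)
```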

7. A list of ALF coefficient sets may be maintained for each temporal layer.

8. One list of ALF coefficient sets may be maintained for K adjacent temporal layers.

9. Different ALF coefficient set lists may be maintained for different pictures, depending on whether a picture is predicted only from preceding pictures (in display order).

a. For example, one list of ALF coefficient sets may be maintained for pictures predicted only from previous pictures.

b. For example, one list of ALF coefficient sets may be maintained for pictures predicted from both preceding and following pictures.

10. After encountering an IRAP access unit or/and IRAP picture or/and IDR access unit or/and IDR picture, the ALF coefficient set list may be emptied.

11. Different lists of ALF coefficient sets may be maintained for different color components.

a. In one example, a list of ALF coefficient sets is maintained for the luma component.

b. In one example, a list of ALF coefficient sets is maintained for Cb or/and Cr components.

12. One list of ALF coefficient sets may be maintained; however, for different pictures/slice groups/slices/CTUs, the entries in the list may be assigned different indices (or priorities), as sketched after this item's sub-bullets.

a. In one example, ALF coefficient sets may be assigned indices in ascending order of the absolute temporal layer difference between the set and the current picture/slice group/slice/CTU.

b. In one example, ALF coefficient sets may be assigned indices in ascending order of the absolute POC (picture order count) difference between the set and the current picture/slice group/slice/CTU.

c. In one example, if K ALF coefficient sets are allowed for the current picture/slice group/slice/CTU, they may be the K ALF coefficient sets with the smallest indices.

d. In one example, the indication of which temporal ALF coefficient set is used by the current picture/slice group/slice/CTU may also depend on the assigned index instead of the original entry index in the list.
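A sketch of the re-indexing in bullets 12.b and 12.c, assuming each stored set carries its POC; the (poc, coeffs) tuple layout is an illustrative assumption.

```python
# Sketch of bullets 12.b/12.c: one shared list re-indexed per picture by
# ascending absolute POC difference; the K closest sets are kept.

def assign_indices_by_poc(alf_sets, current_poc, K):
    """alf_sets: list of (poc, coeffs). Returns the K closest sets."""
    ranked = sorted(alf_sets, key=lambda s: abs(s[0] - current_poc))
    return ranked[:K]   # indices 0..K-1 follow ascending |POC difference|
```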

13. The neighboring samples used in ALF may be classified into K (K >= 1) groups, and one set of clipping parameters may be signaled for each group.

14. Clipping parameters may be predefined for some or all of the fixed ALF filter sets.

a. Alternatively, the clipping parameters may be signaled for some or all of the fixed filter sets used by the current slice group/slice/picture/slice.

i. In one example, the clipping parameters may be signaled only for certain color components (e.g., the luma component).

b. Alternatively, when a fixed ALF filter set is used, clipping may not be performed.

i. In one example, clipping may be performed on certain color components, while clipping is not performed on other color components.

15. The clipping parameters may be stored together with the ALF coefficients and may be inherited by CTUs/CUs/slices/slice groups/slices/pictures that are later coded.

a. In one example, when a temporal ALF coefficient set is used by a CTU/CU/slice group/slice/picture, the corresponding ALF clipping parameters may also be used.

i. In one example, the clipping parameters may be inherited for only certain color components (e.g., luminance components).

b. Alternatively, the clipping parameters may be signaled when a temporal ALF coefficient set is used by a CTU/CU/slice group/slice/picture.

i. In one example, the clipping parameters may be signaled only for certain color components (e.g., the luma component).

c. In one example, the clipping parameters may be inherited for certain color components and signaled for other color components.

d. In one example, when using a time-domain ALF coefficient set, no clipping is performed.

i. In one example, clipping may be performed on certain color components, while clipping is not performed on other color components.

16. Whether or not non-linear ALF is used may depend on the ALF filter set type (e.g., fixed ALF filter set, time domain ALF filter set, or signaled ALF coefficient set).

a. In one example, if the current CTU uses a fixed ALF filter set or a time-domain ALF filter set (also referred to as using a previously signaled filter set), then the non-linear ALF may not be used for the current CTU.

b. In one example, when ALF _ luma _ use _ fixed _ filter _ flag is equal to 1, the non-linear ALF may be used for the current slice/slice group/slice/CTU.

17. The non-linear ALF clipping parameters may be conditionally signaled according to ALF filter set type (e.g., fixed ALF filter set, time domain ALF filter set, or signaled ALF coefficient set).

a. In one example, the non-linear ALF clipping parameters may be signaled for all ALF filter sets.

b. In one example, the non-linear ALF clipping parameters may be signaled only for the signaled ALF filter coefficient set.

c. In one example, the non-linear ALF clipping parameters may be signaled only for a fixed set of ALF filter coefficients.

The examples described above may be incorporated in the context of the methods described below (e.g., methods 1110, 1120, 1130, 1140, 1150, and 1160) that may be implemented at a video decoder and/or a video encoder.

Fig. 11A shows a flow diagram of an exemplary method for video processing. The method 1110 includes, at operation 1112, performing a filtering process on a current video block of the video, wherein the filtering process uses filter coefficients and includes two or more operations with at least one intermediate result.

The method 1110 includes, at operation 1114, applying a clipping operation to at least one intermediate result.

The method 1110 includes, at operation 1116, performing a conversion between the current video block and a bitstream representation of the video based on the at least one intermediate result. In some embodiments, the at least one intermediate result is based on a weighted sum of the filter coefficients and differences between a current sample of the current video block and neighboring samples of the current sample.

Fig. 11B shows a flow diagram of an exemplary method for video processing. The method 1120 includes, at operation 1122, encoding a current video block of video into a bitstream representation of the video, wherein the current video block is coded with an Adaptive Loop Filter (ALF).

The method 1120 includes, at operation 1124, selectively including in the bitstream representation an indication of a set of time domain adaptive filters within the one or more sets of time domain adaptive filters based on availability or use of the one or more sets of time domain adaptive filters.

Fig. 11C shows a flow diagram of an exemplary method for video processing. The method 1130 includes, at operation 1132, determining availability or use of one or more sets of temporal adaptive filters based on an indication of the sets of temporal adaptive filters in a bitstream representation of the video, wherein the one or more sets of temporal adaptive filters include sets of temporal adaptive filters applicable to a current video block of the video coded with an Adaptive Loop Filter (ALF).

The method 1130 includes, at operation 1134, generating a decoded current video block from the bitstream representation by selectively applying a set of time-domain adaptive filters based on the determination.

Fig. 11D shows a flow diagram of an exemplary method for video processing. The method 1140 includes, in operation 1142, determining a plurality of sets of temporal Adaptive Loop Filter (ALF) coefficients for a current video block coded with an adaptive loop filter based on a set of available ALF coefficients, wherein the set of available temporal ALF coefficients has been encoded or decoded prior to the determining, and wherein the plurality of sets of ALF coefficients are for a slice group, a slice, a picture, a Coding Tree Block (CTB), or a video unit that includes the current video block.

Method 1140 includes, at operation 1144, performing a conversion between a current video block and a bitstream representation of the current video block based on a plurality of sets of time domain ALF coefficients.

Fig. 11E shows a flow diagram of an exemplary method for video processing. The method 1150 includes, at operation 1152, determining, for a transition between a current video block of the video and a bitstream representation of the video, that an indication of an Adaptive Loop Filtering (ALF) in a header of a video region of the video is equal to an indication of an ALF in an Adaptive Parameter Set (APS) Network Abstraction Layer (NAL) unit associated with the bitstream representation.

The method 1150 includes, at operation 1154, performing a transformation.

Fig. 11F shows a flow diagram of an exemplary method for video processing. The method 1160 includes, at operation 1162, selectively enabling a non-linear Adaptive Loop Filtering (ALF) operation for transitions between a current video block of the video and a bitstream representation of the video based on a type of adaptive loop filter used for a video region of the video.

The method 1160 includes, at operation 1164, performing the conversion after the selectively enabling.

10 exemplary embodiments of the disclosed technology

10.1 example #1

Assume that one ALF coefficient set list is maintained for luma and one for chroma, with sizes lumaALFSetSize and chromaALFSetSize, respectively. The maximum sizes of the two lists are lumaALFSetMax (e.g., lumaALFSetMax equal to 5) and chromaALFSetMax (e.g., chromaALFSetMax equal to 5), respectively.

Newly added parts are enclosed in double braces, i.e., {{a}} indicates that "a" is added, and deleted parts are enclosed in double square brackets, i.e., [[a]] indicates that "a" is deleted.

7.3.3.2 adaptive loop filter data syntax

7.3.4.2 coding and decoding tree unit syntax

alf_signal_new_filter_luma equal to 1 specifies that a new luma filter set is signaled. alf_signal_new_filter_luma equal to 0 specifies that a new luma filter set is not signaled. When not present, it is inferred to be 0.

alf_luma_use_fixed_filter_flag equal to 1 specifies that a fixed filter set is used to signal the adaptive loop filter. alf_luma_use_fixed_filter_flag equal to 0 specifies that a fixed filter set is not used to signal the adaptive loop filter.

alf_num_available_temporal_filter_sets_luma specifies the number of available temporal filter sets that may be used for the current slice; it may range from 0 to [[5]] {{lumaALFSetSize}}. When not present, it is inferred to be 0.

{{Constraint: alf_signal_new_filter_luma or alf_luma_use_fixed_filter_flag must equal 1 when alf_num_available_temporal_filter_sets_luma is equal to zero.}}

alf_signal_new_filter_chroma equal to 1 specifies that a new chroma filter is signaled. alf_signal_new_filter_chroma equal to 0 specifies that a new chroma filter is not signaled.

{{Constraint: alf_signal_new_filter_chroma must equal 1 when chromaALFSetSize equals 0.}}
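A sketch of a checker for the two constraints added above; such a checker is illustrative only, with flag names following the semantics just given.

```python
# Illustrative checker for the two bitstream constraints in this example.

def check_alf_constraints(alf_signal_new_filter_luma,
                          alf_luma_use_fixed_filter_flag,
                          alf_num_available_temporal_filter_sets_luma,
                          alf_signal_new_filter_chroma,
                          chroma_alf_set_size):
    if alf_num_available_temporal_filter_sets_luma == 0:
        # No temporal luma sets: a new or fixed luma filter set must be used.
        assert alf_signal_new_filter_luma == 1 or alf_luma_use_fixed_filter_flag == 1
    if chroma_alf_set_size == 0:
        # No temporal chroma sets: a new chroma filter must be signaled.
        assert alf_signal_new_filter_chroma == 1
```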

10.2 example #2

Assume that one ALF coefficient set list is maintained for luma and one for chroma, with sizes lumaALFSetSize and chromaALFSetSize, respectively. The maximum sizes of the two lists are lumaALFSetMax (e.g., lumaALFSetMax equal to 5) and chromaALFSetMax (e.g., chromaALFSetMax equal to 5), respectively.

Newly added parts are enclosed in double braces, i.e., {{a}} indicates that "a" is added, and deleted parts are enclosed in double square brackets, i.e., [[a]] indicates that "a" is deleted.

7.3.3.2 adaptive loop filter data syntax

7.3.4.2 coding and decoding tree unit syntax

alf_signal_new_filter_luma equal to 1 specifies that a new luma filter set is signaled. alf_signal_new_filter_luma equal to 0 specifies that a new luma filter set is not signaled. When not present, it is inferred to be 0.

alf_luma_use_fixed_filter_flag equal to 1 specifies that a fixed filter set is used to signal the adaptive loop filter. alf_luma_use_fixed_filter_flag equal to 0 specifies that a fixed filter set is not used to signal the adaptive loop filter.

alf_num_available_temporal_filter_sets_luma specifies the number of available temporal filter sets that may be used for the current slice; it may range from 0 to [[5]] {{lumaALFSetSize}}. When not present, it is inferred to be 0.

{{Constraint: alf_signal_new_filter_luma or alf_luma_use_fixed_filter_flag must equal 1 when alf_num_available_temporal_filter_sets_luma is equal to zero.}}

alf_signal_new_filter_chroma equal to 1 specifies that a new chroma filter is signaled. alf_signal_new_filter_chroma equal to 0 specifies that a new chroma filter is not signaled.

{{Constraint: alf_signal_new_filter_chroma must equal 1 when chromaALFSetSize equals 0.}}

In some embodiments, the following technical solutions may be implemented:

A1. a method for video processing, comprising: performing a filtering process on a current video block of the video, wherein the filtering process uses filter coefficients and includes two or more operations with at least one intermediate result; applying a clipping operation to the at least one intermediate result; and performing a conversion between the current video block and a bitstream representation of the video based on at least one intermediate result, wherein the at least one intermediate result is based on a weighted sum of filter coefficients and differences between a current sample of the current video block and neighboring samples of the current sample.

A2. The method of solution a1, further comprising classifying, for a current sample, neighboring samples of the current sample into a plurality of groups, wherein the clipping operation is applied to intermediate results in each of the plurality of groups with different parameters.

A3. The method of solution a2, wherein the at least one intermediate result comprises a weighted average of differences between the current sample and neighboring samples in each of the plurality of groups.

A4. The method of solution a1, wherein a plurality of neighboring samples of the current video block share filter coefficients, and wherein the clipping operation is applied once to each of the plurality of neighboring samples.

A5. The method of solution a4, wherein at least two adjacent samples of the plurality of adjacent samples are located symmetrically with respect to a sample of the current video block.

A6. The method of solution a4 or a5, wherein the filter shape associated with the filtering process is a symmetric pattern.

A7. The method of any of solutions a 4-a 6, wherein one or more parameters of a clipping operation are signaled in a bitstream representation.

A8. The method of solution a1, wherein the samples of the current video block include N neighboring samples, wherein a clipping operation is applied once to M1 of the N neighboring samples, wherein M1 and N are positive integers and M1 ≦ N.

A9. The method of solution a1, further comprising classifying, for a sample of the current video block, N neighboring samples of the sample into M2 groups, wherein the clipping operation is applied once to each of the M2 groups, and wherein M2 and N are positive integers.

A10. The method of solution a1, wherein the clipping operation is applied to a luma component associated with a current video block.

A11. The method of solution a1, wherein a clipping operation is applied to a Cb component or a Cr component associated with a current video block.

A12. The method of any of solutions A1-A11, wherein a clipping operation is defined as K (min, max, input), where input is an input to the clipping operation, min is a nominal minimum value of an output of the clipping operation, and max is a nominal maximum value of the output of the clipping operation.

A13. The method of solution a12, wherein an actual maximum value of the output of the clipping operation is less than a nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is greater than a nominal minimum value.

A14. The method of solution a12, wherein an actual maximum value of the output of the clipping operation is equal to a nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is greater than a nominal minimum value.

A15. The method of solution a12, wherein an actual maximum value of the output of the clipping operation is less than a nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is equal to the nominal minimum value.

A16. The method of solution a12, wherein the actual maximum value of the output of the clipping operation is equal to the nominal maximum value, and wherein the actual minimum value of the output of the clipping operation is equal to the nominal minimum value.

A17. The method of solution a1, wherein the filtering process comprises an Adaptive Loop Filtering (ALF) process configured with multiple ALF filter coefficient sets.

A18. The method of solution a17, wherein at least one parameter for a clipping operation is predefined for one or more of a plurality of ALF filter coefficient sets.

A19. The method of solution a17, wherein at least one parameter for a cropping operation is signaled in a bitstream representation of a slice group, slice, picture, or slice that includes a current video block.

A20. The method of solution a19, wherein at least one parameter is signaled only for one or more color components associated with a current video block.

A21. The method of solution a17, wherein at least one of the plurality of ALF filter coefficient sets and one or more parameters for the clipping operation are stored in a same memory location, and wherein at least one of the plurality of ALF filter coefficient sets or the one or more parameters are inherited by a Codec Tree Unit (CTU), a Codec Unit (CU), a slice group, a slice, or a picture that includes the current video block.

A22. The method of solution a21, wherein the clipping operation is configured to use one or more parameters corresponding to a temporal ALF coefficient set of the plurality of ALF filter coefficient sets in determining the temporal ALF coefficient set for a filtering process for a CTU, CU, slice group, slice, or picture that includes the current video block.

A23. The method of solution a22, wherein the one or more parameters corresponding to the set of temporal ALF coefficients are for only one or more color components associated with the current video block.

A24. The method of solution a21, wherein, in determining a temporal ALF coefficient set for use in a filtering process that includes a CTU, CU, slice group, slice, or picture of a current video block, one or more parameters corresponding to the temporal ALF coefficient set of the plurality of ALF filter coefficient sets are signaled in a bitstream representation.

A25. The method of solution a24, wherein the one or more parameters corresponding to the set of temporal ALF coefficients are signaled only for the one or more color components associated with the current video block.

A26. The method of solution a21, wherein a first set of one or more parameters of a first color component associated with a current video block is signaled, and wherein a second set of one or more parameters of a second color component associated with the current video block is inherited.

A27. The method of any of solutions a 1-a 26, wherein the converting generates the current video block from a bit stream representation.

A28. The method of any of solutions a 1-a 26, wherein the converting generates a bitstream representation from a current video block.

A29. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement a method according to any of solutions a 1-a 28.

A30. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing a method according to any of solutions a 1-a 28.

In some embodiments, the following technical solutions may be implemented:

B1. a method for video processing, comprising: encoding a current video block of a video into a bitstream representation of the video, wherein the current video block is encoded using an Adaptive Loop Filter (ALF); and selectively include, in the bitstream representation, an indication of a set of time domain adaptive filters within the one or more sets of time domain adaptive filters based on availability or use of the one or more sets of time domain adaptive filters.

B2. The method according to solution B1, wherein in case a set of time-domain adaptive filters is not available, an indication of the set is excluded from the bitstream representation.

B3. The method according to solution B1 or B2, wherein, in case a set of time domain adaptive filters is not available, an indication of the set is included in the bitstream representation.

B4. The method according to any of solutions B1-B3, wherein the indication is excluded from the bitstream representation in case none of the one or more sets of time domain adaptive filters is available.

B5. The method of any of solutions B1-B3, wherein the indication to use a fixed filter should equal true in case none of the one or more sets of time-domain adaptive filters is available.

B6. The method according to any of solutions B1-B3, wherein the indication to use a time-domain adaptive filter should be equal to false in case none of the one or more sets of time-domain adaptive filters is available.

B7. The method according to any of solutions B1-B3, wherein in case none of the one or more sets of time domain adaptive filters is available, an indication of the index of the fixed filter is included in the bitstream representation.

B8. A method for video processing, comprising: determining availability or use of one or more sets of temporal adaptive filters based on an indication of the sets of temporal adaptive filters in a bitstream representation of the video, wherein the one or more sets of temporal adaptive filters comprise sets of temporal adaptive filters applicable to a current video block of the video coded with an Adaptive Loop Filter (ALF); and generating a decoded current video block from the bitstream representation by selectively applying a set of time-domain adaptive filters based on the determination.

B9. The method according to solution B8, wherein, in case a set of time-domain adaptive filters is not available, the generating is performed without applying the set of time-domain adaptive filters.

B10. The method according to solution B8 or B9, wherein, in case the set of time-domain adaptive filters is not available, performing the generating comprises applying the set of time-domain adaptive filters.

B11. The method of any of solutions B1-B10, wherein one or more sets of time-domain adaptive filters are included in an Adaptive Parameter Set (APS), and wherein the indication is an APS index.

B12. The method of any of solutions B1-B10, further comprising: a filter index for at least one of the one or more sets of time-domain adaptive filters is determined based on the gradient calculations in the different directions.

B13. The method of any of solutions B1-B11, further comprising: determining that none of the one or more sets of temporal adaptive filters are available and that the new set of ALF coefficients and the fixed set of ALF coefficients are not used for a Coding Tree Block (CTB), block, slice group, slice, or picture that includes the current video block; and inferring that adaptive loop filtering is disabled based on the determination.

B14. The method of any of solutions B1-B11, wherein, in response to at least one of the one or more sets of time domain adaptive filters being unavailable, the bitstream representation includes a first indication of use of a new set of ALF coefficients and a second indication of use of a fixed set of ALF coefficients, and wherein exactly one of the first indication and the second indication is true in the bitstream representation.

B15. The method of solution B14, wherein the bitstream representation conforms to format rules associated with the operation of the ALF.

B16. The method of any of solutions B1-B11, wherein, in response to none of the one or more sets of time domain adaptive filters being available, the bitstream representation includes an indication that ALF is enabled and that new sets of ALF coefficients and fixed sets of ALF coefficients are not used for a Coding Tree Block (CTB), block, slice group, slice, or picture that includes the current video block.

B17. The method of solution B16, wherein the bitstream representation does not comply with format rules associated with the operation of ALF.

B18. The method of any of solutions B1-B17, wherein ALF is applied to one or more color components associated with a current video block.

B19. A method for video processing, comprising: determining a plurality of sets of temporal Adaptive Loop Filter (ALF) coefficients for a current video block coded with an adaptive loop filter based on a set of available ALF coefficients, wherein the set of available temporal ALF coefficients has been encoded or decoded prior to the determining, and wherein the plurality of sets of ALF coefficients are for a slice group, slice, picture, Coded Tree Block (CTB), or video unit that includes the current video block; and performing a conversion between the current video block and a bitstream representation of the current video block based on the plurality of sets of time domain ALF coefficients.

B20. The method of solution B19, wherein a maximum number of the plurality of sets of time domain ALF coefficients is set equal to a number of sets of available time domain ALF coefficients.

B21. The method of solution B20, wherein a number of sets of time domain ALF coefficients is set equal to the smaller of a number of sets of time domain ALF coefficients available and a predefined number N, where N is an integer, and where N ≧ 0.

B22. The method of solution B21, wherein N = 5.

B23. A method of video processing, comprising: processing one or more new sets of Adaptive Loop Filtering (ALF) coefficients as part of a transition between a current video block of the video and a bitstream representation of the video, wherein the current video block is encoded with an adaptive loop filter; and after the processing, designating the one or more new ALF coefficient sets as available ALF coefficient sets.

B24. The method of solution B23, further comprising: encountering an Intra Random Access Point (IRAP) access unit, an IRAP picture, an Instantaneous Decode Refresh (IDR) access unit, or an IDR picture; and designating an available ALF coefficient set as an unavailable ALF coefficient set based on the encountering.

B25. The method of solution B23 or B24, wherein at least one of the available sets of ALF coefficients is a set of temporal ALF coefficients of a video block subsequent to the current video block.

B26. The method of any of solutions B23-B25, wherein available ALF coefficient sets are maintained in a list of ALF coefficient sets of maximum size N, where N is an integer.

B27. The method of solution B26, wherein the ALF coefficient set list is maintained in first-in-first-out (FIFO) order.

B28. The method of any of solutions B1-B27, wherein one list of ALF coefficient sets is maintained for each temporal layer associated with a current video block.

B29. The method of any of solutions B1-B27, wherein one list of ALF coefficient sets is maintained for K adjacent temporal layers associated with a current video block.

B30. The method of any of solutions B1-B27, wherein a first list of ALF coefficient sets is maintained for a current picture that includes a current video block, and wherein a second list of ALF coefficient sets is maintained for pictures subsequent to the current picture.

B31. The method of solution B30, wherein pictures subsequent to the current picture are predicted based on the current picture, and wherein the first list of ALF coefficient sets is the same as the second list of ALF coefficient sets.

B32. The method of solution B30, wherein the current picture is predicted based on a picture subsequent to the current picture and a picture prior to the current picture, and wherein the first list of ALF coefficient sets is the same as the second list of ALF coefficient sets.

B33. The method of solution B23, further comprising: encountering an Intra Random Access Point (IRAP) access unit, an IRAP picture, an Instantaneous Decode Refresh (IDR) access unit, or an IDR picture; and clearing the one or more lists of ALF coefficient sets after the encounter.

B34. The method of solution B23, wherein different lists of ALF coefficient sets are maintained for different color components associated with a current video block.

B35. The method of solution B34, wherein the different color components include one or more of a luminance component, a Cr component, and a Cb component.

B36. The method of solution B23, wherein one ALF coefficient set list is maintained for multiple pictures, slice groups, slices, or Codec Tree Units (CTUs), and wherein an index in one ALF coefficient set list is different for each of the multiple pictures, slice groups, slices, or Codec Tree Units (CTUs).

B37. The method of solution B36, wherein the index is in ascending order and is based on a first temporal layer index associated with a current video block and a second temporal layer index associated with a current picture, slice group, slice, or Coding Tree Unit (CTU) that includes the current video block.

B38. The method of solution B36, wherein the index is in ascending order and is based on a Picture Order Count (POC) associated with a current video block and a second POC associated with a current picture, slice group, slice, or Coding Tree Unit (CTU) that includes the current video block.

B39. The method of solution B36, wherein the index comprises a minimum index assigned to the available ALF coefficient set.

B40. The method of solution B23, wherein the converting includes a clipping operation, and the method further comprises: classifying adjacent sampling points of the sampling points into a plurality of groups aiming at the sampling points of the current video block; and using a single set of parameters signaled in the bitstream representation for the clipping operation of each of the plurality of groups.

B41. The method of solution B23, wherein the converting includes a clipping operation, and wherein a set of parameters for the clipping operation is predefined for one or more new ALF coefficient sets.

B42. The method of solution B23, wherein the converting comprises a clipping operation, and wherein a set of parameters for the clipping operation is signaled in a bit stream representation of the one or more new sets of ALF coefficients.

B43. A method for video processing, comprising: determining, for a transition between a current video block of a video and a bitstream representation of the video, an indication of Adaptive Loop Filtering (ALF) in a header of a video region of the video equal to an indication of ALF in an Adaptive Parameter Set (APS) Network Abstraction Layer (NAL) unit associated with the bitstream representation; and performing the conversion.

B44. The method according to solution B43, wherein the video area is a picture.

B45. The method according to solution B43, wherein the video area is a slice.

B46. A method for video processing, comprising: selectively enabling a non-linear Adaptive Loop Filtering (ALF) operation for transitions between a current video block of the video and a bitstream representation of the video based on a type of adaptive loop filter used by a video region of the video; and performing a transition after the selectively enabling.

B47. The method of solution B46, wherein the video region is a Codec Tree Unit (CTU), and wherein the non-linear ALF operation is disabled when it is determined that the type of adaptive loop filter comprises a fixed ALF set or a time domain ALF set.

B48. The method of solution B46, wherein the video region is a slice, slice group, slice, or Codec Tree Unit (CTU), and wherein the non-linear ALF operation is enabled when it is determined that the type of adaptive loop filter comprises a fixed ALF set.

B49. The method of solution B46, further comprising: one or more clipping parameters for a non-linear ALF operation are selectively signaled in a bit stream representation.

B50. The method of solution B49, wherein one or more clipping parameters are signaled.

B51. The method of solution B49, wherein the one or more clipping parameters are signaled for an ALF filter coefficient set signaled in the bitstream representation.

B52. The method of solution B49, wherein the one or more clipping parameters are signaled when it is determined that the type of adaptive loop filter includes a fixed ALF set.

B53. The method of any of solutions B19-B52, wherein the converting generates the current video block from a bit stream representation.

B54. The method of any of solutions B19-B52, wherein the converting generates a bitstream representation from the current video block.

B55. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement a method according to any of solutions B1-B54.

B56. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing a method according to any of solutions B1-B54.

In some embodiments, the following technical solutions may be implemented:

C1. a method for video processing, comprising: performing a filtering process on the current video block, wherein the filtering process includes two or more operations with at least one intermediate result; applying a clipping operation to the at least one intermediate result; and performing a conversion between the current video block to a bitstream representation of the current video block based on the filtering operation.

C2. The method of solution C1, further comprising classifying neighboring samples of samples into a plurality of groups for samples of the current video block, wherein the clipping operation is applied to intermediate results in each of the plurality of groups with different parameters.

C3. The method of solution C2, wherein the at least one intermediate result comprises a weighted average of differences between the current sample and neighboring samples in each of the plurality of groups.

C4. The method according to solution C2, wherein the filtering process uses filter coefficients, and wherein the at least one intermediate result comprises a weighted sum of the filter coefficients and differences between the current sample and neighboring samples.

C5. The method of solution C1, wherein a plurality of neighboring samples of the current video block share filter coefficients, and wherein the clipping operation is applied once to each of the plurality of neighboring samples.

C6. The method of solution C5, wherein the filter shape associated with the filtering operation is a symmetric pattern.

C7. The method of solution C5 or C6, wherein one or more parameters of the clipping operation are signaled in a bit stream representation.

C8. The method of any of solutions C1-C7, wherein a clipping operation is defined as K (min, max, input), where input is an input to the clipping operation, min is a nominal minimum value of an output of the clipping operation, and max is a nominal maximum value of the output of the clipping operation.

C9. The method of solution C8, wherein an actual maximum value of the output of the clipping operation is less than a nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is greater than a nominal minimum value.

C10. The method of solution C8, wherein an actual maximum value of the output of the clipping operation is equal to the nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is greater than the nominal minimum value.

C11. The method of solution C8, wherein an actual maximum value of the output of the clipping operation is less than a nominal maximum value, and wherein an actual minimum value of the output of the clipping operation is equal to the nominal minimum value.

C12. The method of solution C8, wherein the actual maximum value of the output of the clipping operation is equal to the nominal maximum value, and wherein the actual minimum value of the output of the clipping operation is equal to the nominal minimum value.

C13. A method for video processing, comprising, based on unavailability of a set of time-domain adaptive loop filter coefficients, performing a transition between a current video block and a bitstream representation of the current video block such that the bitstream representation omits an indication of the set of time-domain adaptive loop filter coefficients.

C14. The method of solution C13, further comprising: determining that new Adaptive Loop Filter (ALF) coefficients and fixed ALF coefficients are not used for a Coding Tree Block (CTB), block, slice group, slice, or picture that includes the current video block; and inferring that adaptive loop filtering is disabled.

C15. The method of solution C13, wherein the consistent bit stream includes an indication of new Adaptive Loop Filter (ALF) coefficients or an indication of fixed ALF coefficients.

C16. A method for video processing, comprising: determining one or more sets of temporal Adaptive Loop Filter (ALF) coefficients for a current video block based on a set of available ALF coefficients, wherein the set of available ALF coefficients has been encoded or decoded prior to the determining; and performing a conversion between the current video block and a bitstream representation of the current video block based on the one or more sets of time domain ALF coefficients.

C17. The method of solution C16, wherein the maximum number of the one or more sets of time-domain ALF coefficients is ALF_available.

C18. The method of solution C17, wherein the number of the one or more sets of time-domain ALF coefficients is min(N, ALF_available), wherein N is an integer, and wherein N ≥ 0.

C19. The method of solution C18, wherein N = 5.

C20. A method of video processing, comprising: processing one or more new sets of Adaptive Loop Filter (ALF) coefficients for a current video block; after the processing, designating the one or more new ALF coefficient sets as available ALF coefficient sets; and performing a conversion between the current video block and a bitstream representation of the current video block based on the available ALF coefficient sets.

C21. The method of solution C20, further comprising: encountering an Intra Random Access Point (IRAP) access unit, an IRAP picture, an Instantaneous Decoding Refresh (IDR) access unit, or an IDR picture; and designating the available ALF coefficient sets as unavailable ALF coefficient sets.

C22. The method of solution C20 or C21, wherein the available ALF coefficient sets are used as temporal ALF coefficients for video blocks subsequent to the current video block.

C23. The method of any of solutions C20-C22, wherein the available ALF coefficient sets are maintained in a list of ALF coefficient sets of maximum size N, wherein N is an integer.

C24. The method of solution C23, wherein the list of ALF coefficient sets is maintained in first-in-first-out (FIFO) order.
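
Solutions C20-C24 and C29 together describe a bounded, FIFO-ordered list of available ALF coefficient sets that is emptied at IRAP/IDR boundaries. A minimal C++ sketch follows, assuming a single list; all names are illustrative.

    #include <cstddef>
    #include <deque>
    #include <vector>

    struct AlfCoeffSet { std::vector<int> coeffs; };  // placeholder payload

    class AlfCoeffSetList {
    public:
        explicit AlfCoeffSetList(std::size_t maxSize) : maxSize_(maxSize) {}

        // Solution C20: after a new set is processed, designate it available.
        // Solution C24: the list is maintained in first-in-first-out order.
        void markAvailable(const AlfCoeffSet& s) {
            if (list_.size() == maxSize_) list_.pop_front();  // evict oldest
            list_.push_back(s);
        }

        // Solutions C21/C29: on an IRAP/IDR access unit or picture, the
        // available sets become unavailable (here, the list is cleared).
        void onIrapOrIdr() { list_.clear(); }

        std::size_t numAvailable() const { return list_.size(); }

    private:
        std::size_t maxSize_;            // N in solution C23
        std::deque<AlfCoeffSet> list_;
    };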

C25. The method of any of solutions C13-C24, wherein one list of ALF coefficient sets is maintained for each temporal layer associated with the current video block.

C26. The method of any of solutions C13-C24, wherein one list of ALF coefficient sets is maintained for K adjacent temporal layers associated with the current video block.

C27. The method of any of solutions C13-C24, wherein a first list of ALF coefficient sets is maintained for a current picture that includes the current video block, and wherein a second list of ALF coefficient sets is maintained for pictures subsequent to the current picture.

C28. The method of solution C27, wherein pictures subsequent to the current picture are predicted based on the current picture, and wherein the first list of ALF coefficient sets is the same as the second list of ALF coefficient sets.
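
Solutions C25-C27 vary only in how the lists are keyed. One way to realize the per-temporal-layer variants is sketched below in C++; grouping K adjacent layers by integer division is an assumption made only for illustration.

    #include <deque>
    #include <map>
    #include <vector>

    using AlfSet = std::vector<int>;        // placeholder coefficient set
    using AlfSetList = std::deque<AlfSet>;  // one FIFO list per key

    // Solution C25: one list per temporal layer (K = 1).
    // Solution C26: one list shared by K adjacent temporal layers (K > 1).
    static AlfSetList& listForTemporalLayer(std::map<int, AlfSetList>& lists,
                                            int temporalLayer, int K = 1) {
        const int key = (K > 1) ? temporalLayer / K : temporalLayer;
        return lists[key];  // creates an empty list on first use
    }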

C29. The method of solution C20, further comprising: encountering an Intra Random Access Point (IRAP) access unit, an IRAP picture, an Instantaneous Decoding Refresh (IDR) access unit, or an IDR picture; and clearing the one or more lists of ALF coefficient sets upon the encountering.

C30. The method of solution C20, wherein different lists of ALF coefficient sets are maintained for different color components of the current video block.

C31. An apparatus in a video system comprising a processor and a non-transitory memory having instructions thereon, wherein the instructions, when executed by the processor, cause the processor to implement a method according to any of solutions C1-C30.

C32. A computer program product stored on a non-transitory computer readable medium, the computer program product comprising program code for performing a method according to any of solutions C1-C30.

Fig. 12 is a block diagram of a video processing apparatus 1200. The apparatus 1200 may be used to implement one or more of the methods described herein. The apparatus 1200 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, or the like. The apparatus 1200 may include one or more processors 1202, one or more memories 1204, and video processing hardware 1206. The processor(s) 1202 may be configured to implement one or more of the methods described in this document (including, but not limited to, methods 1100 and 1150). The memory (or memories) 1204 may be used to store data and code for implementing the methods and techniques described herein. The video processing hardware 1206 may be used to implement, in hardware circuitry, some of the techniques described in this document.

In some embodiments, the video codec method may be implemented using an apparatus implemented on a hardware platform as described with respect to Fig. 12.

Some embodiments of the disclosed technology include making a decision or determination to enable a video processing tool or mode. In an example, when a video processing tool or mode is enabled, the encoder will use or implement the tool or mode in the processing of blocks of video, but may not necessarily modify the resulting bitstream based on the use of the tool or mode. That is, when a video processing tool or mode is enabled based on the decision or determination, the conversion from a block of video to a bitstream representation of the video will use that video processing tool or mode. In another example, when a video processing tool or mode is enabled, the decoder will process the bitstream with the knowledge that the bitstream has been modified based on the video processing tool or mode. That is, the conversion from a bitstream representation of the video to a block of video will be performed using the video processing tool or mode that was enabled based on the decision or determination.

Some embodiments of the disclosed technology include making a decision or determination to disable a video processing tool or mode. In an example, when a video processing tool or mode is disabled, the encoder will not use the tool or mode in the conversion of a block of video to a bitstream representation of the video. In another example, when a video processing tool or mode is disabled, the decoder will process the bitstream with the knowledge that the bitstream was not modified using the video processing tool or mode that was disabled based on the decision or determination.

Fig. 13 is a block diagram illustrating an example video processing system 1300 in which various techniques disclosed herein may be implemented. Various embodiments may include some or all of the components of the system 1300. The system 1300 may include an input 1302 for receiving video content. The video content may be received in a raw or uncompressed format, e.g., 8- or 10-bit multi-component pixel values, or may be in a compressed or codec format. The input 1302 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interfaces include wired interfaces such as Ethernet and passive optical network (PON), and wireless interfaces such as Wi-Fi or cellular interfaces.

The system 1300 may include a codec component 1304 that may implement the various codec methods described in this document. The codec component 1304 may reduce the average bit rate of the video from the input 1302 to the output of the codec component 1304 to produce a codec representation of the video. Codec techniques are therefore sometimes referred to as video compression or video transcoding techniques. The output of the codec component 1304 may be stored or transmitted via a communication connection, as represented by the component 1306. The stored or communicated bitstream (or coded) representation of the video received at the input 1302 may be used by the component 1308 to generate pixel values or displayable video that is sent to a display interface 1310. The process of generating user-viewable video from a bitstream representation is sometimes referred to as video decompression. Further, while certain video processing operations are referred to as "codec" operations or tools, it will be understood that the codec tools or operations are used at an encoder and that corresponding decoding tools or operations that reverse the results of the encoding will be performed by a decoder.

Examples of a peripheral bus interface or a display interface may include a Universal Serial Bus (USB), a High Definition Multimedia Interface (HDMI), DisplayPort, and the like. Examples of storage interfaces include SATA (Serial Advanced Technology Attachment), PCI, and IDE interfaces. The techniques described in this document may be embodied in various electronic devices, such as mobile phones, laptops, smartphones, or other devices capable of performing digital data processing and/or video display.

From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without departing from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.

Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing unit" or "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

A computer program (also known as a program, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit).

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.

Only some embodiments and examples are described and other embodiments, enhancements and variations can be made based on what is described and illustrated in this patent document.
