Method and apparatus for adaptive illumination compensation in video encoding and decoding

Document No.: 1367530    Publication date: 2020-08-11

Description: The present technology, "Method and apparatus for adaptive illumination compensation in video encoding and decoding", was created by F. Le Leannec, F. Galpin, T. Poirier, and Y. Chen on 2019-01-22. Abstract: Various implementations are described for determining one or more illumination compensation parameters for a current block encoded by a video encoder or decoded by a video decoder. A plurality of motion vectors for a current block being encoded in a picture is determined. One or more illumination compensation parameters for each of the plurality of motion vectors are determined and encoded or decoded. Then, the current block is encoded or decoded using the plurality of motion vectors and the one or more illumination compensation parameters for each of the plurality of motion vectors. In one embodiment, a flag is used to signal whether illumination compensation is used or not used. In another embodiment, if illumination compensation is not used, the illumination compensation flag is not encoded or decoded.

1. A method of encoding video data, comprising:

determining a plurality of motion vectors for a current block being encoded in a picture;

determining one or more illumination compensation parameters for each of the plurality of motion vectors, wherein an illumination compensation flag is provided for each of the plurality of motion vectors, the illumination compensation flag indicating that the current block is encoded using the one or more illumination compensation parameters;

encoding a first illumination compensation flag corresponding to a first motion vector of the plurality of motion vectors based on a second illumination compensation flag corresponding to a second motion vector of the plurality of motion vectors; and

encoding the current block using the plurality of motion vectors and the one or more illumination compensation parameters for each of the plurality of motion vectors.

2. A method of decoding video data, comprising:

determining a plurality of motion vectors for a current block in a picture being decoded;

determining an illumination compensation flag for each of the plurality of motion vectors, the illumination compensation flag indicating that the current block is to be decoded using the one or more illumination compensation parameters, wherein a first illumination compensation flag corresponding to a first motion vector of the plurality of motion vectors is determined based on a second illumination compensation flag corresponding to a second motion vector of the plurality of motion vectors;

determining one or more illumination compensation parameters for each of the plurality of motion vectors; and

decoding the current block using the plurality of motion vectors and the one or more illumination compensation parameters for each of the plurality of motion vectors.

3. An apparatus for encoding video data, comprising at least one memory and one or more processors, wherein the one or more processors are configured to:

determine a plurality of motion vectors for a current block being encoded in a picture;

determine one or more illumination compensation parameters for each of the plurality of motion vectors, wherein an illumination compensation flag is provided for each of the plurality of motion vectors, the illumination compensation flag indicating that the current block is encoded using the one or more illumination compensation parameters;

encode a first illumination compensation flag corresponding to a first motion vector of the plurality of motion vectors based on a second illumination compensation flag corresponding to a second motion vector of the plurality of motion vectors; and

encode the current block using the plurality of motion vectors and the one or more illumination compensation parameters for each of the plurality of motion vectors.

4. An apparatus for decoding video data, comprising at least one memory and one or more processors, wherein the one or more processors are configured to:

determine a plurality of motion vectors for a current block in a picture being decoded;

determine an illumination compensation flag for each of the plurality of motion vectors, the illumination compensation flag indicating that the current block is to be decoded using the one or more illumination compensation parameters, wherein a first illumination compensation flag corresponding to a first motion vector of the plurality of motion vectors is determined based on a second illumination compensation flag corresponding to a second motion vector of the plurality of motion vectors;

determine one or more illumination compensation parameters for each of the plurality of motion vectors; and

decode the current block using the plurality of motion vectors and the one or more illumination compensation parameters for each of the plurality of motion vectors.

5. The method of claim 1 or 2, or the apparatus of claim 3 or 4, wherein the first illumination compensation flag is encoded or decoded using a different context than the second illumination compensation flag.

6. The method of claim 5, or the apparatus of claim 5, wherein the illumination compensation flag is predicted from an illumination compensation parameter.

7. The method of any one of claims 1, 2, 5, or 6, or the apparatus of any one of claims 3-6, wherein the plurality of motion vectors are determined from one or more reference pictures.

8. The method of claim 7, or the apparatus of claim 7, wherein the one or more reference pictures are indexed in one or more reference picture lists.

9. The method of any of claims 1, 2 and 5-8, or the apparatus of any of claims 3-8, wherein the current block is a bi-directionally predicted prediction block, wherein the plurality of motion vectors is one motion vector pointing to a first reference picture block and another motion vector pointing to a second reference picture block.

10. The method of any of claims 1, 2, and 5-9, or the apparatus of any of claims 3-9, wherein the current block is inter-coded.

11. The method of any one of claims 1, 2 and 5-10, or the apparatus of any one of claims 3-10, wherein the current block is encoded in AMVP mode.

12. The method of any of claims 1, 2 and 5-11, or the apparatus of any of claims 3-11, wherein the current block is encoded in merge mode.

13. A bitstream formatted to include:

a plurality of motion vectors encoded for a current block being encoded in a picture;

one or more illumination compensation parameters encoded for each of the plurality of motion vectors; and

an illumination compensation flag for each of the plurality of motion vectors indicating that the current block is encoded using the one or more illumination compensation parameters, wherein a first illumination compensation flag corresponding to a first motion vector of the plurality of motion vectors is encoded based on a second illumination compensation flag corresponding to a second motion vector of the plurality of motion vectors,

wherein the current block is encoded using the plurality of motion vectors and the one or more illumination compensation parameters for each of the plurality of motion vectors.

14. A non-transitory computer readable medium containing data content generated according to the method of any one of claims 1, 2, and 5-12.

15. A computer program product comprising instructions for performing the method of any one of claims 1, 2, and 5-12 when executed by one or more processors.

Technical Field

At least one of the present embodiments relates generally to a method or apparatus for video encoding or decoding, and more particularly to a method or apparatus for determining illumination compensation parameters in video encoding or decoding.

Background

To achieve high compression efficiency, image and video coding schemes typically employ prediction and transform to exploit spatial and temporal redundancy in video content. Typically, intra or inter prediction is used to exploit intra or inter correlation, and then the difference between the original block and the predicted block (usually denoted prediction error or prediction residual) is transformed, quantized and entropy encoded. To reconstruct video, the compressed data is decoded by the inverse process corresponding to entropy coding, quantization, transformation, and prediction.

Disclosure of Invention

According to at least one embodiment, a method of encoding video data is presented, the method comprising: determining a plurality of motion vectors for a current block being encoded in a picture; determining one or more illumination compensation parameters for each of a plurality of motion vectors; and encoding the current block using a plurality of motion vectors and one or more illumination compensation parameters for each of the plurality of motion vectors.

According to another embodiment, a method of decoding video data is presented, the method comprising: determining a plurality of motion vectors for a current block in a picture being decoded; determining one or more illumination compensation parameters for each of a plurality of motion vectors; and decoding the current block using a plurality of motion vectors and one or more illumination compensation parameters for each of the plurality of motion vectors.

According to another embodiment, an apparatus for encoding video data is proposed, the apparatus comprising: means for determining a plurality of motion vectors for a current block being encoded in a picture; means for determining one or more illumination compensation parameters for each of a plurality of motion vectors; and means for encoding the current block using a plurality of motion vectors and one or more illumination compensation parameters for each of the plurality of motion vectors.

According to another embodiment, an apparatus for decoding video data is proposed, the apparatus comprising: means for determining a plurality of motion vectors for a current block in a picture being decoded; means for determining one or more illumination compensation parameters for each of a plurality of motion vectors; and means for decoding the current block using a plurality of motion vectors and one or more illumination compensation parameters for each of the plurality of motion vectors.

According to another embodiment, an apparatus for encoding video data is presented, the apparatus comprising at least one memory and one or more processors, wherein the one or more processors are configured to: determining a plurality of motion vectors for a current block being encoded in a picture; determining one or more illumination compensation parameters for each of a plurality of motion vectors; and encoding the current block using a plurality of motion vectors and one or more illumination compensation parameters for each of the plurality of motion vectors.

In accordance with another embodiment, an apparatus for decoding video data is provided, the apparatus comprising at least one memory and one or more processors, wherein the one or more processors are configured to: determining a plurality of motion vectors for a current block in a picture being decoded; determining one or more illumination compensation parameters for each of a plurality of motion vectors; and decoding the current block using a plurality of motion vectors and one or more illumination compensation parameters for each of the plurality of motion vectors.

According to another embodiment, a bitstream is presented that is formatted to include: a plurality of motion vectors encoded for a current block being encoded in a picture; and one or more illumination compensation parameters encoded for each of the plurality of motion vectors, wherein the current block is encoded using the plurality of motion vectors and the one or more illumination compensation parameters for each of the plurality of motion vectors.

According to a further embodiment, an illumination compensation flag is provided for each of the plurality of motion vectors, the illumination compensation flag indicating whether the current block is to be encoded or decoded using one or more illumination compensation parameters. The illumination compensation flag may be predicted from the illumination compensation parameters, such as a slope parameter and an intercept parameter.

According to another embodiment, a first illumination compensation flag corresponding to a first motion vector of the plurality of motion vectors may be encoded or decoded based on a second illumination compensation flag corresponding to a second motion vector of the plurality of motion vectors. The first illumination compensation flag may be entropy encoded or decoded using a different context than the second illumination compensation flag.

According to another embodiment, if illumination compensation is not used, the illumination compensation flag is not encoded or decoded.

According to another embodiment, a plurality of motion vectors is determined from one or more reference pictures.

According to a further embodiment, the one or more reference pictures are indexed in one or more reference picture lists.

According to another embodiment, the current block is a bi-directionally predicted prediction block, wherein the plurality of motion vectors is one motion vector pointing to the first reference picture block and another motion vector pointing to the second reference picture block.

According to another embodiment, the current block is inter-coded.

According to a further embodiment, the current block is encoded in AMVP mode.

According to a further embodiment, the current block is encoded in merge mode.

One or more of the present embodiments also provide a computer-readable storage medium having stored thereon instructions for encoding or decoding video data according to any of the methods described above. The present embodiments also provide a computer-readable storage medium on which a bitstream generated according to the above-described methods is stored. The present embodiments also provide a method and an apparatus for transmitting the bitstream generated according to the above-described methods. The present embodiments also provide a computer program product comprising instructions for performing any of the methods described.

Drawings

Fig. 1 illustrates a block diagram of an embodiment of a video encoder.

Fig. 2A is a diagram illustrating an example of positions of five spatial candidates, and fig. 2B is a diagram illustrating an example of a motion vector representation using AMVP.

Fig. 3 illustrates a block diagram of an embodiment of a video decoder.

Fig. 4 illustrates the use of FRUC to derive motion information for a current block.

Fig. 5 illustrates an exemplary process for performing motion derivation.

Figure 6 conceptually illustrates the derivation of Illumination Compensation (IC) parameters using an L-shaped template.

Fig. 7 illustrates an exemplary process 700 of inter-coding of a current CU.

Fig. 8 illustrates an exemplary process 800 of inter-decoding of a current CU.

Fig. 9 illustrates an exemplary encoding process of AMVP mode inter-coding of a current CU.

Fig. 10 illustrates an exemplary decoding process of AMVP mode inter-decoding of a current CU.

Fig. 11 illustrates an exemplary encoding process of a current CU in accordance with an aspect of the present embodiment.

Fig. 12 illustrates an exemplary decoding process of a current CU in accordance with an aspect of the present embodiment.

Fig. 13 illustrates an exemplary process of determining an overall rate-distortion optimization selection for an encoding mode of a CU.

Fig. 14 illustrates an exemplary rate-distortion optimization procedure in accordance with an aspect of the present embodiment.

Fig. 15 illustrates an exemplary process of searching for an AMVP coding mode.

Fig. 16 illustrates an exemplary process of searching for an AMVP coding mode in accordance with an aspect of the present embodiment.

FIG. 17 illustrates a block diagram of a system in which aspects of the present embodiments may be implemented.

Detailed Description

Fig. 1 illustrates an exemplary video encoder 100, such as a High Efficiency Video Coding (HEVC) encoder. HEVC is a compression standard developed by the Joint Collaborative Team on Video Coding (JCT-VC) (see, for example, "ITU-T H.265, Telecommunication Standardization Sector of ITU (10/2014), Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services - Coding of moving video, High efficiency video coding, Recommendation ITU-T H.265"). Fig. 1 may also illustrate an encoder in which the HEVC standard is improved, or an encoder employing techniques similar to HEVC.

In this application, the terms "reconstructed" and "decoded" may be used interchangeably, the terms "encoded" and "coded" may be used interchangeably, and the terms "picture" and "frame" may be used interchangeably. Typically, but not necessarily, the term "reconstructed" is used at the encoder side, while "decoded" is used at the decoder side.

In HEVC, to encode a video sequence having one or more pictures, a picture is partitioned into one or more slices, where each slice may include one or more slice segments. The slice segments are organized into coding units, prediction units, and transform units. The HEVC specification distinguishes between "blocks" for a particular region in a sample array (e.g., luma, Y) and "units" that include collocated blocks of all coded color components (Y, Cb, Cr, or monochrome), syntax elements, and prediction data (e.g., motion vectors) associated with the blocks.

For encoding, a picture is partitioned into square Coding Tree Blocks (CTBs) of configurable size, and a contiguous set of coding tree blocks is grouped into a slice. A Coding Tree Unit (CTU) contains the CTBs of the coded color components. A CTB is the root of a quadtree partitioning into Coding Blocks (CBs), and a coding block may be partitioned into one or more Prediction Blocks (PBs) and forms the root of a quadtree partitioning into Transform Blocks (TBs). Corresponding to the coding block, prediction block, and transform block, a Coding Unit (CU) includes a Prediction Unit (PU) and a tree-structured set of Transform Units (TUs); a PU includes the prediction information for all color components, and a TU includes a residual coding syntax structure for each color component. The size of a CB, PB, and TB of the luma component applies to the corresponding CU, PU, and TU. In this application, the term "block" may be used to refer to, for example, any one of a CTU, CU, PU, TU, CB, PB, and TB. In addition, "block" may also be used to refer to macroblocks and partitions as specified in H.264/AVC or other video coding standards, and more generally to data arrays of various sizes.

In the exemplary encoder 100, a picture is encoded by an encoder element, as described below. The picture to be encoded is processed in units of CUs. Each CU is encoded using intra or inter modes. When a CU is encoded in intra mode, it performs intra prediction (160). In inter mode, motion estimation (175) and compensation (170) are performed. The encoder decides (105) which of an intra mode or an inter mode to use for encoding the CU, and indicates the intra/inter decision by a prediction mode flag. The prediction residual is calculated by subtracting (110) the predicted block from the original image block.

A CU in intra mode is predicted from reconstructed neighboring samples within the same slice. A set of 35 intra prediction modes is available in HEVC, including DC, planar, and 33 angular prediction modes. The intra prediction reference is reconstructed from the rows and columns adjacent to the current block. The reference extends over two times the block size in the horizontal and vertical directions, using available samples from previously reconstructed blocks. When an angular prediction mode is used for intra prediction, the reference samples can be copied along the direction indicated by the angular prediction mode.

The applicable luma intra prediction mode for the current block can be coded using two different options. If the applicable mode is included in a constructed list of three Most Probable Modes (MPMs), the mode is signaled by an index in the MPM list. Otherwise, the mode is signaled by a fixed-length binarization of the mode index. The three most probable modes are derived from the intra prediction modes of the top and left neighboring blocks.

For inter-CUs, the corresponding coding block is further partitioned into one or more prediction blocks. Inter prediction is performed at the PB level, and the corresponding PU contains information on how to perform inter prediction. Motion information (e.g., motion vectors and reference picture indices) can be signaled in two methods, namely "merge (merge) mode" and "Advanced Motion Vector Prediction (AMVP)".

In merge mode, the video encoder or decoder assembles a candidate list based on the encoded blocks, and the video encoder signals an index of one of the candidates in the candidate list. At the decoder side, Motion Vectors (MVs) and reference picture indices are reconstructed based on the signaled candidates.

The set of possible candidates in the merge mode includes spatial neighbor candidates, temporal candidates, and generated candidates. Fig. 2A shows the positions of five spatial candidates {a1, b1, b0, a0, b2} for the current block 210, wherein a0 and a1 are to the left of the current block, and b1, b0, b2 are at the top of the current block. For each candidate position, the availability is checked according to the order of a1, b1, b0, a0, b2, and then the redundancy among the candidates is removed.

The motion vector of the collocated position in a reference picture can be used for the derivation of a temporal candidate. The applicable reference picture is selected on a slice basis and indicated in the slice header, and the reference index for the temporal candidate is set to iref = 0. If the POC distance (td) between the picture of the collocated PU and the reference picture from which the collocated PU is predicted is the same as the distance (tb) between the current picture and the reference picture containing the collocated PU, the collocated motion vector mvcol can be used directly as the temporal candidate. Otherwise, a scaled motion vector, tb/td * mvcol, is used as the temporal candidate. Depending on where the current PU is located, the collocated PU is determined by the sample position at the bottom-right or at the center of the current PU.
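As a worked illustration of the temporal scaling above, the following sketch uses simplified floating-point scaling (HEVC/JEM actually use fixed-point scaling with clipping); the helper name is hypothetical and not part of any specification text:

```python
def scale_temporal_mv(mv_col, tb, td):
    """Scale a collocated motion vector by the POC-distance ratio tb/td.

    mv_col: (x, y) motion vector of the collocated PU
    tb: POC distance between the current picture and its reference picture
    td: POC distance between the collocated picture and its reference picture
    """
    if tb == td:
        return mv_col  # distances match: the collocated MV is used directly
    scale = tb / td
    return (mv_col[0] * scale, mv_col[1] * scale)

# e.g. a collocated MV of (8, -4) with tb = 1 and td = 2 scales to (4.0, -2.0)
print(scale_temporal_mv((8, -4), 1, 2))
```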

In AMVP, a video encoder or decoder assembles a candidate list based on motion vectors determined from encoded blocks. The video encoder then signals an index in the candidate list to identify a Motion Vector Predictor (MVP) and signals a Motion Vector Difference (MVD). On the decoder side, the Motion Vectors (MVs) are reconstructed to MVP + MVDs. The applicable reference picture index is also explicitly coded in the PU syntax of AMVP.

Fig. 2B illustrates an exemplary motion vector representation using AMVP. For a current block 240 to be encoded, a motion vector MVcurrent can be obtained through motion estimation. Using the motion vector MVleft from the left block 230 and the motion vector MVabove from the above block 220, a motion vector predictor MVPcurrent can be chosen from MVleft and MVabove. A motion vector difference can then be calculated as MVDcurrent = MVcurrent - MVPcurrent.
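A small numeric sketch of this AMVP representation follows; the vector values are purely illustrative:

```python
# Hypothetical example values for the AMVP motion vector difference computation.
mv_current = (5, 3)    # motion vector found by motion estimation for block 240
mvp_current = (4, 3)   # predictor chosen among MV_left and MV_above

# MVD_current = MV_current - MVP_current, computed per component
mvd_current = (mv_current[0] - mvp_current[0], mv_current[1] - mvp_current[1])
print(mvd_current)  # (1, 0) -- only the difference and the MVP index are signaled
```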

Motion compensated prediction may be performed using one or two reference pictures for prediction. In P slices, only a single prediction reference may be used for inter prediction, enabling unidirectional prediction of a prediction block. In B slices, two reference picture lists (i.e., list 0, list 1) are available and either uni-directional prediction or bi-directional prediction may be used. In bi-prediction, one reference picture from each reference picture list is used.

The prediction residual is then transformed (125) and quantized (130). The quantized transform coefficients are entropy encoded (145) along with motion vectors and other syntax elements to output a bitstream. The encoder may also skip the transform and apply quantization directly to the untransformed residual signal on a 4x4 TU basis. The encoder may also bypass both transform and quantization, i.e., directly encode the residual without applying a transform or quantization process. In direct PCM coding, prediction is not applied, and the coding unit samples are directly encoded into the bitstream.

The encoder decodes the encoded block to provide a reference for further prediction. The quantized transform coefficients are dequantized (140) and inverse transformed (150) to decode the prediction residual. The decoded prediction residual and the predicted block are combined (155) to reconstruct an image block. An in-loop filter (165) is applied to the reconstructed picture, for example, to perform deblocking/SAO (sample adaptive offset) filtering to reduce coding artifacts. The filtered image is stored in a reference picture buffer (180).

Fig. 3 illustrates a block diagram of an exemplary video decoder 300, such as an HEVC decoder. In the exemplary decoder 300, a bitstream is decoded by the decoder elements, as described below. The video decoder 300 generally performs a decoding pass reciprocal to the encoding pass described in fig. 1, which itself performs video decoding as part of encoding the video data. Fig. 3 may also illustrate a decoder in which the HEVC standard is improved, or a decoder employing techniques similar to HEVC.

In particular, the input to the decoder comprises a video bitstream that can be generated by the video encoder 100. The bitstream is first entropy decoded (330) to obtain transform coefficients, motion vectors and other coding information. The transform coefficients are dequantized (340) and inverse transformed (350) to decode the prediction residual. The decoded prediction residual and the predicted block are combined (355) to reconstruct the image block. The predicted block may be obtained (370) from intra prediction (360) or motion compensated prediction (i.e., inter prediction) (375). As described above, the AMVP and merge mode techniques may be used to derive motion vectors for motion compensation, which may use an interpolation filter to calculate interpolated values for sub-integer samples of a reference block. An in-loop filter (365) is applied to the reconstructed image. The filtered image is stored in a reference picture buffer (380).

The Joint Video Exploration Team (JVET) developed a Frame Rate Up-Conversion (FRUC) mode, based on frame rate up-conversion techniques, in the reference software JEM (Joint Exploration Model). In the FRUC mode, motion information of a block is derived at the decoder side without explicit syntax for MVP information. The FRUC process is fully symmetric, i.e., the same motion derivation operations are performed at both the encoder and the decoder.

In JEM, the QTBT (Quadtree plus Binary Tree) structure removes the concept of multiple partition types in HEVC, i.e., it removes the separation of the CU, PU, and TU concepts. A Coding Tree Unit (CTU) is first partitioned by a quadtree structure. The quadtree leaf nodes are further partitioned by a binary tree structure. The binary tree leaf nodes are called Coding Units (CUs) and are used for prediction and transform without further partitioning. Thus, in the new QTBT coding block structure, the CU, PU, and TU have the same block size. In JEM, a CU consists of Coding Blocks (CBs) of the different color components.

Fig. 4 illustrates the use of FRUC to derive motion information for current block 410. The current block may be in a "merge" or "AMVP" mode. The top and left neighboring blocks of the current block are used as templates. Motion information may be derived by locating the best match between a template (420, 430) for the current block and a template (440, 450) for a block in a reference picture, by locating the block (460) with the smallest matching cost, e.g., with the smallest SAD (sum of absolute differences) between the templates. Other cost measures besides SAD may also be used to calculate the matching cost. In particular, the motion vector may be obtained as a displacement between the collocated block of the current block and the best matching block.

Fig. 5 illustrates an exemplary process 500 for selecting a motion vector in FRUC mode. At step 510, a MV candidate list is built. In steps 520 to 540, an MV is selected from the MV candidate list (best _ MV at 540) in order to minimize the Sum of Absolute Differences (SAD) (530) between the motion compensated reference block and the template for the current block (ref 0 and rec in 530).
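To make the template-matching cost concrete, the sketch below (simplified, with illustrative helper names; the actual JEM search also refines the motion vector at sub-pel precision) picks the candidate MV that minimizes the SAD between the current block's L-shaped template and the corresponding displaced template in the reference picture:

```python
import numpy as np

def template_sad(cur_template, ref_picture, mv, template_positions):
    """SAD between the current block's template samples and the samples at the
    same positions displaced by mv in the reference picture."""
    sad = 0
    for (y, x), cur_val in zip(template_positions, cur_template):
        sad += abs(int(ref_picture[y + mv[1], x + mv[0]]) - int(cur_val))
    return sad

def fruc_select_mv(mv_candidates, cur_template, ref_picture, template_positions):
    """Return the candidate MV with the smallest template-matching SAD (cf. steps 520-540)."""
    best_mv, best_cost = None, float("inf")
    for mv in mv_candidates:
        cost = template_sad(cur_template, ref_picture, mv, template_positions)
        if cost < best_cost:
            best_mv, best_cost = mv, cost
    return best_mv
```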

Recent additions to high compression techniques include the use of motion models based on affine modeling. In particular, affine modeling is used for motion compensation in the encoding and decoding of video pictures. In general, affine modeling is a model using at least two parameters, such as, for example, two Control Point Motion Vectors (CPMVs) representing the motion at the corners of a block of the picture, which allows deriving a motion field for the whole block of the picture, in order to model, for example, rotation and zoom (scaling). An affine flag is used to signal the use of affine modeling in the encoding and decoding.

Other recent additions to video compression techniques, such as those described in the Algorithm Description of Joint Exploration Test Model 6 (JEM6, document JVET-F1001-v3), include the use of Illumination Compensation (IC) parameters to compensate for variations in illumination (e.g., brightness) between the current block being encoded or decoded and at least one prediction block. In particular, as shown in fig. 6, an L-shaped template is used to select neighboring samples for calculating the IC parameters in an inter coding mode. The IC parameters are estimated by comparing the reconstructed neighboring samples (i.e., the samples in the L-shaped cur region 602′ of the current block 603′) with the neighboring samples (the samples in the L-shaped ref-i region 602″) of the reference-i block (i = 0 or 1) 603″. Note that, in order to reduce the computational complexity, the ref-i block here may not be exactly the prediction block; instead, it may be based on an integer version of the motion vector (i.e., full-pel precision) without applying the motion compensation interpolation filters. The IC parameters minimize the difference between the samples in L-shaped cur 602′ and the samples in L-shaped ref-i 602″ adjusted with the IC parameters. Without loss of generality, the ref-i block may also be referred to as the prediction block.

That is, in inter prediction encoding or decoding, the current block 603′ uses the motion information (e.g., a motion vector MVcur and a reference picture index i identifying, for example, one reference picture in the decoded picture buffer) to build the prediction block using a Motion Compensation (MC) process. Furthermore, the prediction block is adjusted by the IC parameters. Given the prediction block (ref-i) obtained using the i-th reference picture, the IC parameters are estimated by comparing the reconstructed neighboring samples in L-shaped cur 602′ with the neighboring samples in L-shaped ref-i 602″ of the ref-i block (i = 0 or 1), as shown in fig. 6.

The IC parameters may be estimated by minimizing the mean square error/difference (MSE) between the samples in L-shaped cur 602' and the L-shaped ref-i 602 "samples adjusted with the IC parameters. Typically, the IC model is linear, e.g.,

IC(y) = a * y + b,        (1)

where a is a slope parameter and b is an intercept parameter. The IC parameters (ai, bi) can then be obtained by minimizing the MSE, as shown below:

(ai, bi) = argmin(a,b) Σ (x - a * y - b)^2,        (2)

where x is a reconstructed sample in the L-shaped template in the current picture, and y is a sample in the L-shaped template in the reference picture, which can be obtained by motion compensation with MVcur or with a modified MVcur (e.g., a lower-precision MVcur). In equation (2), x and y are samples located at the same position within the L-shaped templates, as shown in fig. 6 (see, e.g., the pair of x 605′ and y 605″). In the case of bi-prediction, the IC parameters (a0, b0) and (a1, b1) can be derived independently from L-shaped ref-0 and L-shaped ref-1, respectively. In the bitstream, when IC is enabled for the current slice, picture, or sequence, an indication (such as, for example, an IC flag) can be encoded for each block to indicate whether IC is enabled for that block. Hence, in the case of a bi-directionally predicted current coding block or unit in the existing approaches, IC is either applied to both the ref-0 and ref-1 reference samples for the current block or unit, or applied to neither.
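As an illustration of the derivation above, the following sketch estimates the slope a and intercept b by least squares from the co-located L-shaped template samples. It is a simplified floating-point version with hypothetical helper names; a codec implementation would use integer arithmetic:

```python
import numpy as np

def estimate_ic_params(x_cur, y_ref):
    """Least-squares fit of the linear IC model x ~ a * y + b (cf. equations (1)-(2)).

    x_cur: reconstructed samples of the L-shaped template of the current block
    y_ref: co-located samples of the L-shaped template of the reference block
    """
    x = np.asarray(x_cur, dtype=np.float64)
    y = np.asarray(y_ref, dtype=np.float64)
    n = x.size
    sxy, sx, sy, syy = np.sum(x * y), np.sum(x), np.sum(y), np.sum(y * y)
    denom = n * syy - sy * sy
    if denom == 0:
        return 1.0, 0.0  # degenerate template: fall back to the identity model
    a = (n * sxy - sx * sy) / denom
    b = (sx - a * sy) / n
    return a, b

def apply_ic(pred_block, a, b):
    """Adjust a motion-compensated prediction block with the IC parameters."""
    return a * np.asarray(pred_block, dtype=np.float64) + b
```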

The present embodiments recognize certain limitations and disadvantages of the above-described prior methods in IC processing currently proposed in JEM. These limitations and disadvantages of current proposals include, for example:

In the case of a bi-directionally predicted current block, some illumination variation of the current block may exist with respect to one reference block but not with respect to the other. As described above, since there is only one IC flag per entire CU, this situation cannot be handled.

In case the motion information associated with the current block is close to the motion information associated with at least one neighboring block available in its decoding state, the IC flag may be inferred or spatially predicted from the neighboring block. In the JEM video coding scheme, this potential correlation between the IC parameters of the current block and those of the neighboring blocks is underutilized.

Accordingly, the present embodiments are directed to methods and apparatus for improving IC processing associated with a block being encoded or decoded. In some of the present embodiments, the IC parameters associated with a CU are processed in a similar manner as how other motion information (motion vectors, reference picture information) for a block is processed. Thus, for example, IC processing is integrated into the processing of motion information for a current block being encoded or decoded. That is, the motion information may include a motion vector, reference picture information, and motion compensation information. Hereinafter, a data field representing motion information is denoted as "motion field", which may be used interchangeably with the term "motion information".

In some present embodiments, the IC processing is determined and/or signaled for each motion vector considered for a CU, rather than at the entire CU level. This means that an IC flag may be associated with each motion vector of the CU. As a result, several motion vectors, and therefore several IC flags, may be associated with a CU. In the case of a bi-directionally predicted CU, at least two motion vectors are associated with the CU. If a CU is divided into two bi-directionally predicted Prediction Units (PUs) as in HEVC, a pair of IC flags is associated with each of the two PUs contained in the CU. That is, there is one IC flag for each of the two bi-predictive motion vectors assigned to each PU.

According to a general aspect of at least one embodiment, the presently improved IC processing is applied to AMVP mode, where the motion information encoded/decoded for a CU includes one or several motion fields. Each motion field includes parameters such as, for example, reference frame indices, motion vectors, and the like. According to some embodiments, the IC tag information becomes part of the motion field information for a given CU.

In the following example, consider the case where one CU corresponds to one PU and one TU. Thus, in accordance with a general aspect of at least one embodiment, a motion field of a CU may include, for example:

-a motion vector of the motion vector,

-a reference picture index (Cx) for the picture,

-an IC flag indicating whether or not IC is used when processing the current motion field during temporal prediction.

Note that, in general, several motion fields may be associated with a CU: there is one motion field per reference picture list associated with the CU under consideration.
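A minimal sketch of such a per-reference-picture-list motion field, with the IC flag carried alongside the motion vector and reference index, is given below; the field names are illustrative and not taken from any reference software:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MotionField:
    """Motion information for one reference picture list of a CU."""
    mv: tuple                          # motion vector (x, y)
    ref_idx: int                       # reference picture index in the list
    ic_flag: bool = False              # whether illumination compensation applies to this field
    ic_params: Optional[tuple] = None  # (a, b) slope/intercept, if derived or signaled

@dataclass
class InterCU:
    """An inter CU carries one motion field per reference picture list (L0, L1)."""
    motion_fields: List[MotionField] = field(default_factory=list)

# A bi-predicted CU with IC used for list 0 only:
cu = InterCU([MotionField(mv=(3, -1), ref_idx=0, ic_flag=True),
              MotionField(mv=(-2, 4), ref_idx=1, ic_flag=False)])
```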

One advantage of the proposed IC flag parameter being integrated into the motion field is the flexibility to be able to provide IC processing for each reference in both reference picture list 0 and reference picture list 1. Since the loop on the IC flag is moved from the CU level to the motion estimation process, certain processes such as RDOQ and coefficient coding estimation can be avoided. Thus, the present codec design modification may result in reduced encoding/decoding time with little change in compression performance compared to existing codecs.

Fig. 7 illustrates an exemplary prior-art process 700 of inter-coding of a current CU. As shown in fig. 7, the inputs to the process 700 are, for example, the coding position, size, and slice type of the current block. In step 705, the encoder checks that the slice type is not intra (i.e., the slice is an inter-coded slice). At step 710, a skip mode flag is encoded according to whether the skip mode is used. If the skip mode flag is determined to be true at step 715, the skip mode information of the current CU is encoded at step 720. On the other hand, if it is determined at step 715 that the skip mode is not used, the prediction mode is encoded at step 725. At steps 730 and 735, if the prediction mode is intra mode, the intra coding information of the CU is encoded. If the prediction mode is not intra (i.e., it is inter), the merge flag is encoded accordingly at step 740. At steps 745 and 755, if the merge mode flag is encoded as true, the merge information of the current CU is encoded. On the other hand, if the merge mode flag is encoded as not true, the inter coding information of the current CU (e.g., inter_pred_idc, motion vector difference, motion vector predictor index) is encoded at step 750. As previously described, in the current JEM IC process, the IC flag and the corresponding IC parameters are determined and encoded for the entire CU, as part of step 750 shown in fig. 7. In step 760, the transform coefficients of the current CU are determined and encoded. At step 765, the process 700 ends.

Fig. 8 illustrates an exemplary prior-art process 800 for inter-decoding of a current CU. The decoding process 800 is the decoding process corresponding to the exemplary encoding process 700 shown in fig. 7. As shown in fig. 8, the inputs to the process 800 in fig. 8 are, for example, the coding position, size, and slice type of the current block. At step 805, the slice type is decoded and determined to be not intra (i.e., the slice is an inter-coded slice). At step 810, the skip mode flag is decoded. If the skip mode flag is determined to be true at step 815, the skip mode information of the current CU is decoded at step 820. On the other hand, if it is determined at step 815 that the skip mode is not used, the prediction mode is decoded at step 825. At steps 830 and 835, if the prediction mode is intra mode, the intra coding information of the current CU is decoded. If the prediction mode is not intra (i.e., it is inter), the merge flag is decoded accordingly at step 840. At steps 845 and 855, if the merge mode flag is true, the merge information of the current CU is decoded. On the other hand, if the merge mode flag is not true, the inter coding information of the current CU is decoded at step 850. As previously described, in the JEM IC processing method, the IC flag and the corresponding IC parameters are decoded only for each entire CU, as part of step 850 shown in fig. 8. In step 860, the transform coefficients of the current CU are decoded. At step 865, the process 800 ends.

Fig. 9 illustrates an exemplary existing encoding process 900 for AMVP mode inter-coding of a current CU. As shown in fig. 9, the inputs to the process 900 are, for example, the coding position, size, and slice type of the current block. In step 905, an inter direction parameter, such as the inter_pred_idc parameter provided in the HEVC standard, is determined and encoded for the current block. The inter direction parameter specifies whether list 0, list 1, or bi-prediction (both lists) is used for the current CU. At step 910, the size of the current block is checked to see if the number of pixels for both the width and the height of the current block is greater than 8. If so, at step 915, an affine flag is determined and encoded. On the other hand, if the number of pixels of one or both of the width and the height of the current block is not greater than 8, the parameter refPicList is set to 0, corresponding to reference picture list 0, in step 920. If affine motion prediction is not used, steps 910 and 915 may be skipped.

At steps 925 to 950, the process 900 enters an iterative loop over each reference picture list. In this loop, the reference picture index for each reference picture list is determined and encoded in step 930. At step 935, the Motion Vector Difference (MVD) is encoded. At step 940, motion vector predictor information, such as an index into a candidate list identifying the Motion Vector Predictor (MVP), is also encoded. Additional temporal prediction parameters may also be encoded. For example, at step 955, an iMv flag is encoded, indicating whether the motion vector of the current CU is encoded at a lower precision level than the usual 1/4-pel precision. At step 960, an OBMC flag may be encoded to indicate whether the temporal prediction of the current CU includes Overlapped Block Motion Compensation (OBMC) processing. At step 965, an IC flag indicating whether IC is used for the current CU is determined and encoded. Again, as previously described, this existing IC process, as shown in the exemplary encoding process 900 of fig. 9, determines and encodes the IC flag and IC parameters only once for the entire CU, even in AMVP mode. The process 900 ends at step 970.

Fig. 10 illustrates an exemplary existing decoding process 1000 for AMVP mode inter-decoding of a current CU. The decoding process 1000 shown in fig. 10 is the AMVP decoding process corresponding to the exemplary AMVP encoding process 900 shown in fig. 9. As shown in fig. 10, the inputs to the process 1000 in fig. 10 are, for example, the coding position, size, and slice type of the current block. In step 1005, an inter direction parameter, e.g., the inter_pred_idc parameter, is decoded for the current block. At step 1010, the size of the current block is checked to see if the number of pixels for both the width and the height of the current block is greater than 8. If so, at step 1015, the affine flag is decoded. On the other hand, if the number of pixels of one or both of the width and the height of the current block is not greater than 8, the parameter refPicList is set to 0, corresponding to reference picture list 0, in step 1020.

At steps 1025 to 1050 of fig. 10, the process 1000 enters an iterative loop over each reference picture list. In this loop, the reference picture index for each reference picture list is decoded in step 1030. At step 1035, a Motion Vector Difference (MVD) is decoded for each reference picture list. At step 1040, motion vector predictor information, such as an index into a candidate list identifying the Motion Vector Predictor (MVP), is also decoded. Additional temporal prediction parameters may also be decoded. For example, in step 1055, an iMv flag indicating whether the motion vector of the current CU is encoded at a lower precision level than the usual 1/4-pel precision is decoded. At step 1060, an OBMC flag indicating whether the temporal prediction of the current CU includes OBMC processing is decoded. At step 1065, an IC flag indicating whether IC is used for the current CU is decoded. Again, as previously described, the existing IC process as shown in the exemplary decoding process 1000 of fig. 10 decodes and determines the IC flag and IC parameters only once for the entire CU, even in AMVP mode.

Fig. 11 illustrates an exemplary encoding process 1100 for AMVP mode inter-coding of a current CU in accordance with general aspects of at least one embodiment. As is evident from a comparison with the existing AMVP mode encoding process 900 shown in fig. 9, the present process 1100 in fig. 11 differs from the known process 900 in fig. 9 in that the IC information (e.g., the IC flag) is now determined and encoded for each reference picture list, within the iterative loop consisting of steps 1125 to 1155. Thus, in this embodiment, the IC usage information is now integrated at the motion field level. Accordingly, for each reference picture used to predict the current CU, the encoded motion field may include the following, as shown in fig. 11 (a sketch of this per-list coding loop is given after the list below):

the reference picture index, as shown at step 1130,

motion vector difference, as shown at step 1135,

-a motion vector predictor identifier index, as shown at step 1140, an

IC flag indicating the use of illumination compensation (and other IC parameters (in case of explicit coding)), as shown in step 1145.
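A minimal sketch of this per-reference-picture-list coding loop is given below; the function names and the write_* bitstream-writer interface are hypothetical and only illustrate the order in which the motion field elements are coded:

```python
def encode_amvp_motion_fields(cu_motion_fields, writer):
    """Encode one motion field per reference picture list, including the per-list IC flag
    (cf. the loop of steps 1125-1155 in fig. 11)."""
    for ref_list, mf in enumerate(cu_motion_fields):    # L0 and, for B slices, L1
        writer.write_ref_idx(ref_list, mf["ref_idx"])   # reference picture index (step 1130)
        writer.write_mvd(ref_list, mf["mvd"])           # motion vector difference (step 1135)
        writer.write_mvp_idx(ref_list, mf["mvp_idx"])   # MVP identifier index (step 1140)
        writer.write_flag(mf["ic_flag"])                # per-list illumination compensation flag (step 1145)
```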

The other steps (1170, 1160, 1165) of the exemplary encoding process 1100 shown in fig. 11 are substantially the same as the corresponding steps of the existing encoding process 900 in fig. 9, which have been described in detail above. Accordingly, for the sake of brevity, these corresponding steps in the encoding process 1100 in fig. 11 will not be described again here.

Likewise, fig. 12 illustrates an exemplary decoding process 1200 for AMVP mode inter-decoding of a current CU in accordance with a general aspect of at least one present embodiment. The decoding process 1200 is the AMVP decoding process corresponding to the exemplary AMVP encoding process 1100 shown in fig. 11. As is evident from a comparison with the existing AMVP mode decoding process 1000 shown in fig. 10, the present decoding process 1200 in fig. 12 differs from the known decoding process 1000 in fig. 10 in that the IC information (e.g., the IC flag) is now decoded for each reference picture list, within the iterative loop consisting of steps 1225 to 1260. Thus, in this embodiment, the IC usage information is now integrated at the motion field level. Accordingly, for each reference picture used to decode the current CU, the decoded motion data may include the following, as shown in fig. 12:

reference picture index, as shown at step 1230,

motion Vector Differences (MVD), as shown at step 1235,

-a Motion Vector Predictor (MVP) identifier index, as shown at step 1240, and

an IC flag indicating that illumination compensation is used for each reference picture list of the current CU, as shown at step 1245.

The other steps (1265, 1270, 1275) of the present exemplary decoding process 1200 shown in fig. 12 are substantially the same as the corresponding steps of the prior decoding process 1000 in fig. 10, which have been described in detail above. Accordingly, for the sake of brevity, these corresponding steps in the decoding process 1200 in fig. 12 will not be described again here.

In another aspect of this embodiment, in addition to the modification of the encoding of the IC information as described above, further modifications to the inter-coding of CUs in AMVP mode may concern the rate-distortion optimization performed in AMVP mode. Indeed, since one IC flag is now assigned to each reference picture candidate of each inter CU, the IC flag information is decided during the search for the best motion data used to predict the current CU in AMVP mode.

Fig. 13 illustrates an exemplary prior-art process 1300 for determining the overall rate-distortion optimized selection of the coding mode of a CU in an inter-coded slice. As can be seen in process 1300, all possible inter-coding modes are evaluated first. Thereafter, if the best inter-coding mode found is not satisfactory in rate-distortion terms, intra modes are then evaluated. In the exemplary process 1300 of fig. 13, the rate-distortion search includes a loop over all coding parameters (including the IC flag) for the FRUC merge mode and the AMVP mode. Thus, in the existing process 1300, the possible values of each IC flag are evaluated from a rate-distortion viewpoint of the current CU, in both the FRUC merge mode and the inter-coding AMVP mode. The notation EMT in step 1325 refers to Enhanced Multiple Transform.

Fig. 14 illustrates how an existing rate-distortion optimization process 1300 may be modified in relation to AMVP mode, according to an exemplary aspect of the present embodiment. As is evident from the exemplary process 1400 of fig. 14, the entire loop over each IC flag value is removed from the existing process 1300 in fig. 13. Instead, the iterative search for the optimal IC configuration for inter-coding mode is moved to the motion search process, as previously described in connection with the example process 1100 of fig. 11 and described later in connection with the example process 1600 shown in fig. 16.

Fig. 15 illustrates an exemplary prior-art process 1500 by which an encoder searches for the best AMVP coding mode. The process 1500 thus aims at finding the rate-distortion optimal motion data to predict the CU under consideration. The process 1500 consists of two stages. The first stage, steps 1501 to 1521 of fig. 15, determines the best motion data for predicting the current CU for each of the reference picture lists L0 and L1, respectively. Next, the second stage, steps 1523 to 1547 of fig. 15, determines the best inter prediction mode among the two best uni-directional prediction modes found previously and the bi-directional temporal prediction of the current CU. Basically, the first stage involves a loop over each reference picture index of each reference picture list. For each candidate reference picture, the best motion vector predictor and the associated motion vector difference are searched. This implies the selection of the best Motion Vector Predictor (MVP), and a motion estimation step based on this selected MVP. The second stage then performs an iterative search for the best bi-prediction of the CU. To do so, at each iteration the motion estimation and compensation step is performed for one of the two reference picture lists, in a way that minimizes the rate-distortion cost of the bi-prediction of the current CU.

Fig. 16 illustrates how the existing process 1500 used by an encoder to search for the best AMVP coding mode can be modified, according to an exemplary aspect of the present embodiment. As can be seen, the search for the best motion information associated with each reference picture list now includes an additional loop over all possible IC flag values, at steps 1605 to 1619 of fig. 16, for the reference picture list under consideration. In addition, the refinement of the motion field of a reference picture during the bi-directional motion search also includes a loop over the IC flag values, at steps 1639 to 1657, to find the best IC flag configuration for the list under consideration when bi-predicting the CU under consideration. Note that, in the proposed codec modification as shown in fig. 16, the motion compensation step now includes applying possibly two different IC flags to the L0 and L1 reference picture lists, respectively.
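A compact sketch of the modified uni-directional search, with the IC flag folded into the motion-search loop, is shown below; the motion_estimation and rd_cost callables are illustrative placeholders, not the actual JEM implementation:

```python
def search_best_motion_for_list(cu, ref_list, ref_pictures, motion_estimation, rd_cost):
    """Jointly search the reference index, motion vector and IC flag for one reference
    picture list (cf. the added IC-flag loop at steps 1605-1619 of fig. 16)."""
    best = None
    for ref_idx, ref_pic in enumerate(ref_pictures):
        for ic_flag in (False, True):                      # new inner loop over the IC flag
            mv = motion_estimation(cu, ref_pic, ic_flag)   # ME performed with/without IC adjustment
            cost = rd_cost(cu, ref_list, ref_idx, mv, ic_flag)
            if best is None or cost < best[0]:
                best = (cost, ref_idx, mv, ic_flag)
    return best  # (cost, ref_idx, mv, ic_flag) kept for this list
```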

In addition, according to an exemplary embodiment, the two IC flags associated with the motion fields of a CU in a B slice may be entropy encoded consecutively using CABAC coding of the two IC flags. According to another non-limiting embodiment, the encoding of the two above IC flags includes encoding a first flag and then encoding a second flag indicating whether the second IC flag is equal to the first IC flag. An advantage of this last embodiment is improved coding efficiency in the entropy coding of the IC flag information. Since the likelihood of the two flags being equal is higher than the likelihood of the two flags being different, encoding information indicating whether the two flags are different is more efficient than directly encoding the flags. Different CABAC contexts may be associated with the different flags.

Table 1 shows the value of the first flag, the value coded for the first flag (equal to the first flag value), the value of the second flag, and the value coded for the second flag (indicating whether the first flag and the second flag are equal; 1: equal, 0: different).

TABLE 1

First flag   Value coded for first flag   Second flag   Value coded for second flag
0            0                            0             1
0            0                            1             0
1            1                            0             0
1            1                            1             1
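A small sketch of this binarization follows; it is a plain illustration of the mapping in Table 1, and the CABAC context modeling itself is not shown:

```python
def binarize_ic_flag_pair(first_flag, second_flag):
    """Return the two bins to encode: the first flag as-is, and an equality bin
    for the second flag (1 if equal to the first flag, 0 otherwise), as in Table 1."""
    return first_flag, 1 if second_flag == first_flag else 0

def debinarize_ic_flag_pair(bin0, bin1):
    """Inverse mapping at the decoder."""
    first_flag = bin0
    second_flag = first_flag if bin1 == 1 else 1 - first_flag
    return first_flag, second_flag

assert debinarize_ic_flag_pair(*binarize_ic_flag_pair(1, 0)) == (1, 0)
```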

According to another exemplary embodiment, when deriving or predicting a motion vector of a current CU from neighboring CUs, e.g., using AMVP, the IC flag pair is context-coded according to the IC flag pair associated with the CU containing the MV predictor selected for the current CU.

In one example, whether the IC flag of list N of the current CU is equal to the IC flag of list N of the neighboring CU associated with the selected motion vector predictor is encoded or decoded. When they are equal, a binary number "1" is encoded, otherwise a "0" is encoded, as shown in table 2.

TABLE 2

IC flag of list N   IC flag of the predictor of list N   Binary number
0                   0                                    1
0                   1                                    0
1                   0                                    0
1                   1                                    1

When only one IC flag is encoded for a CU, the binarization shown in table 2 can also be used. In one example, if the IC flags of both MVPs are equal to 0, the CU-level IC flag is predicted to be 0. If the IC flag of at least one of the two MVPs is equal to 1, the CU-level IC flag is predicted to be 1. The IC flag of the current CU is then compared with the predicted IC flag. If they are equal, the binary number is set to 1; otherwise, it is set to 0. An example is shown in table 3.

TABLE 3
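A minimal sketch of this CU-level prediction and binarization is given below; it only transcribes the rule described above, and the exact contents of Table 3 are not reproduced here:

```python
def predict_cu_ic_flag(mvp0_ic_flag, mvp1_ic_flag):
    """Predict the CU-level IC flag: 0 only if both MVPs have an IC flag of 0."""
    return 1 if (mvp0_ic_flag or mvp1_ic_flag) else 0

def ic_flag_bin(current_ic_flag, mvp0_ic_flag, mvp1_ic_flag):
    """Bin to encode: 1 if the current CU's IC flag equals the predicted flag, else 0."""
    predicted = predict_cu_ic_flag(mvp0_ic_flag, mvp1_ic_flag)
    return 1 if current_ic_flag == predicted else 0

# e.g. MVP flags (0, 1) -> prediction 1; a current IC flag of 1 is coded as bin 1
print(ic_flag_bin(1, 0, 1))
```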

According to another embodiment, if illumination compensation is not used, the illumination compensation flag is not encoded or decoded.

According to another embodiment, as described above, two IC flags are associated with a CU, in the L0 and L1 motion fields, respectively. However, when encoding the IC flag information, only one IC flag is signaled for an AMVP CU, exploiting the fact that in AMVP mode the IC flags associated with each reference picture list are equal. Hence, only one IC flag needs to be signaled. According to another embodiment, if mvd_l1_zero_flag is equal to 1 (i.e., the motion vector difference of list 1 is 0), the IC flag is inferred to be the IC flag associated with the CU containing the MV predictor selected for the current AMVP CU, or the IC flag is inferred to be the IC flag encoded for reference picture list 0.

According to another aspect of the present embodiment, the IC flag contained in the motion field of the merging candidate CU is propagated to the current CU predicted in the merging mode. This means that a CU in merge mode is also assigned an IC flag in each of its motion fields. Thus, a pair of IC flags is associated with a merging CU through spatial propagation from neighboring causal CUs. This motion compensated temporal prediction of the merging CU therefore involves the application of illumination variation compensation in exactly the same way as for the AMVP CU. Thus, the advantages of this embodiment arising in the merge mode include that even for merging CUs, a distinct illumination variation compensation procedure can be applied to each reference picture list (i.e. illumination compensation can be used for one list but not for the other), similar to that for AMVP CUs.

In another embodiment, the IC flag may be predicted from the IC parameters. If the IC parameters do not change the illumination (i.e., Y = a * X + b with a = 1 and b = 0), then the prediction of the IC flag is 0. Otherwise, the IC flag is predicted to be 1. The difference between the predicted IC flag and the actually selected IC flag can then be encoded. This embodiment may improve the compression performance.
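A short sketch of this prediction rule follows; it is a direct transcription of the condition above, and the tolerance handling for non-integer parameters is an added assumption:

```python
def predict_ic_flag_from_params(a, b, eps=1e-6):
    """Predict the IC flag from the IC parameters: 0 if the model is the identity
    (a = 1, b = 0), i.e. it does not change the illumination; 1 otherwise."""
    is_identity = abs(a - 1.0) < eps and abs(b) < eps
    return 0 if is_identity else 1

def ic_flag_residual(chosen_flag, a, b):
    """Residual actually coded: difference between the chosen and the predicted flag."""
    return chosen_flag - predict_ic_flag_from_params(a, b)
```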

An exemplary modification of the existing proposed syntax for IC processing of PUs according to an aspect of the present embodiment is shown in table 4, where strikethrough is used for deletion and underlining is used for addition to the existing syntax. The modification may also apply to CUs in which the CU is also a PU. The semantics of the removed or added syntax elements are described below.

ic_flag[x0][y0] specifies whether illumination compensation is used for the inter prediction of all lists of the current prediction unit. The array indices x0, y0 specify the location (x0, y0) of the top-left luma sample of the considered prediction block relative to the top-left luma sample of the picture. This syntax element is removed.

ic_l0_flag[x0][y0] specifies whether illumination compensation is used for the list 0 inter prediction of the current prediction unit. The array indices x0, y0 specify the location (x0, y0) of the top-left luma sample of the considered prediction block relative to the top-left luma sample of the picture. This syntax element is added.

ic _ l1_ flag [ x0] [ y0] has the same semantic meaning as ic _ l0_ flag, where l0 and List 0 are replaced with l1 and List 1, respectively. The syntax element is added.

TABLE 4
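
Separately from Table 4, the following decoder-side parsing sketch illustrates, under assumed conditions and names, how the removed ic_flag could be replaced by one flag per reference picture list; it is not the actual Table 4 syntax.

```cpp
// Illustrative parsing sketch (not the actual Table 4 syntax): the single
// ic_flag of the PU is removed and replaced by one flag per reference
// picture list, parsed only when that list is used.
struct PuSyntax {
    bool icL0Flag = false;
    bool icL1Flag = false;
};

// Placeholder for the entropy-decoding call of one flag; a real decoder
// would read a context-coded bin from the bitstream here.
static bool readFlag(const char* /*name*/) { return false; }

static PuSyntax parsePuIcFlags(bool usesList0, bool usesList1, bool icEnabled)
{
    PuSyntax pu;
    if (icEnabled) {
        // ic_flag[x0][y0]                              -> removed
        if (usesList0) pu.icL0Flag = readFlag("ic_l0_flag");  // added
        if (usesList1) pu.icL1Flag = readFlag("ic_l1_flag");  // added
    }
    return pu;
}
```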

Various methods are described above, and each method includes one or more steps or actions for implementing the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined. In addition, certain steps or actions may be removed.

Various values are used in this application, such as the number of IC parameters, or the number of iterations in step 1525 of fig. 15. It should be noted that the particular values are for exemplary purposes and the present embodiments are not limited to these particular values.

As described above, the various methods of improving IC processing according to the present embodiments may be used to modify the motion estimation, motion compensation, and entropy coding and decoding modules (145, 170, 175, 330, 375) of a JVET or HEVC encoder and decoder as shown in fig. 1 and fig. 3. Furthermore, the present embodiments are not limited to JVET or HEVC, and may be applied to other standards, recommendations, and extensions thereof. The various embodiments described above can be used individually or in combination.

Furthermore, in different embodiments, the IC model may use other linear or non-linear functions of the IC parameters. For example, the IC model may consider only the slope parameter and not the intercept parameter, i.e., IC(y) = a × y. In another example, the IC model may have more than two parameters, the number of which depends on the function (e.g., on the degree of a polynomial function). To estimate the IC parameters, absolute differences or other difference functions may be used instead of the MSE shown in equation (2). The present embodiments can also be applied when illumination compensation is used for intra coding.
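
As an illustration of the slope-only variant, the following sketch estimates IC(y) = a × y by least squares over pairs of reconstructed neighboring samples; the names and the use of floating point are assumptions, since a codec implementation would typically use fixed-point arithmetic.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of slope-only IC parameter estimation, IC(y) = a * y, by least
// squares over pairs of reconstructed neighboring samples: 'cur' holds the
// neighbors of the current block, 'ref' the co-located neighbors of the
// motion-compensated reference block.
static double estimateSlopeOnlyIcParam(const std::vector<int>& cur,
                                        const std::vector<int>& ref)
{
    double num = 0.0, den = 0.0;
    const std::size_t n = std::min(cur.size(), ref.size());
    for (std::size_t i = 0; i < n; ++i) {
        num += static_cast<double>(cur[i]) * ref[i];
        den += static_cast<double>(ref[i]) * ref[i];
    }
    return (den > 0.0) ? num / den : 1.0;  // fall back to the identity slope
}
```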

Fig. 17 illustrates a block diagram of an exemplary system 1700 in which aspects of the illustrative embodiments may be implemented. The system 1700 may be implemented as a device including various components described below and configured to perform the processes described above. Examples of such devices include, but are not limited to, personal computers, laptop computers, smart phones, tablet computers, digital multimedia set-top boxes, digital television receivers, personal video recording systems, networked home appliances, and servers. The system 1700 may be communicatively coupled to other similar systems and to a display via a communication channel as shown in fig. 17 and as known to those skilled in the art to implement all or part of the exemplary video system described above.

Various embodiments of the system 1700 include at least one processor 1710 configured to execute instructions loaded therein to implement the various processes described above. The processor 1710 may include embedded memory, an input-output interface, and various other circuits known in the art. The system 1700 may also include at least one memory 1720 (e.g., volatile storage, non-volatile storage). The system 1700 may additionally include a storage device 1740, which may include non-volatile memory, including but not limited to EEPROM, ROM, PROM, RAM, DRAM, SRAM, flash, magnetic disk drives, and/or optical disk drives. As non-limiting examples, the storage device 1740 may include an internal storage device, an attached storage device, and/or a network accessible storage device. The system 1700 may also include an encoder/decoder module 1730 configured to process data to provide encoded video and/or decoded video, and the encoder/decoder module 1730 may include its own processor and memory.

Encoder/decoder module 1730 represents a module that may be included in a device to perform encoding and/or decoding functions. As is known, such devices may include one or both of an encoding module and a decoding module. In addition, the encoder/decoder module 1730 may be implemented as a separate element of the system 1700 or may be incorporated within the one or more processors 1710 as a combination of hardware and software as is known to those skilled in the art.

Program code to be loaded onto the one or more processors 1710 to perform the various processes described above may be stored in the storage device 1740 and subsequently loaded onto the memory 1720 for execution by the processors 1710. According to an example embodiment, one or more of the processors 1710, memory 1720, storage device 1740, and encoder/decoder module 1730 may store one or more of various items including, but not limited to, input video, decoded video, bitstreams, equations, formulas, matrices, variables, operations, and operational logic during execution of the processes discussed above.

The system 1700 may also include a communication interface 1750 that enables communication with other devices via a communication channel 1760. The communication interface 1750 may include, but is not limited to, a transceiver configured to transmit and receive data from the communication channel 1760. The communication interface 1750 may include, but is not limited to, a modem or network card, and the communication channel 1760 may be implemented within a wired and/or wireless medium. The various components of the system 1700 may be connected or communicatively coupled together using various suitable connections, including but not limited to internal buses, wiring, and printed circuit boards (not shown in fig. 17).

The exemplary embodiments may be performed by computer software implemented by the processor 1710, by hardware, or by a combination of hardware and software. By way of non-limiting example, the illustrative embodiments may be implemented by one or more integrated circuits. The memory 1720 may be of any type suitable to the technical environment and may be implemented using any suitable data storage technology, such as optical storage, magnetic storage, semiconductor-based storage, fixed memory and removable memory, as non-limiting examples. The processor 1710 may be of any type suitable to the technical environment, and may include one or more of a microprocessor, a general purpose computer, a special purpose computer, and a processor based on a multi-core architecture, as non-limiting examples.

The implementations described herein may be implemented, for example, in methods or processes, apparatus, software programs, data streams, or signals. Even if only discussed in the context of a single form of implementation (e.g., discussed only as a method), the implementation of the features discussed may also be implemented in other forms (e.g., an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software and firmware. The method may be implemented, for example, in an apparatus such as, for example, a processor, which generally refers to a processing device including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cellular telephones, portable/personal digital assistants ("PDAs"), and other devices that facilitate the communication of information between end-users.

Reference to "one embodiment" or "an embodiment" or "one implementation" or "an implementation," and other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrase "in one embodiment" or "in an embodiment" or "in one implementation" or "in an implementation" in various places throughout this specification, as well as any other variations, are not necessarily all referring to the same embodiment.

In addition, the present application or its claims may refer to "determining" various information. Determining the information may include, for example, one or more of the following: estimating the information, calculating the information, predicting the information, or retrieving the information from memory.

Further, the present application or its claims may refer to "accessing" various information. Accessing the information may include, for example, one or more of the following: receiving the information, retrieving the information (e.g., from memory), storing the information, moving the information, copying the information, calculating the information, predicting the information, or estimating the information.

In addition, the present application or its claims may refer to "receiving" various information. As with "accessing", "receiving" is intended to be a broad term. Receiving the information may include, for example, one or more of the following: accessing the information or retrieving the information (e.g., from memory). Furthermore, "receiving" is typically involved, in one way or another, during operations such as, for example, storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.

As will be apparent to those of skill in the art, implementations may produce various signals formatted to carry information that may be stored or transmitted, for example. The information may include, for example, instructions for performing a method or data generated by one of the described implementations. For example, the signal may be formatted to carry a bitstream of the described embodiments. Such signals may be formatted, for example, as electromagnetic waves (e.g., using the radio frequency portion of the spectrum) or as baseband signals. Formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information carried by the signal may be, for example, analog or digital information. As is known, signals may be transmitted over a variety of different wired or wireless links. The signal may be stored on a processor readable medium.
