Apparatus and method for encoding video data based on mode list including different mode groups

Document No. 1382837 · Published 2020-08-14

Description: This technique, "Apparatus and method for encoding video data based on mode list including different mode groups," was designed and created by 张耀仁 and 江蕙宇 on 2018-12-27. Its main content includes the following. A method of decoding a bitstream by an electronic device is provided. A block unit is determined from an image frame according to the bitstream. A plurality of candidate modes included in a mode list are separated into a first mode group and a second mode group. The electronic device determines whether a particular one of the plurality of candidate modes in the first mode group is replaced by one of the plurality of candidate modes in the second mode group. When the particular candidate mode in the first mode group is replaced, a prediction mode is determined from the plurality of candidate modes in the second mode group. When the particular candidate mode in the first mode group remains unchanged, the particular candidate mode in the first mode group is determined as the prediction mode. Then, the block unit of the image frame is reconstructed based on the prediction mode.

1. A method of decoding a bitstream by an electronic device, the method comprising:

determining a block unit from an image frame according to the bitstream;

determining a mode list comprising a plurality of candidate modes separated into a first mode group and a second mode group;

determining whether a particular one of the plurality of candidate modes in the first mode group is replaced by one of the plurality of candidate modes in the second mode group;

determining a prediction mode from the plurality of candidate modes in the second mode group when the particular one of the plurality of candidate modes in the first mode group is replaced;

determining the particular one of the plurality of candidate modes in the first mode group as the prediction mode when the particular one of the plurality of candidate modes in the first mode group remains unchanged; and

reconstructing the block unit of the image frame based on the prediction mode.

2. The method of claim 1, wherein each of the plurality of candidate modes in the first mode group is different from each of the plurality of candidate modes in the second mode group.

3. The method of claim 1, wherein the plurality of candidate modes in the first mode group includes a plurality of outermost modes in the first mode group, and the particular one of the plurality of candidate modes in the first mode group is one of the plurality of outermost modes in the first mode group.

4. The method of claim 1, wherein the plurality of candidate modes in the first mode group are a plurality of preset modes, and the plurality of candidate modes in the second mode group are a plurality of added modes for replacing the preset modes.

5. The method of claim 4, further comprising:

determining a first candidate list comprising the plurality of preset modes when each of the plurality of candidate modes in the first mode group remains unreplaced; and

determining a second candidate list comprising the at least one of the plurality of added modes and a plurality of remaining modes when at least one of the plurality of preset modes is replaced by at least one of the plurality of added modes and the others of the plurality of preset modes are treated as the plurality of remaining modes.

6. The method of claim 5, wherein a number of the at least one of the plurality of preset modes is equal to a number of the at least one of the plurality of added modes, and a number of the plurality of preset modes is equal to a sum of a number of the plurality of remaining modes and the number of the at least one of the plurality of added modes.

7. The method of claim 5, further comprising:

selecting one of the first candidate list and the second candidate list; and

determining the prediction mode from the selected one of the first candidate list and the second candidate list.

8. The method of claim 7, wherein the selected one of the first candidate list and the second candidate list is determined based on a prediction indication.

9. The method of claim 7, wherein a plurality of reconstructed blocks adjacent to the block unit are located at a plurality of sample locations, and the selected one of the first candidate list and the second candidate list is determined based on the plurality of sample locations.

10. An electronic device that decodes a bitstream, the electronic device comprising:

at least one processor; and

a storage device coupled to the at least one processor and storing a plurality of instructions that, when executed by the at least one processor, cause the at least one processor to:

determine a block unit from an image frame according to the bitstream;

determine a mode list comprising a first mode group having a plurality of first candidate modes and a second mode group having a plurality of second candidate modes, wherein each of the plurality of second candidate modes is different from the plurality of first candidate modes;

determine whether a particular one of the plurality of first candidate modes is replaced by one of the plurality of second candidate modes;

determine a prediction mode from the plurality of second candidate modes when the particular one of the plurality of first candidate modes is replaced;

determine the particular one of the plurality of first candidate modes as the prediction mode when the particular one of the plurality of first candidate modes remains unreplaced; and

reconstruct the block unit of the image frame based on the prediction mode.

11. The electronic device of claim 10, wherein the plurality of instructions, when executed by the at least one processor, cause the at least one processor to:

determine a first candidate list comprising the plurality of first candidate modes when each of the plurality of first candidate modes remains unreplaced;

determine a second candidate list comprising a plurality of remaining modes in the first mode group and the at least one of the plurality of second candidate modes when at least one of the plurality of first candidate modes is replaced by at least one of the plurality of second candidate modes and the others of the plurality of first candidate modes are treated as the plurality of remaining modes;

select one of the first candidate list and the second candidate list; and

determine the prediction mode from the selected one of the first candidate list and the second candidate list.

12. The electronic device of claim 11, wherein the selected one of the first candidate list and the second candidate list is determined based on a prediction indication.

13. The electronic device of claim 11, wherein a plurality of reconstructed blocks adjacent to the block unit are located at a plurality of sample locations, and the selected one of the first candidate list and the second candidate list is determined based on the plurality of sample locations.

14. The electronic device of claim 11, wherein a number of the at least one of the plurality of first candidate modes is equal to a number of the at least one of the plurality of second candidate modes, and a number of the plurality of first candidate modes is equal to a sum of a number of the plurality of remaining modes and the number of the at least one of the plurality of second candidate modes.

15. The electronic device of claim 10, wherein the plurality of first candidate modes includes a plurality of outermost modes, and the particular one of the plurality of first candidate modes is one of the plurality of outermost modes in the first mode group.

16. A method of decoding a bitstream by an electronic device, the method comprising:

determining a block unit from an image frame according to the bitstream;

determining an intra prediction indication of the block unit from the bitstream;

determining a mode list comprising a plurality of candidate modes separated into a first mode group and a second mode group;

selecting, based on the intra prediction indication, a particular one of the plurality of candidate modes in the mode list from the remaining ones of the plurality of candidate modes in the first mode group and at least one of the plurality of candidate modes in the second mode group, when more than one of the plurality of candidate modes in the first mode group is replaced by the at least one of the plurality of candidate modes in the second mode group;

selecting, based on the intra prediction indication, the particular one of the plurality of candidate modes in the mode list from the plurality of candidate modes in the first mode group when each of the plurality of candidate modes in the first mode group remains unreplaced; and

reconstructing the block unit of the image frame based on the particular one of the plurality of candidate modes in the mode list.

17. The method of claim 16, wherein the plurality of replaced candidate modes in the first mode group are selected from a plurality of outermost modes of the first mode group.

18. The method of claim 16, wherein a number of the plurality of candidate modes in the first mode group that are replaced is equal to a number of the at least one of the plurality of candidate modes in the second mode group.

19. The method of claim 16, further comprising:

determining, based on the intra prediction indication, whether the plurality of replaced candidate modes in the first mode group are replaced by the at least one of the plurality of candidate modes in the second mode group.

20. The method of claim 16, further comprising:

determining whether the plurality of replaced candidate modes in the first mode group are replaced by the at least one of the plurality of candidate modes in the second mode group based on a plurality of sample positions at which a plurality of reconstructed blocks, adjacent to the block unit and reconstructed before the block unit, are located.

Technical Field

The present disclosure relates generally to video coding and, in particular, to techniques for intra prediction based on an adjusted intra mode list.

Background

Intra prediction is a coding tool used in video coding. In a typical video coding scheme, the encoder and the decoder use only previously reconstructed pixels in the pixel line nearest to a coding block to generate reference samples and predictors for predicting or reconstructing the coding block along a direction. The direction is selected from a plurality of intra modes included in a predefined mode list. The encoder therefore needs to adjust the predefined mode list to accommodate different coding blocks, and when the encoder adjusts the predefined mode list for a coding block, the decoder needs to adjust the predefined mode list for that coding block in the same way.
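As a purely illustrative sketch (not the claimed method), directional intra prediction from the nearest reference line can be pictured as copying reconstructed pixels into the block along the selected direction. The sketch below covers only the two axis-aligned directions; the pixel values and block size are made up for illustration.

```python
# Illustrative sketch: build a size x size predictor from the nearest
# reconstructed pixel line. "vertical" copies the row above the block
# downward; "horizontal" copies the column to its left across.
def intra_predict(top_refs, left_refs, size, direction):
    if direction == "vertical":
        return [[top_refs[x] for x in range(size)] for _ in range(size)]
    if direction == "horizontal":
        return [[left_refs[y] for _ in range(size)] for y in range(size)]
    raise ValueError("only the two axis-aligned directions are sketched here")

# Every row of a vertically predicted block equals the top reference row.
pred = intra_predict([10, 20, 30, 40], [10, 12, 14, 16], 4, "vertical")
```

Angular modes generalize this by copying along an oblique direction with interpolation between reference samples, which is why the mode list below assigns a prediction angle to each intra mode.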

Disclosure of Invention

The present disclosure is directed to an apparatus and method for encoding video data based on a plurality of reference lines.

Drawings

The exemplary disclosed aspects are best understood from the following detailed description when read with the accompanying drawing figures. The various features are not drawn to scale and the dimensions of the various features may be arbitrarily increased or decreased for clarity of discussion.

Fig. 1 is a block diagram of an exemplary embodiment of a system configured to encode and decode video data in accordance with one or more techniques of the present disclosure.

Fig. 2 is a block diagram of an exemplary implementation of a decoder module of a destination device in the system of fig. 1.

Fig. 3 shows a flowchart of a first exemplary embodiment of mode list adjustment for intra prediction.

Fig. 4 is a schematic diagram of an exemplary embodiment of an image frame having a block unit.

Fig. 5 shows a flowchart of a second exemplary embodiment of mode list adjustment for intra prediction.

Fig. 6A and 6B illustrate flowcharts of a third exemplary embodiment and a fourth exemplary embodiment, respectively, of mode list adjustment for intra prediction.

Fig. 7A and 7B are schematic diagrams of exemplary embodiments of block units and reference samples of the block units.

Fig. 8A and 8B show flowcharts of a first exemplary embodiment and a second exemplary embodiment, respectively, of multiple reference line prediction for chroma prediction.

FIG. 9 is a schematic diagram of an exemplary embodiment of a block unit and a plurality of reference lines.

Fig. 10 is a block diagram of an exemplary implementation of an encoder module of a source device in the system of fig. 1.

Fig. 11 shows a flowchart of a fifth exemplary embodiment of mode list adjustment for intra prediction.

Fig. 12A and 12B show flowcharts of a third exemplary embodiment and a fourth exemplary embodiment, respectively, of multiple reference line prediction for chroma prediction.

Detailed Description

The following description contains specific information pertaining to the exemplary embodiments of the present disclosure. The drawings in the present disclosure and their accompanying detailed description are directed to merely exemplary embodiments. However, the present disclosure is not limited to these exemplary embodiments. Other variations and embodiments of the present disclosure will occur to those skilled in the art. Unless otherwise indicated, identical or corresponding components in the figures may be indicated by identical or corresponding reference numerals. Furthermore, the drawings and illustrations in this application are generally not drawn to scale and are not intended to correspond to actual relative dimensions.

For purposes of consistency and ease of understanding, identical features are identified by reference numerals in the exemplary drawings (although not so identified in some examples). However, features in different embodiments may differ in other respects and should therefore not be limited narrowly to the features shown in the figures.

The phrases "in one embodiment" or "in some embodiments" as used in the specification may each refer to the same or different embodiment or embodiments. The term "coupled" is defined as directly connected, or indirectly connected through intervening components, and is not necessarily limited to physical connections. The term "comprising" means "including, but not necessarily limited to," and specifically indicates open-ended inclusion or membership in the described combination, group, series, and equivalents.

Furthermore, for purposes of explanation and not limitation, specific details are set forth, such as functional entities, techniques, protocols, standards, etc. in order to provide an understanding of the described technology. In other instances, detailed descriptions of well-known methods, techniques, systems, architectures, and equivalents are omitted so as not to obscure the description with unnecessary detail.

Those skilled in the art will recognize that any of the coding functions or algorithms described in this disclosure can be implemented in hardware, software, or a combination of software and hardware. The functions described may correspond to modules implemented as software, hardware, firmware, or any combination thereof. Software implementations may include computer-executable instructions stored on a computer-readable medium, such as a memory or other type of storage device. For example, one or more microprocessors or general purpose computers having communication processing capability may be programmed with corresponding executable instructions to perform the recited functions or algorithms. The microprocessors or general purpose computers may be formed by Application Specific Integrated Circuits (ASICs), programmable logic arrays, and/or one or more Digital Signal Processors (DSPs). Although several of the exemplary embodiments described in this specification contemplate software installed and executing on computer hardware, alternative exemplary embodiments implemented in firmware, in hardware, or in a combination of hardware and software are also within the scope of this disclosure.

Computer-readable media include, but are not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory, Compact Disc Read-Only Memory (CD-ROM), magnetic cassettes, magnetic tape, magnetic disk storage, or any other equivalent medium capable of storing computer-readable instructions.

Fig. 1 is a block diagram of an exemplary embodiment of a system that may be configured to encode and decode video data in accordance with one or more techniques of this disclosure. In the embodiment, the system includes a source device 11, a destination device 12, and a communication medium 13. In at least one embodiment, the source device 11 may comprise any device configured to encode video data and transmit the encoded video data to the communication medium 13. In at least one embodiment, the destination device 12 may include any device configured to receive encoded video data via the communication medium 13 and decode the encoded video data.

In at least one embodiment, the source device 11 may communicate with the destination device 12 via the communication medium 13, wired and/or wirelessly. The source device 11 may include a source module 111, an encoder module 112, and a first interface 113. The destination device 12 may comprise a display module 121, a decoder module 122 and a second interface 123. In at least one embodiment, the source device 11 may be a video encoder and the destination device 12 may be a video decoder.

In at least one embodiment, the source device 11 and/or the destination device 12 may be a mobile phone, a tablet computer, a desktop computer, a notebook computer, or other electronic devices. Fig. 1 shows only one example of the source device 11 and the destination device 12, and the source device 11 and the destination device 12 in other embodiments may include more or fewer components than shown, or have different configurations of various components.

In at least one embodiment, the source module 111 of the source device 11 may include a video capture device for capturing new video, a video archive for storing previously captured video, and/or a video feed interface for receiving video from a video content provider. In at least one embodiment, the source module 111 of the source device 11 may generate computer graphics-based data as the source video, or a combination of real-time video, archived video, and computer-generated video. In at least one embodiment, the video capture device may be a Charge-coupled device (CCD) image sensor, a Complementary Metal-Oxide-Semiconductor (CMOS) image sensor, or a camera.

In at least one embodiment, the encoder module 112 and the decoder module 122 may each be implemented as any of a variety of suitable encoder/decoder circuits, such as one or more microprocessors, Central Processing Units (CPUs), Graphics Processing Units (GPUs), Systems on Chip (SoCs), Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), discrete logic, software, hardware, firmware, or any combination thereof. When the techniques are implemented in part in software, the device may store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. In at least one embodiment, each of the encoder module 112 and the decoder module 122 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in the respective device.

In at least one embodiment, the first interface 113 and the second interface 123 may employ custom protocols or conform to existing or de-facto standards. Existing standards or de-facto standards include, but are not limited to, ethernet, IEEE 802.11 or IEEE 802.15 series, wireless USB, or telecommunications standards. Telecommunication standards include, but are not limited to, GSM, CDMA2000, TD-SCDMA, WiMAX, 3GPP-LTE, or TD-LTE. In at least one embodiment, the first interface 113 and the second interface 123 may each include any device configured to transmit and/or store compatible video bitstreams to the communication medium 13 and receive compatible video bitstreams from the communication medium 13. In at least one embodiment, the first interface 113 and the second interface 123 may include a computer system interface that may enable compatible video bitstreams to be stored on or received from a storage device. For example, the first interface 113 and the second interface 123 may include a chipset supporting a Peripheral Component Interconnect (PCI) and PCIe Bus protocol, a proprietary Bus protocol, a Universal Serial Bus (USB) protocol, I2C, or other logical and physical structures for interconnecting peer devices.

In at least one embodiment, the display module 121 may include a display using Liquid Crystal Display (LCD) technology, plasma display technology, Organic Light Emitting Diode (OLED) display technology, or Light Emitting Polymer Display (LPD) technology, although other display technologies may be used in other embodiments. In at least one embodiment, the display module 121 may include a high definition display or an ultra-high definition display.

Fig. 2 is a block diagram of a decoder module 222, the decoder module 222 representing an exemplary implementation of the decoder module 122 of the destination device 12 in the system of fig. 1. In at least one implementation, the decoder module 222 includes an entropy decoding unit 2221, a prediction processing unit 2222, an inverse quantization/inverse transform unit 2223, a first adder 2224, a filtering unit 2225, and a decoded picture buffer 2226. In at least one embodiment, the prediction processing unit 2222 of the decoder module 222 further includes an intra prediction unit 22221 and an inter prediction unit 22222. In at least one embodiment, the decoder module 222 receives a bitstream, decodes the bitstream, and outputs decoded video.

In at least one embodiment, the entropy decoding unit 2221 may receive a bitstream including a plurality of syntax elements from the second interface 123 in fig. 1, and perform a parsing operation on the bitstream to extract the syntax elements from the bitstream. As part of the parsing operation, the entropy decoding unit 2221 may entropy decode the bitstream to generate quantized transform coefficients, quantization parameters, transform data, motion vectors, intra modes, partition information, and other syntax information. In at least one embodiment, the entropy decoding unit 2221 may perform Context Adaptive Variable Length Coding (CAVLC), Context Adaptive Binary Arithmetic Coding (CABAC), Syntax-based Context-Adaptive Binary Arithmetic Coding (SBAC), Probability Interval Partitioning Entropy (PIPE) coding, or another entropy coding technique to generate the quantized transform coefficients. In at least one embodiment, the entropy decoding unit 2221 provides the quantized transform coefficients, the quantization parameters, and the transform data to the inverse quantization/inverse transform unit 2223, and provides the motion vectors, intra modes, partition information, and other syntax information to the prediction processing unit 2222.

In at least one embodiment, the prediction processing unit 2222 may receive syntax elements such as motion vectors, intra modes, partition information, and other syntax information from the entropy decoding unit 2221. In at least one embodiment, the prediction processing unit 2222 may receive syntax elements including the partition information and then divide a plurality of image frames according to the partition information. In at least one embodiment, each image frame may be divided into at least one image block according to the partition information. The at least one image block may include a luma block for reconstructing a plurality of luma samples and at least one chroma block for reconstructing a plurality of chroma samples. The luma block and the at least one chroma block may be further divided to generate a macroblock, a Coding Tree Unit (CTU), a Coding Block (CB), a sub-division thereof, and/or another equivalent coding unit.

In at least one embodiment, during the decoding process, the prediction processing unit 2222 receives prediction data including an intra mode or a motion vector for a current image block of a particular one of the plurality of image frames. The current image block may be one of the luminance block and the at least one chrominance block in the particular image frame.

In at least one embodiment, the intra prediction unit 22221 may perform intra prediction coding of the current block unit relative to one or more neighboring blocks in the same frame as the current block unit, based on syntax elements related to an intra mode, to generate a prediction block. In at least one embodiment, the intra mode may specify the position of reference samples selected from the neighboring blocks within the current frame.

In at least one embodiment, when the prediction processing unit 2222 reconstructs the luma components of a current block unit, the intra prediction unit 22221 may reconstruct a plurality of chroma components of the current block unit based on the plurality of luma components of the current block unit.

In at least one embodiment, the inter prediction unit 22222 may perform inter prediction coding of the current block unit relative to one or more blocks in one or more reference image blocks, based on syntax elements related to a motion vector, to generate a prediction block. In at least one embodiment, the motion vector may indicate a displacement of the current block unit within the current image block relative to a reference block unit within a reference image block. The reference block unit is a block determined to closely match the current block unit. In at least one embodiment, the inter prediction unit 22222 receives reference image blocks stored in the decoded picture buffer 2226 and reconstructs the current block unit based on the received reference image blocks.

In at least one embodiment, the inverse quantization/inverse transform unit 2223 may use inverse quantization and inverse transformation to reconstruct a residual block in the pixel domain. In at least one embodiment, the inverse quantization/inverse transform unit 2223 may apply inverse quantization to the quantized residual transform coefficients to generate residual transform coefficients, and then apply an inverse transform to the residual transform coefficients to generate the residual block in the pixel domain. In at least one embodiment, the inverse transform may reverse a transformation process such as a Discrete Cosine Transform (DCT), a Discrete Sine Transform (DST), an Adaptive Multiple Transform (AMT), a Mode-Dependent Non-Separable Secondary Transform (MDNSST), a Hypercube-Givens Transform (HyGT), a signal-dependent transform, a Karhunen-Loève Transform (KLT), a wavelet transform, an integer transform, a subband transform, or a conceptually similar transform. In at least one embodiment, the inverse transform may convert the residual information from a transform domain, such as a frequency domain, back to the pixel domain. In at least one embodiment, the degree of inverse quantization may be modified by adjusting a quantization parameter.
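The dequantize-then-inverse-transform step can be sketched as follows. This is a minimal illustration only: the flat quantization step, the 2x2 block size, and the textbook type-III IDCT are assumptions for clarity, not the quantization or transform design of any particular standard.

```python
import math

# Scale quantized coefficients back by the quantization step.
def dequantize(coeffs, qstep):
    return [[c * qstep for c in row] for row in coeffs]

# Naive 2-D inverse DCT (inverse of the orthonormal type-II DCT),
# returning the residual block in the pixel domain.
def idct_2d(coeffs):
    n = len(coeffs)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for x in range(n):
        for y in range(n):
            s = 0.0
            for u in range(n):
                for v in range(n):
                    s += (alpha(u) * alpha(v) * coeffs[u][v]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[x][y] = s
    return out

# A DC-only coefficient block dequantizes and inverse-transforms
# into a flat residual block.
residual = idct_2d(dequantize([[4, 0], [0, 0]], qstep=2))
```

Practical decoders use fast integer approximations of the inverse transform rather than this direct double sum, but the data flow (dequantize, then transform back to the pixel domain) is the same.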

In at least one embodiment, the first adder 2224 adds the reconstructed residual block to the prediction block provided from the prediction processing unit 2222 to generate a reconstructed block.
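The first adder's role can be sketched in a few lines: sum the prediction block and the residual block sample by sample, clipping to the valid sample range. The 8-bit depth and the example values are illustrative assumptions.

```python
# Minimal sketch of the first adder: reconstructed = clip(prediction + residual),
# clipping each sample into the range implied by the bit depth.
def reconstruct(pred, residual, bit_depth=8):
    hi = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), hi) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, residual)]

# Values are clipped into [0, 255] for 8-bit samples.
block = reconstruct([[250, 100], [0, 30]], [[10, -20], [-5, 4]])
# block == [[255, 80], [0, 34]]
```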

In at least one embodiment, the filtering unit 2225 may include a deblocking filter, a Sample Adaptive Offset (SAO) filter, a bilateral filter, and/or an Adaptive Loop Filter (ALF) to remove blocking artifacts from reconstructed blocks. Other filters (in loop or post loop) may be used in addition to the deblocking filter, the SAO filter, the bilateral filter, and the ALF. Such filters are not shown for simplicity, but the output of the first adder 2224 may be filtered if desired. In at least one embodiment, after performing the filtering process on the reconstructed blocks of a specific image frame, the filtering unit 2225 may output the decoded video to the display module 121 or another video receiving unit.

In at least one embodiment, the decoded picture buffer 2226 may be a reference picture memory that stores reference blocks used for decoding a bitstream, e.g., in an inter-coding mode, by the prediction processing unit 2222. The decoded picture buffer 2226 may be formed from any of a variety of Memory devices, such as Dynamic Random Access Memory (DRAM), including Synchronous DRAM (SDRAM), Magnetoresistive RAM (MRAM), Resistive RAM (RRAM), or other types of Memory devices. In at least one embodiment, the decoded picture buffer 2226 may be on-chip with other components of the decoder module 222, or off-chip with respect to those components.

Fig. 3 shows a flowchart of a first exemplary embodiment of mode list adjustment for intra prediction. The example method is provided by way of illustration only, as there are a variety of ways to carry out the method. For example, the method described below may be performed using the configurations shown in fig. 1 and 2, and reference is made to the various components of these figures in explaining the example method. Each block shown in fig. 3 represents one or more processes, methods, or subroutines performed in the example method. Further, the order of the blocks is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure.

At block 31, the decoder module 222 determines a block unit in an image frame from video data and determines a plurality of adjacent blocks adjacent to the block unit.

In at least one embodiment, the video data may be a bitstream. The destination device 12 may receive the bitstream from an encoder, such as the source device 11, via the second interface 123 of the destination device 12. The second interface 123 provides the bitstream to the decoder module 222. The decoder module 222 determines an image frame from the bitstream and divides the image frame according to a plurality of partition indications in the bitstream to determine the block unit. For example, the decoder module 222 may divide the image frame to generate a plurality of coding tree units, and further divide one of the coding tree units according to the partition indications, based on any video coding standard, to determine a block unit having a block size.
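Recursive partitioning of a coding tree unit driven by parsed split flags can be sketched as below. The depth-first flag ordering, the 32x32 root, and the minimum block size are hypothetical assumptions for illustration; real standards define their own partitioning syntax.

```python
# Hypothetical sketch: quadtree-split one coding tree unit into block units
# by consuming split flags depth-first. A flag of 1 splits the current node
# into four quadrants; 0 (or reaching min_size) makes it a leaf block unit.
def split_ctu(x, y, size, flags, min_size=8):
    if size > min_size and flags.pop(0):
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                yield from split_ctu(x + dx, y + dy, half, flags, min_size)
    else:
        yield (x, y, size)

# One split at the root, then no further splits: four 16x16 block units.
blocks = list(split_ctu(0, 0, 32, [1, 0, 0, 0, 0]))
```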

In at least one embodiment, the entropy decoding unit 2221 may decode the bitstream to determine a plurality of prediction indications for the block unit, and the decoder module 222 may then reconstruct the block unit further based on the prediction indications. In at least one embodiment, the prediction indications may include a plurality of flags and a plurality of indices.

In at least one embodiment, the prediction processing unit 2222 of the destination device 12 determines the neighboring blocks adjacent to the block unit. In at least one embodiment, a neighboring block may be reconstructed before the block unit is reconstructed, and such a neighboring block may therefore include a plurality of reference samples for reconstructing the block unit. In at least one embodiment, the block unit may be reconstructed before some neighboring blocks are reconstructed, and an unreconstructed neighboring block therefore may not include reference samples for the block unit. Fig. 4 is a schematic diagram of an exemplary embodiment of an image frame 41 having a block unit 411. The prediction processing unit 2222 may receive reference samples 412 adjacent to the block unit 411. The reference samples 412 include a plurality of first reference samples 4121 located above the block unit 411 and a plurality of second reference samples 4122 located to the left of the block unit 411.
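Gathering the first (above) and second (left) reference samples for a block unit can be sketched as an index computation over the already reconstructed frame. The frame contents and coordinates are illustrative, and availability substitution for unreconstructed neighbors is omitted for brevity.

```python
# Sketch: collect the row of samples just above and the column just left of
# an n x n block unit whose top-left corner is (bx, by); return None when the
# block touches the frame boundary and that line is unavailable.
def reference_samples(frame, bx, by, n):
    above = [frame[by - 1][bx + i] for i in range(n)] if by > 0 else None
    left = [frame[by + j][bx - 1] for j in range(n)] if bx > 0 else None
    return above, left

# Toy 8x8 frame where sample value = row * 10 + column.
frame = [[row * 10 + col for col in range(8)] for row in range(8)]
above, left = reference_samples(frame, 4, 4, 4)
# above comes from row 3; left comes from column 3
```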

Referring back to fig. 3, the intra prediction unit 22221 determines a first mode list having a plurality of first candidate modes and a second mode list having a plurality of second candidate modes at block 32.

In at least one embodiment, the first candidate modes and the second candidate modes are selected from a plurality of intra modes. In one embodiment, at least one of the first candidate modes may be the same as at least one of the second candidate modes. In another embodiment, each of the first candidate modes may be different from the second candidate modes.

Table 1 schematically shows an exemplary embodiment of assigning indexes to intra modes each having an intra prediction angle, in which each of the first and second candidate modes may correspond to one of the planar mode, the DC mode, and the plurality of intra modes 2 to 66 in table 1.

TABLE 1

Intra mode              2    3    4    5    6    7    8    9   10   11   12   13   14   15   16   17
Intra prediction angle 32   29   26   23   21   19   17   15   13   11    9    7    5    3    2    1

Intra mode             18   19   20   21   22   23   24   25   26   27   28   29   30   31   32   33
Intra prediction angle  0   -1   -2   -3   -5   -7   -9  -11  -13  -15  -17  -19  -21  -23  -26  -29

Intra mode             34   35   36   37   38   39   40   41   42   43   44   45   46   47   48   49
Intra prediction angle -32 -29  -26  -23  -21  -19  -17  -15  -13  -11   -9   -7   -5   -3   -2   -1

Intra mode             50   51   52   53   54   55   56   57   58   59   60   61   62   63   64   65   66
Intra prediction angle  0    1    2    3    5    7    9   11   13   15   17   19   21   23   26   29   32
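The mode-to-angle mapping of Table 1 can be reproduced by a small helper that exploits the table's symmetry about modes 34 and 50. This sketch is illustrative only; the function name is not part of the disclosure.

```python
# Angles for modes 2..18 of Table 1; the rest of the table mirrors this run.
ANGLES_2_TO_18 = [32, 29, 26, 23, 21, 19, 17, 15, 13, 11, 9, 7, 5, 3, 2, 1, 0]

def intra_pred_angle(mode: int) -> int:
    """Intra prediction angle for directional modes 2 to 66 (Table 1)."""
    if not 2 <= mode <= 66:
        raise ValueError("planar (0) and DC (1) modes carry no prediction angle")
    if mode > 34:                      # modes 35..66 mirror modes 33..2
        return intra_pred_angle(68 - mode)
    if mode > 18:                      # modes 19..34 negate modes 17..2
        return -intra_pred_angle(36 - mode)
    return ANGLES_2_TO_18[mode - 2]
```

For example, the horizontal and vertical modes 18 and 50 both map to angle 0, while the diagonal modes 2, 34, and 66 map to 32, -32, and 32 respectively.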

Table 2 schematically shows an exemplary embodiment of assigning indexes to intra modes each having an intra prediction angle, in which each of the first candidate mode and the second candidate mode may correspond to one of the planar mode, the DC mode, and the plurality of intra modes 2 to 70 in table 2.

TABLE 2

In at least one embodiment, the intra prediction unit 22221 may select the first candidate modes and the second candidate modes based on the first example in Table 1, the second example in Table 2, or other predefined prediction mode lists. For example, the intra prediction unit 22221 may select the planar mode, the DC mode, and all intra modes 2 to 66 in Table 2 as the first candidate modes, and select the planar mode, the DC mode, and all intra modes 2, 4, and 6 to 70 in Table 2 as the second candidate modes.
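The example selection above can be written out as two plain index lists, with 0 and 1 standing for the planar and DC modes; the variable names are illustrative only.

```python
PLANAR, DC = 0, 1  # non-directional modes

# First mode list: planar, DC, and directional modes 2..66 of Table 2.
first_candidate_modes = [PLANAR, DC] + list(range(2, 67))

# Second mode list: planar, DC, and directional modes 2, 4, and 6..70 of Table 2.
second_candidate_modes = [PLANAR, DC, 2, 4] + list(range(6, 71))
```

Both lists share the planar and DC modes, while the second list skips modes 3 and 5 and extends past mode 66.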

At block 33, the intra prediction unit 22221 selects one of the first mode list and the second mode list.

In at least one embodiment, the entropy decoding unit 2221 may decode the bitstream to determine a list flag for the block unit, and then the decoder module 222 may reconstruct the block unit further based on the list flag. In at least one embodiment, the intra prediction unit 22221 may select one of the first mode list and the second mode list as the selected mode list based on the list flag.

In at least one embodiment, the intra prediction unit 22221 may directly determine which of the first mode list and the second mode list is selected for the block unit 411 without a list flag. In one embodiment, the intra prediction unit 22221 may determine whether the neighboring blocks include reference samples in order to select one of the first mode list and the second mode list. In another embodiment, the intra prediction unit 22221 may select one of the first mode list and the second mode list based on the block size of the block unit 411.
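The selection at block 33 can be sketched as a small helper: an explicit list flag decides when present, otherwise an implicit rule applies. The block-size rule shown (square blocks keep the first list) is an assumption for illustration; the disclosure only states that block size may drive the choice.

```python
def select_mode_list(first_list, second_list, list_flag=None,
                     block_w=None, block_h=None):
    """Choose between the two mode lists for a block unit.

    A list flag parsed from the bitstream decides directly when present;
    otherwise an implicit, assumed block-size rule is applied.
    """
    if list_flag is not None:
        return second_list if list_flag else first_list
    # Assumed implicit rule: square blocks keep the first mode list.
    return first_list if block_w == block_h else second_list
```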

At block 34, the intra prediction unit 22221 selects a prediction mode from the selected mode list.

In at least one embodiment, the entropy decoding unit 2221 may decode the bitstream to determine the orientation flag for the block unit. In one embodiment, the intra prediction unit 22221 may select a prediction mode from the selected mode list based on the orientation flag. In one embodiment, when the selected mode list is the first mode list, the intra prediction unit 22221 may select one of the first candidate modes from the first mode list based on the orientation flag. Then, the intra prediction unit 22221 may set the selected first candidate mode as a prediction mode. In another embodiment, when the selected mode list is a second mode list, the intra prediction unit 22221 may select one of the second candidate modes from the second mode list based on the orientation flag. Then, the intra prediction unit 22221 may set the selected second candidate mode as the prediction mode.

At block 35, the decoder module 222 generates a plurality of reconstruction components in the block unit based on the plurality of neighboring blocks and the prediction mode.

In at least one embodiment, a block unit may include a plurality of block components. In the described embodiment, each block component may be a pixel component. In at least one embodiment, the intra prediction unit 22221 may determine a predictor for each block component along the direction of the prediction mode derived for the block unit, based on the reference samples determined from the neighboring blocks.

In at least one embodiment, the first adder 2224 of the decoder module 222 in the destination device 12 may add the predictors derived based on the prediction mode to a plurality of residual samples determined from the bitstream to reconstruct the block unit. In addition, the decoder module 222 may reconstruct all the other block units in the image frame to reconstruct the image frame and the video.
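The adder step can be sketched as follows. Clipping the sum to the valid sample range is standard decoder behaviour assumed here for completeness; it is not stated explicitly above.

```python
def reconstruct_block(predictors, residuals, bit_depth=8):
    """Add residual samples to predictors and clip to the valid sample range."""
    hi = (1 << bit_depth) - 1
    return [min(max(p + r, 0), hi) for p, r in zip(predictors, residuals)]
```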

Fig. 5 shows a flowchart of a second exemplary embodiment of mode list adjustment for intra prediction. Because there are many ways to perform this method, the example method is provided by way of example only. For example, the methods described below may be performed using the configurations shown in fig. 1 and 2, and reference is made to the various components of these figures in explaining example methods. Each block shown in fig. 5 represents one or more processes, methods, or subroutines performed in the example method. Further, the order of the blocks is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure.

At block 51, the decoder module 222 determines a block unit in an image frame from video data.

In at least one embodiment, the video data may be a bitstream. The destination device 12 may receive the bitstream from an encoder, such as the source device 11, via the second interface 123 of the destination device 12. The second interface 123 provides the bitstream to the decoder module 222. The decoder module 222 determines an image frame from the bitstream and divides the image frame to determine block units according to a plurality of partition indications in the bitstream. For example, the decoder module 222 may divide the image frame to generate a plurality of coding tree units, and further divide one of the coding tree units according to the partition indications, based on any video coding standard, to determine a block unit having a block size.

In at least one embodiment, the entropy decoding unit 2221 may decode the bitstream to determine a plurality of prediction indicators for the block unit, and then the decoder module 222 may reconstruct the block unit further based on the prediction indicators. In at least one embodiment, the prediction indicators may include a plurality of flags and a plurality of indices.

In at least one embodiment, the prediction processing unit 2222 of the destination device 12 determines neighboring blocks that are adjacent to the block unit. In at least one embodiment, a neighboring block may be reconstructed before reconstructing the block unit, and thus the neighboring block may include a plurality of reference samples for reconstructing the block unit. In at least one embodiment, the block unit may be reconstructed before some neighboring blocks are reconstructed, and thus an unreconstructed neighboring block may not include reference samples for the block unit.

At block 52, the intra prediction unit 22221 determines a mode list including a plurality of candidate modes separated into a first mode group and a second mode group.

In at least one embodiment, the candidate modes in the mode list may be predefined in the destination device 12 and the source device 11. For example, the candidate modes may be predefined as a planar mode, a DC mode, and/or a plurality of directional modes.

In at least one embodiment, the intra prediction unit 22221 may divide the candidate modes into a first mode group and a second mode group. In this embodiment, the candidate modes in the first mode group may be a plurality of preset modes selected from the mode list, and the candidate modes in the second mode group may be a plurality of additional modes selected from the mode list to replace at least one of the preset modes. In at least one embodiment, the preset modes in the first mode group may include a planar mode, a DC mode, and a plurality of first directional modes, and the additional modes in the second mode group may include a plurality of second directional modes. In the described embodiment, the second directional modes may be selected to replace at least one of the first directional modes. In this embodiment, since the candidate modes are separated into the first mode group and the second mode group, each of the additional modes in the second mode group is different from the preset modes in the first mode group.

In at least one embodiment, each candidate mode has a prediction index. In one embodiment, when the decoder module 222 decodes the bitstream in High Efficiency Video Coding (HEVC), the prediction indexes of the preset modes in the first mode group may be equal to 0 to 34. In this embodiment, the prediction indexes of the planar mode and the DC mode may be equal to 0 and 1, and the prediction indexes of the first directional modes may be equal to 2 to 34. In this embodiment, the prediction indexes of the additional modes in the second mode group may be greater than 34 or less than 0. In one embodiment, when the decoder module 222 decodes the bitstream with the Versatile Video Coding (VVC) test model (VTM), the prediction indexes of the preset modes in the first mode group may be equal to 0 to 66. In this embodiment, the prediction indexes of the planar mode and the DC mode may be equal to 0 and 1, and the prediction indexes of the first directional modes may be equal to 2 to 66. In this embodiment, the prediction indexes of the additional modes in the second mode group may be greater than 66 or less than 0.
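The index ranges above can be sketched as a small classifier. The function name is illustrative, and the boundary index per codec follows the preset ranges given in the text (0 to 34 for HEVC, 0 to 66 for VTM).

```python
def mode_group(pred_index, codec="VTM"):
    """Classify a prediction index as a preset (first-group) or additional
    (second-group) candidate mode based on the stated index ranges."""
    last_preset = 66 if codec == "VTM" else 34   # last preset index per codec
    return "first" if 0 <= pred_index <= last_preset else "second"
```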

Table 3 schematically shows an exemplary embodiment of assigning indexes to intra modes each having an intra prediction angle, in which each of the candidate modes may correspond to one of the planar mode, the DC mode, and the plurality of intra modes 2 to 130 in Table 3.

TABLE 3

Table 4 schematically shows an exemplary embodiment of assigning indexes to intra modes each having an intra prediction angle, in which each of the candidate modes may correspond to one of the planar mode, the DC mode, and the plurality of intra modes -14 to -1 and 2 to 80 in Table 4.

TABLE 4

In at least one embodiment, the intra prediction unit 22221 may select the candidate modes based on the third example in Table 3, the fourth example in Table 4, or other predefined prediction mode lists. For example, the candidate modes in the mode list may include a first mode group having the planar mode, the DC mode, and the intra modes 2 to 66 in Table 4, and a second mode group having the intra modes -14 to -1 and 67 to 80 in Table 4. In this embodiment, the planar mode, the DC mode, and the intra modes 2 to 66 in the first mode group are the preset modes, and the intra modes -14 to -1 and 67 to 80 in the second mode group are the additional modes.

At block 53, the intra prediction unit 22221 determines whether a particular one of the plurality of candidate modes in the first mode group is replaced by one of the plurality of candidate modes in the second mode group.

In at least one embodiment, the entropy decoding unit 2221 may decode the bitstream to determine a replacement flag and an orientation index of the block unit, and then the decoder module 222 may reconstruct the block unit further based on the replacement flag and the orientation index. In at least one embodiment, the intra prediction unit 22221 may determine the particular candidate mode in the first mode group based on the orientation index, and determine whether the particular candidate mode in the first mode group is replaced by one of the candidate modes in the second mode group based on the replacement flag.

In at least one embodiment, the entropy decoding unit 2221 may decode the bitstream to determine an orientation index of the block unit to determine the particular candidate mode in the first mode group. In this embodiment, there may be no replacement flag in the bitstream for the particular candidate mode in the first mode group. In this embodiment, the intra prediction unit 22221 may directly determine whether the particular candidate mode in the first mode group is replaced by one of the candidate modes in the second mode group without a replacement flag. In one embodiment, the intra prediction unit 22221 may determine whether the neighboring blocks include reference samples to determine whether to replace at least one of the candidate modes in the first mode group with at least one of the candidate modes in the second mode group. In one embodiment, none of the candidate modes in the first mode group is replaced by the candidate modes in the second mode group, and thus the intra prediction unit 22221 may determine that the particular candidate mode in the first mode group is not replaced. In another embodiment, the intra prediction unit 22221 may determine that at least one candidate mode in the first mode group is replaced with at least one candidate mode in the second mode group according to the relationship between the neighboring blocks and the reference samples. Then, the intra prediction unit 22221 further determines which of the candidate modes in the first mode group is selected to be replaced by at least one of the candidate modes in the second mode group. In one embodiment, if the particular candidate mode in the first mode group is selected to be replaced, the intra prediction unit 22221 determines that the particular candidate mode in the first mode group is replaced by one of the at least one candidate mode in the second mode group.
In another embodiment, if the particular candidate mode in the first mode group is not selected, the intra prediction unit 22221 determines that the particular candidate mode in the first mode group is not replaced by a candidate mode in the second mode group.

In at least one embodiment, the intra prediction unit 22221 may determine whether to replace at least one of the candidate modes in the first mode group with at least one of the candidate modes in the second mode group based on the block size of the block unit 411. When the block width is equal to the block height, the intra prediction unit 22221 may determine that the candidate modes in the first mode group are not replaced by the candidate modes in the second mode group. When the block width is different from the block height, the intra prediction unit 22221 may determine that at least one candidate mode in the first mode group is replaced with at least one candidate mode in the second mode group. Then, the intra prediction unit 22221 further determines which of the candidate modes in the first mode group is selected to be replaced by at least one of the candidate modes in the second mode group, to check whether the particular candidate mode in the first mode group is selected.
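A minimal sketch of a block-size-based replacement rule follows. The number of swapped modes and the exact index ranges are assumptions for illustration (styled after wide-angle substitution); the text above only states that non-square blocks trigger replacement.

```python
def is_mode_replaced(mode, block_w, block_h, n_swapped=8):
    """Assumed rule: square blocks keep every preset mode; non-square blocks
    swap the n_swapped directional modes pointing toward the shorter side."""
    if block_w == block_h:
        return False
    if block_w > block_h:                 # wide block: lower-left-pointing modes go
        return 2 <= mode <= 1 + n_swapped
    return 67 - n_swapped <= mode <= 66   # tall block: upper-right-pointing modes go
```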

At block 54, the intra prediction unit 22221 determines the particular candidate mode in the first mode group as the prediction mode.

In at least one embodiment, the intra prediction unit 22221 may directly set the particular candidate mode in the first mode group as the prediction mode based on the orientation index.

At block 55, the intra prediction unit 22221 determines which of the plurality of candidate modes in the second mode group is selected as the prediction mode to replace the particular candidate mode in the first mode group.

In at least one embodiment, the intra prediction unit 22221 determines that at least one of the candidate modes in the first mode group is replaced with at least one of the candidate modes in the second mode group. In this embodiment, the number of the at least one candidate mode in the first mode group is equal to the number of the at least one candidate mode in the second mode group. In addition, each of the at least one candidate mode in the first mode group corresponds to one of the at least one candidate mode in the second mode group. Accordingly, the intra prediction unit 22221 also determines which of the at least one candidate mode in the second mode group corresponds to the particular candidate mode in the first mode group. Then, the intra prediction unit 22221 sets the determined candidate mode in the second mode group corresponding to the particular candidate mode in the first mode group as the prediction mode.
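The one-to-one correspondence described above can be sketched as a simple lookup; the argument names are illustrative.

```python
def substitute_mode(mode, replaced_presets, added_modes):
    """Block 55: replaced_presets[i] in the first group corresponds
    one-to-one to added_modes[i] in the second group."""
    if mode in replaced_presets:
        return added_modes[replaced_presets.index(mode)]
    return mode   # block 54: the preset mode is kept as the prediction mode
```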

At block 56, the decoder module 222 generates a plurality of predictors in the block unit based on the prediction mode.

In at least one embodiment, a block unit may include a plurality of block components. In the described embodiment, each block component may be a pixel component. In at least one embodiment, the intra prediction unit 22221 may determine a predictor for each block component along the direction of the prediction mode derived for the block unit, based on the reference samples determined from the neighboring blocks.

In at least one embodiment, the first adder 2224 of the decoder module 222 in the destination device 12 may add the predictors derived based on the prediction mode to a plurality of residual samples determined from the bitstream to reconstruct the block unit. In addition, the decoder module 222 may reconstruct all the other block units in the image frame to reconstruct the image frame and the video.

Fig. 6A illustrates a flowchart according to a third exemplary embodiment of mode list adjustment for intra prediction. Because there are many ways to perform this method, the example method is provided by way of example only. For example, the methods described below may be performed using the configurations shown in fig. 1 and 2, and reference is made to the various components of these figures in explaining example methods. Each block shown in fig. 6A represents one or more processes, methods, or subroutines performed in the example method. Further, the order of the blocks is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure. In one embodiment, FIG. 6A may be a detailed exemplary embodiment of block 33 in FIG. 3.

At block 6331, the intra prediction unit 22221 determines a plurality of sample positions and a plurality of reference samples for the block unit.

In at least one embodiment, an image frame includes a plurality of block units including a first block unit, a second block unit, a third block unit, and a fourth block unit. Fig. 7A is a schematic diagram of an exemplary embodiment of the first, second, third, and fourth block units 711, 712, 713, and 714 and the reference samples 731, 733, and 734 of the first block unit 711. In this embodiment, the first block unit 711 further includes a first sub-block unit 7111 and a second sub-block unit 7112. In one embodiment, when the decoder module 222 reconstructs the second sub-block unit 7112, the intra prediction unit 22221 may determine a first sample position 721, a second sample position 722, a third sample position 723, and a fourth sample position 724 for the second sub-block unit 7112. In the illustrated embodiment, the first sample position 721 may be an upper position, the second sample position 722 may be an upper right position, the third sample position 723 may be a left position, and the fourth sample position 724 may be a lower left position. In at least one embodiment, the first sub-block unit 7111 may be reconstructed before reconstructing the second sub-block unit 7112, and thus the first sample position 721 may include the first reference sample 731 for reconstructing the block unit. In at least one embodiment, the second sub-block unit 7112 may be reconstructed prior to reconstructing the second block unit 712, so there are no reference samples at the second sample position 722 for reconstructing the second sub-block unit 7112. In one embodiment, the third sample position 723 may include the second reference sample 733 when there is a previously decoded block to the left of the block units 711 to 714. In addition, the fourth sample position 724 may include a third reference sample 734.

Fig. 7B is a schematic diagram of an exemplary implementation of the sub-block unit 7632 and the reference sample 782. In at least one embodiment, the image frame 76 includes a fifth block unit 761, a sixth block unit 762, and a seventh block unit 763, and the seventh block unit 763 further includes a third sub-block unit 7631 and a fourth sub-block unit 7632. When the decoder module 222 reconstructs the fourth sub-block unit 7632, the intra prediction unit 22221 may determine a fifth sample position 771, a sixth sample position 772, a seventh sample position 773, and an eighth sample position 774 for the fourth sub-block unit 7632. In the illustrated embodiment, the fifth sample position 771 may be an upper position, the sixth sample position 772 may be an upper right position, the seventh sample position 773 may be a left position, and the eighth sample position 774 may be a lower left position. In this embodiment, there are no reference samples at the fifth sample position 771 and the sixth sample position 772, since there are no block units above the fourth sub-block unit 7632. In the described embodiment, the third sub-block unit 7631 may be reconstructed before the fourth sub-block unit 7632 is reconstructed, and thus the seventh sample position 773 may include the fourth reference sample 783 used to reconstruct the block unit. In at least one embodiment, the fourth sub-block unit 7632 may be reconstructed before reconstructing another block unit located below the seventh block unit 763, and thus there are no reference samples at the eighth sample position 774 for reconstructing the fourth sub-block unit 7632.

At block 6332, the intra prediction unit 22221 determines the selected candidate list based on the relationships between the plurality of reference samples and the plurality of sample positions.

In at least one embodiment, more than one candidate list may be predefined in the source device 11 and the destination device 12. Furthermore, predefined selection rules for determining how to select one of the candidate lists may also be predefined in the source device 11 and the destination device 12. For example, the predefined candidate lists may include a first candidate list, a second candidate list, a third candidate list, and a fourth candidate list. In this embodiment, the first to fourth candidate lists are different from each other. In such embodiments, some candidate modes of one of the candidate lists may be the same as candidate modes in another candidate list. In one embodiment, the first candidate list may be selected when there is a reference sample at each of the first to fourth sample positions of the block unit. In another embodiment, the second candidate list may be selected when there are reference samples at the first to third sample positions of the block unit. In other words, there is no reference sample at the lower left position. In addition, the third candidate list may be selected when there are reference samples at the first sample position and the third to fourth sample positions of the block unit. In other words, there is no reference sample at the upper right position. In other embodiments, the fourth candidate list may be selected when there are reference samples only at the first and third sample positions of the block unit. In other words, there is no reference sample at either the upper right position or the lower left position. For example, the intra prediction unit 22221 may determine that there is no reference sample at the second sample position 722. Accordingly, the intra prediction unit 22221 may select the third candidate list to reconstruct the second sub-block unit 7112 based on the predefined selection rules.
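Under rules of this kind, the list choice is a function of reference-sample availability at the four positions. The mapping below is an illustrative sketch; the actual predefined selection rules are left open by the text.

```python
def select_candidate_list(above, above_right, left, below_left):
    """Map availability flags at the four sample positions to one of the
    four predefined candidate lists (assumed, illustrative mapping)."""
    if above and above_right and left and below_left:
        return "first"
    if above and above_right and left:   # no below-left reference
        return "second"
    if above and left and below_left:    # no above-right reference
        return "third"
    return "fourth"
```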

Fig. 6B illustrates a flowchart of a fourth exemplary embodiment of mode list adjustment for intra prediction. Because there are many ways to perform this method, the example method is provided by way of example only. For example, the methods described below may be performed using the configurations shown in fig. 1 and 2, and reference is made to the various components of these figures in explaining example methods. Each block shown in fig. 6B represents one or more processes, methods, or subroutines performed in the example method. Further, the order of the blocks is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure. In one embodiment, fig. 6B may be a detailed exemplary embodiment of block 53 in fig. 5.

At block 6531, the intra prediction unit 22221 determines a plurality of sample positions and a plurality of reference samples for the block unit.

In at least one embodiment, the intra prediction unit 22221 may determine a first sample position, a second sample position, a third sample position, and a fourth sample position for a block unit when the decoder module 222 reconstructs the block unit. In such embodiments, the first sample position may be an upper position, the second sample position may be an upper right position, the third sample position may be a left position, and the fourth sample position may be a lower left position. In one embodiment, a first neighboring block covering a first sample position may be reconstructed before reconstructing the block unit and thus, the first sample position may include a plurality of first reference samples for the block unit generated based on the first neighboring block. In another embodiment, the block unit may be reconstructed before reconstructing a second neighboring block covering the second sample position, so there is no reference sample for reconstructing the block unit at the second sample position. In another embodiment, when there is no block unit on the left side of the block unit, there is no reference sample at the third sample position and the fourth sample position.
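The four sample positions described above can be sketched as coordinate offsets around the block. The coordinate convention (x growing rightward, y growing downward) and the function name are assumptions for illustration.

```python
def sample_positions(x, y, w, h):
    """Coordinates of the four neighbouring sample positions for a block
    with top-left corner (x, y), width w, and height h."""
    return {
        "above":       (x,     y - 1),   # first sample position
        "above_right": (x + w, y - 1),   # second sample position
        "left":        (x - 1, y),       # third sample position
        "below_left":  (x - 1, y + h),   # fourth sample position
    }
```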

At block 6532, the intra prediction unit 22221 determines whether at least one of the plurality of candidate modes in the first mode group is replaced based on the relationship between the plurality of reference samples and the plurality of sample positions.

In at least one embodiment, the candidate modes in the mode list may be predefined in the destination device 12 and the source device 11 and may be separated into a first mode group and a second mode group. In this embodiment, the candidate modes in the first mode group predefined in the source device 11 are the same as the candidate modes in the first mode group predefined in the destination device 12. Likewise, the candidate modes in the second mode group predefined in the source device 11 are the same as the candidate modes in the second mode group predefined in the destination device 12. In this embodiment, the candidate modes in the first mode group may be a plurality of preset modes selected from the mode list, and the candidate modes in the second mode group may be a plurality of additional modes selected from the mode list to replace at least one of the preset modes. In at least one embodiment, the preset modes in the first mode group may include a planar mode, a DC mode, and a plurality of first directional modes, and the additional modes in the second mode group may include a plurality of second directional modes. In the described embodiment, at least one of the second directional modes may be selected to replace at least one of the first directional modes. In the described embodiment, each candidate mode in the second mode group is different from the candidate modes in the first mode group.

In at least one embodiment, predefined replacement rules for determining whether to replace at least one of the preset modes may be predefined in the source device 11 and the destination device 12. In one embodiment, each preset mode may remain unchanged when there is a reference sample at each of the first to fourth sample positions of the block unit. Accordingly, the block unit can be predicted by one of the preset modes. In another embodiment, when there is no reference sample at one of the first to fourth sample positions of the block unit, at least one preset mode may be replaced with at least one of the additional modes.

In at least one embodiment, the preset modes in the first mode group may remain unchanged when the intra prediction unit 22221 determines that there are reconstructed samples at both the lower left position and the upper right position. In this embodiment, when the intra prediction unit 22221 determines that there are reconstructed samples at the lower left position and no reconstructed samples at the upper right position, the intra prediction unit 22221 replaces at least one of the preset modes having an orientation toward the upper right position with at least one of the additional modes. In this embodiment, the at least one additional mode, such as a candidate mode pointing toward the lower left position, may be selected from among a plurality of outermost candidate modes in the second mode group. In this embodiment, when the intra prediction unit 22221 determines that there are reconstructed samples at the upper right position and no reconstructed samples at the lower left position, the intra prediction unit 22221 replaces at least one of the preset modes having an orientation toward the lower left position with at least one of the additional modes. In this embodiment, the at least one additional mode, such as a candidate mode pointing toward the upper right position, may be selected from the outermost candidate modes. In this embodiment, when the intra prediction unit 22221 determines that there are no reconstructed samples at the upper right position and the lower left position, the preset modes in the first mode group may remain unchanged.

In at least one embodiment, when the intra prediction unit 22221 determines that there are no reconstructed samples at the lower left and upper right positions, the intra prediction unit 22221 may replace at least one of the preset modes having an orientation toward the upper right or lower left position with at least one of the additional modes. In this embodiment, the at least one additional mode may be selected based on a cost function. In the described embodiment, the cost function may be the Sum of Absolute Differences (SAD). In at least one implementation, when the intra prediction unit 22221 determines that there are reconstructed samples at each of the four sample positions, the intra prediction unit 22221 may replace at least one of the preset modes with at least one of the additional modes. In this embodiment, the at least one additional mode and the at least one preset mode may be selected based on the cost function.

In at least one embodiment, the intra prediction unit 22221 may generate an intra predictor based on each additional mode and calculate the cost function for each additional mode. Then, the intra prediction unit 22221 may select at least one of the additional modes having the lowest cost result to replace at least one of the preset modes. In at least one embodiment, referring to fig. 7A, the intra prediction unit 22221 can determine the four sample positions 721, 722, 723, and 724 and determine that the reconstructed samples 731, 733, and 734 are present at the sample positions 721, 723, and 724. Accordingly, the intra prediction unit 22221 may determine that there is no reconstructed sample at the second sample position 722 at the upper right position. In this embodiment, the intra prediction unit 22221 may select at least one of the candidate modes in the second mode group having the lowest cost result or pointing toward the lower left position. Then, the intra prediction unit 22221 may replace at least one candidate mode in the first mode group with the selected at least one candidate mode in the second mode group. Accordingly, the intra prediction unit 22221 may compare the replaced at least one candidate mode in the first mode group with the particular candidate mode in the first mode group determined based on the orientation index, to determine whether the particular candidate mode in the first mode group is replaced.
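The SAD-based selection can be sketched as follows. Here `predict` stands for a hypothetical helper that returns the intra predictor samples for a given mode; it and the other names are not part of the disclosure.

```python
def best_added_mode(target, added_modes, predict):
    """Pick the additional mode with the lowest SAD cost against the target
    samples, mirroring the cost-function selection described above."""
    def sad(mode):
        return sum(abs(t - p) for t, p in zip(target, predict(mode)))
    return min(added_modes, key=sad)
```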

Fig. 8A shows a flow diagram according to a first exemplary embodiment of multiple reference line prediction for chroma prediction. Because there are many ways to perform this method, the example method is provided by way of example only. For example, the methods described below may be performed using the configurations shown in fig. 1 and 2, and reference is made to the various components of these figures in explaining example methods. Each block shown in fig. 8A represents one or more processes, methods, or subroutines performed in the example method. Further, the order of the blocks is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure.

At block 811, the decoder module 222 determines a block unit and a prediction mode for the block unit from the video data.

In at least one embodiment, the video data may be a bitstream. The destination device 12 may receive the bitstream from an encoder, such as the source device 11, via the second interface 123 of the destination device 12. The second interface 123 provides the bitstream to the decoder module 222. The decoder module 222 determines an image frame from the bitstream and divides the image frame to determine block units according to a plurality of partition indications in the bitstream. For example, the decoder module 222 may divide the image frame to generate a plurality of coding tree units, and further divide one of the coding tree units according to the partition indications, based on any video coding standard, to determine a block unit having a block size.
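The first division step, covering the image frame with coding tree units, can be sketched as follows; the CTU size of 128 and the raster-scan ordering are assumptions for illustration, not mandated by the disclosure:

```python
def split_into_ctus(frame_width, frame_height, ctu_size=128):
    """Cover an image frame with a grid of coding tree units (CTUs).

    ctu_size is an illustrative assumption; CTUs at the right and bottom
    frame boundaries are clipped to fit. Returns (x, y, w, h) tuples in
    raster-scan order.
    """
    ctus = []
    for y in range(0, frame_height, ctu_size):
        for x in range(0, frame_width, ctu_size):
            w = min(ctu_size, frame_width - x)
            h = min(ctu_size, frame_height - y)
            ctus.append((x, y, w, h))
    return ctus
```

Each CTU would then be divided further, down to block units, according to the partition indications parsed from the bitstream.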

In at least one embodiment, the entropy decoding unit 2221 may decode the bitstream to determine a plurality of prediction indications for the block unit, and then the decoder module 222 may reconstruct the block unit further based on the prediction indications. In at least one embodiment, the prediction indications may include a plurality of flags and a plurality of indices. In the described embodiment, the prediction indications comprise at least one mode flag indicating that the block unit is predicted based on the prediction mode.

In at least one embodiment, the intra prediction unit 22221 determines a plurality of reconstructed samples that are adjacent to a block unit. Fig. 9 is a schematic diagram of an exemplary embodiment of a block unit 900 and a plurality of reference lines 910, 911, 912, and 913 having a plurality of reconstructed samples, respectively. In the embodiment, the number of reference lines may be equal to L, and the number L may be an integer greater than 1. In one embodiment, the intra prediction unit 22221 selects at least one of the reference lines according to a prediction mode to reconstruct the block unit based on the selected at least one of the reference lines.

In at least one embodiment, the prediction mode is selected from a plurality of candidate modes. In at least one embodiment, the candidate modes may include a plurality of Direct Modes (DM), a plurality of Most Probable Modes (MPM), and a plurality of Linear Modes (LM). In the embodiment, the LMs may include a Linear Model Mode, a Multiple-Model Linear Mode (MMLM), and a Multiple-Filter Linear Mode (MFLM). In one embodiment, when the intra prediction unit determines that the prediction mode is selected from the plurality of LMs based on at least one prediction flag, the intra prediction unit 22221 may reconstruct the block unit 900 based on the reconstructed samples of one of the reference lines 910-913. In the described embodiment, the one of the reference lines may be predefined as the first one of the reference lines. For example, the predefined one of the reference lines may be the first reference line 910 in fig. 9. In one embodiment, when the intra prediction unit 22221 determines that the prediction mode is selected from the plurality of DMs and the plurality of MPMs based on the at least one prediction flag, the intra prediction unit 22221 may reconstruct the block unit based on the reconstructed samples in at least one of the reference lines. In the embodiment, referring to fig. 9, the intra prediction unit 22221 selects at least one of the reference lines 910-913.

At block 812, the decoder module 222 determines whether the prediction mode is included in the first mode group. When the prediction mode is included in the first mode group, the process proceeds to block 813. When the prediction mode is different from the candidate modes in the first mode group, the process proceeds directly to block 814.

In at least one embodiment, the source device 11 and the destination device 12 may separate the candidate modes into a plurality of mode groups. In such an embodiment, the first mode group may include a plurality of first candidate modes, and the second mode group may include a plurality of second candidate modes. In one embodiment, when the encoder module 112 determines to predict a block unit based on a particular one of the first candidate modes, the encoder module 112 may predict the block unit based on reconstructed samples of at least one of the reference lines. In addition, when the decoder module 222 determines to reconstruct the block unit based on the particular first candidate mode, the decoder module 222 may reconstruct the block unit based on the reconstructed samples of at least one of the reference lines. In one embodiment, when the encoder module 112 determines to predict a block unit based on a particular one of the second candidate modes, the encoder module 112 may predict the block unit based on reconstructed samples of a predefined one of the reference lines. In addition, when the decoder module 222 determines to reconstruct the block unit based on the particular second candidate mode, the decoder module 222 may reconstruct the block unit based on the reconstructed samples of the predefined one of the reference lines. In one embodiment, the first candidate modes may be the plurality of DMs and the plurality of MPMs, and the second candidate modes may be the plurality of LMs.

In at least one embodiment, when the intra prediction unit 22221 determines that the prediction mode belongs to the first mode group having the plurality of DMs and the plurality of MPMs, the intra prediction unit 22221 may reconstruct a block unit based on the reconstructed samples in at least one of the reference lines. Thus, the decoder module 222 needs to further determine which reference lines to use to predict a block unit.

In at least one embodiment, when the intra prediction unit determines that the prediction mode is included in the second mode group having the plurality of LMs, the intra prediction unit 22221 may predict the block unit based on the reconstructed sample of the predefined one of the reference lines. Accordingly, the intra prediction unit 22221 may directly select a predefined one of the reference lines to reconstruct the block unit without decoding a line indication indicating an index corresponding to at least one of the reference lines.

At block 813, the decoder module 222 decodes the bitstream to obtain a line indication indicating at least one of the plurality of reference lines.

In at least one embodiment, when the intra prediction unit 22221 determines that the prediction mode belongs to the first mode group, the intra prediction unit 22221 may reconstruct the block unit based on the reconstructed samples in at least one of the reference lines. In one embodiment, the decoder module 222 decodes the bitstream to obtain the line indication. In the described embodiment, the line indication may be a line index. In the embodiment, the intra prediction unit 22221 may determine the at least one of the reference lines based on the line index. For example, when the line index is equal to zero, the number of the at least one of the reference lines may be equal to one. In one embodiment, the at least one of the reference lines may be the first reference line when the line index is equal to zero. In another embodiment, when the line index is equal to one, the at least one of the reference lines may be the second reference line or a combination of the first reference line and the second reference line.

At block 814, the decoder module 222 directly selects one of the plurality of reference lines without decoding the line indication.

In at least one embodiment, when the intra prediction unit 22221 determines that the prediction mode is different from the first candidate modes in the first mode group, the intra prediction unit 22221 may reconstruct the block unit based on the reconstructed samples in the predefined one of the reference lines. In one embodiment, the predefined one of the reference lines may be the first reference line.
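The decoder-side branch of fig. 8A (blocks 812-814) may be sketched as follows; the mode names, the decoding callable, and the combination rule for a nonzero index are illustrative assumptions:

```python
def select_reference_lines(prediction_mode, decode_line_index, reference_lines):
    """Sketch of fig. 8A blocks 812-814 (names are illustrative).

    First-group modes (DM/MPM): a line indication is decoded from the
    bitstream. Second-group modes (LM family): the predefined first
    reference line is used directly, and no line indication is decoded.
    """
    if prediction_mode in {"DM", "MPM"}:        # first mode group
        idx = decode_line_index()               # parsed from the bitstream
        if idx == 0:
            return [reference_lines[0]]
        # idx == 1: the second line, or a combination with the first line
        return [reference_lines[0], reference_lines[1]]
    # second mode group: predefined first line, line indication skipped
    return [reference_lines[0]]
```

Note that for an LM-family mode the decoding callable is never invoked, mirroring the "directly selects ... without decoding the line indication" step at block 814.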

At block 815, the decoder module 222 generates a plurality of predictors for the block unit based on the prediction mode and the at least one reference line.

In at least one embodiment, a block unit may include a plurality of block components. In the described embodiment, each block component may be a pixel component. In at least one embodiment, the intra prediction unit 22221 may determine one of the predictors for each block component based on the reference samples in the at least one reference line, along a prediction direction of the prediction mode derived for the block unit.

In at least one embodiment, the first adder 2224 of the decoder module 222 in the destination device 12 may add a predictor derived based on a prediction mode to a plurality of residual samples determined from a bitstream to reconstruct a block unit. In addition, the decoder module 222 may reconstruct all other block units in the image frame to reconstruct the image frame and the video.
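The addition performed by the first adder 2224 can be sketched as follows; the clipping to the valid sample range and the 8-bit depth are illustrative assumptions consistent with common video coding practice:

```python
def reconstruct_block(predictors, residuals, bit_depth=8):
    """First-adder sketch: reconstructed sample = clip(predictor + residual).

    bit_depth is an assumption; samples are clipped to [0, 2**bit_depth - 1].
    """
    lo, hi = 0, (1 << bit_depth) - 1
    return [min(hi, max(lo, p + r)) for p, r in zip(predictors, residuals)]
```

The same per-sample addition would be repeated for every block unit until the whole image frame is reconstructed.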

Fig. 8B shows a flow diagram according to a second exemplary embodiment of multi-reference line prediction for chroma prediction. Because there are many ways to perform this method, the example method is provided by way of example only. For example, the methods described below may be performed using the configurations shown in fig. 1 and 2, and reference is made to the various components of these figures in explaining example methods. Each block shown in fig. 8B represents one or more processes, methods, or subroutines performed in the example method. Further, the order of the blocks is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure.

At block 821, the decoder module 222 determines a block unit from the video data and determines a line index indicating at least one of the plurality of reference lines for reconstructing the block unit.

In at least one embodiment, the video data may be a bitstream. The destination device 12 may receive the bitstream from an encoder, such as the source device 11, via the second interface 123 of the destination device 12. The second interface 123 provides the bitstream to the decoder module 222. The decoder module 222 determines an image frame from the bitstream and divides the image frame to determine block units according to a plurality of partition indications in the bitstream. For example, the decoder module 222 may divide the image frame to generate a plurality of coding tree units, and further divide one of the coding tree units according to the partition indications, based on any video coding standard, to determine a block unit having a block size.

In at least one implementation, the intra prediction unit 22221 may determine a plurality of reconstructed samples in a plurality of reference lines. In one embodiment, the number of reference lines may be equal to L, and the number L may be an integer greater than 1. In one embodiment, the intra prediction unit 22221 may select the indicated at least one of the reference lines to reconstruct the block unit.

In at least one embodiment, the entropy decoding unit 2221 may decode the bitstream to determine a plurality of prediction indications for the block unit, and then the decoder module 222 may reconstruct the block unit further based on the prediction indications. In at least one embodiment, the prediction indications may include a plurality of flags and a plurality of indices. In the embodiment, the prediction indications comprise a line index indicating which of the reference lines is the indicated at least one of the reference lines.

At block 822, the decoder module 222 determines whether the line index is equal to zero. When the line index is equal to zero, the process proceeds to block 823. When the line index is not equal to zero, the process proceeds to block 824.

In at least one embodiment, the prediction indication may include a mode flag indicating that the block unit is predicted based on the prediction mode. In at least one embodiment, the prediction mode is selected from a plurality of candidate modes. In at least one embodiment, the candidate modes may include a plurality of Direct Modes (DM), a plurality of Most Probable Modes (MPM), and a plurality of Linear Modes (LM). In the embodiment, the LM may include a Linear Model Mode, a Multiple-Model Linear Mode (MMLM), and a Multiple-Filter Linear Mode (MFLM).

In at least one embodiment, the source device 11 and the destination device 12 may separate the candidate modes into a plurality of mode groups. In such an embodiment, the first mode group may include a plurality of first candidate modes, and the second mode group may include a plurality of second candidate modes. In one embodiment, the first candidate modes may be the plurality of DMs and the plurality of MPMs, and the second candidate modes may be the plurality of LMs.

In one embodiment, when the encoder module 112 determines to predict the block unit based on a particular one of the first candidate modes, the encoder module 112 may predict the block unit based on reconstructed samples of the indicated at least one of the reference lines. In addition, when the decoder module 222 determines to reconstruct the block unit based on the particular first candidate mode, the decoder module 222 may reconstruct the block unit based on the reconstructed samples of the indicated at least one of the reference lines. In the embodiment, referring to fig. 9, the intra prediction unit 22221 may select at least one of the reference lines 910-913 based on the line index. For example, when the line index is equal to zero, the indicated at least one of the reference lines 910-913 may be the first reference line 910. In another embodiment, when the line index is equal to one, the indicated at least one of the reference lines may be the second reference line 911 or a combination of the first reference line 910 and the second reference line 911.

In one embodiment, when the encoder module 112 determines to predict a block unit based on a particular one of the second candidate modes, the encoder module 112 may predict the block unit based on reconstructed samples of a predefined one of the reference lines. In addition, when the decoder module 222 determines to reconstruct the block unit based on the particular second candidate mode, the decoder module 222 may reconstruct the block unit based on the reconstructed samples of the predefined one of the reference lines. In one embodiment, the predefined one of the reference lines may be the first reference line, and the line index is predefined to be equal to zero.

In at least one embodiment, the prediction mode may be selected from the first mode group and the second mode group when the line index is equal to zero. Therefore, the intra prediction unit 22221 needs to further select one of the first mode group and the second mode group to determine the prediction mode. In another embodiment, the prediction mode is selected only from the first mode group when the line index is not equal to zero.

At block 823, the decoder module 222 decodes the bitstream to obtain a group indication indicating one of the first mode group and the second mode group for selecting the prediction mode.

In at least one embodiment, the prediction mode may be selected from the first mode group and the second mode group when the line index is equal to zero. Therefore, when the line index is equal to zero, the intra prediction unit 22221 requires the group indication to select one of the first mode group and the second mode group. The intra prediction unit 22221 may further determine the prediction mode from the first mode group based on a first mode flag when the group indication indicates that the prediction mode is selected from the first mode group. The intra prediction unit 22221 may further determine the prediction mode from the second mode group based on a second mode flag when the group indication indicates that the prediction mode is selected from the second mode group. In one embodiment, when the first mode group includes only one first candidate mode, the intra prediction unit 22221 may directly set the first candidate mode as the prediction mode without decoding the first mode flag. In addition, when the second mode group includes only one second candidate mode, the intra prediction unit 22221 may directly set the second candidate mode as the prediction mode without decoding the second mode flag.

At block 824, the decoder module 222 directly selects the first mode group without decoding the group indication and selects the prediction mode from the first mode group.

In at least one embodiment, the prediction mode may be selected only from the first mode group when the line index is not equal to zero. Thus, the intra prediction unit 22221 does not require the group indication when the line index is not equal to zero. In one embodiment, the intra prediction unit 22221 may further determine the prediction mode from the first candidate modes in the first mode group based on the first mode flag. In another embodiment, when the first mode group includes only one first candidate mode, the intra prediction unit 22221 may directly set the first candidate mode as the prediction mode without decoding the first mode flag.
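The branch at blocks 822-824 of fig. 8B may be sketched as follows; the function names and the boolean shape of the group indication are illustrative, not part of the bitstream syntax:

```python
def select_mode_group(line_index, decode_group_indication,
                      first_group, second_group):
    """Sketch of fig. 8B blocks 822-824 (illustrative names).

    line_index == 0: both mode groups are possible, so a group indication
    is decoded from the bitstream. Otherwise only the first mode group
    applies, and no group indication is parsed.
    """
    if line_index == 0:
        use_first = decode_group_indication()  # parsed from the bitstream
        return first_group if use_first else second_group
    return first_group
```

As in the text, when the line index is nonzero the decoding callable is never invoked, so no group indication bits are consumed.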

At block 825, the decoder module 222 generates a plurality of predictors for the block unit based on the prediction mode and the at least one of the plurality of reference lines.

In at least one embodiment, a block unit may include a plurality of block components. In the described embodiment, each block component may be a pixel component. In at least one embodiment, the intra prediction unit 22221 may determine one of the predictors for each block component based on the reference samples in the at least one reference line, along a prediction direction of the prediction mode derived for the block unit.

In at least one embodiment, the first adder 2224 of the decoder module 222 in the destination device 12 may add a predictor derived based on a prediction mode to a plurality of residual samples determined from a bitstream to reconstruct a block unit. In addition, the decoder module 222 may reconstruct all other block units in the image frame to reconstruct the image frame and the video.

Fig. 10 is a block diagram of an exemplary implementation of the encoder module 1012 of the source device 11 in the system of fig. 1. In at least one implementation, the encoder module 1012 includes a prediction processing unit 10121, a first adder 10122, a transform/quantization unit 10123, an inverse quantization/inverse transform unit 10124, a second adder 10125, a filtering unit 10126, a decoded picture buffer 10127, and an entropy encoding unit 10128. In at least one implementation, the prediction processing unit 10121 of the encoder module 1012 further includes a partition unit 101211, an intra prediction unit 101212, and an inter prediction unit 101213. In at least one embodiment, the encoder module 1012 receives source video and encodes the source video to output a bitstream.

In at least one embodiment, the encoder module 1012 may receive a source video including a plurality of image frames and then segment the image frames according to an encoding structure. In at least one embodiment, each image frame may be divided into at least one image block. The at least one image block may include a luma block having a plurality of luma samples and at least one chroma block having a plurality of chroma samples. The luma block and the at least one chroma block may be further partitioned to generate a macroblock, a Coding Tree Unit (CTU), a Coding Block (CB), a sub-partition thereof, and/or another equivalent Coding unit. In at least one implementation, the encoder module 1012 may perform additional sub-partitioning of the source video. It should be noted that the disclosure is generally applicable to video encoding, regardless of how the source video is partitioned prior to and/or during encoding.

In at least one embodiment, the prediction processing unit 10121 receives a current image block of a particular one of the plurality of image frames during an encoding process. The current image block may be one of a luminance block and at least one chrominance block in the particular image frame. The division unit 101211 divides the current image block into a plurality of block units. The intra-prediction unit 101212 may perform intra-prediction encoding of the current block unit with respect to one or more neighboring blocks in the same frame as the current block unit to provide spatial prediction. The inter prediction unit 101213 may perform inter prediction encoding of the current block unit with respect to one or more blocks of one or more reference image blocks to provide temporal prediction.

In at least one embodiment, the prediction processing unit 10121 may select one of the plurality of encoding results generated by the intra prediction unit 101212 and the inter prediction unit 101213 based on a mode selection method such as a cost function. In at least one embodiment, the mode selection method may be a Rate-Distortion Optimization (RDO) process. The prediction processing unit 10121 determines the selected encoding result, and provides a prediction block corresponding to the selected encoding result to the first adder 10122 to generate a residual block, and to the second adder 10125 to reconstruct an encoded block unit. In at least one embodiment, the prediction processing unit 10121 may also provide syntax elements such as motion vectors, intra mode indicators, partition information, and other syntax information to the entropy encoding unit 10128.
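The Rate-Distortion Optimization selection described above may be sketched as follows; the per-candidate distortion and rate callables and the lambda weight are hypothetical placeholders for whatever cost model the encoder uses:

```python
def rdo_select(candidates, distortion, rate, lam):
    """Rate-Distortion Optimization sketch: pick the candidate minimizing
    the cost J = D + lambda * R.

    distortion: callable candidate -> distortion D (assumed available)
    rate:       callable candidate -> coded bits R (assumed available)
    lam:        Lagrange multiplier trading distortion against rate
    """
    return min(candidates, key=lambda c: distortion(c) + lam * rate(c))
```

A larger lambda shifts the selection toward cheaper-to-code candidates, a smaller lambda toward lower-distortion ones.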

In at least one implementation, the intra prediction unit 101212 may intra predict the current block unit. In at least one implementation, the intra prediction unit 101212 may determine an intra prediction mode that points to reconstructed samples adjacent to the current block unit to encode the current block unit. In at least one embodiment, the intra prediction unit 101212 may encode the current block unit using various intra prediction modes, and the intra prediction unit 101212 or the prediction processing unit 10121 may select an appropriate intra prediction mode from among the test modes. In at least one implementation, the intra prediction unit 101212 may encode the current block unit using a cross component prediction mode to predict one of two chroma components of the current block unit based on a luma component of the current block unit. In addition, the intra prediction unit 101212 may predict a first one of the two chroma components of the current block unit based on the other one of the two chroma components of the current block unit.

In at least one implementation, as described above, the inter prediction unit 101213 may inter predict a current block unit as an alternative to the intra prediction performed by the intra prediction unit 101212. The inter prediction unit 101213 may perform motion estimation to estimate the motion of the current block unit to generate a motion vector. The motion vector may indicate a displacement of a current block unit within the current image block relative to a reference block unit within the reference image block. In at least one embodiment, the inter prediction unit 101213 receives at least one reference image block stored in the decoded picture buffer 10127 and estimates motion based on the received reference image block to generate a motion vector.

In at least one embodiment, the first adder 10122 generates a residual block by subtracting the prediction block determined by the prediction processing unit 10121 from the original current block unit. The first adder 10122 represents one or more components that perform the subtraction operation.

In at least one embodiment, the transform/quantization unit 10123 applies a transform to the residual block to generate residual transform coefficients, which are then quantized to further reduce the bit rate. In at least one embodiment, the transform may be a DCT, DST, AMT, MDNSST, HyGT, signal-dependent transform, KLT, wavelet transform, integer transform, sub-band transform, or a conceptually similar transform. In at least one embodiment, the transform may convert residual information from a pixel value domain to a transform domain, such as a frequency domain. In at least one embodiment, the degree of quantization may be modified by adjusting a quantization parameter. In at least one embodiment, the transform/quantization unit 10123 may scan a matrix including quantized transform coefficients. Alternatively, the entropy encoding unit 10128 may perform the scanning.
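For illustration, a uniform quantizer of the kind controlled by a quantization parameter may be sketched as follows; the step-size rule (doubling every 6 QP, Qstep = 2**((qp - 4) / 6)) follows common H.264/HEVC-style practice and is an assumption here, not the exact arithmetic of any particular standard:

```python
def quantize(coeffs, qp):
    """Uniform quantization sketch: level = round(coefficient / Qstep),
    where the step size grows by a factor of 2 for every 6 QP steps."""
    step = 2.0 ** ((qp - 4) / 6.0)
    return [int(round(c / step)) for c in coeffs]

def dequantize(levels, qp):
    """Inverse of the sketch above: coefficient ~= level * Qstep."""
    step = 2.0 ** ((qp - 4) / 6.0)
    return [l * step for l in levels]
```

Raising the QP enlarges the step size, discarding more precision per coefficient and lowering the bit rate at the cost of distortion.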

In at least one embodiment, the entropy encoding unit 10128 may receive a plurality of syntax elements including quantization parameters, transform data, motion vectors, intra modes, partition information, and other syntax information from the prediction processing unit 10121 and the transform/quantization unit 10123, and encode the syntax elements into a bitstream. In at least one implementation, the entropy encoding unit 10128 entropy encodes the quantized transform coefficients. In at least one embodiment, the entropy encoding unit 10128 may perform CAVLC, CABAC, SBAC, PIPE encoding, or another entropy encoding technique to generate an encoded bitstream. In at least one embodiment, the encoded bitstream may be transmitted to another device (e.g., the destination device 12) or archived for later transmission or retrieval.

In at least one implementation, the inverse quantization/inverse transform unit 10124 may apply inverse quantization and inverse transform to reconstruct a residual block in the pixel domain for later use as a reference block. In at least one implementation, the second adder 10125 adds the reconstructed residual block to the prediction block provided from the prediction processing unit 10121 to generate a reconstructed block for storage in the decoded picture buffer 10127.

In at least one embodiment, the filtering unit 10126 may include a deblocking filter, SAO filter, bilateral filter, and/or ALF to remove blocking artifacts from reconstructed blocks. In addition to deblocking filters, SAO filters, bilateral filters, and ALF, other filters (in-loop or post-loop) may be used. Such filters are not shown for simplicity, but the output of the second adder 10125 may be filtered if desired.

In at least one implementation, the decoded picture buffer 10127 may be a reference picture memory that stores reference blocks used for encoding video by the encoder module 1012, e.g., in intra or inter coding mode. The decoded picture buffer 10127 may be formed from any of a variety of memory devices, such as DRAM including SDRAM, MRAM, RRAM, or other types of memory devices. In at least one implementation, the decoded picture buffer 10127 may be on-chip with other components of the encoder module 1012 or off-chip with respect to those components.

In at least one embodiment, the encoder module 1012 may perform a mode list adjustment method for intra prediction as shown in fig. 3. For example, the method of fig. 3 may be performed using the configurations shown in fig. 1 and 10, and reference is made to the various components of these figures in explaining the example method. Further, the order of the blocks in fig. 3 is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure.

At block 31, the encoder module 1012 determines a block unit in an image frame from video data and determines a plurality of neighboring blocks adjacent to the block unit.

In at least one embodiment, the video data may be video. The source device 11 may receive video through the source module 111. The encoder module 1012 determines an image frame from the video and segments the image frame to determine block units.

In at least one embodiment, the prediction processing unit 10121 of the source device 11 determines a block unit from the video via the partition unit 101211, and the encoder module 1012 provides a plurality of partition indications to a bitstream based on the partition results of the partition unit 101211. In at least one embodiment, the prediction processing unit 10121 determines neighboring blocks that are adjacent to the block unit. In at least one embodiment, the neighboring blocks may be predicted before the block unit is predicted, and thus the neighboring blocks may include a plurality of reference samples for predicting the block unit. In at least one embodiment, some neighboring blocks may not be predicted before the block unit is predicted, and thus the non-predicted neighboring blocks may not include reference samples for the block unit.
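The availability check described above, gathering reference samples only from neighbors already reconstructed, may be sketched as follows; the (is_reconstructed, samples) pair shape is an illustrative assumption:

```python
def available_reference_samples(neighbors):
    """Collect reference samples only from neighboring blocks that have
    already been predicted and reconstructed.

    neighbors: list of (is_reconstructed, samples) pairs (illustrative).
    Non-predicted neighbors contribute no reference samples.
    """
    refs = []
    for done, samples in neighbors:
        if done:
            refs.extend(samples)
    return refs
```

The result of this check is what drives later decisions such as replacing preset modes or selecting between mode lists.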

At block 32, the intra prediction unit 101212 determines a first mode list having a plurality of first candidate modes and a second mode list having a plurality of second candidate modes.

In at least one embodiment, the first candidate modes and the second candidate modes are selected from a plurality of intra modes predefined in the source device 11 and the destination device 12. In one embodiment, at least one of the first candidate modes may be the same as at least one of the second candidate modes. In another embodiment, each of the first candidate modes may be different from the second candidate modes.

At block 33, the intra prediction unit 101212 selects one of the first mode list and the second mode list.

In at least one embodiment, the intra prediction unit 101212 may determine which of the first mode list and the second mode list is selected for the block unit. In one embodiment, the intra prediction unit 101212 may determine whether the neighboring blocks include reference samples to determine how to select one of the first mode list and the second mode list. In another implementation, the intra prediction unit 101212 may determine how to select one of the first mode list and the second mode list based on the block size of the block unit 411.

At block 34, the intra prediction unit 101212 selects a prediction mode from the selected mode list.

In at least one embodiment, the prediction processing unit 10121 may select one of a plurality of encoding results generated by the intra prediction unit 101212 according to the selected one of the first mode list and the second mode list based on a mode selection method such as a cost function. In at least one embodiment, the mode selection method may be a Rate-Distortion Optimization (RDO) process. When the selected one of the first mode list and the second mode list is the first mode list, the prediction processing unit 10121 may select one of the first candidate modes and set the selected one of the first candidate modes as the prediction mode.

At block 35, the intra prediction unit 101212 generates a plurality of reconstructed components in the block unit based on the plurality of neighboring blocks and the prediction mode.

In at least one embodiment, a block unit may include a plurality of block components. In the described embodiment, each block component may be a pixel component. In at least one embodiment, the intra prediction unit 101212 may determine predictors based on the selected encoding result for each block component. In this embodiment, the encoder module 1012 generates a plurality of residual samples based on the predictors and the block unit, and provides a bitstream including a plurality of coefficients corresponding to the residual samples. In addition, the encoder module 1012 may recover the residual samples based on the coefficients and add the recovered residual samples to the predictors to generate the reconstructed components.
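The encoder-side residual generation and local reconstruction described above may be sketched as follows; the quantize and dequantize parameters are assumed callables standing in for the transform/quantization path, and the flat-list block shape is illustrative:

```python
def encode_and_reconstruct(block, predictors, quantize, dequantize):
    """Encoder-side sketch: residual = block - predictor; the residuals are
    sent as coefficients, and the locally reconstructed components are
    predictor + recovered residual (matching what a decoder would produce).
    """
    residuals = [b - p for b, p in zip(block, predictors)]
    coeffs = quantize(residuals)       # coefficients written to the bitstream
    recovered = dequantize(coeffs)     # residuals recovered from coefficients
    reconstructed = [p + r for p, r in zip(predictors, recovered)]
    return coeffs, reconstructed
```

With lossless (identity) quantization the reconstruction equals the original block; with a lossy quantizer the reconstruction matches what the decoder will hold, which is why it is used as the reference for later blocks.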

Fig. 11 shows a flowchart of a fifth exemplary embodiment according to a mode list adjustment for intra prediction. Because there are many ways to perform this method, the example method is provided by way of example only. For example, the methods described below may be performed using the configurations shown in fig. 1 and 10, and reference is made to the various components of these figures in explaining example methods. Each block shown in fig. 11 represents one or more processes, methods, or subroutines performed in the example method. Further, the order of the blocks is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure.

At block 1101, the encoder module 1012 determines a block unit from an image frame of video data.

In at least one embodiment, the video data may be a video. The source device 11 may receive the video through the source module 111. The encoder module 1012 determines an image frame from the video and segments the image frame to determine the block unit.

In at least one embodiment, the prediction processing unit 10121 of the source device 11 determines a block unit from video via the partition unit 101211, and the encoder module 1012 provides a plurality of partition indications to a bitstream based on the partition results of the partition unit 101211.

In at least one embodiment, the prediction processing unit 10121 of the destination device 12 determines neighboring blocks that are adjacent to a block unit. In at least one embodiment, a neighboring block may be predicted before the block unit, and thus the neighboring block may include a plurality of reference samples for predicting the block unit. In at least one embodiment, some neighboring blocks may not yet be predicted when the block unit is predicted, and thus the non-predicted neighboring blocks may not include reference samples for the block unit.

At block 1102, the intra prediction unit 101212 determines a mode list comprising a plurality of candidate modes separated into a first mode group and a second mode group.

In at least one embodiment, the candidate modes in the mode list may be predefined in the destination device 12 and the source device 11. For example, the candidate modes may be predefined as a planar mode, a DC mode, and/or a plurality of directional modes. In at least one embodiment, the intra prediction unit 101212 may partition the candidate modes into a first mode group and a second mode group. In this embodiment, the candidate modes in the first mode group may be a plurality of preset modes selected from the mode list, and the candidate modes in the second mode group may be a plurality of additional modes selected from the mode list to replace at least one of the preset modes. In at least one embodiment, the preset modes in the first mode group may include the planar mode, the DC mode, and a plurality of first directional modes, and the additional modes in the second mode group may include a plurality of second directional modes. In the described embodiment, the second directional modes may be selected to replace at least one of the first directional modes. In this embodiment, each of the additional modes in the second mode group is different from the preset modes in the first mode group.
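The split of the predefined mode list into preset and additional modes might be sketched as follows; the rule of taking the first `preset_count` entries as the preset modes is an illustrative assumption, not a rule fixed by this specification:

```python
def split_mode_list(mode_list, preset_count):
    """Split the predefined candidate modes into a first group of
    preset modes and a second group of additional modes. Every
    additional mode must differ from every preset mode, per the
    embodiment."""
    first_group = mode_list[:preset_count]
    second_group = [m for m in mode_list[preset_count:] if m not in first_group]
    return first_group, second_group
```

Both devices apply the same predefined split, so no grouping information needs to be signaled.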

At block 1103, the intra prediction unit 101212 determines whether at least one of the plurality of candidate modes of the first mode group is replaced by one of the plurality of candidate modes of the second mode group.

In one implementation, the intra prediction unit 101212 may determine whether the neighboring blocks include reference samples to determine whether to replace at least one of the candidate modes in the first mode group with at least one of the candidate modes in the second mode group. In such an embodiment, the intra prediction unit 101212 may determine that the at least one of the candidate modes in the first mode group is to be replaced by a candidate mode in the second mode group based on a relationship between the neighboring blocks and the reference samples. In such an embodiment, the relationship may include the positions of the neighboring blocks that include reference samples and the positions of the non-predicted neighboring blocks.

In at least one implementation, the intra prediction unit 101212 may determine whether to replace at least one of the candidate modes in the first mode group with a candidate mode in the second mode group based on the block size of the block unit. When the block width is equal to the block height, the intra prediction unit 101212 may determine that the at least one candidate mode in the first mode group is not replaced by a candidate mode in the second mode group. When the block width is different from the block height, the intra prediction unit 101212 may determine that the at least one candidate mode in the first mode group is replaced with a candidate mode in the second mode group.
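The block-size test and the resulting replacement can be sketched as below; which preset mode is swapped out (`replace_index`) is a hypothetical detail not fixed by this embodiment:

```python
def adjust_mode_list(first_group, second_group, width, height, replace_index=0):
    """Return the effective candidate modes for a block unit.

    Square blocks (width == height) keep the preset modes of the first
    group unchanged; non-square blocks replace one preset mode with an
    additional mode from the second group.
    """
    if width == height:
        return list(first_group)
    adjusted = list(first_group)
    adjusted[replace_index] = second_group[replace_index % len(second_group)]
    return adjusted
```

Because the test depends only on the block dimensions, encoder and decoder reach the same adjusted list without any extra signaling.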

At block 1104, the intra prediction unit 101212 determines one of the plurality of candidate modes in the first mode group as a prediction mode.

In at least one embodiment, the prediction processing unit 10121 may select one of a plurality of encoding results generated by the intra prediction unit 101212 according to the candidate modes in the first mode group based on a mode selection method such as a cost function. In at least one embodiment, the mode selection method may be a Rate-Distortion Optimization (RDO) process. In this embodiment, a specific one of the candidate modes in the first mode group used for generating the selected one of the encoding results may be set as the prediction mode by the intra prediction unit 101212.

At block 1105, the intra prediction unit 101212 determines a prediction mode from at least one of the plurality of remaining modes in the first mode group and the plurality of candidate modes in the second mode group.

In at least one embodiment, the prediction processing unit 10121 may select one of a plurality of encoding results generated by the intra prediction unit 101212 according to at least one of the remaining modes in the first mode group and the candidate modes in the second mode group based on a mode selection method such as a cost function. In at least one embodiment, the mode selection method may be a Rate-Distortion Optimization (RDO) process. In this embodiment, the at least one of the candidate modes in the first mode group is replaced by the at least one of the candidate modes in the second mode group, and the other candidate modes in the first mode group may be regarded as the remaining modes. In this embodiment, the selected one of the encoding results is generated based on a prediction mode selected from at least one of the remaining modes in the first mode group and the candidate modes in the second mode group.

At block 1106, the encoder module 1012 generates a plurality of predictors for the block unit based on the prediction mode.

In at least one embodiment, a block unit may include a plurality of block components. In the described embodiment, each block component may be a pixel component. In at least one embodiment, the intra prediction unit 101212 may determine one of the predictors for each block component based on the neighboring blocks along an orientation of the prediction mode for the block unit. In this embodiment, the encoder module 1012 predicts the block unit based on the predictors to generate a plurality of residual samples and provides a bitstream including a plurality of coefficients corresponding to the residual samples.

In at least one embodiment, the encoder module 1012 may perform the mode list adjustment for intra prediction illustrated in fig. 6A and 6B. For example, the methods in fig. 6A and 6B may be performed using the configurations shown in fig. 1 and 10, and reference is made to various components of these figures in explaining example methods. In addition, the procedure and results for performing the method described in fig. 6A using the configuration shown in fig. 1 and 10 are substantially the same as the procedure and results for performing the method described in fig. 6A using the configuration shown in fig. 1 and 2. The procedure and results for performing the method described in fig. 6B using the configuration shown in fig. 1 and 10 are substantially the same as the procedure and results for performing the method described in fig. 6B using the configuration shown in fig. 1 and 2. Further, the order of the blocks in fig. 6A and 6B is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure.

Fig. 12A shows a flow diagram according to a third exemplary embodiment of multiple reference line prediction for chroma prediction. Because there are many ways to perform this method, the example method is provided by way of example only. For example, the methods described below may be performed using the configurations shown in fig. 1 and 2, and reference is made to the various components of these figures in explaining example methods. Each block shown in fig. 12A represents one or more processes, methods, or subroutines performed in the example method. Further, the order of the blocks is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure.

At block 1211, the encoder module 1012 determines a block unit and a prediction mode for the block unit from the video data.

In at least one embodiment, the video data may be a video. The source device 11 may receive the video through the source module 111. The encoder module 1012 determines an image frame from the video and segments the image frame to determine the block unit.

In at least one embodiment, the prediction processing unit 10121 of the source device 11 determines a block unit from video via the partition unit 101211, and the encoder module 1012 provides a plurality of partition indications to a bitstream based on the partition results of the partition unit 101211.

In at least one embodiment, the prediction processing unit 10121 may select one of a plurality of encoding results generated by the intra prediction unit 101212 according to a plurality of candidate modes based on a mode selection method such as a cost function. In at least one embodiment, the mode selection method may be a Rate-Distortion Optimization (RDO) process. In this embodiment, the prediction processing unit 10121 sets the one of the candidate modes used for generating the selected encoding result as a prediction mode. In at least one embodiment, the candidate modes may include a plurality of Direct Modes (DM), a plurality of Most Probable Modes (MPM), and a plurality of Linear Modes (LM). In this embodiment, the LMs may include a Linear Model Mode, a Multiple-Model Linear Mode (MMLM), and a Multiple-Filter Linear Mode (MFLM). In one embodiment, the intra prediction unit 101212 may predict the block unit 900 based on the reconstructed samples of one of the reference lines 910-913 of FIG. 9 when the prediction mode is selected from the plurality of LMs. In this embodiment, the one of the reference lines may be predefined as a first one of the reference lines. In another embodiment, the intra prediction unit 101212 may predict the block unit based on reconstructed samples of at least one of the reference lines when the prediction mode is selected from among the plurality of DMs and the plurality of MPMs.

At block 1212, the prediction processing unit 10121 determines whether the prediction mode is included in the first mode group. When the prediction mode is included in the first mode group, the process proceeds to block 1213. When the prediction mode is not included in the first mode group, the process proceeds to block 1214.

In at least one embodiment, the source device 11 and the destination device 12 may separate the candidate modes into a plurality of mode groups. In such an embodiment, the first mode group may include a plurality of first candidate modes, and the second mode group may include a plurality of second candidate modes. In one embodiment, when the prediction processing unit 10121 determines to predict a block unit based on a particular one of the first candidate modes, the prediction processing unit 10121 may predict the block unit based on reconstructed samples in one or more of the reference lines. Thus, the entropy encoding unit 10128 needs to further encode into the bitstream a line indication indicating how to select the at least one of the reference lines for the decoder module 122. In one embodiment, when the encoder module 1012 determines to predict a block unit based on a particular one of the second candidate modes, the encoder module 1012 may predict the block unit based on reconstructed samples of a predefined one of the reference lines. Thus, the entropy encoding unit 10128 may not encode a line indication since the predefined one of the reference lines is predefined in the destination device 12. In one embodiment, the first candidate modes may be a plurality of DMs and a plurality of MPMs, and the second candidate modes may be a plurality of LMs.
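The signaling decision in this embodiment can be sketched with plain Python lists standing in for the DM/MPM and LM mode groups:

```python
def needs_line_indication(prediction_mode, first_group, second_group):
    """A line indication is signaled only when the prediction mode comes
    from the first group (e.g., DMs/MPMs), since those modes may use one
    or more reference lines. Modes in the second group (e.g., LMs) use a
    predefined reference line, so no indication is coded."""
    if prediction_mode in first_group:
        return True
    if prediction_mode in second_group:
        return False
    raise ValueError("prediction mode not in either group")
```

Skipping the line indication for second-group modes saves bits whenever the reference line is implied by the mode group.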

At block 1213, the entropy encoding unit 10128 encodes a line indication indicating the at least one of the plurality of reference lines into the bitstream.

In at least one embodiment, the line indication may be a line index. In one embodiment, the line index may indicate a number of the at least one of the reference lines. For example, when the line index equals zero, the number of the at least one reference line may equal one. In one embodiment, the line index may be set equal to zero when the at least one of the reference lines comprises only the first reference line. In one embodiment, the line index may be set equal to one when the at least one reference line includes only the second reference line. In another embodiment, the line index may be set equal to one when the at least one reference line is a combination of the first reference line and the second reference line.
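A sketch of the line-index mapping described above; the reference lines are represented by arbitrary labels, and the exact binarization used by the entropy coder is not modeled:

```python
def encode_line_index(selected_lines, first_line, second_line):
    """Map the selected reference line(s) to a line index: index 0 when
    only the first reference line is used; index 1 when the second line
    is used alone or in combination with the first."""
    lines = set(selected_lines)
    if lines == {first_line}:
        return 0
    if lines == {second_line} or lines == {first_line, second_line}:
        return 1
    raise ValueError("unsupported reference line selection")
```

The decoder inverts this mapping to recover which reference line(s) to use for the block unit.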

At block 1214, the encoder module 1012 determines not to add the line indication into the bitstream for a block unit.

In at least one embodiment, the entropy encoding unit 10128 may not encode a line indication since the predefined one of the reference lines is predefined in the destination device 12.

Fig. 12B shows a flow diagram according to a fourth exemplary embodiment of multiple reference line prediction for chroma prediction. Because there are many ways to perform this method, the example method is provided by way of example only. For example, the methods described below may be performed using the configurations shown in fig. 1 and 2, and reference is made to the various components of these figures in explaining example methods. Each block shown in fig. 12B represents one or more processes, methods, or subroutines performed in the example method. Further, the order of the blocks is merely illustrative and may be changed. Additional blocks may be added or fewer blocks may be utilized without departing from the disclosure.

At block 1221, the encoder module 1012 determines a block unit from the video data and determines at least one of a plurality of reference lines for predicting the block unit based on a prediction mode.

In at least one embodiment, the video data may be a video. The source device 11 may receive the video through the source module 111. The encoder module 1012 determines an image frame from the video and segments the image frame to determine the block unit.

In at least one embodiment, the prediction processing unit 10121 of the source device 11 determines a block unit from video via the partition unit 101211, and the encoder module 1012 provides a plurality of partition indications to a bitstream based on the partition results of the partition unit 101211.

In at least one embodiment, the intra prediction unit 101212 may determine a number of reconstructed samples within the reference lines 910-913 in FIG. 9. In one embodiment, the number L of reference lines may be an integer greater than one.

In at least one embodiment, the prediction processing unit 10121 may select one of a plurality of encoding results generated by the intra prediction unit 101212 according to a plurality of candidate modes based on a mode selection method such as a cost function. In at least one embodiment, the mode selection method may be a Rate-Distortion Optimization (RDO) process. In this embodiment, the prediction processing unit 10121 sets the one of the candidate modes that generates the selected encoding result based on the at least one of the reference lines as a prediction mode.

At block 1222, the prediction processing unit 10121 determines whether the at least one of the plurality of reference lines is a first one of the plurality of reference lines. When the at least one of the plurality of reference lines is the first one of the plurality of reference lines, the process proceeds to block 1223. When the at least one of the plurality of reference lines includes other reference lines different from the first reference line, the process proceeds to block 1224.

In at least one embodiment, the prediction mode is selected from a plurality of candidate modes. In at least one embodiment, the candidate modes may include a plurality of Direct Modes (DM), a plurality of Most Probable Modes (MPM), and a plurality of Linear Modes (LM). In this embodiment, the LMs may include a Linear Model Mode, a Multiple-Model Linear Mode (MMLM), and a Multiple-Filter Linear Mode (MFLM).

In at least one embodiment, the source device 11 and the destination device 12 may separate the candidate modes into a plurality of mode groups. In such an embodiment, the first mode group may include a plurality of first candidate modes, and the second mode group may include a plurality of second candidate modes. In one embodiment, the first candidate modes may be a plurality of DMs and a plurality of MPMs, and the second candidate modes may be a plurality of LMs.

In at least one embodiment, when the prediction processing unit 10121 determines to predict a block unit based on reconstructed samples in the predefined one of the reference lines according to the prediction mode, the prediction mode may be selected from the first candidate modes and the second candidate modes. Therefore, the entropy encoding unit 10128 needs to further encode a mode indication indicating one of the first mode group and the second mode group into the bitstream. In one embodiment, the prediction mode is selected only from the first candidate modes when the prediction processing unit 10121 determines that the at least one of the reference lines includes one reference line different from the predefined one of the reference lines. Thus, the entropy encoding unit 10128 may not encode a mode indication since the destination device 12 may directly select the first mode group based on the at least one of the reference lines.
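The decision of whether a mode indication must be coded can be sketched as follows, with `predefined_line` standing for the predefined first reference line:

```python
def needs_mode_indication(selected_lines, predefined_line):
    """A mode indication (group index) is coded only when prediction
    uses exactly the predefined reference line, because then the mode
    may come from either group. When any other reference line is used,
    the mode can only come from the first group, so the decoder infers
    the group directly and no indication is coded."""
    return set(selected_lines) == {predefined_line}
```

This lets the decoder condition the presence of the group index on the already-decoded reference line selection.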

At block 1223, the entropy encoding unit 10128 encodes a mode indication indicating one of the first mode group and the second mode group for selecting the prediction mode into the bitstream.

In at least one embodiment, the mode indication may be a group index. When the group index indicates that the prediction mode is selected from the first mode group, the entropy encoding unit 10128 may further encode the first mode flag into the bitstream for the destination device 12 to select the prediction mode from the first candidate modes. When the group index indicates that the prediction mode is selected from the second mode group, the entropy encoding unit 10128 may further encode a second mode flag into the bitstream for the destination device 12 to select the prediction mode from the second candidate modes.
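A sketch of the syntax elements appended at block 1223; a plain list stands in for the entropy-coded bitstream, and the flag values are illustrative indices rather than the actual binarization:

```python
def write_mode_syntax(bitstream, prediction_mode, first_group, second_group):
    """Append a group index selecting the mode group, followed by the
    mode flag selecting the prediction mode within that group."""
    if prediction_mode in first_group:
        bitstream.append(("group_index", 0))
        bitstream.append(("first_mode_flag", first_group.index(prediction_mode)))
    elif prediction_mode in second_group:
        bitstream.append(("group_index", 1))
        bitstream.append(("second_mode_flag", second_group.index(prediction_mode)))
    else:
        raise ValueError("prediction mode not in either group")
```

The destination device reads the group index first and then interprets the following flag against the indicated group.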

At block 1224, the encoder module 1012 determines not to add the mode indication into the bitstream for a block unit.

In at least one embodiment, the entropy encoding unit 10128 may not encode a mode indication since the destination device 12 may directly select the first mode group based on the at least one of the reference lines. Then, the entropy encoding unit 10128 may further encode the first mode flag into the bitstream to cause the destination device 12 to select a prediction mode from the first candidate modes.

It will be apparent to those skilled in the art that, although specific details have been set forth, the concepts described herein may be embodied in a wide variety of specific contexts without departing from the scope of such concepts. Further, while the concepts have been described with specific reference to certain embodiments, a person of ordinary skill in the art would recognize that changes could be made in form and detail without departing from the scope of those concepts. The described embodiments are, therefore, to be considered in all respects as illustrative and not restrictive. It should also be understood that the application is not limited to the particular embodiments described above, but that many rearrangements, modifications, and substitutions are possible without departing from the scope of the disclosure.
