Multi-codec processing and rate control

Document No.: 958985    Publication date: 2020-10-30

Reading note: This technology, Multi-codec processing and rate control, was designed and created by 法维奥·默里 and I·达蒙佳诺维克 on 2019-01-17. Its main content includes: controlling the bit rate for encoding a signal. The signal is encoded using at least two different encoding algorithms. The total bit rate is allocated to at least two components of the signal. A first component of the signal is to be encoded using a first encoding algorithm. A second component of the signal is to be encoded using a second encoding algorithm.

1. A method for controlling a bit rate for encoding a signal, wherein the signal is encoded using at least two different encoding algorithms, the method comprising assigning an overall bit rate to at least two components of the signal, wherein a first component of the signal is to be encoded using a first encoding algorithm and a second component of the signal is to be encoded using a second encoding algorithm.

2. A method for encoding a signal using at least two different encoding algorithms, wherein a first component of the signal is encoded with a first encoding algorithm and a second component of the signal is encoded with a second encoding algorithm.

3. A method for decoding an encoded signal having a first component encoded with a first encoding algorithm and a second component encoded with a second encoding algorithm, wherein a first decoding algorithm is used for decoding the first component and a second decoding algorithm is used for decoding the second component.

4. A method as claimed in any preceding claim, wherein the signal comprises one or more sub-signals, and wherein the first component and the second component are components of the same sub-signal.

5. A method as claimed in any one of the preceding claims, wherein the signal comprises a plurality of sub-signals, and wherein the first component corresponds to one or more sub-signals and the second component corresponds to one or more different sub-signals.

6. A method as claimed in claim 4 or 5, wherein the signal is a video signal and the sub-signals correspond to frames of the video signal.

7. A method as claimed in claim 4 or 5, wherein the signal is one or more images and the sub-signals correspond to images.

8. The method of claim 1, further comprising:

determining an optimal value of a first bit rate to be allocated for encoding the first component using the first encoding algorithm; and

determining an optimal value of a second bit rate to be allocated for encoding the second component using the second encoding algorithm;

wherein the optimal value of the first bit rate and the optimal value of the second bit rate are determined jointly.

9. The method of claim 8, wherein the optimal value of the first bit rate and the optimal value of the second bit rate are jointly determined by optimizing a quality level of the signal.

10. The method of claim 8, wherein the optimal value of the first bit rate and the optimal value of the second bit rate are collectively determined by optimizing a cost function, the cost function including at least one characteristic of the first encoding algorithm and at least one characteristic of the second encoding algorithm.

11. The method of claim 9 or 10, wherein the optimization process is performed under the constraint that the sum of the first bit rate and the second bit rate should be less than or equal to the total bit rate.

12. The method of claim 10 or 11, wherein the optimization process is performed by changing the at least one characteristic of the first encoding algorithm or the at least one characteristic of the second encoding algorithm.

13. The method of any of claims 9 to 12, wherein the optimization process is performed using a neural network.

14. The method of claim 3, further comprising reconstructing the signal by combining the decoded first component and the decoded second component.

15. A method as claimed in any preceding claim, wherein the signal can be reconstructed by decoding only one of the components of the encoded signal.

16. The method of claim 1, further comprising:

allocating the total bit rate partly for the first component of the signal, partly for the second component of the signal, and partly for a third component of the signal, the third component to be encoded using a third encoding algorithm.

17. The method of claim 1, 2 or 16, further comprising:

combining the components of the signal, once encoded, into a single encoded signal.

18. The method of claim 1, 2 or 16, further comprising:

dividing the signal into two or more components, each to be encoded with a separate encoding algorithm.

19. The method of claim 1 or 2, further comprising:

selecting the encoding algorithm from a set of encoding algorithms based on which encoding algorithm best suits the respective component of the signal to be encoded.

20. A method for decoding an encoded signal having a first component encoded with a first encoding algorithm and a second component encoded with a second encoding algorithm, the method comprising:

decoding the first component using a first decoding algorithm; and

reconstructing the signal using only the decoded first component.

21. A transport stream comprising one or more data sets, wherein each of said data sets comprises at least a first component and a second component, said first component being encoded using a first encoding algorithm and said second component being encoded using a second, different encoding algorithm.

22. A rate control device for controlling a bit rate of an encoded signal, wherein the signal is encoded using at least two different encoding algorithms, the device being configured to allocate an overall bit rate to at least two components of the signal, wherein a first component of the signal is to be encoded using a first encoding algorithm and a second component of the signal is to be encoded using a second encoding algorithm.

23. An encoding device for encoding a signal using two different encoding algorithms, the device being configured to encode a first component of the signal with a first encoding algorithm and to encode a second component of the signal with a second encoding algorithm.

24. A decoding device for decoding an encoded signal having a first component encoded with a first encoding algorithm and a second component encoded with a second encoding algorithm, the decoding device being configured to decode the first component with a first decoding algorithm and the second component with a second decoding algorithm.

25. A decoding device for decoding an encoded signal having a first component encoded with a first encoding algorithm and a second component encoded with a second encoding algorithm, the device being configured to:

decode the first component using a first decoding algorithm; and

reconstruct the signal using only the decoded first component.

26. An apparatus substantially as shown in figure 1 or figure 2 and as described above.

Technical Field

The invention relates to an apparatus, a method, a computer program and a computer readable medium. In particular, the present invention relates to an apparatus, a method, a computer program and a computer readable medium for processing data. Processing the data may include, but is not limited to, obtaining, deriving, outputting, receiving, and reconstructing the data.

Background

Compression and decompression of signals are important considerations in many known systems.

Many types of signals, such as video, audio or volumetric signals, may be compressed and encoded for transmission, for example, over a data communication network. Other signals may be stored in compressed form, for example, on conventional storage media such as Digital Versatile Disks (DVDs) or as data files in online data storage (e.g., cloud storage).

There are many known techniques designed to efficiently compress and decompress signals. By way of example, for video signals there are standards-based Moving Picture Experts Group (MPEG) compression techniques (such as AVC/H.264 or, more recently, HEVC/H.265), various compression techniques developed by Google (e.g., VP8, VP9) and Microsoft (e.g., VC1), and a new family of layered compression techniques developed by V-Nova and referred to as PERSEUS. In addition, there are several known compression techniques that are used in various fields for specific features, such as encoders optimized for identifying lines and encoding them efficiently. As another example, for image and/or intra video signals, there are standards-based Joint Photographic Experts Group (JPEG) compression techniques (such as JPEG and JPEG 2000), lossless image compression techniques (such as BMP, TIFF, PNG, etc.), and PERSEUS when used in image and intra modes.

Each of the above techniques is characterized by certain properties (e.g., the manner in which the data is processed, the type of data transformation used, the type of encoding technique used, etc.), which are largely responsible for determining the performance of the technique. An important factor in that performance is the number of symbols used to encode a data set or signal; in digital communications, this typically equals the number of bits used (or a multiple thereof). When coding large assets that are distributed over time, such as video, the distribution of these bits over time (the bit rate) is crucial to ensuring a distribution of symbols (bits) that provides the best possible data reconstruction according to a given metric (e.g., subjective quality of experience). Each implementation of the above techniques uses its own, typically customized, rate control algorithm.

Typical bit rate control uses a given input video signal and a desired bit rate (e.g., constant or variable) to determine encoder settings to maintain image quality as high and constant as possible. The most important settings are usually the quantization steps used in the encoding process.
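
Purely as an illustration of the kind of conventional rate control referred to above (and not of the invention described below), a per-codec controller might nudge the quantization step so that the bits actually produced track a target budget; the function name and the proportional update rule here are assumptions made for this sketch:

# Illustrative sketch of a conventional single-codec rate controller (hypothetical).
# It adjusts the quantization parameter so the produced bits track a target budget.
def update_quantizer(qp, bits_produced, bits_target, qp_min=1, qp_max=51, gain=0.5):
    # Overspending the budget coarsens quantization; underspending refines it.
    error = (bits_produced - bits_target) / max(bits_target, 1)
    return int(min(max(round(qp + gain * error * qp), qp_min), qp_max))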

Due to the specific features of each compression technique and the manner in which rate control is performed by a particular implementation of such a technique, an implementation of one compression technique may be superior to others for certain video frames/sequences and inferior to others for other video frames/sequences.

Encoding a video sequence with a single specific compression technique implementation therefore results in compression inefficiencies, because the best technique is not always the one used for a given video sequence.

Disclosure of Invention

According to a first aspect of the present invention, there is provided a method for controlling a bit rate for encoding a signal, wherein the signal is encoded using at least two different encoding algorithms. The method may comprise allocating a total bit rate to at least two components of the signal. A first component of the signal is to be encoded using a first encoding algorithm and a second component of the signal is to be encoded using a second encoding algorithm. Further, the first component of the signal may be encoded with the first encoding algorithm and the second component of the signal may be encoded with the second encoding algorithm.

The method may further include determining an optimal value of a first bit rate to be allocated for encoding the first component using the first encoding algorithm; and determining an optimal value of a second bit rate to be allocated for encoding the second component using the second encoding algorithm. The optimal value of the first bit rate and the optimal value of the second bit rate may be jointly determined by optimizing a quality level of the signal. The optimal value of the first bit rate and the optimal value of the second bit rate may be jointly determined by optimizing a cost function comprising at least one characteristic of the first encoding algorithm and at least one characteristic of the second encoding algorithm. The optimization process may be performed under the constraint that the sum of the first bit rate and the second bit rate should be less than or equal to the total bit rate. The optimization process may be performed by changing the at least one characteristic of the first encoding algorithm or the at least one characteristic of the second encoding algorithm. The optimization process may be performed using a neural network or any suitable deep learning algorithm. The method may further comprise allocating the total bit rate partly for the first component of the signal, partly for the second component of the signal, and partly for a third component of the signal, the third component to be encoded using a third encoding algorithm.
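
Expressed compactly, and using R_1, R_2, R_total and C purely as illustrative symbols, the joint allocation described above amounts to the constrained optimization

    minimize over (R_1, R_2):  C(R_1, R_2)    subject to    R_1 + R_2 <= R_total,

where C is a cost (for example, an estimate of distortion, or the negative of a quality metric) that depends on at least one characteristic of the first encoding algorithm and at least one characteristic of the second.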

According to a second aspect of the present invention, a method for decoding an encoded signal is provided, wherein the encoded signal has a first component encoded with a first encoding algorithm and a second component encoded with a second encoding algorithm. The first decoding algorithm may be used to decode the first component and the second decoding algorithm may be used to decode the second component. The reconstruction of the signal may be done by combining the decoded first component and the decoded second component. The signal may be reconstructed by decoding only one of the components of the encoded signal.

The signal may comprise one or more sub-signals, wherein the first component and the second component are components of the same sub-signal. In various embodiments, the signal may include a plurality of sub-signals, where the first component corresponds to one or more sub-signals and the second component corresponds to one or more different sub-signals. The signal may be a video signal and the sub-signals correspond to frames of the video signal. The signal may be one or more images and the sub-signals correspond to the images.

The above method may comprise combining said components of said signal into a single encoded signal once encoded.

The method may further comprise splitting the signal into two or more components, each to be encoded with a separate encoding algorithm.

The method may further comprise selecting a coding algorithm from a set of coding algorithms based on which coding algorithm is best suited to encode the respective component of the signal.

In a third aspect of the invention, there is provided a method for decoding an encoded signal having a first component encoded with a first encoding algorithm and a second component encoded with a second encoding algorithm, the method comprising decoding the first component using a first decoding algorithm; and reconstructing the signal using only the decoded first component.

In a fourth aspect of the present invention, a transport stream comprising one or more data sets is provided, wherein each of said data sets comprises at least a first component and a second component, said first component being encoded using a first encoding algorithm and said second component being encoded using a second, different encoding algorithm.

In a fifth aspect of the present invention, there is provided a rate control device for controlling the bit rate of an encoded signal, wherein the signal is encoded using at least two different encoding algorithms, the device being configured to allocate an overall bit rate to at least two components of the signal, wherein a first component of the signal is to be encoded using a first encoding algorithm and a second component of the signal is to be encoded using a second encoding algorithm.

In a sixth aspect of the invention, there is provided an encoding device for encoding a signal using two different encoding algorithms, the device being configured to encode a first component of the signal with a first encoding algorithm and to encode a second component of the signal with a second encoding algorithm.

In a seventh aspect of the present invention, there is provided a decoding device for decoding an encoded signal having a first component encoded with a first encoding algorithm and a second component encoded with a second encoding algorithm, the decoding device being configured to decode the first component with a first decoding algorithm and the second component with a second decoding algorithm.

In an eighth aspect of the present invention, there is provided a decoding device for decoding an encoded signal having a first component encoded with a first encoding algorithm and a second component encoded with a second encoding algorithm, the device being configured to decode the first component using a first decoding algorithm; and reconstructing the signal using only the decoded first component.

Further features and advantages will become apparent from the following description of preferred embodiments, given by way of example only, which is made with reference to the accompanying drawings.

Drawings

Fig. 1 shows a schematic block diagram of an example of a signal processing system according to an embodiment of the invention; and

Fig. 2 shows a schematic block diagram of an example of a signal processing system according to an embodiment of the present invention.

Detailed Description

For ease of reference, all of the following figures are described using a video signal as the signal to be compressed and decompressed. However, it should be understood that the invention described herein is equally applicable to any type of signal that may be compressed, such as 1D signals (e.g., audio signals), 2D signals (e.g., images, video), and N-dimensional signals (e.g., volumetric signals, scans, space-time signals, medical imaging, etc.).

Referring to fig. 1, an example of a signal processing system is shown. A video sequence 100 consisting of a series of uncompressed video frames is fed into an encoding system 110 for compression. The processing module 120 communicates with a multi-codec selection unit 130 and a set of encoders 140.

Each of the encoders 140 is configured to encode a received data stream according to a particular compression technique. For example, the first encoder 140-1 encodes its received signal according to a first compression technique (e.g., AVC/H.264), the second encoder 140-2 encodes its received signal according to a second compression technique (e.g., PERSEUS), and the Nth encoder 140-N encodes its received signal according to an Nth compression technique (e.g., VP9).

The multi-codec selection unit 130 is adapted to determine which encoders of the set of encoders 140 are to be used and what portion of the total bit rate should be allocated to each of the selected encoders. The determination may be performed in various ways, as further described below. The multi-codec selection unit 130 may also receive feedback information (real-time or offline) 150 from the various encoders 140 in order to improve the determination or to make the determination in the first place. The multi-codec selection unit 130 may also select one or more filtering operations to decompose a frame or sequence of frames into a plurality of components, each component being fed to a corresponding selected encoder.

The processing module 120 receives the video sequence 100 and feeds the sequence to two or more of the encoders 140 according to the information received from the multi-codec selection unit 130. For example, if the multi-codec selection unit 130 (acting as a joint rate controller) informs the processing module that the first encoder 140-1 has been allocated 80% of the total bit rate and the second encoder 140-2 has been allocated 20% of the total bit rate, the processing module 120 manages the two encoders 140-1 and 140-2 such that they utilize the allocated bit rates when encoding their respective data streams.
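
A minimal sketch of such a dispatch is shown below; the encoder objects, their encode() signature, and the component list are assumptions made for illustration rather than features of the described system:

# Hypothetical sketch: feed each component to its encoder with its allocated share
# of the total bit rate (e.g., shares of 0.8 and 0.2 as in the example above).
def encode_components(components, encoders, shares, total_bitrate):
    assert abs(sum(shares) - 1.0) < 1e-6, "bit rate shares must sum to 1"
    encoded_streams = []
    for component, encoder, share in zip(components, encoders, shares):
        encoded_streams.append(
            encoder.encode(component, target_bitrate=share * total_bitrate))
    return encoded_streams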

It should be noted that although the processing module 120 has been drawn in fig. 1 as a separate module from the multi-codec selection unit 130, the functions performed by these two components may be combined into a single module that performs the same or equivalent functions.

In one embodiment, the multi-codec selection unit 130 determines the encoder to select and the portion of the bit rate to allocate to each of the selected encoders based on the relative strength of each encoder for a particular frame or series of frames in the video sequence. By way of example, the multi-codec selection unit 130 determines the total bit rate to be allocated to a particular frame or series of frames in a video sequence. For example, the total bit rate may be selected based on the available bandwidth for transmitting the encoded video sequence (e.g., in a real-time streaming scenario) or based on specific requirements such as available storage capacity, required compressed file size, etc.

Further, based on one or more characteristics of each encoder in a set of encoders, the multi-codec selection unit 130 determines which of the encoders should be used to encode a particular frame or series of frames in a video sequence. The characteristic may be related to some property of a particular frame or series of frames in the video sequence.

For example, if a particular frame or series of frames in a video sequence contains sharp edges and regions of uniform color, the multi-codec selection unit 130 may select an encoder implementing a compression technique well suited to encoding such sharp edges alongside another encoder implementing a compression technique that is instead better suited to encoding the uniform regions of the picture, assign each encoder an appropriate portion of the total bit rate to optimize the encoding, and then instruct the processing module 120 to separate the edges from the uniform regions using a predetermined intelligent filtering operation. The processing module 120 applies the intelligent filtering operation to the particular frame or series of frames in the video sequence to decompose each original frame into a first component containing edge information and a second component containing residual information. The processing module 120 may then feed, for each frame, the first component to the first encoder implementing the compression technique suited to encoding sharp edges and the second component to the second encoder implementing the compression technique suited to encoding uniform regions.
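
One possible (and deliberately simplified) realisation of such an edge/uniform decomposition is sketched below with NumPy; the gradient operator, threshold, and masking strategy are assumptions for illustration, not the predetermined filtering operation itself:

import numpy as np

def split_edges_and_uniform(frame, threshold=30.0):
    # Hypothetical filter: split a grayscale frame into an edge component and a
    # residual (uniform-region) component using a simple gradient magnitude mask.
    f = frame.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:-1] = f[:, 2:] - f[:, :-2]
    gy[1:-1, :] = f[2:, :] - f[:-2, :]
    edge_mask = np.hypot(gx, gy) > threshold
    edge_component = np.where(edge_mask, frame, 0)
    residual_component = np.where(edge_mask, 0, frame)
    return edge_component, residual_component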

In another case, if a particular frame or series of frames in a video sequence is highly complex from a spatial perspective (e.g., there is much detail in the frame, meaning that spatial correlation is low), the multi-codec selection unit 130 may select an encoder implementing a compression technique well suited to encoding the high-frequency information in the picture, along with another encoder implementing a compression technique better suited to encoding the remaining information in the picture, allocate an appropriate portion of the total bit rate to each encoder in order to optimize the encoding, and then instruct the processing module 120 to separate the high-frequency information using a predetermined filtering operation. The processing module 120 applies the predetermined filtering operation to the particular frame or series of frames in the video sequence to decompose each original frame into a first component containing the high-frequency information and a second component containing the residual information. The processing module 120 may then feed, for each frame, the first component to the first encoder implementing the compression technique suited to encoding the high-frequency information and the second component to the second encoder implementing the compression technique suited to encoding the remaining information in the picture.
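
A comparable sketch for the frequency split is given below, using a plain box blur as the low-pass filter; the kernel size and the choice of filter are illustrative assumptions only:

import numpy as np

def split_frequency(frame, kernel_size=5):
    # Hypothetical filter: box-blur the frame to obtain a smooth (low-frequency)
    # component and keep the difference as the high-frequency component.
    f = frame.astype(float)
    pad = kernel_size // 2
    padded = np.pad(f, pad, mode="edge")
    low = np.zeros_like(f)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            low += padded[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    low /= kernel_size ** 2
    return f - low, low  # (high-frequency component, residual low-frequency component)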

In another case, the multi-codec selection unit 130 may select an encoder implementing a compression technique suited to optimally encoding a first color component (e.g., the Y component) of a picture, a second encoder implementing a compression technique suited to optimally encoding a second color component (e.g., the U component), and a third encoder implementing a compression technique suited to optimally encoding a third color component (e.g., the V component). Alternatively, it may select an encoder implementing a compression technique suited to optimally encoding one color component (e.g., the Y component) of the picture and a second encoder implementing a compression technique suited to optimally encoding the remaining color components (e.g., the U and V components). In either case, the multi-codec selection unit 130 may allocate an appropriate portion of the total bit rate to each encoder in order to optimize encoding, and then instruct the processing module 120 to separate the color components using a predetermined filtering operation. The processing module 120 applies the predetermined filtering operation to the particular frame or series of frames in the video sequence to decompose each original frame into a first color component (e.g., Y), a second color component (e.g., U), and a third color component (e.g., V). The processing module 120 may then feed, for each frame, each color component to the encoder selected for it: the first color component to the first encoder, the second color component to the second encoder, and the third color component to the third encoder; or, where the multi-codec selection unit selects only two encoders, the first color component to the first encoder and the second and third color components to the second encoder.
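
The colour-plane split is simpler; the sketch below assumes the frame is already available as a full-resolution planar YUV array, an assumption made only to keep the example short:

def split_color_planes(yuv_frame, separate_uv=True):
    # Hypothetical split of a planar YUV frame (shape: 3 x height x width) into
    # components for separate encoders: either (Y, U, V) or (Y, combined UV).
    y, u, v = yuv_frame[0], yuv_frame[1], yuv_frame[2]
    if separate_uv:
        return [y, u, v]      # three encoders, one per colour component
    return [y, (u, v)]        # two encoders: luma, plus both chroma components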

In another case, the multi-codec selection unit 130 may select an encoder implementing a compression technique suited to optimally encoding movement across multiple frames and another encoder implementing a compression technique suited to optimally encoding static portions within the frames. The multi-codec selection unit 130 may then allocate an appropriate portion of the total bit rate to each encoder in order to optimize the encoding, and instruct the processing module 120 to separate the moving components from the rest of the picture using a predetermined filtering operation. The processing module 120 applies the predetermined filtering operation to the particular frame or series of frames in the video sequence to decompose the original frames into a first component containing the movement information and a second component containing the remaining information. The processing module 120 may then feed the first component to the first encoder implementing the compression technique suited to encoding movement across multiple frames and the second component to the second encoder implementing the compression technique suited to encoding static portions within a frame.
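
For the motion/static split, a naive frame-difference mask can stand in for the predetermined filtering operation in a sketch; a real system would more likely rely on motion estimation, so the threshold and masking below are purely illustrative:

import numpy as np

def split_motion_static(frame, previous_frame, threshold=10.0):
    # Hypothetical temporal filter: pixels that changed noticeably since the
    # previous frame form the moving component; the rest form the static one.
    diff = np.abs(frame.astype(float) - previous_frame.astype(float))
    moving_mask = diff > threshold
    moving_component = np.where(moving_mask, frame, 0)
    static_component = np.where(moving_mask, 0, frame)
    return moving_component, static_component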

In one embodiment, the multi-codec selection unit 130 determines which encoders from the set of encoders 140 are to be used and the portion of the total bit rate to be allocated to each of the selected encoders by optimizing a cost function, the cost function taking into account the following factors: (i) the fraction of the total bit rate to be allocated to each encoder, and (ii) the toolset, parameters, and/or characteristics associated with a particular encoder implementing a particular compression technique (e.g., whether it is more suitable for certain complexities, the type of transform used, spatial and/or temporal prediction, a layered approach, etc.). For example, the multi-codec selection unit 130 may use the following equations:

R_j = g[T_set(enc#1), T_set(enc#2), ..., T_set(enc#N)]

C = f[R_1, R_2, ..., R_N, T_set(enc#1), T_set(enc#2), ..., T_set(enc#N), O_pic, D]

where R_j is the bit rate to be selected for the j-th encoder, T_set(enc#j) is the toolset/characteristics of the corresponding compression technique implemented by that encoder, and the cost function C is a function of the bit rates to be selected, the toolsets/characteristics, certain characteristics of the uncompressed particular frame or series of frames in the video sequence (O_pic), and the distortion (D), which typically needs to be minimized. In one embodiment, the cost function used by the multi-codec selection unit 130 may be implemented using a neural network and/or a deep learning algorithm aimed at optimizing the selection. Other functions and/or optimization techniques may be used without departing from the workings of the invention.
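
As a hedged illustration of how such a cost function might be optimized for two encoders, the sketch below performs an exhaustive search over candidate rate splits under the constraint that the two allocations sum to the total bit rate; the cost_fn callable, its signature, and the search granularity are assumptions, and a neural network or other optimizer could take its place as noted above:

def allocate_bitrates(total_bitrate, encoder_1, encoder_2, cost_fn, steps=20):
    # Hypothetical joint rate allocation for two encoders.
    # cost_fn(r1, r2, enc1, enc2) is assumed to return an estimated cost
    # (e.g., distortion D) for a given split; lower is better.
    best_cost, best_split = None, None
    for i in range(steps + 1):
        r1 = total_bitrate * i / steps
        r2 = total_bitrate - r1              # enforces r1 + r2 <= total bit rate
        cost = cost_fn(r1, r2, encoder_1, encoder_2)
        if best_cost is None or cost < best_cost:
            best_cost, best_split = cost, (r1, r2)
    return best_split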

In one embodiment, once the processing module 120 has received information from the multi-codec selection unit 130 regarding (i) which of the set of encoders 140 should be used to encode a particular frame or series of frames in the video sequence, (ii) what portion of the total bit rate should be assigned to each of the selected encoders, and optionally (iii) the toolset, parameters, and/or characteristics to be used with each of the encoders, the processing module 120 processes the particular frame or series of frames in the video sequence using the selected encoders accordingly, in a predetermined combination, by feeding one or more respective data streams to each encoder. Once the output of each encoder (i.e., encoded streams 140-1, ..., 140-N) has been generated, the combiner 160 combines them into a single combined stream 170 representing a compressed version of the input stream. There are various ways in which the streams may be combined. For example, the encoded streams may be merged into a combined format (e.g., a format that combines the encoded streams into a single bitstream). In a different example, the encoded streams may be combined by encapsulating them into a bitstream in which the encoded streams remain effectively independent of each other, even though they appear as a single encapsulated bitstream.

Although not shown, the processing module 120 and/or the combiner 160 may work in conjunction with the encoders 140 to perform encoding of the various components and to generate the encoded components 140-1 through 140-N and the combined encoded stream 170. For example, some of the filtering operations performed by the processing module 120 and the operations performed by the combiner 160 may be performed as part of the encoding process, i.e., when the various components are encoded by the encoders 140.

As a non-limiting example, if the multi-codec selection unit 130 selects the first encoder 140-1 (e.g., implementing the HEVC/H.265 compression technique) and the second encoder 140-2 (e.g., implementing the PERSEUS compression technique), the processing module 120 may first use the first encoder 140-1 to encode a downsampled version of a particular frame or series of frames in the video sequence, using the portion of the overall bit rate allocated to the first encoder 140-1, to produce a first encoded stream. The processing module 120 may then upsample a decoded version of the first encoded stream and feed it to the second encoder 140-2 along with the original particular frame or series of frames in the video sequence, so that the second encoder 140-2 uses the portion of the total bit rate allocated to it to produce a second encoded stream. Further description of such combinations can be found, for example, in PCT patent publication No. WO 2014/170819 and PCT patent publication No. WO 2017/089839, which are incorporated herein by reference.
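
A very rough sketch of that two-layer flow follows; every object and method name here (the encoders, their encode/decode interfaces, and the resampling helpers) is assumed for illustration, and the actual interplay of the two codecs is as defined in the referenced publications:

def encode_two_layer(frames, base_encoder, enhancement_encoder,
                     base_bitrate, enhancement_bitrate, downsample, upsample):
    # Hypothetical two-layer encoding: a downsampled base layer plus an
    # enhancement layer encoded against the upsampled base reconstruction.
    base_stream = base_encoder.encode(downsample(frames), target_bitrate=base_bitrate)
    base_reconstruction = upsample(base_encoder.decode(base_stream))
    enhancement_stream = enhancement_encoder.encode(
        frames, reference=base_reconstruction, target_bitrate=enhancement_bitrate)
    return base_stream, enhancement_stream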

Referring to fig. 2, an example of a signal processing system is shown. An encoded sequence of frames 200, consisting of a series of compressed video frames, is fed into a decoding system 210 for decompression. The processing module 220 receives the encoded sequence of frames 200 and separates it into a plurality of compressed components 230-1 through 230-N. The processing module 220 feeds the received plurality of compressed components to the corresponding decoder 240. Each of the decoders 240 decodes its corresponding compressed component into decoded components 250-1 through 250-N. Recombiner 260 then combines the decoded components 250-1 through 250-N into a single decoded frame or sequence of frames 270.

The side information stream 201 may also be received by the decoding system 210. The side information stream 201 may comprise information about the number of components, the particular filtering operation to be applied to the sequence of frames, and any other information required for decoding the sequence of frames. The side information stream 201 may be transmitted together with the encoded sequence of frames 200 in a single encoded bitstream or, alternatively, as a separate independent stream.

The processing module 220 separates the sequence based on the format of the encoded sequence of frames 200 and/or based on the side information 201. The processing module 220 also selects a set of decoders 240 based on the particular decoding technique associated with the encoded components 230-1 through 230-N. For example, if the processing module 220 receives two encoded components 230-1 and 230-2, and encoded component 230-1 requires a decoder implementing a decompression technique suited to decoding sharp edges while component 230-2 requires a decoder implementing a decompression technique better suited to decoding uniform regions, the processing module 220 selects a first decoder implementing a decompression technique suited to decoding sharp edges and a second decoder implementing a decompression technique suited to decoding uniform regions, which generate decoded components 250-1 and 250-2. The processing module 220 may determine the set of decoders 240 to select based on the format of the encoded sequence of frames 200 and/or based on the side information 201. For example, when it receives the encoded sequence of frames 200, the processing module 220 may analyze the format of the sequence and determine, based thereon, that there are two or more encoded components 230-1 through 230-N, each of which requires a particular decoding algorithm in order to be decoded. Alternatively or in combination, the processing module 220 may receive, via the side information stream 201, information about the encoded components 230-1 through 230-N and the particular decoding algorithm to be used with each of them (including, for example, parameters for the decoding algorithms).
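
A sketch of such decoder selection is given below, assuming (purely for illustration) that the side information is available as a list of per-component descriptors and that decoders are looked up in a registry keyed by algorithm name:

def decode_components(encoded_components, side_info, decoder_registry):
    # Hypothetical decoder dispatch: each side-information entry is assumed to
    # name the decoding algorithm and carry any parameters it needs.
    decoded = []
    for component, info in zip(encoded_components, side_info):
        decoder = decoder_registry[info["algorithm"]]
        decoded.append(decoder.decode(component, **info.get("parameters", {})))
    return decoded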

The recombiner 260 receives the decoded components 250-1 through 250-N and recombines them to generate a single decoded frame or sequence of frames 270. The recombination may be performed based on the particular decoders 240 used during the process or based on side information received from the side information stream 201. For example, if the decoded components include a first decoded component 250-1 that has been decoded by a decoder 240 implementing a decompression technique suited to decoding sharp edges and a second decoded component 250-2 that has been decoded by a decoder implementing a decompression technique suited to decoding uniform regions, the recombiner 260 may use a predetermined filter suitable for combining and/or merging the decoded components 250-1 and 250-2. For example, a simple filter implementation may simply overlap decoded component 250-1 with decoded component 250-2 to produce a single decoded stream 270.
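
The simple overlap mentioned above could be sketched as follows, under the assumption that the decoded components are NumPy arrays produced from complementary masks, so that summation restores the full frame:

import numpy as np

def recombine_overlap(decoded_components):
    # Hypothetical recombination: sum complementary components (e.g., an edge
    # component and a uniform-region component) back into a single frame.
    combined = np.zeros_like(decoded_components[0], dtype=float)
    for component in decoded_components:
        combined += component.astype(float)
    return np.clip(combined, 0, 255).astype(np.uint8)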

Although not shown, the processing module 220 and/or the recombiner 260 may work in conjunction with the decoders 240 to perform decoding of the encoded components 230-1 through 230-N and to generate the decoded components 250-1 through 250-N and the combined decoded stream 270. For example, some of the filtering operations performed by the recombiner 260 and/or operations performed by the processing module 220 may be performed as part of the decoding process, i.e., when the decoders 240 decode the various encoded components.

The above-described method effectively receives an uncompressed frame (or an uncompressed series of frames), decomposes it into two or more components, allocates the total available bit rate to the corresponding two or more encoders, feeds the two or more components to the corresponding two or more encoders to generate two or more encoded components, and then combines the two or more encoded components into a single encoded stream, which is then transmitted or stored for decoding at a later stage.

One of the advantages of the above-described approach is that two or more encoders (and the corresponding compression techniques they implement) can effectively be used simultaneously on a decomposed version of the same original input stream, in order to use the best possible combination of encoders and thus optimize the overall coding performance of the coding system. As discussed above, conventional encoding systems implement one compression technique at a time. There are various technical reasons for this. First, each compression technique typically requires dedicated hardware in order to perform the encoding process, and therefore only one piece of dedicated hardware is used. Second, as discussed above, each implementation requires a fairly complex rate control in order to manage the bit rate required by the implementation. Third, each compression technique has its own distinct syntax and bitstream, and therefore the coding format is set to comply strictly with that syntax and bitstream and does not allow any deviation from it. By using the present invention, two or more compression techniques (e.g., AVC/H.264 and PERSEUS, or a "Y" encoder and a "UV" encoder) can be used and their characteristics exploited cooperatively to optimize coding efficiency and maximize the quality delivered by the coding system for a given desired bit rate.

Another advantage of the present invention is that the encoding of video sequences can be optimized with greater granularity. By multiplexing the encoders as the video sequence is encoded, an optimal combination of compression techniques can be selected for each frame, thereby enabling the encoding system to achieve the best achievable quality at all times, not only for the entire encoded sequence but also for each portion of it.

Another advantage of the present invention is that backward compatibility can be achieved. For example, if the combined coded sequence is coded at the desired overall bit rate using AVC/H.264 and PERSEUS, legacy systems may still be able to decode the AVC/H.264 coded sequence without being able to decode the PERSEUS portions.

It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.
