Processing apparatus and control method thereof

Document No.: 1652345 · Publication date: 2019-12-24

This technology, Processing apparatus and control method thereof, was designed and created by 罗尚权 and 俞基源 on 2018-02-20. A processing apparatus is provided. The processing apparatus includes: a memory storing video content; and a processor for dividing a frame forming the video content into a plurality of coding units and generating an encoded frame by performing encoding for each of the plurality of coding units, and the processor may add additional information, including a motion vector obtained in the encoding process for each of the plurality of coding units, to the encoded frame.

1. A processing device, comprising:

a memory storing video content; and

a processor configured to:

divide a frame forming the video content into a plurality of coding units and generate an encoded frame by performing encoding for each of the plurality of coding units,

wherein the processor is configured to add additional information including a motion vector to the encoded frame, the motion vector being obtained during encoding of each of the plurality of coding units.

2. The processing device of claim 1, wherein the additional information comprises motion vectors of all of the plurality of coding units.

3. The processing device of claim 1, wherein the additional information is included in a reserved area of a header corresponding to the encoded frame.

4. The processing device of claim 1, wherein the processor is configured to:

search for a motion vector corresponding to a current coding unit from among a current frame including the current coding unit and a predetermined number of frames adjacent to the current frame, and add, to the additional information, identification information of at least one frame, from among the current frame and the adjacent frames, that includes a pixel region corresponding to the searched motion vector.

5. The processing device of claim 1, wherein the processor is configured to:

search for a motion vector corresponding to a current coding unit and, based on pixel values at corresponding positions between the current coding unit and a pixel region corresponding to the searched motion vector satisfying a preset condition, add, to the additional information, information indicating use of motion vectors of neighboring coding units of the current coding unit.

6. The processing device of claim 1, wherein the processor is configured to: based on motion vectors of at least two coding units in one frame being the same, add position information of the at least two coding units and a motion vector of one of the at least two coding units to the additional information.

7. The processing device of claim 1, wherein the processor is configured to: based on detecting regularity among the motion vectors of all of the plurality of coding units, add information corresponding to the detected regularity to the additional information.

8. A processing device, comprising:

a memory storing encoded video content; and

a processor configured to generate a decoded frame by performing decoding, in units of coding units, on an encoded frame forming the encoded video content,

wherein additional information is added to the encoded video content, the additional information including a motion vector obtained during encoding of each of a plurality of coding units forming the encoded frame,

wherein the processor is configured to:

based on a failure to decode a current coding unit, perform decoding by obtaining a motion vector of the current coding unit from the additional information and replacing the current coding unit with a pixel region corresponding to the obtained motion vector.

9. The processing device of claim 8, wherein the additional information comprises motion vectors for all of the plurality of coding units.

10. The processing device of claim 8, wherein the additional information is included in a reserved area of a header corresponding to the encoded frame.

11. The processing device of claim 8, wherein the additional information includes identification information of at least one frame including a pixel region corresponding to the motion vector, and, based on a failure to decode a current coding unit, the processor is configured to perform decoding by obtaining the motion vector and the identification information of the current coding unit from the additional information and replacing the current coding unit with a pixel region corresponding to the obtained motion vector in a frame corresponding to the obtained identification information.

12. The processing device of claim 8, wherein the additional information includes information indicating use of a motion vector of a neighboring coding unit of a current coding unit, and, based on a failure to decode the current coding unit, the processor is configured to perform decoding by obtaining the information from the additional information and replacing the current coding unit with a pixel region corresponding to the motion vector of the neighboring coding unit based on the obtained information.

13. The processing device of claim 8, wherein the additional information includes position information of at least two coding units having the same motion vector and a motion vector of one of the at least two coding units, and, based on a failure to decode a current coding unit, the processor is configured to perform decoding by replacing the current coding unit with a pixel region corresponding to the motion vector of the at least two coding units, based on the position information.

14. The processing device of claim 8, wherein the additional information includes information corresponding to regularity among the motion vectors of all of the plurality of coding units, and, based on a failure to decode a current coding unit, the processor is configured to perform decoding by obtaining a motion vector corresponding to the current coding unit based on the information corresponding to the regularity and replacing the current coding unit with a pixel region corresponding to the obtained motion vector.

15. A control method of a processing apparatus, comprising:

dividing a frame forming video content into a plurality of coding units; and

generating an encoded frame by performing encoding for each of the plurality of coding units,

wherein the generating of the encoded frame includes adding additional information, including a motion vector obtained in an encoding process of each of the plurality of coding units, to the encoded frame.

Technical Field

The present disclosure relates to a processing apparatus and a control method thereof, and more particularly, to a processing apparatus that performs inter-frame encoding and intra-frame encoding, and a control method thereof.

Background

Wireless transmission of video content may suffer packet loss and errors during data transmission. In particular, for video images where real-time performance and low latency are important, recovery through packet retransmission may be limited.

Methods of recovering from such transmission errors include a frame/subframe copy method and a content-based error concealment/recovery method.

The frame/subframe copy method determines whether an error has occurred through a cyclic redundancy check (CRC) and, if an error has occurred, either repeatedly outputs the last normally transmitted image (frame) or copies the corresponding area of the last normally transmitted image over the erroneous area (subframe) and outputs the result.
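As an illustrative sketch (not part of the patent itself), the CRC check and subframe copy described above might look like the following; the frame representation as 2-D pixel lists, the region layout, and the function name are assumptions:

```python
import zlib

def conceal_subframe(received, crc_expected, last_good, region):
    """Verify a received subframe with CRC-32; on mismatch, copy the
    co-located region from the last normally received frame.
    `received`/`last_good` are 2-D lists of pixel values; `region` is
    (top, left, height, width)."""
    top, left, h, w = region
    payload = bytes(p for row in received[top:top + h]
                      for p in row[left:left + w])
    if zlib.crc32(payload) == crc_expected:
        return received  # subframe is intact, output as-is
    # Error detected: overwrite the erroneous area with the co-located
    # pixels of the last normally transmitted frame.
    for r in range(top, top + h):
        received[r][left:left + w] = last_good[r][left:left + w]
    return received
```

Note that this copy is purely positional; it is exactly this lack of motion awareness that causes the quality degradation discussed next.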

Meanwhile, the frame/subframe copy method suffers significant image quality degradation due to low restoration accuracy, as well as freezing artifacts due to repeated playback of a previous image, and the CRC check of the bitstream in frame or subframe units always entails a transmission delay. In particular, when errors occur in successive frames, freezing artifacts increase because the same frame is repeatedly reproduced.

The content-based error concealment/recovery method predicts and recovers the pixels of a lost region using the modes and pixel information of neighboring blocks. Here, the motion vectors (MVs) of neighboring blocks and the pixel information of previously recovered frames are used to predict and recover the pixels of the lost area; alternatively, the decoder predicts and recovers the pixels of the lost area through a motion prediction process using the modes and pixels of neighboring blocks together with the pixel information of a previously recovered frame.
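A minimal sketch of one common concealment step, synthesizing a reference MV for a lost block from its neighbors; the component-wise median rule and the function name are illustrative assumptions, not the patent's method:

```python
def predict_mv_from_neighbors(neighbor_mvs):
    """Component-wise median of the available neighboring motion vectors
    (e.g. from the left, top, and top-right blocks), a common way to
    synthesize a reference MV for a lost block."""
    if not neighbor_mvs:
        return (0, 0)  # no neighbor information available: zero-MV fallback
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    mid = len(neighbor_mvs) // 2
    return (xs[mid], ys[mid])
```

Because the synthesized MV is only a guess from surrounding MVs, its accuracy is limited, which is precisely the drawback described below.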

However, the content-based error concealment/recovery method has the following problems: accuracy is reduced because the reference MV is generated only from surrounding MVs, and errors from an incorrectly recovered image propagate to subsequent frames. In addition, MV correction using neighboring pixels on the decoder side requires high computational complexity, and when no neighboring pixels or MV information are available, the degraded recovery quality propagates serially to any process that uses the erroneously recovered data.

Therefore, methods are needed that recover, even under time constraints, from errors in wireless transmission of video content, so as to provide high-quality images.

Disclosure of Invention

[Technical Problem]

Accordingly, an object of the present disclosure is to provide a processing apparatus that improves recovery efficiency of a pixel region where an error occurs in a frame forming video content, and a control method thereof.

[Technical Solution]

According to one embodiment, a processing device includes: a memory storing video content; and a processor for dividing a frame forming the video content into a plurality of coding units and generating an encoded frame by performing encoding for each of the plurality of coding units, and the processor may add additional information, including a motion vector obtained in the encoding process for each of the plurality of coding units, to the encoded frame.

The additional information may include motion vectors of all of the plurality of coding units.

The additional information may be included in a reserved area of a header corresponding to the encoded frame.

The processor may search for a motion vector corresponding to a current coding unit from among a current frame including the current coding unit and a predetermined number of frames adjacent to the current frame, and add, to the additional information, identification information of at least one frame, from among the current frame and the adjacent frames, that includes a pixel region corresponding to the searched motion vector.

The processor may search for a motion vector corresponding to the current coding unit and, based on pixel values at corresponding positions between the current coding unit and the pixel region corresponding to the searched motion vector satisfying a preset condition, add, to the additional information, information indicating use of motion vectors of neighboring coding units of the current coding unit.

The processor may add position information on the at least two coding units and a motion vector of one of the at least two coding units to the additional information, based on the motion vectors of the at least two coding units being the same in one frame.

Based on detecting regularity between the motion vectors of all of the plurality of coding units, the processor may add information corresponding to the detected regularity to the additional information.
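The encoder-side additions above can be sketched as follows; the byte layout of the reserved header area and the function names are assumptions for illustration, not an actual bitstream syntax:

```python
import struct

def pack_additional_info(motion_vectors):
    """Serialize per-coding-unit motion vectors into a byte string that
    could occupy a reserved area of the frame header. Assumed layout:
    a 2-byte big-endian CU count followed by signed 16-bit (x, y) pairs."""
    data = struct.pack(">H", len(motion_vectors))
    for mv_x, mv_y in motion_vectors:
        data += struct.pack(">hh", mv_x, mv_y)
    return data

def unpack_additional_info(data):
    """Inverse of pack_additional_info: recover the per-CU motion vectors."""
    (count,) = struct.unpack_from(">H", data, 0)
    return [struct.unpack_from(">hh", data, 2 + 4 * i)
            for i in range(count)]
```

Listing identical MVs once with position information, or replacing them with a detected regularity, as described above, would further shrink this payload.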

According to one embodiment, a processing device includes: a memory storing encoded video content; and a processor for generating a decoded frame by performing decoding, in units of coding units, on an encoded frame forming the encoded video content. Additional information, including a motion vector obtained during encoding of each of a plurality of coding units forming the encoded frame, may be added to the encoded video content, and, based on a failure to decode a current coding unit, the processor may perform decoding by obtaining the motion vector of the current coding unit from the additional information and replacing the current coding unit with a pixel region corresponding to the obtained motion vector.

The additional information may include motion vectors of all of the plurality of coding units.

The additional information may be included in a reserved area of a header corresponding to the encoded frame.

The additional information may include identification information of at least one frame including a pixel region corresponding to the motion vector, and the processor may perform decoding by obtaining the motion vector and the identification information of the current coding unit from the additional information and replacing the current coding unit with a pixel region corresponding to the obtained motion vector in a frame corresponding to the obtained identification information, based on the inability to decode the current coding unit.

The additional information may include information indicating use of a motion vector of a neighboring coding unit of the current coding unit, and, based on a failure to decode the current coding unit, the processor may perform decoding by: obtaining the information from the additional information; and replacing the current coding unit with a pixel region corresponding to the motion vector of the neighboring coding unit based on the obtained information.

The additional information may include position information of at least two coding units having the same motion vector and a motion vector of one coding unit of the at least two coding units, and the processor may perform decoding by replacing the current coding unit with a pixel region corresponding to the motion vectors of the at least two coding units based on the position information, based on a failure to decode the current coding unit.

The additional information may include information corresponding to regularity between motion vectors of all of the plurality of coding units, and the processor may perform decoding by, based on the inability to decode the current coding unit: a motion vector corresponding to the current coding unit is obtained based on the information corresponding to the regularity, and the current coding unit is replaced with a pixel region corresponding to the obtained motion vector.
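The decoder-side fallback described above can be sketched as follows, assuming frames are 2-D lists of pixel values and the motion vector has already been read from the additional information; the names and layout are illustrative:

```python
def recover_coding_unit(frame, reference, cu_pos, cu_size, mv):
    """Replace an undecodable coding unit at `cu_pos` = (row, col) with
    the pixel region of `reference` displaced by motion vector
    `mv` = (d_row, d_col). Bounds checking is omitted for brevity."""
    row, col = cu_pos
    dr, dc = mv
    for r in range(cu_size):
        for c in range(cu_size):
            # Copy each pixel from the motion-compensated position in
            # the reference frame into the lost coding unit.
            frame[row + r][col + c] = reference[row + r + dr][col + c + dc]
    return frame
```

Because the MV comes from the encoder via the additional information rather than being re-estimated at the decoder, the replacement needs no motion search and no neighboring-pixel analysis.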

According to one embodiment, a method of controlling a processing apparatus includes: dividing a frame forming video content into a plurality of coding units; and generating an encoded frame by performing encoding for each of the plurality of coding units, and the generating of the encoded frame may include adding additional information, including a motion vector obtained in the encoding process of each of the plurality of coding units, to the encoded frame.

The additional information may include motion vectors of all of the plurality of coding units.

The additional information may be included in a reserved area of a header corresponding to the encoded frame.

The generating of the encoded frame may include: searching for a motion vector corresponding to a current coding unit from among a current frame including the current coding unit and a predetermined number of frames adjacent to the current frame, and adding, to the additional information, identification information of at least one frame, from among the current frame and the adjacent frames, that includes a pixel region corresponding to the searched motion vector.

The generating of the encoded frame may include: searching for a motion vector corresponding to the current coding unit and, based on pixel values at corresponding positions between the current coding unit and the pixel region corresponding to the searched motion vector satisfying a preset condition, adding, to the additional information, information indicating use of motion vectors of neighboring coding units of the current coding unit.

According to one embodiment, a method of controlling a processing apparatus includes: performing decoding, in units of coding units, on an encoded frame forming encoded video content, and generating a decoded frame by arranging the plurality of decoded coding units in a preset direction, wherein additional information, including a motion vector obtained during encoding of each of a plurality of coding units forming the encoded frame, is added to the encoded video content, and the performing of decoding may include, based on a failure to decode a current coding unit, obtaining the motion vector of the current coding unit from the additional information and replacing the current coding unit with a pixel region corresponding to the obtained motion vector.

[Advantageous Effects]

As described above, according to various embodiments, a processing device adds the motion vector of each of a plurality of coding units forming a frame to the encoded frame, and if an error occurs, the processing device uses the motion vector to improve recovery efficiency.

Drawings

FIG. 1 is a block diagram illustrating a processing device that performs encoding, to facilitate an understanding of the present disclosure;

FIG. 2 is a block diagram illustrating a processing device that performs decoding, to facilitate an understanding of the present disclosure;

FIG. 3 is a simplified block diagram depicting a processing device performing encoding according to one embodiment;

FIG. 4 is a diagram illustrating a method for generating additional information according to one embodiment;

FIG. 5 is a diagram describing a method for generating additional information according to another embodiment;

FIGS. 6A and 6B are diagrams depicting a situation in which occlusion occurs according to one embodiment;

FIG. 7 is a diagram describing a method for reducing the data amount of the additional information according to an embodiment;

FIG. 8 is a simplified block diagram depicting a processing device performing decoding according to one embodiment;

FIG. 9 is a flowchart describing a control method of a processing device that performs encoding according to one embodiment; and

FIG. 10 is a flowchart describing a control method of a processing device that performs decoding according to one embodiment.

Detailed Description

Hereinafter, the present disclosure will be described in more detail with reference to the accompanying drawings.

Fig. 1 is a block diagram showing a configuration of a processing device 100 that performs encoding for better understanding of the present disclosure. As shown in fig. 1, the processing apparatus 100 may include a motion predictor 111, a motion compensator 112, an intra predictor 120, a switch 115, a subtractor 125, a transformer 130, a quantizer 140, an entropy coder 150, an inverse quantizer 160, an inverse transformer 170, an adder 175, a filter 180, and a reference frame buffer 190. Here, each functional unit may be implemented in at least one hardware form (e.g., at least one processor), but may also be implemented in at least one software or program form.

The processing device 100 is a device that encodes video content and converts the video content into another signal form. Here, the video content includes a plurality of frames, and each frame may include a plurality of pixels. For example, the processing device 100 may be a device for compressing raw data that is not processed. Alternatively, the processing device 100 may be a device that changes the pre-coded data into another signal form.

The processing device 100 may divide each frame into a plurality of blocks to perform encoding. The processing apparatus 100 may perform encoding by temporal or spatial prediction, transform, quantization, filtering, entropy encoding, or the like in units of blocks.

Prediction refers to generating a prediction block similar to a target block to be encoded. Here, a unit of a target block to be encoded may be defined as a Prediction Unit (PU), and prediction may be divided into temporal prediction and spatial prediction.

Temporal prediction refers to prediction between pictures (inter prediction). The processing device 100 may store some reference frames having a high correlation with a frame to be currently encoded, and perform inter prediction using the reference frames. That is, the processing device 100 may generate a prediction block from a reference frame restored after encoding at a previous time instant. In this case, it can be said that the processing device 100 performs inter-coding.

In the case of inter-coding, the motion predictor 111 may search for a block having the highest temporal correlation with a target block among reference frames stored in the reference frame buffer 190. The motion predictor 111 may interpolate the reference frame and search for a block having the highest temporal correlation with the target block in the interpolated frame.
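A minimal sketch of the exhaustive block-matching search the motion predictor 111 might perform, using the sum of absolute differences (SAD) as the correlation measure; SAD and full search are standard choices assumed here for illustration, not mandated by the disclosure:

```python
def sad(frame, ref, top, left, dr, dc, size):
    """Sum of absolute differences between the block of `frame` at
    (top, left) and the block of `ref` displaced by (dr, dc)."""
    return sum(abs(frame[top + r][left + c] - ref[top + r + dr][left + c + dc])
               for r in range(size) for c in range(size))

def full_search(frame, ref, top, left, size, search_range):
    """Exhaustive block-matching search: return the motion vector
    (dr, dc) within +/- search_range that minimizes SAD, i.e. points to
    the reference block with the highest temporal correlation."""
    best_mv, best_cost = (0, 0), float("inf")
    for dr in range(-search_range, search_range + 1):
        for dc in range(-search_range, search_range + 1):
            # Skip displacements that fall outside the reference frame.
            if not (0 <= top + dr and top + dr + size <= len(ref)
                    and 0 <= left + dc and left + dc + size <= len(ref[0])):
                continue
            cost = sad(frame, ref, top, left, dr, dc, size)
            if cost < best_cost:
                best_cost, best_mv = cost, (dr, dc)
    return best_mv
```

Interpolating the reference frame, as mentioned above, extends this same search to sub-pixel displacements.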

Here, the reference frame buffer 190 is a space for storing reference frames. The reference frame buffer 190 is used only when inter prediction is performed, and may store some reference frames having a high correlation with the frame currently being encoded. The reference frame may be a frame generated by sequentially transforming, quantizing, inversely transforming, and filtering a residual block to be described later. That is, the reference frame may be a frame recovered after encoding.

The motion compensator 112 may generate a prediction block based on motion information of a block having the highest temporal correlation with the target block found in the motion predictor 111. Here, the motion information may include a motion vector, a reference frame index, and the like.

Spatial prediction refers to intra prediction. The intra predictor 120 may perform spatial prediction from neighboring pixels restored after encoding in the current frame to generate a prediction value of the target block. In this case, the processing device 100 may be said to perform intra-coding.

Whether to apply inter-coding or intra-coding may be determined in units of coding units (CUs). Here, a coding unit may include at least one prediction unit. When the prediction method is determined, the position of the switch 115 may be changed to correspond to the determined prediction method.

The reference frame restored after being encoded in the temporal prediction may be a frame to which filtering is applied, but the neighboring pixels restored after being encoded in the spatial prediction may be pixels to which filtering is not applied.

The subtractor 125 may generate a residual block by calculating a difference between the target block and a prediction block obtained from temporal prediction or spatial prediction. The residual block may be a block from which much redundancy has been removed by the prediction process, but since prediction cannot be perfectly performed, the residual block may be a block including information to be encoded.
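The subtraction (and the decoder-side addition that undoes it) can be sketched as follows, assuming blocks are 2-D lists of pixel values:

```python
def residual_block(target, prediction):
    """Residual = target block minus prediction block; this is what is
    subsequently transformed, quantized, and entropy-coded."""
    return [[t - p for t, p in zip(t_row, p_row)]
            for t_row, p_row in zip(target, prediction)]

def reconstruct_block(prediction, residual):
    """Decoder side: adding the (de-quantized) residual back onto the
    prediction recovers the target block."""
    return [[p + r for p, r in zip(p_row, r_row)]
            for p_row, r_row in zip(prediction, residual)]
```

The better the prediction, the closer the residual is to zero and the fewer bits it costs to encode.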

The transformer 130 may transform the residual block after the intra prediction or the inter prediction to remove spatial redundancy and output transform coefficients of a frequency domain. At this time, the transformed unit is a Transform Unit (TU), and may be determined independently of the prediction unit. For example, a frame including a plurality of residual blocks may be divided into a plurality of transform units independently of a prediction unit, and the transformer 130 may perform transformation on the respective transform units. The partition of the transform unit may be determined according to bit rate optimization.
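As an illustration of the transform step, here is a direct (unoptimized) orthonormal 2-D DCT-II, the transform family commonly used for this purpose; the disclosure does not mandate a specific transform, so this choice is an assumption:

```python
import math

def dct_2d(block):
    """Orthonormal 2-D DCT-II of an N x N residual block, mapping
    spatially redundant pixels to frequency-domain coefficients."""
    n = len(block)

    def alpha(k):
        # Normalization factor making the transform orthonormal.
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out
```

For a smooth residual block, the energy concentrates in the low-frequency coefficients (top-left), which is what allows the subsequent quantizer to discard high-frequency detail cheaply.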
