360 DEG image/video processing method and device with rotation information

Document No.: 172866    Publication date: 2021-10-29

Note: This invention, "360 DEG image/video processing method and device with rotation information" (具有旋转信息的360°图像/视频处理方法及装置), was designed and created by 林鸿志, 黄昭智, 李佳盈, 林建良, and 张胜凯 on 2017-11-08. Abstract: The invention provides a video processing method comprising the following steps: receiving a current input frame having 360-degree image/video content represented in a 360-degree Virtual Reality (360VR) projection format, applying a content-oriented rotation to the 360-degree image/video content in the current input frame to generate a content-rotated frame having rotated 360-degree image/video content represented in the 360VR projection format, encoding the content-rotated frame to generate a bitstream, and signaling at least one syntax element through the bitstream, wherein the at least one syntax element is set as rotation information indicating the content-oriented rotation.

1. A video processing method, comprising:

receiving a bitstream;

processing the bitstream to obtain at least one syntax element from the bitstream; and

decoding the bitstream to generate a current decoded frame having rotated 360 degree image/video content represented in a 360 degree virtual reality projection format;

wherein the at least one syntax element signaled by the bitstream indicates rotation information for the content-oriented rotation involved in generating the rotated 360 degree image/video content and comprises:

a first syntax element indicating a rotation angle along a particular rotation axis when the content-oriented rotation is enabled; and

a second syntax element indicating whether the content-oriented rotation is enabled.

2. The video processing method of claim 1, further comprising: rendering and displaying output image data on a display screen according to the current decoded frame and the rotation information.

3. A video processing method, comprising:

receiving a bitstream;

processing the bitstream to obtain at least one syntax element from the bitstream; and

decoding the bitstream to generate a current decoded frame having rotated 360 degree image/video content represented in a 360 degree virtual reality projection format;

wherein the at least one syntax element signaled by the bitstream indicates rotation information of the content-oriented rotation involved in generating the rotated 360 degree image/video content and comprises:

a first syntax element indicating whether the content-oriented rotation involved in generating the rotated 360 degree image/video content in the current decoded frame is the same as a content-oriented rotation involved in generating the rotated 360 degree image/video content in at least one previously decoded frame; and

a second syntax element indicating whether there is rotation along a particular axis of rotation when the content-oriented rotation involved in generating the rotated 360 degree image/video content in the current decoded frame is different from the content-oriented rotation involved in generating the rotated 360 degree image/video content in the at least one previously decoded frame.

4. The video processing method of claim 3, wherein the at least one syntax element further comprises:

a third syntax element indicating, when there is a rotation along the particular rotation axis, a difference between a rotation angle along the particular rotation axis of the content-oriented rotation involved in generating the rotated 360 degree image/video content in the current decoded frame and a rotation angle along the particular rotation axis of the content-oriented rotation involved in generating the rotated 360 degree image/video content in the at least one previously decoded frame.

5. The video processing method of claim 3, wherein the at least one syntax element further comprises:

a third syntax element having a first value when the 360-degree virtual reality projection format is a first projection format and a second value when the 360-degree virtual reality projection format is a second projection format different from the first projection format, the particular axis of rotation varying according to the third syntax element.

6. A video processing apparatus comprising:

a video decoder for receiving a bitstream, processing the bitstream to obtain at least one syntax element from the bitstream, and decoding the bitstream to generate a current decoded frame having rotated 360 degree image/video content represented in a 360 degree virtual reality projection format;

wherein the at least one syntax element signaled by the bitstream indicates rotation information for the content-oriented rotation involved in generating the rotated 360 degree image/video content and comprises:

a first syntax element indicating a rotation angle along a particular rotation axis when the content-oriented rotation is enabled; and

a second syntax element indicating whether the content-oriented rotation is enabled.

7. The video processing apparatus of claim 6, further comprising:

an image rendering circuit for rendering and displaying output image data on a display screen according to the current decoded frame and the rotation information.

Technical Field

The present invention relates to 360-degree image/video content processing. More particularly, the present invention relates to a method and apparatus having a video encoding function with syntax element signaling of rotation information for a content-oriented rotation applied to 360-degree image/video content represented in a projection format, and to a method and apparatus having the related video decoding function.

Background

Virtual Reality (VR) with head-mounted displays (HMDs) is associated with a variety of applications. The ability to show a wide field of view of content to a user can be used to provide an immersive visual experience. A real-world environment has to be captured in all directions, resulting in an omnidirectional video corresponding to the field of view. With advances in cameras and HMDs, the delivery of VR content may soon encounter bottlenecks due to the high bit rate required to represent content such as 360-degree images/video. When the resolution of the omnidirectional video is 4K or higher, data compression/encoding is critical to reducing the bit rate.

Typically, an omnidirectional video corresponding to a field of view is converted into a sequence of images, each represented in a 360-degree Virtual Reality (360VR) projection format, and the resulting sequence of images is then encoded into a bitstream for transmission. However, the original 360-degree image/video content represented in the 360VR projection format may have poor compression efficiency because moving objects are segmented and/or stretched by the applied 360VR projection format. Accordingly, there is a need for an innovative design that can improve the compression efficiency of 360-degree image/video content represented in a 360VR projection format.

Disclosure of Invention

It is an object of the present invention to provide a method and apparatus having a video encoding function with syntax element signaling of rotation information for a content-oriented rotation applied to 360-degree image/video content represented in a projection format, and a method and apparatus having the related video decoding function.

According to a first aspect of the invention, a video processing method is disclosed. The video processing method comprises the following steps: receiving a current input frame having 360-degree image/video content represented in a 360-degree Virtual Reality (360VR) projection format, applying a content-oriented rotation to the 360-degree image/video content in the current input frame to generate a content-rotated frame having the rotated 360-degree image/video content represented in the 360VR projection format, encoding the content-rotated frame to generate a bitstream, and signaling at least one syntax element through the bitstream, wherein the at least one syntax element is set as rotation information indicating the content-oriented rotation.

According to a second aspect of the invention, a video processing method is disclosed. The video processing method includes receiving a bitstream, processing the bitstream to obtain at least one syntax element from the bitstream, decoding the bitstream to generate a current decoded frame having rotated 360-degree image/video content represented in a 360-degree Virtual Reality (360VR) projection format, and rendering and displaying output image data on a display screen according to the current decoded frame and rotation information of a content-oriented rotation indicated by the at least one syntax element, wherein the content-oriented rotation is involved in generating the rotated 360-degree image/video content.

According to a third aspect of the present invention, a video processing apparatus is disclosed. The video processing apparatus includes a content-oriented rotation circuit and a video encoder. The content-oriented rotation circuit is to receive a current input frame having 360-degree image/video content represented in a 360-degree virtual reality projection format and apply a content-oriented rotation to the 360-degree image/video content to generate a content-rotated frame having rotated 360-degree image/video content represented in the 360-degree virtual reality projection format. The video encoder is configured to encode the content-rotated frame to generate a bitstream and to signal at least one syntax element through the bitstream, wherein the at least one syntax element is set as rotation information indicating the content-oriented rotation.

According to a fourth aspect of the present invention, a video processing apparatus is disclosed. The video processing apparatus includes a video decoder and an image rendering circuit. The video decoder is to receive a bitstream, process the bitstream to obtain at least one syntax element from the bitstream, and decode the bitstream to generate a current decoded frame having rotated 360-degree image/video content represented in a 360-degree virtual reality projection format. The image rendering circuit is to render and display output image data on the display screen according to the current decoded frame and rotation information of the content-oriented rotation indicated by the at least one syntax element, wherein the content-oriented rotation is involved in generating the rotated 360-degree image/video content.

These and other objects of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiments illustrated in the various figures and drawings.

Drawings

FIG. 1 is a schematic diagram of a 360-degree Virtual Reality (360VR) system according to an embodiment of the invention.

FIG. 2 is a conceptual diagram of the proposed content-oriented rotation according to an embodiment of the present invention.

FIG. 3 is a schematic diagram of a video encoder according to an embodiment of the present invention.

FIG. 4 is a schematic diagram of performing content-oriented rotations using different rotation orders with the same rotation angles according to an embodiment of the present invention.

FIG. 5 is a schematic diagram of a video decoder according to an embodiment of the present invention.

Detailed Description

Certain terms are used throughout the following description and claims to refer to particular components. As one skilled in the art will recognize, electronic device manufacturers may refer to components by different names. This document does not intend to distinguish between components that differ in name but not in function. In the following description and claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to". Furthermore, the term "coupled" is used to indicate either an indirect or a direct electrical connection. Thus, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.

FIG. 1 is a schematic diagram of a 360-degree Virtual Reality (360VR) system according to an embodiment of the invention. The 360VR system 100 includes a source electronic device 102 and a target electronic device 104. The source electronic device 102 includes a video capture device 112, a conversion circuit 114, a content-oriented rotation circuit 116, and a video encoder 118. For example, the video capture device 112 may be a set of cameras for providing omnidirectional image content S_IN (e.g., multiple images covering the entire surrounding environment) corresponding to a spherical field of view (viewing sphere). The conversion circuit 114 generates, from the omnidirectional image content S_IN, a current input frame IMG in a 360-degree virtual reality (360VR) projection format L_VR. In this example, the conversion circuit 114 generates one input frame for each video frame of the 360-degree video provided by the video capture device 112. The 360VR projection format L_VR applied by the conversion circuit 114 may be any available projection format, including an equirectangular projection (ERP) format, a cubemap projection (CMP) format, an octahedron projection (OHP) format, an icosahedron projection (ISP) format, and so on. The content-oriented rotation circuit 116 receives the current input frame IMG (which has 360-degree image/video content represented in the 360VR projection format L_VR) and applies a content-oriented rotation to the 360-degree image/video content in the current input frame IMG to generate a content-rotated frame IMG' having rotated 360-degree image/video content represented in the same 360VR projection format L_VR. Further, the rotation information INF_R of the applied content-oriented rotation is provided to the video encoder 118 for syntax element signaling.

FIG. 2 is a conceptual diagram of the proposed content-oriented rotation according to an embodiment of the present invention. For clarity and simplicity, assume that the 360VR projection format L_VR is the ERP format. Thus, the 360-degree image/video content of the spherical field of view 202 is mapped onto a rectangular projection plane via an equirectangular projection of the spherical field of view 202. In this way, the current input frame IMG having 360-degree image/video content represented in the ERP format is generated by the conversion circuit 114. As described above, the original 360-degree image/video content represented in the 360VR projection format may have poor compression efficiency because moving objects are segmented and/or stretched by the applied 360VR projection format. To address this problem, the present invention proposes applying a content-oriented rotation to the 360-degree image/video content to improve coding efficiency.

Further, an example of calculating pixel values at pixel positions in the content-rotated frame IMG' is shown in FIG. 2. For a pixel location c_o in the content-rotated frame IMG' with coordinates (x, y), the 2D coordinates (x, y) may be mapped to a 3D coordinate s (a point on the spherical field of view 202) by a 2D-to-3D mapping process. Then, after performing the content-oriented rotation, the 3D coordinate s is converted into another 3D coordinate s' (a point on the rotated spherical field of view 202). The content-oriented rotation may be achieved by rotation matrix multiplication. Finally, through a 3D-to-2D mapping process, the corresponding 2D coordinate c_i' = (x_i', y_i') in the current input frame IMG may be found. Thus, for each integer pixel position c_o = (x, y) in the content-rotated frame IMG', a corresponding location c_i' = (x_i', y_i') in the current input frame IMG may be found by a 2D-to-3D mapping from the content-rotated frame IMG' to the spherical field of view 202, a rotation matrix multiplication for rotating the spherical field of view 202, and a 3D-to-2D mapping from the rotated spherical field of view 202 to the current input frame IMG. If (x_i', y_i') is a non-integer position, an interpolation filter (not shown) may be applied to the integer pixels surrounding the point c_i' = (x_i', y_i') in the current input frame IMG to derive the pixel value of the point c_o = (x, y) in the content-rotated frame IMG'.
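The per-pixel pipeline just described (2D-to-3D mapping, rotation matrix multiplication, 3D-to-2D mapping, then interpolation) can be sketched in Python for the ERP case. The function names, the exact longitude/latitude conventions, and the nearest-neighbour sampling are illustrative simplifications, not the patent's implementation.

```python
import math

def erp_2d_to_3d(x, y, w, h):
    """Map ERP pixel (x, y) to a unit vector on the viewing sphere."""
    lon = (x + 0.5) / w * 2.0 * math.pi - math.pi   # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (y + 0.5) / h * math.pi   # latitude in [-pi/2, pi/2]
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

def erp_3d_to_2d(v, w, h):
    """Map a unit vector back to ERP pixel coordinates (may be non-integer)."""
    x3, y3, z3 = v
    lon = math.atan2(y3, x3)
    lat = math.asin(max(-1.0, min(1.0, z3)))
    x = (lon + math.pi) / (2.0 * math.pi) * w - 0.5
    y = (math.pi / 2.0 - lat) / math.pi * h - 0.5
    return x, y

def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def rotated_frame(src, w, h, rot):
    """For each pixel c_o of the content-rotated frame, find the source
    pixel c_i' in the input frame via 2D->3D, rotation, 3D->2D."""
    dst = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = erp_2d_to_3d(x, y, w, h)
            s_rot = mat_vec(rot, s)                 # rotation matrix multiplication
            xi, yi = erp_3d_to_2d(s_rot, w, h)
            # nearest-neighbour in place of a proper interpolation filter
            dst[y][x] = src[int(round(yi)) % h][int(round(xi)) % w]
    return dst
```

With the identity matrix as `rot`, the output frame equals the input frame, which is a quick sanity check of the two mappings being mutual inverses.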

In contrast to a conventional video encoder that would encode the current input frame IMG into a bitstream for transmission, the video encoder 118 encodes the content-rotated frame IMG' into a bitstream BS, which is then output to the target electronic device 104 via a transmission device 103, such as a wired/wireless communication link or a storage medium. In particular, the video encoder 118 generates one encoded frame for each content-rotated frame output from the content-oriented rotation circuit 116. Thus, successive encoded frames are sequentially generated by the video encoder 118. In addition, the rotation information INF_R of the content-oriented rotation performed by the content-oriented rotation circuit 116 is supplied to the video encoder 118. Thus, the video encoder 118 also signals syntax elements through the bitstream BS, wherein the syntax elements are set as the rotation information INF_R indicating the content-oriented rotation applied to the current input frame IMG.

FIG. 3 is a schematic diagram of a video encoder according to an embodiment of the present invention. The video encoder 118 shown in FIG. 1 may be implemented by the video encoder 300 shown in FIG. 3. The video encoder 300 is a hardware circuit for compressing raw video data to generate compressed video data. As shown in FIG. 3, the video encoder 300 includes a control circuit 302 and an encoding circuit 304. It should be noted that the video encoder architecture shown in FIG. 3 is for illustrative purposes only and is not meant to limit the present invention. For example, the architecture of the encoding circuit 304 may vary depending on the coding standard. The encoding circuit 304 encodes the content-rotated frame IMG' having rotated 360-degree image/video content represented in the 360VR projection format L_VR to generate the bitstream BS. As shown in FIG. 3, the encoding circuit 304 includes a residual calculation circuit 311, a transform circuit (denoted by "T") 312, a quantization circuit (denoted by "Q") 313, an entropy encoding circuit (e.g., a variable-length encoder) 314, an inverse quantization circuit (denoted by "IQ") 315, an inverse transform circuit (denoted by "IT") 316, a reconstruction circuit 317, at least one in-loop filter 318, a reference frame buffer 319, an inter prediction circuit 320 (which includes a motion estimation circuit (denoted by "ME") 321 and a motion compensation circuit (denoted by "MC") 322), an intra prediction circuit (denoted by "IP") 323, and an intra/inter mode selection switch 324. Since the basic functions and operations of these circuit components in the encoding circuit 304 are well known to those skilled in the art, further description is omitted here for brevity.

The main difference between the video encoder 300 and a conventional video encoder is that the control circuit 302 is configured to receive the rotation information INF_R from a preceding circuit (e.g., the content-oriented rotation circuit 116 shown in FIG. 1) and to set at least one syntax element (SE) according to the rotation information INF_R, wherein the syntax element(s) indicating the rotation information INF_R will be signaled to the video decoder in the bitstream BS generated by the entropy encoding circuit 314. In this way, the target electronic device 104 (which has a video decoder) can learn the details of the encoder-side content-oriented rotation from the signaled syntax element(s) and may, for example, perform a decoder-side inverse content-oriented rotation to obtain the video data required for rendering and display.

The content-oriented rotation performed by the content-oriented rotation circuit 116 may be specified by rotation axes, a rotation order, and rotation angles. The content-oriented rotation may include elementary rotations along a set of rotation axes in a rotation order, wherein the rotation order specifies the order of the rotation axes used by the content-oriented rotation, and each elementary rotation along a corresponding rotation axis is performed with a specific rotation angle. For example, the rotation axes may be three orthogonal axes (e.g., the x-axis, the y-axis, and the z-axis) in a Cartesian coordinate system, and the rotation order may be the commonly used yaw-pitch-roll order (e.g., z-y-x). However, these are for illustrative purposes only and are not meant to be limitations of the present invention. For example, the rotation axes need not be orthogonal axes. As another example, the number of rotation axes and the number of rotation angles may be adjusted. In the case where only one rotation axis is involved in the content-oriented rotation, the rotation order may be omitted.

It should be noted that different rotation orders with the same rotation angles may produce different results. FIG. 4 is a schematic diagram of performing content-oriented rotations using different rotation orders with the same rotation angles according to an embodiment of the present invention. The content-oriented rotation in the first example Ex1 includes a 30° rotation about the y-axis followed by a 30° rotation about the z-axis. Another content-oriented rotation in the second example Ex2 includes a 30° rotation about the z-axis followed by a 30° rotation about the y-axis. As shown in FIG. 4, according to the content-oriented rotation in the first example Ex1, the image/video content located at (x, y, z) = (1, 0, 0) is rotated to one point on the sphere, while according to the content-oriented rotation in the second example Ex2, the same image/video content located at (x, y, z) = (1, 0, 0) is rotated to a different point. Therefore, in addition to the rotation axes and the associated rotation angles, the rotation order of the content-oriented rotation also needs to be precisely defined.
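The non-commutativity illustrated by Ex1 and Ex2 can be checked numerically. The matrix conventions below (right-handed axes, active rotations, angles in degrees) are assumptions made for illustration; FIG. 4 may use a different sign convention.

```python
import math

def rot_y(deg):
    """Rotation matrix for a rotation of `deg` degrees about the y-axis."""
    a = math.radians(deg)
    return ((math.cos(a), 0.0, math.sin(a)),
            (0.0, 1.0, 0.0),
            (-math.sin(a), 0.0, math.cos(a)))

def rot_z(deg):
    """Rotation matrix for a rotation of `deg` degrees about the z-axis."""
    a = math.radians(deg)
    return ((math.cos(a), -math.sin(a), 0.0),
            (math.sin(a), math.cos(a), 0.0),
            (0.0, 0.0, 1.0))

def mat_mul(a, b):
    return tuple(tuple(sum(a[r][k] * b[k][c] for k in range(3))
                       for c in range(3)) for r in range(3))

def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

p = (1.0, 0.0, 0.0)
ex1 = mat_vec(mat_mul(rot_z(30), rot_y(30)), p)  # y-axis first, then z-axis
ex2 = mat_vec(mat_mul(rot_y(30), rot_z(30)), p)  # z-axis first, then y-axis
# The two orders move (1, 0, 0) to different points on the unit sphere.
```

Both results remain unit vectors, but they differ, confirming that the rotation order must be signaled or predefined.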

Each rotation axis may be predefined (e.g., defined in the specification text) at the encoder side and the decoder side. In this case, information of the rotation axes (or the single rotation axis) used by the content-oriented rotation performed by the content-oriented rotation circuit 116 does not need to be signaled through the bitstream BS. Alternatively, each rotation axis may be actively set by the content-oriented rotation circuit 116. In that case, information of the rotation axes (or the single rotation axis) used by the content-oriented rotation performed by the content-oriented rotation circuit 116 needs to be signaled through the bitstream BS.

The rotation order may be predefined (e.g., defined in the specification text) at the encoder side and the decoder side. In this case, information of the rotation order used by the content-oriented rotation performed by the content-oriented rotation circuit 116 does not need to be signaled through the bitstream BS. Alternatively, the rotation order may be actively set by the content-oriented rotation circuit 116. In that case, information of the rotation order used by the content-oriented rotation performed by the content-oriented rotation circuit 116 needs to be signaled through the bitstream BS.

The rotation angle associated with each rotation axis may vary from frame to frame. Therefore, information of the rotation angles (or the single rotation angle) used by the content-oriented rotation performed by the content-oriented rotation circuit 116 needs to be signaled through the bitstream BS.

As described above, the syntax element SE is set as the rotation information INF_R indicating the content-oriented rotation applied to the current input frame IMG. In the first case, where the rotation axes are predefined at the encoder side and the decoder side, the rotation information INF_R provided by the content-oriented rotation circuit 116 to the video encoder 118 includes the rotation order and the rotation angles, which will be indicated by the syntax element(s) signaled from the encoder side to the decoder side. In the second case, where the rotation axes and the rotation order are predefined at the encoder side and the decoder side, the rotation information INF_R provided to the video encoder 118 by the content-oriented rotation circuit 116 includes the rotation angles, which will be indicated by the syntax element(s) signaled from the encoder side to the decoder side. In the third case, where the rotation order is predefined at the encoder side and the decoder side, the rotation information INF_R provided to the video encoder 118 by the content-oriented rotation circuit 116 includes the rotation axes and the rotation angles, which will be indicated by the syntax element(s) signaled from the encoder side to the decoder side. In the fourth case, where neither the rotation axes nor the rotation order is predefined at the encoder side and the decoder side, the rotation information INF_R provided to the video encoder 118 by the content-oriented rotation circuit 116 includes the rotation axes, the rotation order, and the rotation angles, which will be indicated by the syntax element(s) signaled from the encoder side to the decoder side.
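The four cases above reduce to a simple rule: whatever is not predefined on both sides must be signaled, and the per-frame angles are always signaled. A small sketch makes this concrete; the field names are hypothetical labels, not syntax elements from any standard.

```python
def fields_to_signal(axes_predefined, order_predefined):
    """Return which parts of the rotation information INF_R must be
    carried by syntax elements, given what encoder and decoder already
    agree on.  The rotation angles vary per frame, so they are always
    signaled."""
    fields = []
    if not axes_predefined:
        fields.append("rotation_axes")
    if not order_predefined:
        fields.append("rotation_order")
    fields.append("rotation_angles")
    return fields

# The four cases discussed above:
case1 = fields_to_signal(axes_predefined=True, order_predefined=False)
case2 = fields_to_signal(axes_predefined=True, order_predefined=True)
case3 = fields_to_signal(axes_predefined=False, order_predefined=True)
case4 = fields_to_signal(axes_predefined=False, order_predefined=False)
```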

Referring again to FIG. 1, the target electronic device 104 may be a head-mounted display (HMD) device. As shown in FIG. 1, the target electronic device 104 includes a video decoder 122, an image rendering circuit 124, and a display screen 126. The video decoder 122 receives the bitstream BS from the transmission device 103 (e.g., a wired/wireless communication link or a storage medium) and decodes the received bitstream BS to generate a current decoded frame IMG''. Specifically, the video decoder 122 generates one decoded frame for each encoded frame received via the transmission device 103. Thus, successive decoded frames are sequentially generated by the video decoder 122. In the present embodiment, the content-rotated frame IMG' encoded by the video encoder 118 has a 360VR projection format. Thus, after the video decoder 122 decodes the bitstream BS, the current decoded frame (i.e., the reconstructed frame) IMG'' has the same 360VR projection format.

FIG. 5 is a schematic diagram of a video decoder according to an embodiment of the present invention. The video decoder 122 shown in FIG. 1 may be implemented by the video decoder 500 shown in FIG. 5. The video decoder 500 may communicate with a video encoder (e.g., the video encoder 118 shown in FIG. 1) via a transmission device such as a wired/wireless communication link or a storage medium. The video decoder 500 is a hardware circuit for decompressing compressed image/video data to generate decompressed image/video data. In the present embodiment, the video decoder 500 receives the bitstream BS and decodes the received bitstream BS to generate the current decoded frame IMG''. As shown in FIG. 5, the video decoder 500 includes a decoding circuit 520 and a control circuit 530. It should be noted that the video decoder architecture shown in FIG. 5 is for illustrative purposes only and is not meant to limit the present invention. For example, the architecture of the decoding circuit 520 may vary depending on the coding standard. The decoding circuit 520 includes an entropy decoding circuit (e.g., a variable-length decoder) 502, an inverse quantization circuit (denoted by "IQ") 504, an inverse transform circuit (denoted by "IT") 506, a reconstruction circuit 508, a motion vector calculation circuit (denoted by "MV calculation") 510, a motion compensation circuit (denoted by "MC") 513, an intra prediction circuit (denoted by "IP") 514, an intra/inter mode selection switch 516, at least one in-loop filter 518, and a reference frame buffer 522. Since the basic functions and operations of these circuit components in the decoding circuit 520 are well known to those skilled in the art, further description is omitted here for brevity.

The main difference between the video decoder 500 and a conventional video decoder is that the entropy decoding circuit 502 is also configured to perform data processing (e.g., syntax parsing) on the bitstream BS to obtain the syntax element(s) SE signaled in the bitstream BS and to output the obtained syntax element(s) SE to the control circuit 530. Thus, for the current decoded frame IMG'' corresponding to the content-rotated frame IMG' generated from the current input frame IMG, the control circuit 530 may refer to the syntax element(s) SE to determine the rotation information INF_R of the encoder-side content-oriented rotation applied to the current input frame IMG.

As described above, the current decoded frame IMG'' has rotated 360-degree image/video content represented in a 360VR projection format. In the present embodiment, the syntax element SE obtained from the bitstream BS indicates the rotation information INF_R of the content-oriented rotation involved in generating the rotated 360-degree image/video content represented in the 360VR projection format. In the first case, where the rotation axes are predefined at the encoder side and the decoder side (in particular, the content-oriented rotation circuit 116 and the image rendering circuit 124), the rotation information INF_R provided by the control circuit 530 includes the rotation order and the rotation angles indicated by the signaled syntax element(s). In the second case, where the rotation axes and the rotation order are predefined at the encoder side and the decoder side (in particular, the content-oriented rotation circuit 116 and the image rendering circuit 124), the rotation information INF_R provided by the control circuit 530 includes the rotation angles indicated by the signaled syntax element(s). In the third case, where the rotation order is predefined at the encoder side and the decoder side (in particular, the content-oriented rotation circuit 116 and the image rendering circuit 124), the rotation information INF_R provided by the control circuit 530 includes the rotation axes and the rotation angles indicated by the signaled syntax element(s). In the fourth case, where neither the rotation axes nor the rotation order is predefined at the encoder side and the decoder side (in particular, the content-oriented rotation circuit 116 and the image rendering circuit 124), the rotation information INF_R provided by the control circuit 530 includes the rotation axes, the rotation order, and the rotation angles indicated by the signaled syntax element(s).

The image rendering circuit 124 renders and displays the output image data on the display screen 126 according to the current decoded frame IMG'' and the rotation information INF_R of the content-oriented rotation involved in generating the rotated 360-degree image/video content. For example, according to the rotation information INF_R derived from the signaled syntax element(s) SE, the rotated 360-degree image/video content represented in the 360VR projection format may be inversely rotated, and the resulting de-rotated 360-degree image/video content represented in the 360VR projection format may be used for rendering and display.
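The decoder-side inverse rotation mentioned above follows from the orthogonality of rotation matrices: the inverse of a rotation is simply its transpose. A minimal sketch, assuming a single z-axis rotation as the encoder-side content-oriented rotation:

```python
import math

def rot_z(deg):
    """Rotation matrix for a rotation of `deg` degrees about the z-axis."""
    a = math.radians(deg)
    return ((math.cos(a), -math.sin(a), 0.0),
            (math.sin(a), math.cos(a), 0.0),
            (0.0, 0.0, 1.0))

def mat_vec(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def inverse_rotation(rot):
    # A rotation matrix is orthogonal, so its inverse is its transpose.
    return tuple(tuple(rot[c][r] for c in range(3)) for r in range(3))

p = (1.0, 0.0, 0.0)
rotated = mat_vec(rot_z(30), p)                           # encoder-side rotation
restored = mat_vec(inverse_rotation(rot_z(30)), rotated)  # decoder-side undo
```

When the content-oriented rotation is composed of several elementary rotations, the decoder applies the transposed matrices in the reverse order, which is why the rotation order must be known at the decoder.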

In order to better understand the technical features of the present invention, several exemplary syntax signaling methods are described below. The video encoder 118/300 may use one of the proposed syntax signaling methods to signal the syntax element(s) SE indicating the rotation information INF_R of the content-oriented rotation applied to the 360-degree image/video content represented in the 360VR projection format, and the video decoder 122/500 may refer to the syntax element(s) SE signaled by whichever of the proposed syntax signaling methods is applied by the video encoder 118/300 to determine the rotation information INF_R of the content-oriented rotation involved in generating the rotated 360-degree image/video content represented in the 360VR projection format.

It should be noted that the descriptor in the following exemplary syntax tables specifies the parsing process of each syntax element. In particular, syntax elements may be coded by fixed-length codes (e.g., f(n), i(n), or u(n)) and/or variable-length codes (e.g., ce(v), se(v), or ue(v)). Descriptor f(n) denotes a fixed-pattern bit string using n bits, written from the left bit (left to right). Descriptor i(n) denotes a signed integer using n bits. Descriptor u(n) denotes an unsigned integer using n bits. Descriptor ce(v) denotes a context-adaptive variable-length entropy-coded syntax element with the left bit first. Descriptor se(v) denotes a signed integer Exp-Golomb-coded syntax element with the left bit first. Descriptor ue(v) denotes an unsigned integer Exp-Golomb-coded syntax element with the left bit first.
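For reference, a minimal sketch of the standard ue(v)/se(v) Exp-Golomb coding used by H.264/H.265 (the se(v) mapping sends v>0 to 2v-1 and v<=0 to -2v):

```python
def ue_encode(k):
    # unsigned Exp-Golomb: (len-1) leading zeros, then k+1 in binary
    b = bin(k + 1)[2:]
    return '0' * (len(b) - 1) + b

def ue_decode(bits):
    # returns (value, number_of_bits_consumed)
    zeros = 0
    while bits[zeros] == '0':
        zeros += 1
    value = int(bits[zeros:2 * zeros + 1], 2) - 1
    return value, 2 * zeros + 1

def se_encode(v):
    # signed mapping: v>0 -> 2v-1, v<=0 -> -2v, then code as ue(v)
    k = 2 * v - 1 if v > 0 else -2 * v
    return ue_encode(k)

def se_decode(bits):
    k, n = ue_decode(bits)
    v = (k + 1) // 2 if k % 2 else -(k // 2)
    return v, n
```

For example, ue(0) is the single bit '1', ue(3) is '00100', and se(-3) is '00111'.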

According to the first syntax signaling method, the following syntax table may be used.

When the first syntax signaling method is applied, the rotation information of the content-oriented rotation may be indicated in a sequence-level header. H.264 and H.265 may have a plurality of Sequence Parameter Sets (SPS)/Picture Parameter Sets (PPS) referred to by each slice. Each slice may obtain its codec parameters from the SPS/PPS identified by its SPS/PPS Identifier (ID). Accordingly, the rotation information of the content-oriented rotation may be indicated in the SPS/PPS or in Supplemental Enhancement Information (SEI) by the signaled rotation angle of each rotation axis. When decoding a video frame, the video decoder 122 may obtain the rotation information by referring to the corresponding SPS/PPS ID or SEI.

The syntax element zero_yaw_orientation is set to indicate whether there is a rotation along the yaw axis (e.g., z-axis). The syntax element zero_roll_orientation is set to indicate whether there is a rotation along the roll axis (e.g., x-axis). The syntax element zero_pitch_orientation is set to indicate whether there is a rotation along the pitch axis (e.g., y-axis). When there is a rotation along the yaw axis (i.e., !zero_yaw_orientation == True), the syntax element yaw_orientation_index is set to an index value selected from a plurality of predefined index values that are respectively mapped to different predefined rotation angles and a user-defined rotation angle. For example, the mapping between index values and rotation angles may be defined by the following table.

If the rotation angle of the rotation along the yaw axis is not indexed by any of "000"-"110" (i.e., yaw_orientation_index == '111'), the user-defined rotation angle is signaled by setting the syntax element yaw_orientation_degree.

When there is a rotation along the roll axis (i.e., !zero_roll_orientation == True), the syntax element roll_orientation_index is set to an index value selected from the predefined index values listed in the above table. If the rotation angle of the rotation along the roll axis is not indexed by any of "000"-"110" (i.e., roll_orientation_index == '111'), the user-defined rotation angle is signaled by setting the syntax element roll_orientation_degree.

When there is a rotation along the pitch axis (i.e., !zero_pitch_orientation == True), the syntax element pitch_orientation_index is set to an index value selected from the predefined index values listed in the above table. If the rotation angle of the rotation along the pitch axis is not indexed by any of "000"-"110" (i.e., pitch_orientation_index == '111'), the user-defined rotation angle is signaled by setting the syntax element pitch_orientation_degree.
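The per-axis parsing described above can be sketched as follows. Since the index-to-angle table itself is elided here, the table below is purely hypothetical; only the zero flag and the '111' escape to a user-defined degree follow the text, and the 9-bit offset coding of the user-defined degree is an assumption.

```python
# Hypothetical index-to-angle table; the actual mapping in the elided
# table of the specification may differ.
PREDEFINED_ANGLES = {0: 0, 1: 45, 2: 90, 3: 135, 4: 180, 5: -135, 6: -90}
USER_DEFINED = 7  # index '111' escapes to an explicitly signaled degree

class BitReader:
    def __init__(self, bits):
        self.bits, self.pos = bits, 0
    def read(self, n):
        v = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return v

def parse_axis_rotation(read_bits):
    """read_bits(n) returns the next n bits as an unsigned integer."""
    if read_bits(1):          # zero_*_orientation: no rotation on this axis
        return 0
    idx = read_bits(3)        # *_orientation_index
    if idx == USER_DEFINED:
        return read_bits(9) - 180   # *_orientation_degree (assumed coding)
    return PREDEFINED_ANGLES[idx]
```

The same routine would be called once per axis, in the yaw, roll, pitch order of the syntax table.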

The ranges of rotation angles for the three axes need not all span from -180° to 180° (i.e., 0°-360°) in order to represent all possible content-oriented rotations. In practice, letting one of the rotations range from -90° to 90° (i.e., 0°-180°) while the remaining two range from -180° to 180° (i.e., 0°-360°) is sufficient to represent any content-oriented rotation. In the first syntax signaling method, the rotation angle is assumed to be an integer value. The user-defined rotation angles of the first rotation axis (e.g., yaw axis or z-axis) and the second rotation axis (e.g., roll axis or x-axis) in the rotation order (e.g., yaw-roll-pitch (z-x-y)) are each signaled with 9 bits to indicate a rotation angle in the range from -180° to 180° (i.e., 0°-360°). However, for the third rotation axis (e.g., pitch axis or y-axis) in the rotation order, the user-defined rotation angle ranges only from -90° to 90° (i.e., 0°-180°). Thus, 8 bits are sufficient to represent the user-defined rotation angle of the third rotation axis.
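The bit-width arithmetic above can be checked with a one-liner, treating integer degrees over a half-open range (since -180° and 180° denote the same rotation, a full turn has 360 distinct integer values):

```python
def angle_bits(lo, hi):
    # number of bits needed to code an integer angle in the half-open
    # range [lo, hi), i.e. hi - lo distinct values
    return (hi - lo - 1).bit_length()
```

With 360 values (range [-180, 180)) this yields 9 bits, and with 180 values (range [-90, 90)) it yields 8 bits, matching the text.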

According to the second syntax signaling method, the following syntax table may be used.

When the second syntax signaling method is applied, the rotation information of the content-oriented rotation may be indicated in a sequence-level header for a duration (time duration) of video frames. For example, the Audio Video coding Standard (AVS) has an SPS for a duration of video frames, and the video frames within the same duration share the same sequence-level codec parameters. Thus, the rotation information of the content-oriented rotation may be indicated for the current duration of video frames and may be updated for the next duration of video frames. In some embodiments of the invention, the rotation information of the content-oriented rotation may be indicated in the SPS/PPS or in Supplemental Enhancement Information (SEI) for a duration of video frames. Alternatively, when the second syntax signaling method is applied, the rotation information of the content-oriented rotation may be indicated in a picture-level header, such that the rotation information of the content-oriented rotation is signaled for each video frame.

The syntax element prev_orientation is set to indicate whether the content-oriented rotation applied to the current input frame is the same as the content-oriented rotation applied to at least one previous input frame. For example, in the case where the rotation information of the content-oriented rotation is indicated in a sequence-level header for a duration of video frames, the current input frame may be the first video frame within the current duration, and each of the at least one previous input frame may be a video frame within the previous duration, where the current duration immediately follows the previous duration. In another embodiment, where the rotation information of the content-oriented rotation is indicated in the picture-level header of each video frame, the at least one previous input frame and the current input frame are two consecutive video frames. Hence, when the content-oriented rotation in the current duration of video frames is the same as the content-oriented rotation in the previous duration of video frames, only the 1-bit syntax element prev_orientation is signaled, saving the syntax bits that would otherwise represent the rotation angle information.

When the content-oriented rotation applied to the current input frame is different from the content-oriented rotation applied to the at least one previous input frame (i.e., !prev_orientation == True), the syntax element zero_yaw_orientation is set to indicate whether there is a rotation along the yaw axis (e.g., z-axis), the syntax element zero_roll_orientation is set to indicate whether there is a rotation along the roll axis (e.g., x-axis), and the syntax element zero_pitch_orientation is set to indicate whether there is a rotation along the pitch axis (e.g., y-axis).

When there is a rotation along the yaw axis (i.e., !zero_yaw_orientation == True), the syntax element yaw_orientation_diff is set to indicate the rotation angle difference along the yaw axis between the content-oriented rotation applied to the current input frame and the content-oriented rotation applied to the at least one previous input frame. When decoding a video frame, the video decoder 122 can determine the rotation angle along the yaw axis by adding the rotation angle difference signaled by the syntax element yaw_orientation_diff to the previously determined rotation angle.

When there is a rotation along the roll axis (i.e., !zero_roll_orientation == True), the syntax element roll_orientation_diff is set to indicate the rotation angle difference along the roll axis between the content-oriented rotation applied to the current input frame and the content-oriented rotation applied to the at least one previous input frame. When decoding a video frame, the video decoder 122 can determine the rotation angle along the roll axis by adding the rotation angle difference signaled by the syntax element roll_orientation_diff to the previously determined rotation angle.

When there is a rotation along the pitch axis (i.e., !zero_pitch_orientation == True), the syntax element pitch_orientation_diff is set to indicate the rotation angle difference along the pitch axis between the content-oriented rotation applied to the current input frame and the content-oriented rotation applied to the at least one previous input frame. When decoding a video frame, the video decoder 122 can determine the rotation angle along the pitch axis by adding the rotation angle difference signaled by the syntax element pitch_orientation_diff to the previously determined rotation angle.
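The differential reconstruction described in the preceding paragraphs can be sketched as follows. The wrap of the accumulated angle into [-180, 180) is an assumption; the text only specifies that the signaled difference is added to the previous angle.

```python
def wrap_degrees(a):
    # keep accumulated angles in [-180, 180); an assumed normalization
    return (a + 180) % 360 - 180

def update_rotation(prev, diffs, prev_orientation):
    """Decoder-side reconstruction of per-axis rotation angles.

    prev:             previous angles, e.g. {'yaw': 0, 'roll': 0, 'pitch': 0}
    diffs:            signaled *_orientation_diff values per axis; an axis
                      is absent when its zero_*_orientation flag is set
    prev_orientation: the 1-bit flag; when set, no diffs are signaled
    """
    if prev_orientation:
        return dict(prev)
    return {axis: wrap_degrees(prev[axis] + diffs.get(axis, 0))
            for axis in ('yaw', 'roll', 'pitch')}
```

For instance, a previous yaw of 170° plus a signaled difference of 20° wraps to -170° under this normalization.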

Each of the first syntax signaling method and the second syntax signaling method described above performs unified syntax signaling of the rotation information, regardless of which 360VR projection format is applied. Alternatively, the first syntax signaling method and the second syntax signaling method may be modified into projection-format-based syntax signaling methods. That is, the syntax signaling of the rotation information may depend on the applied 360VR projection format.

According to the third syntax signaling method, the following syntax table may be used.

According to the fourth syntax signaling method, the following syntax table may be used.

Different 360VR projection formats may have different appropriate rotation dimensions. For example, for the cube projection format, a single yaw rotation may be sufficient. For another example, for the equirectangular projection format, a single roll rotation may be sufficient. Therefore, the syntax element vr_content_format is set to "1" when the 360VR projection format is the cube projection format, and is set to "3" when the 360VR projection format is the equirectangular projection format. In the present embodiment, one rotation dimension is signaled in the syntax when vr_content_format is 1 or 3, and two rotation dimensions are signaled in the syntax when vr_content_format is 2. In short, for each of the third syntax signaling method and the fourth syntax signaling method, the selection of the rotation axes of the content-oriented rotation depends on the syntax element vr_content_format, which is set on the basis of the applied 360VR projection format. Since the details of the third syntax signaling method and the fourth syntax signaling method can be readily understood by those skilled in the art after reading the above paragraphs regarding the first syntax signaling method and the second syntax signaling method, further description is omitted here for brevity.
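The format-dependent dispatch described above can be summarized in a small sketch. The behavior for values other than 1, 2, and 3 is an assumption, since the syntax tables are elided here.

```python
# vr_content_format values named per the text
CUBE, TWO_AXIS, EQUIRECTANGULAR = 1, 2, 3

def signaled_rotation_dimensions(vr_content_format):
    # cube (1) and equirectangular (3): one rotation axis is signaled
    if vr_content_format in (CUBE, EQUIRECTANGULAR):
        return 1
    # format 2: two rotation axes are signaled
    if vr_content_format == TWO_AXIS:
        return 2
    # other formats: assume all three axes are signaled
    return 3
```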

According to the fifth syntax signaling method, the following syntax table may be used.

A 1-bit on/off flag (disable_content_orientation) is used to indicate whether content-oriented rotation of the 360-degree image/video content in the current input frame is enabled. The syntax element disable_content_orientation is set to "0" when the content-oriented rotation of the 360-degree image/video content in the current input frame is enabled, and is set to "1" when the content-oriented rotation is disabled. In the case where the content-oriented rotation is enabled (i.e., !disable_content_orientation == True), the syntax element roll_orientation_degree is set to indicate the rotation angle along the roll axis (e.g., x-axis), the syntax element yaw_orientation_degree is set to indicate the rotation angle along the yaw axis (e.g., z-axis), and the syntax element pitch_orientation_degree is set to indicate the rotation angle along the pitch axis (e.g., y-axis).
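A minimal parsing sketch of the fifth method follows. The 9/9/8-bit offset coding of the three degree fields is an assumption carried over from the bit-width discussion of the first method; only the 1-bit disable flag and the roll/yaw/pitch field order follow the text.

```python
class BitReader:
    def __init__(self, bits):
        self.bits, self.pos = bits, 0
    def read(self, n):
        v = int(self.bits[self.pos:self.pos + n], 2)
        self.pos += n
        return v

def parse_fifth_method(r):
    # disable_content_orientation: '1' means rotation is disabled
    if r.read(1):
        return None
    return {
        'roll':  r.read(9) - 180,  # roll_orientation_degree (assumed 9-bit offset)
        'yaw':   r.read(9) - 180,  # yaw_orientation_degree (assumed 9-bit offset)
        'pitch': r.read(8) - 90,   # pitch_orientation_degree needs only half the range
    }
```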

When the fifth syntax signaling method is applied, the rotation information of the content-oriented rotation may be indicated in a sequence-level header. For example, the rotation information of the content-oriented rotation may be signaled in the SPS/PPS or in Supplemental Enhancement Information (SEI) by the rotation angle of each rotation axis. Alternatively, when the fifth syntax signaling method is applied, the rotation information of the content-oriented rotation may be indicated in the picture-level header of each video frame.

Those skilled in the art will readily observe that numerous modifications and alterations of the apparatus and method may be made while maintaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the scope of the following claims.
