Method and device for point cloud compression

Document No.: 119168    Publication date: 2021-10-19

Reading note: This technology, "Method and device for point cloud compression", was created by Arash Vosoughi, Sehoon Yea, and Shan Liu on 2020-02-28. Its main content includes: Aspects of the present disclosure provide methods and apparatus for point cloud compression and decompression. In some examples, a point cloud compression/decompression apparatus includes processing circuitry. In some embodiments, the processing circuitry decodes prediction information for a point cloud from an encoded bitstream, and reconstructs a geometrically reconstructed cloud from a geometric image of the point cloud decoded from the encoded bitstream. Further, the processing circuitry applies a filter to at least one geometric sample within a patch in the geometrically reconstructed cloud, except for boundary samples of the patch, to generate a smoothed geometrically reconstructed cloud, and reconstructs points of the point cloud from the smoothed geometrically reconstructed cloud.

1. A method of point cloud decompression, comprising:

decoding, by a processor, prediction information of a point cloud from an encoded bitstream;

reconstructing, by the processor, a geometrically reconstructed cloud from a geometric image of the point cloud decoded from the encoded bitstream;

applying, by the processor, a filter to at least one geometric sample within a patch in the geometrically reconstructed cloud, except for boundary samples of the patch, to generate a smoothed geometrically reconstructed cloud; and

reconstructing, by the processor, points of the point cloud from the smoothed geometrically reconstructed cloud.

2. The method of claim 1, further comprising:

selecting, by the processor, a region within the patch, wherein a high frequency component level of the region is above a threshold level.

3. The method of claim 1, further comprising:

selecting, by the processor, a region within the patch, wherein a motion content level of the region is above a threshold level.

4. The method of claim 2, further comprising:

detecting, by the processor, an edge within the patch according to depth values of the geometrically reconstructed cloud.

5. The method of claim 3, further comprising:

selecting, by the processor, points within the patch according to motion information of corresponding pixels in the geometric image.

6. The method of claim 1, wherein the prediction information comprises a flag, wherein the flag indicates that selective smoothing is applied within a patch of the point cloud.

7. The method of claim 6, wherein the prediction information indicates a particular algorithm for selecting points within a patch.

8. The method of claim 7, wherein the prediction information comprises parameters of the particular algorithm.

9. A point cloud compression method, comprising:

compressing, by a processor, a geometric image associated with a point cloud;

reconstructing, by the processor, a geometrically reconstructed cloud from the compressed geometric image of the point cloud;

applying, by the processor, a filter to at least one geometric sample within a patch of the geometrically reconstructed cloud, except for boundary samples of the patch, to generate a smoothed geometrically reconstructed cloud; and

generating, by the processor, a texture image of the point cloud from the smoothed geometrically reconstructed cloud.

10. The method of claim 9, further comprising:

selecting, by the processor, a region within the patch, wherein a high frequency component level of the region is above a threshold level.

11. The method of claim 9, further comprising:

selecting, by the processor, a region within the patch, wherein a motion content level of the region is above a threshold level.

12. The method of claim 10, further comprising:

detecting, by the processor, an edge within the patch according to depth values of the geometrically reconstructed cloud.

13. The method of claim 11, further comprising:

selecting, by the processor, points within the patch according to motion information of corresponding pixels in the geometric image.

14. The method of claim 9, further comprising:

including a flag in an encoded bitstream of a compressed point cloud, wherein the flag indicates that selective smoothing is applied within a patch of the point cloud.

15. The method of claim 14, further comprising:

including an indicator in the encoded bitstream of the compressed point cloud, wherein the indicator indicates a particular algorithm for selecting points within the patch to which the selective smoothing is applied.

16. A point cloud decompression apparatus, comprising processing circuitry configured to:

decode prediction information of a point cloud from an encoded bitstream;

reconstruct a geometrically reconstructed cloud from a geometric image of the point cloud decoded from the encoded bitstream;

apply a filter to at least one geometric sample within a patch in the geometrically reconstructed cloud, except for boundary samples of the patch, to generate a smoothed geometrically reconstructed cloud; and

reconstruct points of the point cloud from the smoothed geometrically reconstructed cloud.

17. The apparatus of claim 16, wherein the processing circuit is further configured to:

select a region within the patch, wherein a high frequency component level of the region is above a threshold level.

18. The apparatus of claim 16, wherein the processing circuit is further configured to:

select a region within the patch, wherein a motion content level of the region is above a threshold level.

19. The apparatus of claim 17, wherein the processing circuit is further configured to:

detect an edge within the patch according to depth values of the geometrically reconstructed cloud.

20. The apparatus of claim 18, wherein the processing circuit is further configured to:

select points within the patch according to motion information of corresponding pixels in the geometric image.

Technical Field

The present disclosure describes embodiments relating to point cloud compression.

Background

The background description provided herein is intended to present the context of the disclosure generally. Work of the presently named inventors, to the extent it is described in this background section, as well as aspects of the description that may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

Various techniques have been developed to capture and describe the world, such as objects in the world, the environment in the world, and the like in three-dimensional (3D) space. The 3D representation of the world enables a more immersive form of interaction and communication. The point cloud may be used as a 3D representation of the world. A point cloud is a collection of points in 3D space, each point having associated attributes such as color, material properties, texture information, intensity attributes, reflectivity attributes, motion-related attributes, modal attributes, and various other attributes. The point cloud may include a large amount of data, and storage and transmission may be expensive and time consuming.

Disclosure of Invention

Aspects of the present disclosure provide methods, apparatuses for point cloud compression and decompression. In some examples, an apparatus for point cloud compression/decompression includes processing circuitry.

According to aspects of the present disclosure, a point cloud decompression apparatus includes processing circuitry. The processing circuitry decodes prediction information for a point cloud from an encoded bitstream, and reconstructs a geometrically reconstructed cloud from a geometric image of the point cloud decoded from the encoded bitstream. Further, the processing circuitry applies a filter to at least one geometric sample within a patch in the geometrically reconstructed cloud, except for boundary samples of the patch, to generate a smoothed geometrically reconstructed cloud, and reconstructs points of the point cloud from the smoothed geometrically reconstructed cloud.

In some embodiments, the processing circuitry selects a region within the patch, wherein a high frequency component level of the region is above a threshold level. In some examples, the processing circuitry detects edges within the patch according to depth values of the geometrically reconstructed cloud.

In some embodiments, the processing circuitry selects a region within the patch, wherein the region has a motion content level above a threshold level. In some examples, the processing circuitry selects points within the patch based on motion information of corresponding pixels in the geometric image.

In some embodiments, the prediction information comprises a flag, wherein the flag indicates that selective smoothing is applied within a patch of the point cloud. In some examples, the prediction information indicates a particular algorithm to select points within a patch. Further, the prediction information includes parameters of a particular algorithm.

According to some aspects of the present disclosure, a point cloud compression device includes processing circuitry. The processing circuitry compresses a geometric image associated with a point cloud and reconstructs a geometrically reconstructed cloud from the compressed geometric image of the point cloud. The processing circuitry then applies a filter to at least one geometric sample within a patch in the geometrically reconstructed cloud, except for boundary samples of the patch, to generate a smoothed geometrically reconstructed cloud, and generates a texture image of the point cloud from the smoothed geometrically reconstructed cloud.

In some embodiments, the processing circuitry selects a region within the patch, wherein a high frequency component level of the region is above a threshold level. For example, the processing circuitry detects edges within the patch from depth values of the geometrically reconstructed cloud.

In some embodiments, the processing circuitry selects a region within the patch, wherein the region has a motion content level above a threshold level. For example, the processing circuitry selects points within the patch based on motion information of corresponding pixels in the geometric image.

In some embodiments, the processing circuit includes a flag in the encoded bitstream of the compressed point cloud, wherein the flag indicates that selective smoothing is applied within a patch of the point cloud. In some examples, the processing circuit includes an indicator in the encoded bitstream of the compressed point cloud, wherein the indicator indicates a particular algorithm to select points within the patch to apply selective smoothing, and parameters of the particular algorithm.

Aspects of the present disclosure also provide a non-transitory computer-readable medium having stored therein instructions for point cloud compression/decompression, which when executed by a computer, cause the computer to perform a method for point cloud compression/decompression.

Brief description of the drawings

Further features, properties and various advantages of the disclosed subject matter will become more apparent from the following detailed description and the accompanying drawings, in which:

fig. 1 is a schematic diagram of a simplified block diagram of a communication system (100) according to one embodiment.

Fig. 2 is a schematic diagram of a simplified block diagram of a streaming system (200) according to one embodiment.

Fig. 3 illustrates a block diagram of an encoder (300) for encoding a point cloud frame, in accordance with some embodiments.

Fig. 4 illustrates a block diagram of a decoder for decoding a compressed bitstream corresponding to a point cloud frame, in accordance with some embodiments.

Fig. 5 is a schematic diagram of a simplified block diagram of a video decoder according to one embodiment.

Fig. 6 is a schematic diagram of a simplified block diagram of a video encoder according to one embodiment.

Fig. 7 is a geometric image and a texture image for a point cloud according to some embodiments of the present disclosure.

Fig. 8 shows an example of syntax according to some embodiments of the present disclosure.

Fig. 9 is a flow chart outlining an example of a process according to an embodiment of the present disclosure.

Fig. 10 is a flow chart outlining an example of a process according to an embodiment of the present disclosure.

Fig. 11 is a schematic diagram of a computer system, according to an embodiment.

Detailed Description

Aspects of the present disclosure provide point cloud codec techniques, particularly video-based point cloud compression (V-PCC). V-PCC can leverage generic video codecs for point cloud compression. The point cloud codec techniques in this disclosure may improve the lossless and lossy compression produced by V-PCC.

A point cloud is a set of points in 3D space, each point having associated attributes such as color, material properties, texture information, intensity attributes, reflectivity attributes, motion-related attributes, modal attributes, and various other attributes. The point cloud may be used to reconstruct the object or scene as a combination of such points. The points may be captured using multiple cameras and depth sensors in various settings, and may consist of thousands to billions of points in order to truly represent the reconstructed scene.

Compression techniques are needed to reduce the amount of data required to represent a point cloud. For example, lossy compression of point clouds may be used in real-time communication and six-degree-of-freedom (6DoF) virtual reality. In addition, lossless point cloud compression is sought in the context of dynamic mapping for autonomous driving, cultural heritage applications, and the like. The Moving Picture Experts Group (MPEG) has begun investigating a standard addressing compression of geometry and attributes such as color and reflectivity, scalable/progressive coding, coding of point cloud sequences captured over time, and random access to point cloud subsets.

According to one embodiment of the present disclosure, the main idea behind V-PCC is to compress the geometry, occupancy, and texture of the dynamic point cloud into three separate video sequences using existing video codecs. The additional metadata required to interpret the three video sequences is compressed separately. A small part of the entire bitstream is metadata that can be efficiently encoded/decoded using software. Most of the information is processed by the video codec.

Fig. 1 shows a simplified block diagram of a communication system (100) according to one embodiment of the present disclosure. The communication system (100) comprises a plurality of terminal devices capable of communicating with each other via, for example, a network (150). For example, a communication system (100) includes a pair of terminal devices (110) and (120) interconnected via a network (150). In the example of fig. 1, a first pair of terminal devices (110) and (120) performs a one-way transmission of point cloud data. For example, the terminal device (110) may compress a point cloud (e.g., points representing a structure) captured by a sensor 105 connected to the terminal device (110). The compressed point cloud can be transmitted, for example in the form of a bit stream, via a network (150) to a further terminal device (120). The terminal device (120) may receive the compressed point cloud from the network (150), decompress the bitstream to reconstruct the point cloud, and display it as appropriate from the reconstructed point cloud. One-way data transmission may be common in applications such as media service applications.

In the example of fig. 1, the terminal devices (110) and (120) may be shown as a server and a personal computer, but the principles of the present disclosure are not limited thereto. Embodiments of the present disclosure may be applied to laptop computers, tablet computers, smart phones, gaming terminals, media players, and/or dedicated three-dimensional (3D) devices. The network (150) represents any number of networks that convey compressed point clouds between the terminal devices (110) and (120). The network (150) may include, for example, wireline (wired) and/or wireless communication networks. The network (150) may exchange data over circuit-switched and/or packet-switched channels. Representative networks include telecommunications networks, local area networks, wide area networks, and/or the internet. For purposes of this discussion, the structure and topology of the network (150) may be immaterial to the operation of the present disclosure, except as explained below.

Fig. 2 illustrates an example application of the disclosed subject matter to point clouds. The disclosed subject matter may be equally applicable to other point cloud enabled applications, including 3D telepresence applications and virtual reality applications.

The streaming system (200) may include a capture subsystem (213). The capture subsystem (213) may include a point cloud source (201), such as a light detection and ranging (LIDAR) system, a 3D camera, a 3D scanner, a graphics generation component that generates uncompressed point clouds in software, and similar components that generate, for example, an uncompressed point cloud (202). In one example, the uncompressed point cloud (202) includes points captured by a 3D camera. The point cloud (202) is depicted as a thick line to emphasize its high data volume when compared to the compressed point cloud (204) (a bitstream of compressed point clouds). The compressed point cloud (204) may be generated by an electronic device (220), the electronic device (220) including an encoder (203) coupled to the point cloud source (201). The encoder (203) may include hardware, software, or a combination thereof to enable or implement aspects of the disclosed subject matter as described in more detail below. The compressed point cloud (204) (or a bitstream of the compressed point cloud (204)) is drawn as a thin line to emphasize its lower data volume when compared to the uncompressed point cloud stream (202), and the compressed point cloud (204) (or the bitstream of the compressed point cloud (204)) may be stored in a streaming server (205) for future use. One or more streaming client subsystems, such as client subsystems (206) and (208) in fig. 2, can access the streaming server (205) to retrieve copies (207) and (209) of the compressed point cloud (204). The client subsystem (206) may include a decoder (210), for example, in an electronic device (230). The decoder (210) decodes the incoming copy (207) of the compressed point cloud and creates an outgoing stream of reconstructed point clouds (211) that can be rendered on a rendering device (212). In some streaming systems, the compressed point clouds (204), (207), and (209) (e.g., bitstreams of compressed point clouds) may be compressed according to certain standards. In some examples, a video coding standard is used in the compression of point clouds. Examples of these standards include High Efficiency Video Coding (HEVC), Versatile Video Coding (VVC), and the like.

It is noted that the electronic devices (220) and (230) may include other components (not shown). For example, the electronic device (220) may include a decoder (not shown), and the electronic device (230) may also include an encoder (not shown).

Fig. 3 illustrates a block diagram of a V-PCC encoder (300) for encoding a point cloud frame, in accordance with some embodiments. In some embodiments, the V-PCC encoder (300) may be used in a communication system (100) and a streaming system (200). For example, the encoder (203) may be configured and operate in a similar manner as the V-PCC encoder (300).

A V-PCC encoder (300) receives an uncompressed point cloud frame as input and generates a bitstream corresponding to the compressed point cloud frame. In some embodiments, the V-PCC encoder (300) may receive a point cloud frame from a point cloud source, such as point cloud source (201).

In the example of fig. 3, the V-PCC encoder (300) includes a patch generation module 306, a patch packing module 308, a geometric image generation module 310, a texture image generation module 312, a patch information module 304, an occupancy map module 314, a smoothing module 336, image filling modules 316 and 318, a group expansion module 320, video compression modules 322, 323 and 332, an auxiliary patch information compression module 338, an entropy compression module 334, and a multiplexer 324 coupled together as shown in fig. 3.

According to one aspect of the disclosure, the V-PCC encoder (300) converts the 3D point cloud frame into an image-based representation, along with some metadata (e.g., occupancy map and patch information) needed to convert the compressed point cloud back into a decompressed point cloud. In some examples, the V-PCC encoder (300) may convert the 3D point cloud frames into a geometric image, a texture image, and an occupancy map, and then encode the geometric image, the texture image, and the occupancy map into a bitstream using video encoding techniques. Typically, the geometric image is a 2D image. The pixels of the 2D image are filled with geometric values, wherein the geometric values are associated with the points projected to the pixels. The pixels filled with geometric values may be referred to as geometric samples. The texture image is a 2D image whose pixels are filled with texture values associated with the points projected to the pixels. The pixels filled with texture values may be referred to as texture samples. The occupancy map is a 2D image whose pixels are filled with values indicating whether each pixel is occupied by a patch or unoccupied.

A patch generation module (306) segments the point cloud into a set of patches (e.g., a patch is defined as a contiguous subset of the surface described by the point cloud). The sets of patches may or may not overlap such that each patch may be described by a depth field relative to a plane in 2D space. In some embodiments, the patch generation module (306) aims to decompose the point cloud into a minimum number of patches with smooth boundaries, while also minimizing reconstruction errors.

A patch information module (304) may collect patch information indicating a size and a shape of a patch. In some examples, the patch information may be packed into an image frame and then encoded by the auxiliary patch information compression module 338 to generate compressed auxiliary patch information.

The patch packing module 308 is used to map the extracted patches onto a two-dimensional (2D) grid while minimizing unused space and ensuring that every M × M (e.g., 16 × 16) block of the grid is associated with a unique patch. Efficient patch packing can directly impact compression efficiency, either by minimizing unused space or by ensuring temporal consistency.
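
As an illustration, the placement constraint can be modeled on a grid of M × M blocks: a patch may be placed only where every block it covers is still free. Below is a minimal sketch under that assumption; the boolean block grid, function names, and the raster-scan placement order are illustrative and not the V-PCC reference packer.

```python
import numpy as np

def can_place(block_grid: np.ndarray, bx: int, by: int, bw: int, bh: int) -> bool:
    """True if a patch spanning bw x bh blocks fits at block position (bx, by)."""
    rows, cols = block_grid.shape
    if by + bh > rows or bx + bw > cols:
        return False
    return not block_grid[by:by + bh, bx:bx + bw].any()

def place_patch(block_grid: np.ndarray, bw: int, bh: int):
    """Raster-scan the block grid and occupy the first free location for the patch."""
    rows, cols = block_grid.shape
    for by in range(rows):
        for bx in range(cols):
            if can_place(block_grid, bx, by, bw, bh):
                block_grid[by:by + bh, bx:bx + bw] = True
                return bx, by
    return None  # no room: a real packer would enlarge the image and retry
```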

The geometric image generation module 310 may generate a 2D geometric image associated with the geometry of the point cloud at a given patch location. The texture image generation module 312 may generate a 2D texture image associated with the texture of the point cloud at a given patch location. The geometric image generation module 310 and the texture image generation module 312 store the geometry and texture of the point cloud as an image using the 3D to 2D mapping computed in the packing process. To better handle the case where multiple points are projected onto the same sample, each patch is projected onto two images called layers. In one example, the geometric image is represented by a monochrome frame of WxH in YUV420-8bit format. To generate the texture image, the texture generation process uses the reconstructed/smoothed geometry to compute the color (also referred to as color transfer) associated with the resampled point.
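
A minimal sketch of the two-layer idea follows, assuming per-point projected coordinates (u, v) and depths d for one patch; the surface-thickness rule and array layout are illustrative rather than the normative V-PCC procedure.

```python
import numpy as np

def two_layer_depths(u, v, d, width, height, surface_thickness=4):
    """Build a near layer D0 (minimum depth per pixel) and a far layer D1 (maximum depth
    within surface_thickness of D0) from points projected to pixel (u, v) with depth d."""
    d0 = np.full((height, width), -1, dtype=np.int32)
    d1 = np.full((height, width), -1, dtype=np.int32)
    for ui, vi, di in zip(u, v, d):          # near layer: keep the smallest depth per pixel
        if d0[vi, ui] < 0 or di < d0[vi, ui]:
            d0[vi, ui] = di
    for ui, vi, di in zip(u, v, d):          # far layer: largest depth within the thickness
        if d0[vi, ui] <= di <= d0[vi, ui] + surface_thickness and di > d1[vi, ui]:
            d1[vi, ui] = di
    return d0, d1
```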

The occupancy map module 314 may generate an occupancy map that describes, for each cell, whether it is filled. For example, the occupancy map comprises a binary map indicating whether each cell of the grid belongs to the empty space or to the point cloud. In one example, the occupancy map uses binary information to describe whether each pixel is filled. In another example, the occupancy map uses binary information to describe whether each block of pixels is filled.
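
For instance, a per-block occupancy map can be derived from the per-pixel map by marking a block as occupied when any pixel inside it is filled. The sketch below assumes a boolean pixel map and a block size of 16; both are illustrative choices.

```python
import numpy as np

def block_occupancy(pixel_occupancy: np.ndarray, block: int = 16) -> np.ndarray:
    """Downsample a per-pixel occupancy map to a per-block map (occupied if any pixel is)."""
    h, w = pixel_occupancy.shape
    ph = -(-h // block) * block   # pad height up to a multiple of the block size
    pw = -(-w // block) * block   # pad width up to a multiple of the block size
    padded = np.zeros((ph, pw), dtype=bool)
    padded[:h, :w] = pixel_occupancy.astype(bool)
    blocks = padded.reshape(ph // block, block, pw // block, block)
    return blocks.any(axis=3).any(axis=1)
```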

The occupancy map generated by the occupancy map module 314 may be compressed using lossless encoding or lossy encoding. When lossless coding is used, the entropy compression module 334 is used to compress the occupancy map; when lossy coding is used, the video compression module 332 is used to compress the occupancy map.

It is noted that the patch packing module 308 may leave some empty space between 2D patches packed in the image frame. The image fill modules 316 and 318 may fill empty spaces (referred to as fills) to generate image frames suitable for 2D video and image codecs. Image filling, also referred to as background filling, may fill unused space with redundant information. In some examples, good background filling minimally increases the bit rate without introducing significant coding distortion around patch boundaries.

The video compression modules 322, 323 and 332 may encode 2D images, such as padded geometric images, padded texture images, and occupancy maps, based on a suitable video coding standard such as HEVC, VVC, etc. In one example, the video compression modules 322, 323, and 332 are separate components that operate separately. It is noted that in another example, the video compression modules 322, 323, and 332 may be implemented as a single component.

In some examples, the smoothing module 336 is used to generate a smoothed image of the reconstructed geometric image. The smoothed image information may be provided to the texture image generation module 312. The texture image generation module 312 may then adjust the generation of the texture image based on the smoothed geometric image. For example, when a patch shape (e.g., a geometric shape) is slightly distorted during encoding and decoding, the distortion may be taken into account when generating the texture image to correct for the distortion in the patch shape.

In some embodiments, the group dilation module 320 is to add pixels to the boundary of the object to reduce compression artifacts and increase coding gain.

The multiplexer 324 may multiplex the compressed geometry image, the compressed texture image, the compressed occupancy map, and the compressed auxiliary patch information into a compressed bitstream.

Fig. 4 illustrates a block diagram of a V-PCC decoder (400), according to some embodiments, wherein the V-PCC decoder (400) is configured to decode a compressed bitstream corresponding to a point cloud frame. In some embodiments, the V-PCC decoder (400) may be used in a communication system (100) and a streaming system (200). For example, the decoder (210) may be configured and operate in a similar manner as the V-PCC decoder (400). A V-PCC decoder (400) receives the compressed bitstream and generates a reconstructed point cloud based on the compressed bitstream.

In the example of fig. 4, the V-PCC decoder (400) includes a demultiplexer (432), video decompression modules (434) and (436), an occupancy map decompression module (438), an auxiliary patch information decompression module (442), a geometry reconstruction module (444), a smoothing module (446), a texture reconstruction module (448), and a color smoothing module (452) coupled together as shown in fig. 4.

A demultiplexer (432) may receive the compressed bitstream and separate it into a compressed texture image, a compressed geometry image, a compressed occupancy map, and compressed auxiliary patch information.

The video decompression modules (434) and (436) may decode the compressed images according to a suitable standard (e.g., HEVC, VVC, etc.) and output the decompressed images. For example, the video decompression module (434) decodes the compressed texture image and outputs a decompressed texture image; the video decompression module (436) decodes the compressed geometric image and outputs a decompressed geometric image.

The occupancy map decompression module (438) may decode the compressed occupancy map according to an appropriate standard (e.g., HEVC, VVC, etc.) and output a decompressed occupancy map.

The auxiliary patch information decompression module (442) may decode the compressed auxiliary patch information according to an appropriate standard (e.g., HEVC, VVC, etc.) and output the decompressed auxiliary patch information.

A geometric reconstruction module (444) may receive the decompressed geometric image and generate a reconstructed point cloud geometry based on the decompressed occupancy map and the decompressed auxiliary patch information.

The smoothing module (446) may smooth out inconsistencies at the edges of the patch. The smoothing process is intended to mitigate potential discontinuities that may occur at patch boundaries due to compression artifacts. In some embodiments, a smoothing filter may be applied to pixels located at patch boundaries to mitigate distortion that may be caused by compression/decompression.

The texture reconstruction module (448) can determine texture information for points in the point cloud based on the decompressed texture image and the smooth geometry.

The color smoothing module (452) may smooth out shading inconsistencies. Non-adjacent patches in 3D space are typically packed next to each other in 2D video. In some examples, a block-based video codec may blend pixel values of non-adjacent patches. The goal of color smoothing is to reduce visible artifacts that occur at patch boundaries.

Fig. 5 shows a block diagram of a video decoder (510) according to one embodiment of the present disclosure. The video decoder (510) may be used in the V-PCC decoder (400). For example, the video decompression modules (434) and (436) and the occupancy map decompression module (438) may be configured similarly to the video decoder (510).

The video decoder (510) may include a parser (520) to reconstruct symbols (521) from compressed images, such as encoded video sequences. The classes of symbols include information for managing the operation of the video decoder (510). The parser (520) may parse/entropy decode the received encoded video sequence. Encoding of the encoded video sequence may be performed in accordance with video coding techniques or standards and may follow various principles, including variable length coding, Huffman coding, arithmetic coding with or without contextual sensitivity, and so forth. A parser (520) may extract a subgroup parameter set for at least one of the subgroups of pixels in the video decoder from the encoded video sequence based on at least one parameter corresponding to the group. A subgroup may include a Group of Pictures (GOP), a picture, a tile, a slice, a macroblock, a Coding Unit (CU), a block, a Transform Unit (TU), a Prediction Unit (PU), and so on. The parser (520) may also extract information from the encoded video sequence, such as transform coefficients, quantizer parameter values, motion vectors, and so on.

The parser (520) may perform entropy decoding/parsing operations on the video sequence received from the buffer memory, thereby creating symbols (521).

The reconstruction of the symbol (521) may involve a number of different units depending on the type of the encoded video picture or portion of the encoded video picture (e.g., inter and intra pictures, inter and intra blocks), among other factors. Which units are involved and the way they are involved can be controlled by subgroup control information parsed from the coded video sequence by a parser (520). For the sake of brevity, such a subgroup control information flow between parser (520) and the following units is not described.

In addition to the functional blocks already mentioned, the video decoder (510) may be conceptually subdivided into several functional units as described below. In a practical embodiment operating under business constraints, many of these units interact closely with each other and may be integrated with each other. However, for the purposes of describing the disclosed subject matter, a conceptual subdivision into the following functional units is appropriate.

The first unit is the scaler/inverse transform unit (551). The scaler/inverse transform unit (551) receives the quantized transform coefficients as symbols (521) from the parser (520), along with control information including which transform scheme to use, block size, quantization factor, quantization scaling matrix, etc. The scaler/inverse transform unit (551) may output blocks comprising sample values, which may be input into an aggregator (555).

In some cases, the output samples of the scaler/inverse transform unit (551) may belong to an intra-coded block; that is, a block that does not use predictive information from previously reconstructed pictures but may use predictive information from previously reconstructed portions of the current picture. Such predictive information may be provided by an intra picture prediction unit (552). In some cases, the intra picture prediction unit (552) uses reconstructed information extracted from the current picture buffer (558) to generate a surrounding block of the same size and shape as the block being reconstructed. For example, the current picture buffer (558) buffers a partially reconstructed current picture and/or a fully reconstructed current picture. In some cases, the aggregator (555) adds, on a per-sample basis, the prediction information generated by the intra prediction unit (552) to the output sample information provided by the scaler/inverse transform unit (551).

In other cases, the output samples of the scaler/inverse transform unit (551) may belong to an inter-coded and potentially motion-compensated block. In this case, the motion compensated prediction unit (553) may access a reference picture memory (557) to fetch samples used for prediction. After motion compensating the fetched samples according to the symbols (521) pertaining to the block, these samples may be added by the aggregator (555) to the output of the scaler/inverse transform unit (551) (in this case referred to as residual samples or residual signals), thereby generating output sample information. The addresses within the reference picture memory (557) from which the motion compensated prediction unit (553) fetches prediction samples may be controlled by motion vectors, which are available to the motion compensated prediction unit (553) in the form of symbols (521) that may include, for example, X, Y, and reference picture components. Motion compensation may also include interpolation of sample values fetched from the reference picture memory (557) when sub-sample exact motion vectors are in use, motion vector prediction mechanisms, and the like.

The output samples of the aggregator (555) may be employed by various loop filtering techniques in the loop filter unit (556). The video compression techniques may include in-loop filter techniques that are controlled by parameters included in the encoded video sequence (also referred to as an encoded video bitstream) and that are available to the loop filter unit (556) as symbols (521) from the parser (520). However, in other embodiments, the video compression techniques may also be responsive to meta-information obtained during decoding of previous (in decoding order) portions of the encoded picture or encoded video sequence, as well as to sample values previously reconstructed and loop filtered.

The output of the loop filter unit (556) may be a stream of samples that may be output to a display device and stored in a reference picture memory (557) for subsequent inter picture prediction.

Once fully reconstructed, some of the coded pictures may be used as reference pictures for future prediction. For example, once the encoded picture corresponding to the current picture is fully reconstructed and the encoded picture is identified (by, e.g., parser (520)) as a reference picture, current picture buffer (558) may become part of reference picture memory (557) and a new current picture buffer may be reallocated before starting reconstruction of a subsequent encoded picture.

The video decoder (510) may perform decoding operations according to a predetermined video compression technique, such as the ITU-T H.265 standard. The encoded video sequence may conform to the syntax specified by the video compression technique or standard used, in the sense that it follows both the syntax of the video compression technique or standard and the profiles documented therein. Specifically, a profile may select certain tools from all the tools available in the video compression technique or standard as the only tools available under that profile. For compliance, the complexity of the encoded video sequence is also required to be within the bounds defined by the level of the video compression technique or standard. In some cases, levels restrict the maximum picture size, the maximum frame rate, the maximum reconstruction sampling rate (measured in units of, for example, megasamples per second), the maximum reference picture size, and so on. In some cases, the limits set by levels may be further restricted through Hypothetical Reference Decoder (HRD) specifications and metadata for HRD buffer management signaled in the encoded video sequence.

Fig. 6 shows a block diagram of a video encoder (603) according to one embodiment of the present disclosure. The video encoder (603) may be used to compress point clouds in the V-PCC encoder (300). In one example, the video compression modules (322) and (323) and the video compression module (332) are configured similarly to the encoder (603).

The video encoder (603) may receive images, such as padded geometric images, padded texture images, and generate compressed images.

According to an embodiment, the video encoder (603) may encode and compress pictures of the source video sequence into an encoded video sequence (encoded images) in real time or under any other temporal constraints required by the application. Enforcing the appropriate encoding speed is one function of a controller (650). In some embodiments, the controller (650) controls and is functionally coupled to other functional units as described below. For simplicity, the couplings are not labeled in the figure. The parameters set by the controller (650) may include rate-control-related parameters (picture skip, quantizer, lambda value of rate-distortion optimization techniques, etc.), picture size, group of pictures (GOP) layout, maximum motion vector search range, and so on. The controller (650) may be configured to have other suitable functions relating to the video encoder (603) optimized for a certain system design.

In some embodiments, the video encoder (603) operates in an encoding loop. As a brief description, in an embodiment, an encoding loop may include a source encoder (630) (e.g., responsible for creating symbols, e.g., a stream of symbols, based on input pictures and reference pictures to be encoded) and a (local) decoder (633) embedded in a video encoder (603). The decoder (633) reconstructs the symbols to create sample data in a manner similar to the way a (remote) decoder creates the sample data (since any compression between the symbols and the encoded video bitstream is lossless in the video compression techniques contemplated by this disclosure). The reconstructed sample stream (sample data) is input to a reference picture memory (634). Since the decoding of the symbol stream produces bit accurate results independent of decoder location (local or remote), the content in the reference picture store (634) also corresponds bit accurately between the local encoder and the remote encoder. In other words, the reference picture samples that the prediction portion of the encoder "sees" are identical to the sample values that the decoder would "see" when using prediction during decoding. This reference picture synchronization philosophy (and the drift that occurs if synchronization cannot be maintained due to, for example, channel errors) is also used in some related techniques.

The operation of the "local" decoder (633) may be the same as a "remote" decoder, such as the video decoder (510) described in detail above in connection with fig. 5. However, referring briefly also to fig. 5, when symbols are available and the entropy encoder (645) and parser (520) are able to losslessly encode/decode the symbols into an encoded video sequence, the entropy decoding portion of the video decoder (510), including the parser (520), may not be fully implemented in the local decoder (633).

At this point it can be observed that any decoder technique other than the parsing/entropy decoding present in the decoder must also be present in the corresponding encoder in substantially the same functional form. For this reason, the present disclosure focuses on decoder operation. The description of the encoder techniques may be simplified because the encoder techniques are reciprocal to the fully described decoder techniques. A more detailed description is only needed in certain areas and is provided below.

During operation, in some embodiments, the source encoder (630) may perform motion compensated predictive coding. The motion compensated predictive coding predictively codes an input picture with reference to one or more previously coded pictures from the video sequence that are designated as "reference pictures". In this way, an encoding engine (632) encodes differences between pixel blocks of an input picture and pixel blocks of a reference picture, which may be selected as a prediction reference for the input picture.

The local video decoder (633) may decode encoded video data for a picture that may be designated as a reference picture based on the symbols created by the source encoder (630). The operation of the encoding engine (632) may be a lossy process. When the encoded video data can be decoded at a video decoder (not shown in fig. 6), the reconstructed video sequence may typically be a copy of the source video sequence with some errors. The local video decoder (633) replicates a decoding process that may be performed on reference pictures by the video decoder, and may cause reconstructed reference pictures to be stored in a reference picture cache (634). In this way, the video encoder (603) may locally store a copy of the reconstructed reference picture that has common content (no transmission errors) with the reconstructed reference picture to be obtained by the remote video decoder.

Predictor (635) may perform a prediction search for coding engine (632). That is, for a new picture to be encoded, predictor (635) may search reference picture memory (634) for sample data (as candidate reference pixel blocks) or some metadata, such as reference picture motion vectors, block shapes, etc., that may be referenced as appropriate predictions for the new picture. The predictor (635) may operate on a block-by-block basis of samples to find a suitable prediction reference. In some cases, from search results obtained by predictor (635), it may be determined that the input picture may have prediction references derived from multiple reference pictures stored in reference picture memory (634).

The controller (650) may manage the encoding operations of the source encoder (630), including, for example, setting parameters and subgroup parameters for encoding the video data.

The outputs of all of the above functional units may be entropy encoded in an entropy encoder (645). The entropy encoder (645) losslessly compresses the symbols generated by the various functional units according to techniques such as huffman coding, variable length coding, arithmetic coding, etc., to convert the symbols into an encoded video sequence.

The controller (650) may manage the operation of the video encoder (603). During encoding, the controller (650) may assign a certain encoded picture type to each encoded picture, but this may affect the encoding techniques applicable to the respective picture. For example, pictures may be generally assigned to any of the following picture types:

Intra pictures (I pictures), which may be pictures that can be encoded and decoded without using any other picture in the sequence as a prediction source. Some video codecs allow different types of intra pictures, including, for example, Independent Decoder Refresh ("IDR") pictures. Those skilled in the art are aware of variants of I pictures and their corresponding applications and features.

Predictive pictures (P pictures), which may be pictures that may be encoded and decoded using intra prediction or inter prediction that uses at most one motion vector and reference index to predict sample values of each block.

Bi-predictive pictures (B-pictures), which may be pictures that can be encoded and decoded using intra-prediction or inter-prediction that uses at most two motion vectors and reference indices to predict sample values of each block. Similarly, multiple predictive pictures may use more than two reference pictures and associated metadata for reconstructing a single block.

A source picture may typically be spatially subdivided into blocks of samples (e.g., blocks of 4 x 4, 8 x 8, 4 x 8, or 16 x 16 samples) and encoded block-wise. These blocks may be predictively encoded with reference to other (encoded) blocks that are determined according to the encoding allocation applied to their respective pictures. For example, a block of an I picture may be non-predictive encoded, or the block may be predictive encoded (spatial prediction or intra prediction) with reference to an already encoded block of the same picture. The pixel block of the P picture can be prediction-coded by spatial prediction or by temporal prediction with reference to one previously coded reference picture. A block of a B picture may be prediction coded by spatial prediction or by temporal prediction with reference to one or two previously coded reference pictures.

The video encoder (603) may perform encoding operations according to a predetermined video encoding technique or standard, such as the ITU-T h.265 recommendation. In operation, the video encoder (603) may perform various compression operations, including predictive encoding operations that exploit temporal and spatial redundancies in the input video sequence. Thus, the encoded video data may conform to syntax specified by the video coding technique or standard used.

Video may be in the form of multiple source pictures (images) in a temporal sequence. Intra-picture prediction, often abbreviated as intra-prediction, exploits spatial correlation in a given picture, while inter-picture prediction exploits (temporal or other) correlation between pictures. In an embodiment, the particular picture being encoded/decoded, referred to as the current picture, is partitioned into blocks. When a block in a current picture is similar to a reference block in a reference picture that has been previously encoded in video and is still buffered, the block in the current picture may be encoded by a vector called a motion vector. The motion vector points to a reference block in a reference picture, and in the case where multiple reference pictures are used, the motion vector may have a third dimension that identifies the reference picture.

In some embodiments, bi-directional prediction techniques may be used in inter-picture prediction. According to bi-prediction techniques, two reference pictures are used, e.g., a first reference picture and a second reference picture that are both prior to the current picture in video in decoding order (but may be past and future, respectively, in display order). A block in a current picture may be encoded by a first motion vector pointing to a first reference block in a first reference picture and a second motion vector pointing to a second reference block in a second reference picture. In particular, the block may be predicted by a combination of a first reference block and a second reference block.
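
As a toy illustration of the combination step, the two motion-compensated reference blocks may be averaged; equal weights and 8-bit samples are assumed here, while an actual codec defines the exact weighting and rounding.

```python
import numpy as np

def bi_predict(ref_block_0: np.ndarray, ref_block_1: np.ndarray) -> np.ndarray:
    """Predict a block as the rounded average of two motion-compensated reference blocks."""
    avg = (ref_block_0.astype(np.int32) + ref_block_1.astype(np.int32) + 1) >> 1
    return np.clip(avg, 0, 255).astype(np.uint8)
```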

Furthermore, merge mode techniques may be used in inter picture prediction to improve coding efficiency.

According to some embodiments of the present disclosure, predictions such as inter-picture prediction and intra-picture prediction are performed in units of blocks. For example, according to the HEVC standard, pictures in a sequence of video pictures are partitioned into Coding Tree Units (CTUs) for compression, the CTUs in a picture having the same size, e.g., 64 × 64 pixels, 32 × 32 pixels, or 16 × 16 pixels. In general, a CTU includes three Coding Tree Blocks (CTBs), which are one luminance CTB and two chrominance CTBs. Further, each CTU may be recursively split into one or more Coding Units (CUs) in a quadtree. For example, a 64 × 64-pixel CTU may be split into one 64 × 64-pixel CU, or four 32 × 32-pixel CUs, or sixteen 16 × 16-pixel CUs. In an embodiment, each CU is analyzed to determine a prediction type for the CU, such as an inter prediction type or an intra prediction type. Furthermore, depending on temporal and/or spatial predictability, a CU is split into one or more Prediction Units (PUs). In general, each PU includes a luma Prediction Block (PB) and two chroma PBs. In an embodiment, a prediction operation in coding (encoding/decoding) is performed in units of a prediction block. Taking a luma prediction block as an example of a prediction block, the prediction block includes a matrix of pixel values (e.g., luma values), such as 8 × 8 pixels, 16 × 16 pixels, 8 × 16 pixels, 16 × 8 pixels, and so on.
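
A minimal sketch of the recursive quadtree split is shown below; the split-decision callback is hypothetical, whereas a real encoder bases this decision on rate-distortion cost.

```python
def split_into_cus(x, y, size, min_cu_size, should_split):
    """Recursively split a CTU at (x, y) of the given size into CUs."""
    if size > min_cu_size and should_split(x, y, size):
        half = size // 2
        cus = []
        for dy in (0, half):
            for dx in (0, half):
                cus.extend(split_into_cus(x + dx, y + dy, half, min_cu_size, should_split))
        return cus
    return [(x, y, size)]

# Example: split a 64x64 CTU into four 32x32 CUs (split whenever the block is larger than 32).
cus = split_into_cus(0, 0, 64, 8, lambda x, y, size: size > 32)
```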

According to some aspects of the disclosure, geometric smoothing may be performed at the encoder side (for point cloud compression) and at the decoder side (for point cloud reconstruction). In an example, at the encoder side, after compressing the geometric video, the geometric part of the point cloud is reconstructed using the compressed geometric video and the corresponding occupancy map, and the reconstructed point cloud (the geometric part) is referred to as a geometrically reconstructed cloud. The geometrically reconstructed cloud is used to generate the texture image. For example, the texture image generation module 312 may determine colors (also referred to as color transfer) associated with resampled points in the geometrically reconstructed cloud and generate the texture image accordingly.

In some examples, geometric smoothing is applied to the geometrically reconstructed cloud prior to color transfer. For example, the smoothing module 336 may apply smoothing (e.g., a smoothing filter) to the geometrically reconstructed cloud generated based on the reconstructed geometric image. In some embodiments of the present disclosure, the smoothing module 336 is configured to reduce geometric distortion not only at patch boundaries but also within patches.

On the decoder side, using V-PCC decoder 400 in fig. 4 as an example, smoothing module 446 may apply smoothing to the geometric reconstruction cloud and generate a smoothed geometric reconstruction cloud. Then, based on the decompressed texture image and the smoothed geometrically reconstructed cloud, the texture reconstruction module 448 can determine texture information for the points in the point cloud.

According to some aspects of the present disclosure, distortion may occur due to quantization errors during geometry compression and/or during conversion of the high-resolution occupancy map to a lower-resolution occupancy map. Quantization errors may affect patch boundaries and may also affect reconstructed depth values (geometric information of points) within a patch, resulting in an unsmooth reconstructed surface. The present disclosure provides techniques for smoothing reconstructed depth values within patches.

The proposed methods can be used alone or in any combination thereof. Further, each method (or embodiment), each encoder and decoder may be implemented by processing circuitry (e.g., one or more processors, or one or more integrated circuits). In one example, one or more processors execute a program stored in a non-transitory computer readable medium.

Fig. 7 shows a geometric image 710 and a texture image 750 for a point cloud. The point cloud is decomposed into a plurality of patches. In some related examples, smoothing is applied only to patch boundaries, such as the boundary shown at 711 in Fig. 7. In the present disclosure, smoothing may also be applied to certain locations within a patch, such as the location shown at 721. The locations may be selected based on certain criteria. Smoothing is applied within the patch in a selective manner, resulting in minimal additional computational complexity. In some embodiments, a plurality of candidate points may be determined for which the reconstructed depth values differ most from the uncompressed depth values, and the determined candidate points may be added to a list. The list may also include boundary points. Smoothing may then be applied to the points in the list by, for example, the smoothing module 336, the smoothing module 446, or the like.
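
A minimal encoder-side sketch of this selection is given below, assuming the original depth map is available for comparison; the array names and the top-N rule are illustrative.

```python
import numpy as np

def largest_error_candidates(original_depth, reconstructed_depth, occupancy, num_candidates):
    """Pick the occupied pixels whose reconstructed depth deviates most from the original."""
    error = np.abs(reconstructed_depth.astype(np.int32) - original_depth.astype(np.int32))
    error = np.where(occupancy.astype(bool), error, -1)           # ignore unoccupied pixels
    order = np.argsort(error, axis=None)[::-1][:num_candidates]   # largest errors first
    return np.column_stack(np.unravel_index(order, error.shape))  # (row, col) pairs
```

The boundary points of the patch would be appended to this list separately, as described above.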

In some embodiments, based on the reconstructed depth values, a set of candidate points within the patch to be smoothed by the smoothing filter is derived. In some embodiments, because the original uncompressed values are not available at the decoder side, a suitable algorithm, e.g., based on an estimation, is used at both the encoder side and the decoder side to select the candidate points whose reconstructed depth values are likely to differ most from the original uncompressed values. In some examples, those points whose reconstructed depth values are considered to have a large quantization error are selected as the candidate points. In an example, a region of the depth map (e.g., the reconstructed geometric image) having high frequency components (high spatial frequency components) may be selected. For example, when the ratio of the intensity of the high spatial frequency components to the intensity of the low spatial frequency components in a region is above a threshold, the region is a high frequency region having a relatively high level of high spatial frequency components, and the region may be selected for applying the smoothing filter. In another example, a region of the depth map (e.g., the reconstructed geometric image) having high motion content may be selected. Such a region may be selected, for example, based on motion vector information commonly available in video codecs.
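
One possible way to estimate the high/low spatial frequency ratio of a depth-map region is sketched below; the FFT-based split, the cutoff radius, and the ratio threshold are assumptions chosen for illustration.

```python
import numpy as np

def is_high_frequency_region(depth_block: np.ndarray, cutoff: float = 0.25,
                             ratio_threshold: float = 0.5) -> bool:
    """True when the block's high-frequency spectral energy exceeds a fraction of its
    low-frequency energy, marking it as a candidate region for smoothing."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(depth_block - depth_block.mean())))
    h, w = depth_block.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized radial distance from the DC component (0 at the center of the spectrum)
    radius = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    low = spectrum[radius <= cutoff].sum()
    high = spectrum[radius > cutoff].sum()
    return high > ratio_threshold * max(low, 1e-9)
```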

In some embodiments, edge detection may be applied to the depth map (e.g., the reconstructed geometric image) to determine a plurality of points within the patch corresponding to edges, and smoothing may be applied to those points. Generally, edge regions have relatively high spatial frequency components.
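
A minimal gradient-based edge detector over the reconstructed depth map is sketched here; the gradient operator and threshold are illustrative, and an actual implementation might instead use a signaled kernel as described further below.

```python
import numpy as np

def edge_points(depth: np.ndarray, occupancy: np.ndarray, grad_threshold: float) -> np.ndarray:
    """Return (row, col) coordinates of occupied pixels whose depth gradient is large."""
    gy, gx = np.gradient(depth.astype(np.float64))   # gradients along rows and columns
    magnitude = np.hypot(gx, gy)
    mask = (magnitude > grad_threshold) & occupancy.astype(bool)
    return np.argwhere(mask)                         # candidate points for in-patch smoothing
```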

In some embodiments, the candidate points may be derived based on information implicitly provided by the video compression tool (e.g., HEVC) used by V-PCC to compress/decompress the depth map. In an example, pixels having large motion vectors may be selected, and the points corresponding to those pixels may be chosen as candidate points and added to the list to which smoothing is to be applied. In another example, pixels having a relatively large Sample Adaptive Offset (SAO) response may be selected, and the points corresponding to those pixels may be chosen as candidate points and added to the list to which smoothing is to be applied.
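
The motion-vector criterion could look like the following sketch, which assumes the video decoder exposes a dense per-pixel motion field; real codecs store motion per block, so this interface is hypothetical.

```python
import numpy as np

def motion_based_candidates(mv_field: np.ndarray, occupancy: np.ndarray,
                            mv_threshold: float) -> np.ndarray:
    """Select occupied pixels whose motion vector magnitude exceeds a threshold.

    mv_field: (H, W, 2) array of (mvx, mvy) per pixel, a hypothetical decoder output.
    """
    magnitude = np.hypot(mv_field[..., 0], mv_field[..., 1])
    mask = (magnitude > mv_threshold) & occupancy.astype(bool)
    return np.argwhere(mask)
```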

According to some aspects of the disclosure, the encoder side and the decoder side use the same algorithm to determine the points (or regions) within a patch to which smoothing is applied. In some embodiments, flags and parameters may be included in the encoded bitstream, so the decoder side can determine the algorithm and parameters that the encoder used to select points within a patch for smoothing; the decoder side can then select points within the patch using the same algorithm and parameters.

Fig. 8 illustrates an example of a syntax according to some embodiments of the present disclosure. In the example of Fig. 8, a selective_smoothing_inside_patches_present_flag is used to indicate whether or not selective smoothing within a patch is used. In an example, when selective_smoothing_inside_patches_present_flag is true, a parameter, which may be represented by, for example, algorithm_to_find_candidates_inside_patches, indicates the algorithm.

Further, in an example, when the algorithm is an edge detection algorithm, parameters used in the edge detection algorithm may be indicated, such as the kernel size of the edge detection algorithm represented by kernel_size, the values of the kernel in raster scan order represented by kernel[i], i = 0 … kernel_size × kernel_size, and the like.

Note that, in Fig. 8, XYZ denotes another suitable algorithm that selects a plurality of candidate points within a patch to apply smoothing, and XYZ_parameters denotes the parameter values to be used for the algorithm XYZ.
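To make the signalling concrete, the following decoder-side parsing sketch mirrors the syntax elements discussed for Fig. 8. The element names follow the figure, but the bit widths, the value of the hypothetical EDGE_DETECTION code point, and the reader object (with read_flag/read_uint methods) are assumptions introduced purely for illustration.

```python
EDGE_DETECTION = 1   # hypothetical code point for the edge-detection algorithm

def parse_selective_smoothing_params(reader):
    """Sketch: parse the selective in-patch smoothing parameters from a bitstream
    reader; this is not the normative V-PCC syntax parsing."""
    params = {'flag': reader.read_flag()}       # selective_smoothing_inside_patches_present_flag
    if params['flag']:
        algorithm = reader.read_uint(8)         # algorithm_to_find_candidates_inside_patches
        params['algorithm'] = algorithm
        if algorithm == EDGE_DETECTION:
            ksize = reader.read_uint(8)         # kernel_size
            params['kernel'] = [reader.read_uint(16)
                                for _ in range(ksize * ksize)]   # kernel[i], raster scan order
        else:
            params['xyz_parameters'] = reader.read_uint(8)       # parameters of algorithm XYZ
    return params
```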

Fig. 9 shows a flowchart outlining a process (900) according to one embodiment of the present disclosure. The process (900) may be used in an encoding process of encoding a point cloud. In various embodiments, process (900) is performed by processing circuitry, e.g., processing circuitry in terminal device (110), processing circuitry that performs the functions of encoder (203), processing circuitry that performs the functions of V-PCC encoder (300), etc. In some embodiments, the process (900) is implemented in software instructions, such that when processing circuitry executes the software instructions, the processing circuitry performs the process (900). The process starts at (S901) and proceeds to (S910).

At (S910), a geometric image associated with the point cloud is compressed. In an example, the patch generation module 306 may generate a patch for the point cloud. Further, the geometric image generation module 310 stores geometric information as a geometric image, wherein the geometric information may be, for example, depth values of a plurality of points. The video compression module 322 may compress the geometric image associated with the point cloud.

At (S920), a geometric reconstruction cloud is generated from the compressed geometric image. In an example, the video compression module 322 may generate a reconstructed geometric image from the compressed geometric image. The reconstructed geometric image may be used to form a geometrically reconstructed cloud.

At (S930), a smoothing filter is applied to at least one geometric sample within a patch of the geometric reconstruction cloud, except for boundary samples of the patch. In some examples, the smoothing module 336 may apply a smoothing filter at the boundary points of a patch. In addition, the smoothing module 336 selectively applies the smoothing filter to points within the patch. In some embodiments, the points whose reconstructed depth values are likely to differ most from the original uncompressed values may be selected based on an estimation. For example, points in regions having high levels of high spatial frequency components may be selected. In another example, points in the depth map having high motion content may be selected (e.g., determined based on motion vector information provided by the video compression module 322).
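A minimal sketch of the filtering step itself is given below, assuming a simple neighbourhood-mean filter applied only at the selected samples; the window size and the averaging operation are illustrative assumptions and may differ from the smoothing filter actually used by an encoder.

```python
import numpy as np

def smooth_selected_samples(depth_map, selected_mask, window=3):
    """Sketch: replace each selected geometry sample by the mean depth of its
    local window; unselected samples are left untouched."""
    k = window // 2
    padded = np.pad(depth_map.astype(np.float64), k, mode='edge')
    out = depth_map.astype(np.float64).copy()
    for r, c in zip(*np.nonzero(selected_mask)):
        out[r, c] = padded[r:r + window, c:c + window].mean()
    return out
```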

At (S940), a texture image is generated based on the smoothed geometric reconstruction cloud. In an example, the texture image generation module 312 can determine colors (also referred to as color transfer) associated with the re-sampled points in the smoothed geometric reconstruction cloud and generate the texture image accordingly.

At (S950), the texture image is compressed. In an example, the video compression module 323 can generate a compressed texture image. The compressed geometric image, the compressed texture image, and other suitable information may then be multiplexed to form an encoded bitstream. In some examples, flags and parameters associated with selective geometric smoothing within a patch may be included in the encoded bitstream. Then, the process proceeds to (S999) and ends.

Fig. 10 is a flow chart summarizing a process (1000) according to an embodiment of the disclosure. The process (1000) may be used in a decoding process to reconstruct a point cloud. In various embodiments, process (1000) is performed by a processing circuit (e.g., a processing circuit in terminal device (120), a processing circuit that performs the functionality of decoder (210), a processing circuit that performs the functionality of V-PCC decoder (400), etc.). In some embodiments, process (1000) is implemented in software instructions, such that when processing circuitry executes these software instructions, processing circuitry performs process (1000). The process starts at (S1001) and proceeds to (S1010).

At (S1010), prediction information is decoded from an encoded bitstream corresponding to the point cloud. In some examples, the prediction information includes flags and parameters associated with selective geometric smoothing within a patch.

At (S1020), a geometric reconstruction cloud is generated from a geometric image decoded from the encoded bitstream. In an example, the video decompression module 436 may decode the geometric information and generate one or more decompressed geometric images. The geometric reconstruction module 444 may generate a geometric reconstruction cloud based on the decompressed one or more geometric images.

At (S1030), a smoothing filter is applied to at least one geometric sample within a patch of the geometric reconstruction cloud, except for boundary samples of the patch. In some examples, the smoothing module 446 may apply a smoothing filter on the geometric samples of patch boundary points. In addition, the smoothing module 446 selectively applies the smoothing filter to the geometric samples of certain points within the patch. In some embodiments, the points whose reconstructed depth values are likely to differ most from the original uncompressed values may be selected based on an estimation. For example, points in regions having high levels of high spatial frequency components may be selected. In another example, points in the depth map having high motion content may be selected (e.g., determined from motion vector information provided by the video decompression module 436).
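Putting the decoder-side pieces together, the sketch below combines the parsed signalling with the selection and filtering helpers sketched earlier in this section; it is a composition of the previous illustrative functions, not the normative decoding process, and the control flow (including the fall-back to the frequency-based selection) is an assumption.

```python
import numpy as np

def decoder_selective_smoothing(depth_map, boundary_mask, params):
    """Sketch for (S1030): select candidate samples according to the signalled
    algorithm, then smooth the selected samples together with the patch boundary
    samples. Reuses parse_selective_smoothing_params(), detect_edge_points(),
    select_high_frequency_regions(), and smooth_selected_samples() from above."""
    selected = boundary_mask.copy()
    if params.get('flag'):
        if params.get('algorithm') == EDGE_DETECTION:
            ksize = int(len(params['kernel']) ** 0.5)
            kernel = np.array(params['kernel'], dtype=np.float64).reshape(ksize, ksize)
            selected |= detect_edge_points(depth_map, kernel=kernel)
        else:
            selected |= select_high_frequency_regions(depth_map)
    return smooth_selected_samples(depth_map, selected)
```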

At (S1040), the point cloud is reconstructed based on the smoothed geometric reconstruction cloud. For example, the texture reconstruction module (448) may determine texture information for the points in the point cloud based on the decompressed texture image and the smoothed geometric reconstruction cloud. The color smoothing module (452) can then smooth out inconsistencies in the coloring. Then, the process proceeds to (S1099) and ends.

The techniques described above may be implemented as computer software via computer readable instructions and physically stored in one or more computer readable media. For example, Fig. 11 illustrates a computer system (1100) suitable for implementing certain embodiments of the disclosed subject matter.

The computer software may be coded in any suitable machine code or computer language, and may be subject to assembly, compilation, linking, or similar mechanisms to create code comprising instructions that can be executed directly by one or more computer central processing units (CPUs), graphics processing units (GPUs), and the like, or executed by way of interpretation, micro-code, and the like.

The instructions may be executed on various types of computers or components thereof, including, for example, personal computers, tablets, servers, smartphones, gaming devices, internet of things devices, and so forth.

The components illustrated in FIG. 11 for the computer system (1100) are exemplary in nature and are not intended to limit the scope of use or functionality of the computer software implementing embodiments of the present application in any way. Neither should the configuration of components be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary embodiments of the computer system (1100).

The computer system (1100) may include some human interface input devices. Such human interface input devices may respond to input from one or more human users through tactile input (e.g., keyboard input, swipe, data glove movement), audio input (e.g., sound, applause), visual input (e.g., gestures), olfactory input (not shown). The human-machine interface device may also be used to capture media that does not necessarily directly relate to human conscious input, such as audio (e.g., voice, music, ambient sounds), images (e.g., scanned images, photographic images obtained from still-image cameras), video (e.g., 2D video, 3D video including stereoscopic video).

The human interface input devices may include one or more of the following (only one of each is depicted): keyboard (1101), mouse (1102), touch pad (1103), touch screen (1110), data glove (not shown), joystick (1105), microphone (1106), scanner (1107), camera (1108).

The computer system (1100) may also include certain human interface output devices. Such human interface output devices may stimulate the senses of one or more human users through, for example, tactile outputs, sounds, light, and olfactory/gustatory sensations. Such human interface output devices may include tactile output devices (e.g., tactile feedback through a touch screen (1110), data glove (not shown), or joystick (1105), but there may also be tactile feedback devices that do not act as input devices), audio output devices (e.g., speakers (1109), headphones (not shown)), visual output devices (e.g., screens (1110) including cathode ray tube screens, liquid crystal screens, plasma screens, and organic light emitting diode screens, each with or without touch screen input capability and each with or without tactile feedback capability, some of which may be capable of outputting two-dimensional visual output or more-than-three-dimensional output by means such as stereographic output; virtual reality glasses (not shown), holographic displays and smoke tanks (not shown)), and printers (not shown).

The computer system (1100) may also include human-accessible storage devices and their associated media, such as optical media including CD/DVD ROM/RW (1120) with CD/DVD or similar media (1121), thumb drives (1122), removable hard drives or solid state drives (1123), conventional magnetic media such as magnetic tapes and floppy disks (not shown), ROM/ASIC/PLD-based application specific devices such as security dongles (not shown), and the like.

Those skilled in the art will also appreciate that the term "computer-readable medium" used in connection with the disclosed subject matter does not include transmission media, carrier waves, or other transitory signals.

The computer system (1100) may also include an interface to one or more communication networks. The networks may be, for example, wireless, wired, or optical. The networks may further be local area networks, wide area networks, metropolitan area networks, vehicular networks, industrial networks, real-time networks, delay-tolerant networks, and so forth. Examples of such networks include local area networks such as Ethernet and wireless LANs, cellular networks (GSM, 3G, 4G, 5G, LTE, etc.), television wired or wireless wide area digital networks (including cable, satellite, and terrestrial broadcast television), automotive and industrial networks (including CANBus), and so forth. Some networks typically require an external network interface adapter attached to some general purpose data port or peripheral bus (1149) (e.g., a USB port of the computer system (1100)); others are typically integrated into the core of the computer system (1100) by attachment to a system bus as described below (e.g., an Ethernet interface into a PC computer system or a cellular network interface into a smartphone computer system). Using any of these networks, the computer system (1100) may communicate with other entities. The communication may be unidirectional receive-only (e.g., broadcast television), unidirectional send-only (e.g., CANBus to certain CANBus devices), or bidirectional, for example, to other computer systems over a local or wide area digital network. Each of the networks and network interfaces described above may use certain protocols and protocol stacks.

The human interface device, human accessible storage device, and network interface described above may be connected to the core (1140) of the computer system (1100).

The core (1140) may include one or more central processing units (CPUs) (1141), graphics processing units (GPUs) (1142), specialized programmable processing units in the form of field programmable gate arrays (FPGAs) (1143), hardware accelerators (1144) for certain tasks, and so forth. These devices, along with read-only memory (ROM) (1145), random access memory (RAM) (1146), internal mass storage (e.g., internal non-user-accessible hard drives, solid state drives, etc.) (1147), and the like, may be connected via a system bus (1148). In some computer systems, the system bus (1148) may be accessible in the form of one or more physical plugs, so as to be extensible by additional CPUs, GPUs, and the like. The peripheral devices may be attached either directly to the core's system bus (1148) or through a peripheral bus (1149). Architectures for a peripheral bus include peripheral component interconnect (PCI), universal serial bus (USB), and the like.

The CPU (1141), GPU (1142), FPGA (1143), and accelerator (1144) may execute certain instructions, which in combination may constitute the computer code. The computer code may be stored in ROM (1145) or RAM (1146). Transitional data may also be stored in RAM (1146), while persistent data may be stored in, for example, internal mass storage (1147). Fast storage and retrieval of any memory device may be achieved through the use of cache memory, which may be closely associated with one or more of CPU (1141), GPU (1142), mass storage (1147), ROM (1145), RAM (1146), and the like.

The computer-readable medium may have computer code thereon for performing various computer-implemented operations. The media and computer code may be those specially designed and constructed for the purposes of the present application, or they may be of the kind well known and available to those having skill in the computer software arts.

By way of example, and not limitation, a computer system having the architecture (1100), and in particular the core (1140), may provide functionality as a result of one or more processors (including CPUs, GPUs, FPGAs, accelerators, and the like) executing software embodied in one or more tangible computer-readable media. Such computer-readable media may be media associated with the user-accessible mass storage described above, as well as certain storage of the core (1140) that is of a non-transitory nature, such as the core-internal mass storage (1147) or the ROM (1145). Software implementing various embodiments of the present application may be stored in such devices and executed by the core (1140). The computer-readable medium may include one or more memory devices or chips, according to particular needs. The software may cause the core (1140), and in particular the processors therein (including CPUs, GPUs, FPGAs, and the like), to perform certain processes or certain portions of certain processes described herein, including defining data structures stored in RAM (1146) and modifying such data structures according to software-defined processes. Additionally or alternatively, the computer system may provide functionality that is logically hardwired or otherwise embodied in circuitry (e.g., the accelerator (1144)), which may operate in place of or in conjunction with software to perform certain processes or certain portions of certain processes described herein. Where appropriate, reference to software may include logic and vice versa. Where appropriate, reference to a computer-readable medium may include circuitry (e.g., an integrated circuit (IC)) storing executable software, circuitry embodying executable logic, or both. The present application includes any suitable combination of hardware and software.

Appendix A: acronyms

JEM: federated development model

VVC: universal video coding

BMS: reference set

MV: motion vector

HEVC: efficient video coding

SEI: supplemental enhancement information

VUI: video usability information

GOP: picture group

TU: conversion unit

PU (polyurethane): prediction unit

And (3) CTU: coding tree unit

CTB: coding tree block

PB: prediction block

HRD: hypothetical reference decoder

SNR: signal to noise ratio

CPUs: central processing unit

GPUs: graphics processing unit

CRT: cathode ray tube having a shadow mask with a plurality of apertures

LCD: liquid crystal display device

An OLED: organic light emitting diode

CD: optical disk

DVD: digital video CD

ROM: read-only memory

RAM: random access memory

ASIC: application specific integrated circuit

PLD: programmable logic device

LAN: local area network

GSM: global mobile communication system

LTE: long term evolution

CANBus: controller area network bus

USB: universal serial bus

PCI: peripheral device interconnect

FPGA: field programmable gate array

SSD: solid state drive

IC: integrated circuit with a plurality of transistors

CU: coding unit

While the application has described several exemplary embodiments, various modifications, arrangements, and equivalents of the embodiments are within the scope of the application. It will thus be appreciated that those skilled in the art will be able to devise various systems and methods which, although not explicitly shown or described herein, embody the principles of the application and are thus within its spirit and scope.
