Point cloud quality evaluation method, encoder, decoder and storage medium

Document No.: 1956989    Publication date: 2021-12-10

Reading note: this technology, "Point cloud quality evaluation method, encoder, decoder and storage medium", was created by Yuan Hui, Liu Qi and Li Ming on 2020-06-10. Abstract: The embodiment of the application discloses a point cloud quality evaluation method, an encoder, a decoder and a storage medium, wherein the method comprises the following steps: parsing the code stream to obtain characteristic parameters of the point cloud to be evaluated; determining model parameters of a quality evaluation model; and determining a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated.

1. A point cloud quality evaluation method applied to a decoder or a media data processing device, the method comprising:

parsing the code stream to obtain characteristic parameters of the point cloud to be evaluated;

determining model parameters of a quality evaluation model;

and determining a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated.

2. The method according to claim 1, wherein the characteristic parameters of the point cloud to be evaluated comprise quantization parameters of the point cloud to be evaluated; the quantization parameters comprise geometric quantization parameters and color quantization parameters of the point cloud to be evaluated.

3. The method of claim 1, wherein determining model parameters for the quality assessment model comprises:

acquiring a subjective quality test data set;

fitting a model parameter function based on the subjective quality test dataset; the model parameter function is used for reflecting the corresponding relation between the model parameters and the characteristic parameters;

and calculating the model parameters according to the acquired characteristic parameters and the model parameter function.
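The fitting step in claim 3 could be sketched as follows. Everything here is an assumption for illustration: the application does not fix a functional form, so a least-squares polynomial stands in for the model parameter function, and the function and variable names are placeholders.

```python
import numpy as np

def fit_model_parameter_function(char_params, observed_model_params, degree=2):
    """Fit a polynomial mapping a characteristic parameter (e.g. a QP)
    to one model parameter, from a subjective-quality test data set.
    The polynomial form is an assumption, not taken from the application."""
    coeffs = np.polyfit(char_params, observed_model_params, degree)
    return np.poly1d(coeffs)

def model_parameter_for(char_param, param_fn):
    """Evaluate the fitted function at the acquired characteristic parameter."""
    return float(param_fn(char_param))
```

With a quadratic ground truth, the fit is recovered essentially exactly, so evaluating at an unseen QP returns the underlying value.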

4. The method of claim 1, wherein determining model parameters for the quality assessment model comprises:

and selecting model parameters for the point cloud to be evaluated from one or more preset groups of candidate quality evaluation model parameters.

5. The method of claim 1, wherein determining model parameters for the quality assessment model comprises:

performing feature extraction on the point cloud to be evaluated by using a first computation submodel to obtain a first feature value of the point cloud to be evaluated;

performing feature extraction on the point cloud to be evaluated by using a second computation submodel to obtain a second feature value of the point cloud to be evaluated;

determining the model parameters according to the first feature value, the second feature value and a preset vector matrix;

wherein the first computation submodel is used for extracting, from the point cloud to be evaluated, a characteristic value reflecting color fluctuation over geometric distance, and the second computation submodel is used for extracting, from the point cloud to be evaluated, a characteristic value reflecting the mean variance of color blocks.

6. The method of claim 5, wherein the performing feature extraction on the point cloud to be evaluated by using the first computation submodel to obtain a first feature value of the point cloud to be evaluated comprises:

calculating a first characteristic value corresponding to one or more points in the point cloud to be evaluated;

and performing weighted mean calculation on the first characteristic values corresponding to the one or more points, and determining the obtained weighted mean as the first characteristic value of the point cloud to be evaluated.

7. The method of claim 6, wherein the calculating a first feature value corresponding to one or more points in the point cloud to be evaluated comprises:

for a current point in the point cloud to be evaluated, determining a set of neighboring points associated with the current point; wherein the set of neighboring points includes at least one neighboring point;

for the set of neighboring points, calculating the color intensity difference per unit distance between the current point and the at least one neighboring point, to obtain at least one color intensity difference per unit distance;

and calculating the weighted average of the at least one color intensity difference per unit distance to obtain a first characteristic value corresponding to the current point.

8. The method of claim 7, wherein calculating the color intensity difference value of the current point and the at least one neighboring point over a unit distance comprises:

obtaining a first color intensity value of a first color component of the current point and a second color intensity value of the first color component of the at least one neighboring point;

calculating the absolute difference between the first color intensity value of the current point and the second color intensity value of the at least one neighboring point to obtain the color intensity difference between the current point and the at least one neighboring point;

and determining the color intensity difference per unit distance between the current point and the at least one neighboring point according to the color intensity difference between them and the distance between the current point and the at least one neighboring point.
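Claims 6 to 8 together describe how the first feature value is computed. A minimal sketch follows, assuming equal weights for the unspecified weighted means, a single color component (e.g. luma) per point, and brute-force neighbor search; all names are illustrative.

```python
import numpy as np

def first_feature_value(points, colors, k=5):
    """Sketch of claims 6-8: color fluctuation over geometric distance.
    `points` is an (n, 3) array; `colors` is an (n,) array holding one
    color component per point. Equal weights stand in for the
    unspecified weighted means."""
    n = len(points)
    per_point = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        d[i] = np.inf                          # exclude the point itself
        nn = np.argsort(d)[:min(k, n - 1)]     # neighboring point set
        # color intensity difference per unit distance (claim 8)
        diffs = np.abs(colors[nn] - colors[i]) / d[nn]
        per_point[i] = diffs.mean()            # claim 7: mean over neighbors
    return float(per_point.mean())             # claim 6: mean over points
```

For three collinear points with a constant color gradient of 10 per unit distance, the feature value is 10.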

9. The method of claim 5, wherein the performing feature extraction on the point cloud to be evaluated by using the second computation submodel to obtain a second feature value of the point cloud to be evaluated comprises:

calculating second characteristic values corresponding to one or more non-empty voxel blocks in the point cloud to be evaluated;

and performing weighted mean calculation on the second characteristic values corresponding to the one or more non-empty voxel blocks, and determining the obtained weighted mean as the second characteristic value of the point cloud to be evaluated.

10. The method of claim 9, wherein the calculating a second feature value corresponding to one or more non-empty voxel blocks in the point cloud to be evaluated comprises:

for a current non-empty voxel block in the point cloud to be evaluated, obtaining a third color intensity value of a first color component of at least one point in the current non-empty voxel block;

calculating a weighted mean of the third color intensity values of the at least one point in the current non-empty voxel block to obtain a color intensity mean of the current non-empty voxel block;

for the at least one point in the current non-empty voxel block, determining a color standard deviation of the at least one point by using the third color intensity value and the color intensity mean of the current non-empty voxel block;

and calculating the weighted mean value of the color standard deviation of the at least one point to obtain a second characteristic value corresponding to the current non-empty voxel block.
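Claims 9 and 10 describe the second feature value over non-empty voxel blocks. A sketch under the assumptions of equal weights, a cubic block partition, and a single color component per point; all names are illustrative.

```python
import numpy as np

def second_feature_value(points, colors, block=4.0):
    """Sketch of claims 9-10: mean variance of color blocks. Points are
    bucketed into block x block x block voxel blocks; each non-empty
    block contributes the standard deviation of its points' color
    component around the block mean; blocks are averaged with equal
    weights (the claimed weighting is unspecified)."""
    keys = np.floor(points / block).astype(int)
    stds = []
    for key in np.unique(keys, axis=0):        # non-empty voxel blocks only
        vals = colors[np.all(keys == key, axis=1)]
        mean = vals.mean()                     # color intensity mean of the block
        stds.append(np.sqrt(((vals - mean) ** 2).mean()))
    return float(np.mean(stds))
```

Two blocks with per-block standard deviations 2 and 0 yield a feature value of 1.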

11. The method of claim 5, further comprising:

acquiring a subjective quality test data set;

and training the subjective quality test data set to obtain the preset vector matrix.

12. The method of claim 5, further comprising:

selecting the preset vector matrix for determining the model parameters from one or more preset candidate vector matrices.

13. The method of claim 5, wherein determining the model parameters according to the first eigenvalue, the second eigenvalue and a preset vector matrix comprises:

constructing a feature vector based on a preset constant value, the first feature value and the second feature value;

multiplying the feature vector by the preset vector matrix to obtain a model parameter vector; wherein the model parameter vector comprises a first model parameter, a second model parameter, and a third model parameter;

determining the first model parameter, the second model parameter, and the third model parameter as the model parameters.

14. The method according to claim 13, wherein the predetermined constant value is an integer.
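The construction in claims 13 and 14 amounts to a single vector-matrix product. A sketch in which the 3x3 matrix and the constant c = 1 are placeholders, not values taken from the application:

```python
import numpy as np

def model_parameters(f1, f2, weight_matrix, c=1):
    """Sketch of claims 13-14: build the feature vector [c, f1, f2] from
    a preset integer constant c and the two feature values, then multiply
    it by the preset vector matrix to obtain the three model parameters."""
    v = np.array([c, f1, f2], dtype=float)
    p = v @ weight_matrix                      # 1x3 vector times 3x3 matrix
    return tuple(p)                            # (first, second, third)
```

With the identity matrix as a trivial stand-in for the trained matrix, the model parameters are just (c, f1, f2).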

15. A point cloud quality evaluation method applied to an encoder or a media data processing device, the method comprising:

determining characteristic parameters of the point cloud to be evaluated;

determining model parameters of a quality evaluation model;

and determining a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated.

16. The method of claim 15, wherein determining the characteristic parameters of the point cloud to be evaluated comprises:

acquiring a pre-coding parameter of the point cloud to be evaluated;

determining the characteristic parameters of the point cloud to be evaluated according to the pre-coding parameters and a preset lookup table; the preset lookup table is used for reflecting the corresponding relation between the coding parameters and the characteristic parameters.
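The lookup in claim 16 could be as simple as a keyed table. The key layout below (a geometry/color QP pair) and all names are assumptions; the application does not specify the table's structure.

```python
# Hypothetical preset lookup table mapping pre-coding parameters to
# characteristic parameters (here, the geometry and color quantization
# parameters named in claim 17).
PRESET_LUT = {
    (32, 42): {"geometry_qp": 32, "color_qp": 42},
    (28, 38): {"geometry_qp": 28, "color_qp": 38},
}

def characteristic_params(precoding_params, lut=PRESET_LUT):
    """Claim 16 sketch: a pure table lookup, no computation."""
    key = (precoding_params["geom_qp"], precoding_params["color_qp"])
    return lut[key]
```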

17. The method according to claim 15 or 16, wherein the characteristic parameters of the point cloud to be evaluated comprise quantization parameters of the point cloud to be evaluated; the quantization parameters comprise geometric quantization parameters and color quantization parameters of the point cloud to be evaluated.

18. The method of claim 15, wherein determining the model parameters of the quality assessment model comprises:

acquiring a subjective quality test data set;

fitting a model parameter function based on the subjective quality test dataset; the model parameter function is used for reflecting the corresponding relation between the model parameters and the characteristic parameters;

and calculating the model parameters according to the acquired characteristic parameters and the model parameter function.

19. The method of claim 15, wherein determining the model parameters of the quality assessment model comprises:

and selecting model parameters for the point cloud to be evaluated from one or more preset groups of candidate quality evaluation model parameters.

20. The method of claim 15, wherein determining the model parameters of the quality assessment model comprises:

performing feature extraction on the point cloud to be evaluated by using a first computation submodel to obtain a first feature value of the point cloud to be evaluated;

performing feature extraction on the point cloud to be evaluated by using a second computation submodel to obtain a second feature value of the point cloud to be evaluated;

determining the model parameters according to the first feature value, the second feature value and a preset vector matrix;

wherein the first computation submodel is used for extracting, from the point cloud to be evaluated, a characteristic value reflecting color fluctuation over geometric distance, and the second computation submodel is used for extracting, from the point cloud to be evaluated, a characteristic value reflecting the mean variance of color blocks.

21. The method of claim 20, wherein the performing feature extraction on the point cloud to be evaluated by using the first computation submodel to obtain a first feature value of the point cloud to be evaluated comprises:

calculating a first characteristic value corresponding to one or more points in the point cloud to be evaluated;

and performing weighted mean calculation on the first characteristic values corresponding to the one or more points, and determining the obtained weighted mean as the first characteristic value of the point cloud to be evaluated.

22. The method of claim 21, wherein the calculating a first feature value corresponding to one or more points in the point cloud to be evaluated comprises:

for a current point in the point cloud to be evaluated, determining a set of neighboring points associated with the current point; wherein the set of neighboring points includes at least one neighboring point;

for the set of neighboring points, calculating the color intensity difference per unit distance between the current point and the at least one neighboring point, to obtain at least one color intensity difference per unit distance;

and calculating the weighted average of the at least one color intensity difference per unit distance to obtain a first characteristic value corresponding to the current point.

23. The method of claim 22, wherein said calculating a color intensity difference value of said current point and said at least one neighboring point over a unit distance comprises:

obtaining a first color intensity value of a first color component of the current point and a second color intensity value of the first color component of the at least one neighboring point;

calculating the absolute difference between the first color intensity value of the current point and the second color intensity value of the at least one neighboring point to obtain the color intensity difference between the current point and the at least one neighboring point;

and determining the color intensity difference per unit distance between the current point and the at least one neighboring point according to the color intensity difference between them and the distance between the current point and the at least one neighboring point.

24. The method of claim 20, wherein the performing feature extraction on the point cloud to be evaluated by using the second computation submodel to obtain a second feature value of the point cloud to be evaluated comprises:

calculating second characteristic values corresponding to one or more non-empty voxel blocks in the point cloud to be evaluated;

and performing weighted mean calculation on the second characteristic values corresponding to the one or more non-empty voxel blocks, and determining the obtained weighted mean as the second characteristic value of the point cloud to be evaluated.

25. The method of claim 24, wherein the calculating a second feature value corresponding to one or more non-empty voxel blocks in the point cloud to be evaluated comprises:

for a current non-empty voxel block in the point cloud to be evaluated, obtaining a third color intensity value of a first color component of at least one point in the current non-empty voxel block;

calculating a weighted mean of the third color intensity values of the at least one point in the current non-empty voxel block to obtain a color intensity mean of the current non-empty voxel block;

for the at least one point in the current non-empty voxel block, determining a color standard deviation of the at least one point by using the third color intensity value and the color intensity mean of the current non-empty voxel block;

and calculating the weighted mean value of the color standard deviation of the at least one point to obtain a second characteristic value corresponding to the current non-empty voxel block.

26. The method of claim 20, further comprising:

acquiring a subjective quality test data set;

and training the subjective quality test data set to obtain the preset vector matrix.

27. The method of claim 20, further comprising:

selecting the preset vector matrix for determining the model parameters from one or more preset candidate vector matrices.

28. The method of claim 20, wherein determining the model parameters according to the first eigenvalue, the second eigenvalue and a preset vector matrix comprises:

constructing a feature vector based on a preset constant value, the first feature value and the second feature value;

multiplying the feature vector by the preset vector matrix to obtain a model parameter vector; wherein the model parameter vector comprises a first model parameter, a second model parameter, and a third model parameter;

determining the first model parameter, the second model parameter, and the third model parameter as the model parameters.

29. The method according to claim 28, wherein the predetermined constant value is an integer.

30. A decoder, characterized in that the decoder comprises a parsing unit, a first determining unit and a first calculating unit; wherein:

the parsing unit is configured to parse the code stream to obtain the characteristic parameters of the point cloud to be evaluated;

the first determination unit is configured to determine model parameters of a quality assessment model;

the first computing unit is configured to determine a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameter and the characteristic parameter of the point cloud to be evaluated.

31. A decoder, comprising a first memory and a first processor; wherein:

the first memory for storing a computer program operable on the first processor;

the first processor, when executing the computer program, is configured to perform the method of any of claims 1 to 14.

32. An encoder, characterized in that the encoder comprises a second determining unit and a second calculating unit; wherein:

the second determining unit is configured to determine the characteristic parameters of the point cloud to be evaluated;

the second determination unit is further configured to determine model parameters of the quality assessment model;

the second computing unit is configured to determine a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameter and the characteristic parameter of the point cloud to be evaluated.

33. An encoder, characterized in that the encoder comprises a second memory and a second processor; wherein:

the second memory for storing a computer program operable on the second processor;

the second processor, when executing the computer program, is configured to perform the method of any of claims 15 to 29.

34. A computer storage medium, characterized in that it stores a computer program which, when executed by a first processor, implements the method of any one of claims 1 to 14, or which, when executed by a second processor, implements the method of any one of claims 15 to 29.

Technical Field

The present application relates to the field of video image processing technologies, and in particular, to a point cloud quality evaluation method, an encoder, a decoder, and a storage medium.

Background

In a Video-based Point Cloud Compression (V-PCC) encoder framework, the point cloud distortion metric (PC_error) technique is the reference algorithm for measuring the objective quality of a point cloud. PC_error separately calculates a geometry-based Peak Signal-to-Noise Ratio (PSNR) and a color-based (also called attribute-based) PSNR to represent the objective quality levels of geometry and color, respectively.

In the related art, the geometric PSNR and the color PSNR of a point cloud are calculated independently. When the human visual system perceives a point cloud, however, geometric and color distortion are received simultaneously and jointly determine the final visual experience; the related art therefore cannot accurately reflect the subjective point cloud quality perceived by human eyes.

Disclosure of Invention

The application provides a point cloud quality evaluation method, an encoder, a decoder and a storage medium, which can reduce the computational complexity of subjective quality evaluation and improve its accuracy.

To this end, the technical solutions of the application are implemented as follows:

in a first aspect, an embodiment of the present application provides a point cloud quality evaluation method, applied to a decoder or a media data processing device, the method including:

parsing the code stream to obtain characteristic parameters of the point cloud to be evaluated;

determining model parameters of a quality evaluation model;

and determining a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated.

In a second aspect, an embodiment of the present application provides a point cloud quality evaluation method, applied to an encoder or a media data processing device, the method including:

determining characteristic parameters of the point cloud to be evaluated;

determining model parameters of a quality evaluation model;

and determining a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated.

In a third aspect, an embodiment of the present application provides a decoder, where the decoder includes a parsing unit, a first determining unit, and a first calculating unit; wherein:

the parsing unit is configured to parse the code stream to obtain the characteristic parameters of the point cloud to be evaluated;

the first determination unit is configured to determine model parameters of a quality assessment model;

the first computing unit is configured to determine a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameter and the characteristic parameter of the point cloud to be evaluated.

In a fourth aspect, an embodiment of the present application provides a decoder, including a first memory and a first processor; wherein:

the first memory for storing a computer program operable on the first processor;

the first processor, when executing the computer program, is configured to perform the method according to the first aspect.

In a fifth aspect, an embodiment of the present application provides an encoder, which includes a second determining unit and a second calculating unit; wherein:

the second determining unit is configured to determine the characteristic parameters of the point cloud to be evaluated;

the second determination unit is further configured to determine model parameters of the quality assessment model;

the second computing unit is configured to determine a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameter and the characteristic parameter of the point cloud to be evaluated.

In a sixth aspect, an embodiment of the present application provides an encoder, which includes a second memory and a second processor; wherein:

the second memory for storing a computer program operable on the second processor;

the second processor is adapted to perform the method according to the second aspect when running the computer program.

In a seventh aspect, the present application provides a computer storage medium storing a computer program, where the computer program implements the method according to the first aspect when executed by a first processor or implements the method according to the second aspect when executed by a second processor.

According to the point cloud quality evaluation method, the encoder, the decoder and the storage medium, after the characteristic parameters of the point cloud to be evaluated are obtained, the model parameters of the quality evaluation model are determined, and the subjective quality measurement value of the point cloud to be evaluated is determined by using the quality evaluation model according to the model parameters and the characteristic parameters. Using the quality evaluation model can thus improve the accuracy of subjective quality assessment. Moreover, the quality evaluation model needs only the original point cloud and the feature values extracted from it; neither the distorted point cloud nor matched point pairs between the original and distorted point clouds are required, which reduces the computational complexity of subjective quality evaluation.

Drawings

Fig. 1 is a schematic diagram illustrating positions of point-to-point distortion and point-to-surface distortion according to an embodiment of the present disclosure;

fig. 2 is a schematic diagram of a framework of a V-PCC encoding process according to an embodiment of the present application;

fig. 3 is a schematic diagram of a framework of a V-PCC decoding process according to an embodiment of the present application;

fig. 4 is a schematic flow chart of a point cloud quality evaluation method according to an embodiment of the present disclosure;

fig. 5 is a schematic flowchart of another point cloud quality evaluation method according to an embodiment of the present disclosure;

FIG. 6 is a schematic diagram illustrating a relative position between a point reflecting color fluctuation at a geometric distance and a neighboring point according to an embodiment of the present disclosure;

FIG. 7 is a schematic structural diagram of a non-empty voxel block reflecting the average variance of color blocks according to the present embodiment;

fig. 8 is a schematic flowchart of another point cloud quality evaluation method according to an embodiment of the present disclosure;

fig. 9 is a schematic structural diagram of a decoder according to an embodiment of the present application;

fig. 10 is a schematic hardware structure diagram of a decoder according to an embodiment of the present application;

fig. 11 is a schematic structural diagram of an encoder according to an embodiment of the present disclosure;

fig. 12 is a schematic hardware structure diagram of an encoder according to an embodiment of the present disclosure.

Detailed Description

The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings. It is to be understood that the specific embodiments described herein merely illustrate the relevant application and do not limit it. It should also be noted that, for convenience of description, only the parts relevant to the present application are shown in the drawings.

It should be understood that the set of points sampled from a product's surface by a measuring instrument in reverse engineering may be called a point cloud. A three-dimensional coordinate measuring machine yields few, widely spaced points, producing what may be called a sparse point cloud; a three-dimensional laser scanner or photographic scanner yields many, densely spaced points, producing a dense point cloud. Each point may include geometric information (position, i.e. x, y, z coordinates) and attribute information (such as color, i.e. R, G, B values).

Specifically, the related art generally adopts the PC_error objective point cloud distortion calculation technique, the reference algorithm for measuring the objective quality of a point cloud. PC_error calculates the PSNR of geometry and the PSNR of color to represent their respective objective quality levels, and these metrics can be classified into the geometric PSNR based on point-to-point geometric distortion (D1), the geometric PSNR based on point-to-plane geometric distortion (D2), and the color PSNR. The PSNR calculation for these three aspects is described below.

(1) Geometric PSNR based on point-to-point geometric distortion (D1). The calculation model computes the geometric mean square error (MSE) from the coordinate differences of matched point pairs between the reference point cloud and the distorted point cloud, and then derives the geometric PSNR from the MSE. As shown in Fig. 1, the specific implementation is as follows. Let A and B denote the reference point cloud and the compressed point cloud, respectively, and let e_{B,A} denote the compression error of point cloud B relative to the reference point cloud A. For each point b_i in the compressed point cloud B (the white filled dots in Fig. 1), the corresponding point a_j in the reference point cloud A (the black filled dots in Fig. 1) is identified as its nearest neighbor; a multidimensional tree (KD-tree) search is used to perform the nearest-neighbor search to reduce computational complexity. Associating the corresponding point a_j with the point b_i gives the error vector E(i, j), whose length defines the point-to-point error:

e_{B,A}^{D1}(i) = ||E(i, j)||²

Averaging the point-to-point errors over the N_B points b_i ∈ B, the D1 error of the entire point cloud can be defined as:

e_{B,A}^{D1} = (1/N_B) · Σ_{b_i ∈ B} ||E(i, j)||²

Similarly, e_{A,B}^{D1} can be obtained in the same manner, and the final symmetric error is e^{D1} = max(e_{B,A}^{D1}, e_{A,B}^{D1}).

That is, the PSNR value based on point-to-point geometric distortion (D1) is calculated as:

PSNR_{D1} = 10 · log10(3p² / MSE_{D1})

where p is the peak constant predefined by MPEG for each reference point cloud, and MSE_{D1} is the point-to-point (D1) mean square error. Taking the test sequence 8iVFB Longdress as an example, p can be defined as 1023, but is not limited thereto.
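The D1 computation described above can be sketched as follows (one direction only; the symmetric max over both directions and the KD-tree search of PC_error are omitted for brevity, with brute-force nearest-neighbor matching in their place):

```python
import numpy as np

def d1_psnr(ref, dist, peak=1023):
    """Point-to-point (D1) geometric PSNR, one direction: for each point
    of the distorted cloud, find the nearest reference point, average the
    squared error-vector lengths, then PSNR = 10*log10(3*p^2 / MSE)."""
    mse = 0.0
    for b in dist:
        mse += float(np.min(np.sum((ref - b) ** 2, axis=1)))  # ||E(i,j)||^2
    mse /= len(dist)
    return float("inf") if mse == 0 else 10.0 * np.log10(3 * peak**2 / mse)
```

For a two-point cloud where one point is displaced by one unit, the MSE is 0.5 and the PSNR follows directly from the formula above.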

(2) Geometric PSNR based on point-to-plane geometric distortion (D2). The calculation model computes the geometric MSE from the dot product of each matched pair's coordinate-difference vector with the corresponding normal vector, and then derives the geometric PSNR from the MSE. Still taking Fig. 1 as an example, projecting the error vector E(i, j) along the normal direction N_j gives a new error vector Ê(i, j), so the point-to-plane error is calculated as:

e_{B,A}^{D2}(i) = (E(i, j) · N_j)²

Further, the point-to-plane error (D2) of the entire point cloud can be defined as:

e_{B,A}^{D2} = (1/N_B) · Σ_{b_i ∈ B} (E(i, j) · N_j)²

Similarly, e_{A,B}^{D2} can be obtained in the same manner, and the final symmetric error is e^{D2} = max(e_{B,A}^{D2}, e_{A,B}^{D2}).

That is, the PSNR value based on point-to-plane geometric distortion (D2) is calculated as:

PSNR_{D2} = 10 · log10(3p² / MSE_{D2})

where p is the peak constant predefined by MPEG for each reference point cloud, and MSE_{D2} is the point-to-plane (D2) mean square error. Here, again taking the test sequence 8iVFB Longdress as an example, p can also be defined as 1023, but is not limited thereto.
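The D2 computation can be sketched similarly (one direction only; unit normals for the reference points are assumed to be given):

```python
import numpy as np

def d2_mse(ref, dist, normals):
    """Point-to-plane (D2) MSE, one direction: project each error vector
    E(i, j) onto the unit normal N_j of the nearest reference point and
    average the squared projections."""
    total = 0.0
    for b in dist:
        j = int(np.argmin(np.sum((ref - b) ** 2, axis=1)))
        e = b - ref[j]                         # error vector E(i, j)
        total += float(np.dot(e, normals[j])) ** 2
    return total / len(dist)
```

Note how the tangential part of the error vanishes: a point offset by (1, 0, 2) against a normal (0, 0, 1) contributes only the squared normal component, 4.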

(3) Color PSNR based on color distortion. The calculation model computes the color MSE using the color difference values of the matched point pairs of the reference point cloud and the distorted point cloud, and then calculates the color PSNR from the computed MSE. Here, for lossy attribute encoding, the color PSNR value is calculated as follows,

PSNR_{color} = 10 · log10(p² / MSE)

where MSE is the mean square error of the color component over the matched point pairs.
It should be noted that, as in video images, the color attribute of each point in the current point cloud is generally represented using a first color component, a second color component and a third color component. In the RGB space, the three color components are red (denoted by R), green (denoted by G) and blue (denoted by B); in the YUV space, the three color components are a luminance component (represented by the Y component), a first chrominance component (represented by the U component) and a second chrominance component (represented by the V component).

Thus, for the color attribute, the MSE for each of the three color components may be calculated. Here, the conversion from RGB space to YUV space is performed using the ITU-R bt.709 standard. Since the bit depth of each point is 8 bits for the color attribute of all test data, the peak value p of PSNR calculation may be 255.
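A short sketch of the color PSNR step, using the simplified full-range BT.709 luma weights (the broadcast standard also defines an offset, video-range form, omitted here) and the peak value p = 255 for 8-bit attributes:

```python
import math

def bt709_luma(r, g, b):
    """Full-range luma per the ITU-R BT.709 weights for 8-bit samples in
    0..255 (a simplification of the standard's video-range conversion)."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def color_psnr(ref_y, dist_y, peak=255.0):
    """Colour PSNR over the Y component of matched point pairs,
    with peak value p = 255 for 8-bit colour attributes."""
    mse = sum((a - b) ** 2 for a, b in zip(ref_y, dist_y)) / len(ref_y)
    return 10.0 * math.log10(peak ** 2 / mse)

ref_y = [bt709_luma(200, 120, 60), bt709_luma(30, 30, 30)]
dist_y = [y + 2.0 for y in ref_y]  # uniform luma error of 2 -> MSE = 4
print(round(color_psnr(ref_y, dist_y), 2))  # -> 42.11
```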

Therefore, in the related technical scheme the PSNR of the geometry and the PSNR of the color of the point cloud are calculated independently. However, when the human visual system perceives a point cloud, the geometric and color quality distortions are received simultaneously and jointly determine the final visual experience, so the related technical scheme cannot accurately reflect the subjective point cloud quality perceived by human eyes.

In order to accurately predict the point cloud quality as perceived by human vision, the embodiment of the application provides a point cloud quality evaluation method: after the characteristic parameters of the point cloud to be evaluated are obtained, the model parameters of a quality evaluation model are determined; then, according to the model parameters and the characteristic parameters of the point cloud to be evaluated, the quality evaluation model is used to determine a subjective quality measurement value of the point cloud to be evaluated. In this way, the accuracy of subjective quality assessment can be improved by using the quality assessment model. Moreover, the quality evaluation model of the technical scheme of the application only needs the original point cloud and the characteristic values extracted from it; it requires neither the distorted point cloud nor the matching point pairs between the original point cloud and the distorted point cloud, which simplifies the computational complexity of subjective quality evaluation.

It is also noted that in order to obtain good visual quality, a sufficient density of the point cloud is required, which results in a large amount of point cloud data. Currently, the Moving Picture Experts Group (MPEG) proposes two point cloud compression techniques: one is the Video-based Point Cloud Compression (V-PCC) technique, and the other is the Geometry-based Point Cloud Compression (G-PCC) technique; the embodiment of the present application will be described in detail taking the V-PCC technique as an example.

The main idea of V-PCC is to compress the geometric and color information of the dynamic point cloud using existing video codecs. Fig. 2 is a schematic diagram illustrating a framework of a V-PCC encoding flow according to an embodiment of the present application. In Fig. 2, the V-PCC encoding flow framework is applied to a point cloud encoder. An input Three-Dimensional Point Cloud (3DPC) is decomposed into a set of patches; these patches can be independently mapped into a two-dimensional grid by simple orthogonal projection without suffering from self-occlusion and without requiring resampling of the point cloud geometry. Further, the extracted patches are mapped to the two-dimensional grid through a packing process to generate an occupancy map; the occupancy map is a binary map indicating whether each cell of the grid belongs to empty space or to the point cloud. A geometric image and a color image are generated using the patch information, the occupancy map information and so on; after image padding, a padded geometric image and a padded color image are obtained. Here, the padding process is intended to fill the empty spaces between the patches in an attempt to generate piecewise-smooth images that may be more suitable for video encoding, while the smoothing process aims to mitigate potential discontinuities that may occur at patch boundaries due to compression distortion; the point cloud geometry reconstruction process utilizes the occupancy map information in order to detect the non-empty pixels in the geometric image. After video compression with an existing video encoder, such as H.265/High Efficiency Video Coding (HEVC), a compressed geometric video and a compressed color video are obtained.
In addition, the occupancy map and the auxiliary patch information are respectively compressed to obtain a compressed occupancy map and compressed auxiliary patch information; these compressed data are then multiplexed together to generate the final compressed V-PCC bit stream of the point cloud.

That is, the basic principle of the V-PCC encoding flow framework shown in Fig. 2 is to utilize a video encoder for point cloud compression. For an input three-dimensional point cloud, this is basically achieved by decomposing each frame of the three-dimensional point cloud sequence into a set of patches that are independently mapped into a two-dimensional grid of uniform blocks. The mapping is then used to store the geometric information and the color information as a geometric image and a color image, respectively. Then, the geometric image sequence and the color image sequence corresponding to the dynamic point cloud are compressed separately using an existing video encoder, such as H.265/HEVC. Finally, the dynamic three-dimensional point cloud is reconstructed using the geometry and color videos together with the metadata (the occupancy map of the two-dimensional grid, the auxiliary patch/block information, etc.). The bit stream of the compressed three-dimensional point cloud is thus composed of two parts: geometric information and color information. For a given platform, the size of each part is controlled by a quantization parameter, which can take a wide range of values; at the same time, quantization introduces distortion, affecting the reconstruction quality.

Fig. 3 is a schematic diagram illustrating a framework of a V-PCC decoding process according to an embodiment of the present application. In Fig. 3, the V-PCC decoding flow framework is applied to a point cloud decoder. After acquiring the compressed bit stream, the point cloud decoder first separates it by demultiplexing into a compressed geometric video, a compressed color video, a compressed occupancy map, compressed auxiliary patch information and the like. Then the compressed geometric video and the compressed color video are subjected to video decompression to obtain a decompressed geometric video and a decompressed color video; the compressed occupancy map is processed by occupancy map decompression to obtain a decompressed occupancy map; the compressed auxiliary patch information is processed by auxiliary patch information decompression to obtain decompressed auxiliary patch information. Finally, the three-dimensional point cloud input to the point cloud encoder can be restored through geometric reconstruction, smoothing processing, color reconstruction and the like.

Here, the point cloud quality evaluation method provided by the embodiment of the present application may be applied to a point cloud encoder or a point cloud decoder, and may even be applied to both at the same time. Therefore, if the point cloud encoder obtains a better prediction effect through the point cloud quality evaluation method provided by the embodiment of the application, a correspondingly better prediction effect can also be obtained in the point cloud decoder.

Based on this, the technical solution of the present application is further elaborated below with reference to the drawings and the embodiments. Before the detailed description is given, it should be noted that "first", "second", "third", etc. are mentioned throughout the specification only for distinguishing different features, and do not have the functions of defining priority, precedence, size relationship, etc.

The embodiment of the application provides a point cloud quality evaluation method, which is applied to a point cloud decoder, and can be called as a decoder for short. The functions performed by the method may be implemented by the first processor in the decoder calling a computer program, which of course may be stored in the first memory, it being understood that the decoder comprises at least the first processor and the first memory.

Referring to fig. 4, a schematic flow chart of a point cloud quality evaluation method according to an embodiment of the present application is shown. As shown in fig. 4, the method may include:

s401: and analyzing the code stream to obtain the characteristic parameters of the point cloud to be evaluated.

It should be noted that the point cloud quality evaluation method according to the embodiment of the present application may be applied to a decoder, and may also be applied to a media data processing device. In practical applications, the media data processing device may appear in a network for purposes such as network optimization and quality evaluation, for example in a Content Delivery Network (CDN). That is, the embodiments of the present application are not limited to a traditional user-side playback device that includes a decoder, but may also apply to other devices that include a decoder, which is not limited herein.

It should be noted that the points considered in the point cloud may be all of the points in the point cloud, or may be a subset of the points that are relatively concentrated in space.

In some embodiments, the characteristic parameters of the point cloud to be evaluated may include quantization parameters of the point cloud to be evaluated; the quantization parameters can comprise geometric quantization parameters and color quantization parameters of the point cloud to be evaluated.

Here, the geometric quantization parameter may be denoted by QSg and is used to indicate the quantization step size value of the geometric video sequence; the color quantization parameter may be denoted by QSc and is used to indicate the quantization step size value of the color/attribute video sequence.

In the embodiment of the present application, both the geometric quantization parameter and the color quantization parameter are determined on the encoder side according to the encoding parameters; the geometric quantization parameter and the color quantization parameter are obtained only after the encoding parameters are determined, and the two parameters are written into the code stream. Therefore, the two parameters can be obtained by parsing the code stream on the decoder side, so that the subjective quality of the point cloud to be evaluated can subsequently be evaluated.

S402: model parameters of the quality assessment model are determined.

In the embodiment of the application, in order to accurately predict the point cloud quality as perceived by human vision, distorted point clouds of different levels generated from high-quality original point clouds can first be used to construct a comprehensive and effective point cloud subjective quality test data set using the Double Stimulus Impairment Scale (DSIS) subjective test method; a simple and effective point cloud subjective quality evaluation model, referred to as the quality evaluation model for short, is then proposed on the basis of this data set. Here, the quality evaluation model may characterize the evaluation of the subjective quality of the point cloud to be evaluated according to the model parameters.

That is, a subjective quality test data set needs to be established first. In some embodiments, the establishing the subjective quality test data set may include:

acquiring at least one reference point cloud;

compressing each reference point cloud in the at least one reference point cloud by using different values of the quantization parameters to obtain a plurality of distortion point clouds corresponding to each reference point cloud;

and obtaining the subjective quality test data set based on the at least one reference point cloud and the obtained observation data of the distortion point clouds.

Illustratively, 16 high-quality original point clouds (i.e., reference point clouds) are selected from the Waterloo point cloud data set, and the contents of the point clouds may include fruits, vegetables, snacks, etc. These reference point clouds were compressed using the version 7 test model of V-PCC to obtain distorted point clouds. For each reference point cloud, 25 levels of distorted point clouds are generated by setting 5 geometric quantization parameters (such as 26, 32, 38, 44 and 50) and 5 color quantization parameters (such as 26, 32, 38, 44 and 50). In order to display the three-dimensional point cloud in the two-dimensional video as completely as possible, a horizontal circle and a vertical circle with a radius of 5000 are successively selected as the virtual camera path, with the circle center located at the geometric center of the object; a viewpoint is generated every two degrees of rotation on these circles surrounding the reference point cloud and the distorted point cloud, so that 360 image frames can be generated for each point cloud. Then, the segment of the distorted point cloud and the segment of the reference point cloud are horizontally concatenated into a 10-second video sequence for presentation. 30 testers (15 men and 15 women) sit in front of the screen at a distance of about 2 times the screen height. First a training session is conducted, in which each level of the distorted point cloud is observed in advance so that the testers become familiar with the point cloud quality of each level from poor to good. A formal test session is then conducted.
The test session adopts the DSIS method for subjective testing: the original reference point cloud and the distorted point cloud segments are presented on the screen at the same time, and the observer evaluates the degree of distortion by comparing the difference between them and gives a Mean Opinion Score (MOS) in the range of 0-100; here, the higher the MOS score, the better the subjective quality. Each observer observes for a total of 2 hours and tests 400 data items, divided into 4 test sessions, with a 5-minute rest between every 2 consecutive test sessions. The 400 test data of the final 30 observers are aggregated, and the average MOS score of the observers is used as the final MOS score of each observation. In this way, a subjective quality test data set of the point clouds can be obtained, and the data in this data set is used as the reference standard (ground truth) for subsequently establishing the quality evaluation model.

In this way, after the subjective quality test data set is established in advance, the quality assessment model can be constructed from the subjective quality test data set. Assuming that the variable (100 − MOS) is defined as the offset value of MOS, denoted by MOSc, a point cloud subjective quality assessment model can be obtained from the test data in the subjective quality test data set; that is, the quality assessment model according to the embodiment of the present application is as follows,

MOSc = p1·QSg + p2·QSc + p3    (8)

wherein QSg and QSc respectively represent the geometric quantization parameter and the color quantization parameter, used to indicate the quantization step size values of the geometric video sequence and the color video sequence; p1, p2 and p3 represent the model parameters. The accuracy of the fitted quality assessment model on the subjective quality test data set is shown in Table 1 below.

TABLE 1

Point Cloud p1 p2 p3 SCC RMSE
Bag 0.223 0.183 6.342 0.949 4.954
Banana 0.247 0.08 23.601 0.902 6.336
Biscuits 0.143 0.156 12.072 0.927 4.387
Cake 0.241 0.125 10.489 0.938 5.153
Cauliflower 0.246 0.177 9.773 0.916 6.782
Flowerpot 0.291 0.075 16.212 0.877 8.339
House 0.22 0.269 3.597 0.93 7.059
Litchi 0.195 0.266 3.874 0.914 7.488
Mushroom 0.164 0.225 18.579 0.89 7.262
Ping-pong_bat 0.24 0.221 14.24 0.872 9.243
Puer_tea 0.124 0.297 11.921 0.948 5.568
Pumpkin 0.131 0.223 7.424 0.939 4.898
Ship 0.268 0.068 16.756 0.91 6.438
Statue 0.254 0.142 18.777 0.852 9.011
Stone 0.17 0.291 4.555 0.945 6.026
Tool_box 0.117 0.266 15.152 0.914 6.63
Average - - - 0.914 6.598

In Table 1, p1, p2 and p3 are the model parameters fitted from the subjective quality test data set. The Squared Correlation Coefficient (SCC) represents the correlation between the MOS value predicted according to equation (8) and the actual MOS value, and the Root Mean Square Error (RMSE) represents the root mean square error between the predicted MOS value and the actual MOS value; both are used to measure the deviation between the predicted MOS value and the actual MOS value.

The SCC is closer to 1, which indicates that the deviation between the predicted MOS value and the actual MOS value is smaller, and the prediction accuracy is higher at this time, so that the subjective quality observation effect is better; on the contrary, the more the SCC deviates from 1, the larger the deviation between the predicted MOS value and the actual MOS value is, and at this time, the lower the prediction accuracy is, and the worse the subjective quality observation effect is. For RMSE, the smaller the value of RMSE is, the smaller the deviation between the predicted MOS value and the actual MOS value is, and the higher the prediction accuracy is, the better the subjective quality observation effect is; conversely, the larger the value of RMSE, the larger the deviation between the predicted MOS value and the actual MOS value, at which time the lower the prediction accuracy, the worse the subjective quality observation.

That is, table 1 shows the error between the MOS value predicted by the quality evaluation model and the actual MOS value after fitting the model parameters. As can be seen from the contents in table 1, the SCC values are basically high, and the RMSE values are low, which indicates that the accuracy of the MOS values predicted by equation (8) is high, i.e., the prediction error of the quality evaluation model is small.
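As a numeric sketch of equation (8), the snippet below evaluates the model with the fitted parameters of the "Bag" point cloud from Table 1; the operating point QSg = QSc = 26 is an assumed example, not a value from the text.

```python
def predict_mos(qs_g, qs_c, p1, p2, p3):
    """Quality model of equation (8): MOSc = p1*QSg + p2*QSc + p3,
    where MOSc = 100 - MOS, so the predicted MOS is 100 - MOSc."""
    mos_c = p1 * qs_g + p2 * qs_c + p3
    return 100.0 - mos_c

# Fitted parameters for the "Bag" point cloud from Table 1
mos = predict_mos(26, 26, p1=0.223, p2=0.183, p3=6.342)
print(round(mos, 3))  # -> 83.102
```

Larger quantization steps increase MOSc and therefore lower the predicted MOS, matching the expectation that coarser quantization degrades subjective quality.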

It should be noted that the model parameters refer to the model coefficients used for constructing the quality assessment model. In general, the model parameters may include a plurality of parameters, such as a first model parameter (denoted by p1), a second model parameter (denoted by p2) and a third model parameter (denoted by p3).

The determination of the model parameters can be described in several embodiments below.

In one possible embodiment, the determination may be determined using a fit to the subjective quality test data set. The determining of the model parameters of the quality assessment model may include:

acquiring a subjective quality test data set;

fitting a model parameter function based on the subjective quality test dataset; the model parameter function is used for reflecting the corresponding relation between the model parameters and the characteristic parameters;

and calculating the model parameters according to the acquired characteristic parameters and the model parameter function.

It should be noted that, taking QSg and QSc as the characteristic parameters, the subjective quality test data set is pre-established and at least includes a plurality of distorted point clouds, the QSg and QSc corresponding to each distorted point cloud, and the actual MOS values. After the subjective quality test data set is obtained, a model parameter function can be fitted; the model parameter function at this point reflects the correspondence between the model parameters and QSg and QSc. Knowing QSg and QSc for a certain point cloud to be evaluated, the fitted model parameters, such as p1, p2 and p3 shown in Table 1, can be obtained according to the model parameter function. Then, according to the quality evaluation model shown in equation (8), the MOS value of the point cloud can be predicted.
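The fitting step described above can be sketched as an ordinary least-squares fit of (p1, p2, p3); the normal-equations solver below is an illustrative stand-in for whatever fitting procedure is actually used, and assumes each sample is a (QSg, QSc, MOSc) triple from the subjective quality test data set.

```python
def fit_model_params(samples):
    """Least-squares fit of (p1, p2, p3) in MOSc = p1*QSg + p2*QSc + p3.

    samples: list of (qs_g, qs_c, mos_c) triples. Solves the 3x3 normal
    equations A^T A x = A^T y for rows [qs_g, qs_c, 1] by Gauss-Jordan
    elimination (A^T A is symmetric positive definite for full-rank data).
    """
    ata = [[0.0] * 3 for _ in range(3)]
    aty = [0.0] * 3
    for qs_g, qs_c, mos_c in samples:
        row = (qs_g, qs_c, 1.0)
        for i in range(3):
            aty[i] += row[i] * mos_c
            for j in range(3):
                ata[i][j] += row[i] * row[j]
    for col in range(3):
        pivot = ata[col][col]
        for j in range(3):
            ata[col][j] /= pivot
        aty[col] /= pivot
        for r in range(3):
            if r != col:
                f = ata[r][col]
                for j in range(3):
                    ata[r][j] -= f * ata[col][j]
                aty[r] -= f * aty[col]
    return aty  # [p1, p2, p3]

# Samples generated from known parameters (0.2, 0.15, 10.0) are recovered:
data = [(g, c, 0.2 * g + 0.15 * c + 10.0) for g in (26, 38, 50) for c in (26, 38, 50)]
print([round(p, 6) for p in fit_model_params(data)])  # -> [0.2, 0.15, 10.0]
```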

In another possible embodiment, the model parameters do not necessarily need to be calculated on site, but may be spare data obtained in advance. The determining of the model parameters of the quality assessment model may include:

and selecting a model parameter for the point cloud to be evaluated from one or more preset groups of candidate quality evaluation model parameters.

That is, in the decoder or the media data processing apparatus, one or more sets of candidate quality assessment model parameters are stored in advance. At this time, the model parameters for the point cloud to be evaluated can be directly selected from the preset one or more groups of candidate quality evaluation model parameters, so as to obtain the quality evaluation model.

Further, in order to enable the quality assessment model to be used in an actual encoder or decoder, the embodiment of the present application also provides a prediction method of the model parameter based on the feature.

In yet another possible implementation, the model parameters may be determined based on features extracted from the point cloud. Specifically, the model parameters are mainly predicted from two original point cloud features: the first feature concerns the Color Fluctuation in Geometric Distance (CFGD), and the second feature concerns the Color Block Mean Variance (CBMV). As shown in fig. 5, the method may include:

s501: performing feature extraction on the point cloud to be evaluated by using a first computation submodel to obtain a first feature value of the point cloud to be evaluated;

It should be noted that the first computation submodel represents extracting, from the point cloud to be evaluated, a feature value related to the color fluctuation over geometric distance. Here, the average of the color intensity differences per unit distance between a point and its N neighboring points may be used as the first feature value of the point cloud, i.e., the CFGD value, where N is an integer greater than 0; for example, N is equal to 7, but it is not particularly limited.

For the extracting of the first feature value, in some embodiments, the extracting the feature of the point cloud to be evaluated by using the first computation submodel to obtain the first feature value of the point cloud to be evaluated may include:

calculating a first characteristic value corresponding to one or more points in the point cloud to be evaluated;

and performing weighted mean calculation on the first characteristic values corresponding to the one or more points, and determining the obtained weighted mean as the first characteristic value of the point cloud to be evaluated.

Further, the calculating a first feature value corresponding to one or more points in the point cloud to be evaluated may include:

aiming at a current point in the point cloud to be evaluated, determining a near-neighbor point set associated with the current point; wherein the set of neighbor points includes at least one neighbor point therein;

aiming at the adjacent point set, calculating the color intensity difference value of the current point and the at least one adjacent point on a unit distance to obtain the color intensity difference value on the at least one unit distance;

and calculating the weighted average value of the color intensity difference values on the at least one unit distance to obtain a first characteristic value corresponding to the current point.

Further, the calculating a color intensity difference value of the current point and the at least one neighboring point in a unit distance may include:

obtaining a first color intensity value of the first color component of the current point and a second color intensity value of the first color component of the at least one adjacent point;

calculating the absolute value of the difference between the first color intensity value of the current point and the second color intensity value of the at least one adjacent point to obtain the color intensity difference value of the current point and the at least one adjacent point;

and obtaining the color intensity difference value of the current point and the at least one near-adjacent point in unit distance according to the color intensity difference value of the current point and the at least one near-adjacent point and the distance value between the current point and the at least one near-adjacent point.

It should be noted that, in the embodiment of the present application, for data of a point cloud to be evaluated, such as a first color intensity value of a current point, a first color intensity value of a neighboring point, and the like, at a decoder side, the data may be obtained by parsing a code stream or decoding the code stream.

It should be noted that, for the weighted average, the weighted values may be the same or different. In the case of the same weight value, it can be regarded as an average value, that is, the average value of the equal weight values belongs to a special weighted average value.

That is, as shown in fig. 6, p0 is the current point to be calculated, and p1, p2, …, p7 are the N nearest neighbors of p0, where N is equal to 7. First, the average of the absolute color intensity differences per unit geometric Euclidean distance between the current point and its N neighboring points is calculated as the CFGD value corresponding to the current point; then the CFGD values corresponding to the points of the whole point cloud are averaged, and the obtained average is determined as the CFGD value of the point cloud. Here, the specific calculation formula of the CFGD of the entire point cloud is as follows,

CFGD = (1/T) · Σ_{p_i ∈ P} (1/N) · Σ_{p_j ∈ S} |C(p_i) − C(p_j)| / d_{i,j}    (9)

wherein P represents the point cloud to be evaluated, S represents the neighbor set of point p_i, T represents the number of points in the point cloud, and N represents the number of neighbors in the neighbor set of point p_i. C(p_i) represents the value of the first color component in the color attribute of point p_i, C(p_j) represents the value of the first color component in the color attribute of point p_j, d_{i,j} represents the distance between point p_i and point p_j, and |C(p_i) − C(p_j)| / d_{i,j} represents the absolute color intensity difference per unit distance between point p_i and point p_j.

It should be further noted that the distance here may be a geometric euclidean distance, or a distance calculated according to a morton code, and the like, and the embodiment of the present application is not limited; the first color component may be a Y component, but may be extended to a U component, a V component, or the like, and the embodiment of the present application is not limited thereto.
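Equation (9) can be sketched as follows. Brute-force neighbor search, Euclidean distance, and the Y component as the first color component are assumed; the helper name `cfgd` is illustrative.

```python
import math

def cfgd(points, colors, n_neighbors=7):
    """Equation (9): mean over all points of the average absolute colour
    difference per unit Euclidean distance to the N nearest neighbours."""
    total = 0.0
    for i, p in enumerate(points):
        # (distance, index) pairs to every other point, nearest first
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        acc = 0.0
        for d, j in dists[:n_neighbors]:
            acc += abs(colors[i] - colors[j]) / d
        total += acc / n_neighbors
    return total / len(points)

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
luma = [0.0, 10.0, 20.0]
print(cfgd(pts, luma, n_neighbors=2))  # -> 10.0
```

In the example the luma changes by 10 per unit of distance everywhere, so the per-unit-distance fluctuation, and hence the CFGD, is exactly 10.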

S502: performing feature extraction on the point cloud to be evaluated by using a second computation submodel to obtain a second feature value of the point cloud to be evaluated;

It should be noted that the second computation submodel represents extracting, from the point cloud to be evaluated, a feature value related to the color block mean variance. Here, the mean over the non-empty voxel blocks of the color deviations of the points within each block may be used as the second feature value of the point cloud, i.e., the CBMV value.

For the extraction of the second feature value, in some embodiments, the performing feature extraction on the point cloud to be evaluated by using the second computation submodel to obtain the second feature value of the point cloud to be evaluated may include:

calculating second characteristic values corresponding to one or more non-empty pixel blocks in the point cloud to be evaluated;

and performing weighted mean calculation on second characteristic values corresponding to the one or more non-empty pixel blocks, and determining the obtained weighted mean value as a second characteristic value of the point cloud to be evaluated.

Further, the calculating a second feature value corresponding to one or more non-empty voxel blocks in the point cloud to be evaluated may include:

aiming at a current non-empty pixel block in the point cloud to be evaluated, obtaining a third color intensity value of a first color component of at least one point in the current non-empty pixel block;

calculating a weighted mean value of third color intensity values of at least one point in the current non-empty pixel block to obtain a color intensity mean value of the current non-empty pixel block;

for at least one point in the current non-empty pixel block, determining a color standard deviation of the at least one point by using the third color intensity value and the color intensity average value of the current non-empty pixel block;

and calculating the weighted mean value of the color standard deviation of the at least one point to obtain a second characteristic value corresponding to the current non-empty voxel block.

It should be noted that the non-empty voxel block indicates that at least one point is included in the voxel block. In addition, in the embodiment of the present application, for data of a point cloud to be evaluated, such as a third color intensity value of at least one point in a current non-empty voxel block, on a decoder side, the data may also be obtained by parsing a code stream or decoding the code stream.

It should be noted that, as shown in fig. 7, the entire point cloud is first divided into a plurality of H × W × L voxel blocks, where H, W and L are integers greater than 0, such as 8 × 8 × 8 voxel blocks. Indicated within the white box is the i-th non-empty voxel block, and p_{i1}, p_{i2}, …, p_{iM} represent all the points within this non-empty voxel block. The specific calculation formula of the CBMV of the entire point cloud is then as follows,

CBMV = (1/K) · Σ_{i=1}^{K} (1/M) · Σ_{j=1}^{M} (C(p_{ij}) − μ_i)²    (10)

wherein K represents the number of non-empty voxel blocks in the point cloud (i.e., voxel blocks containing at least one point of the point cloud), M represents the number of points in the i-th non-empty voxel block, C(p_{ij}) denotes the value of the first color component in the color attribute of point p_{ij}, and μ_i represents the mean value of the first color component over all the points in the i-th non-empty voxel block; here, (C(p_{ij}) − μ_i)² represents the squared color deviation of point p_{ij} within the i-th non-empty voxel block.

In this way, a first characteristic value (i.e., CFGD value) of the point cloud to be evaluated can be calculated according to equation (9), and a second characteristic value (i.e., CBMV value) of the point cloud to be evaluated can be calculated according to equation (10).
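Equation (10) can be sketched as follows; axis-aligned cubic voxel blocks of side `block` and equal weights in both averages are assumed, and the helper name `cbmv` is illustrative.

```python
from collections import defaultdict

def cbmv(points, colors, block=8):
    """Equation (10): average over the non-empty voxel blocks of the mean
    squared deviation of the first colour component within each block."""
    blocks = defaultdict(list)
    for p, c in zip(points, colors):
        # integer voxel-block index of the point
        key = tuple(int(coord // block) for coord in p)
        blocks[key].append(c)
    total = 0.0
    for vals in blocks.values():
        mu = sum(vals) / len(vals)
        total += sum((v - mu) ** 2 for v in vals) / len(vals)
    return total / len(blocks)

pts = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (10.0, 10.0, 10.0)]
luma = [0.0, 4.0, 7.0]
print(cbmv(pts, luma, block=8))  # -> 2.0
```

Here the first block holds lumas {0, 4} (mean squared deviation 4) and the second holds {7} (deviation 0), so the average over the two non-empty blocks is 2.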

S503: and determining the model parameters according to the first eigenvalue, the second eigenvalue and a preset vector matrix.

It should be noted that, before S503, a preset vector matrix needs to be acquired first. In some embodiments, the method may further comprise:

acquiring a subjective quality test data set;

and training the subjective quality test data set to obtain the preset vector matrix.

Here, the predetermined vector matrix is also obtained from the subjective quality test data set. Specifically, after obtaining the subjective quality test data set, the predetermined vector matrix may be obtained by training the subjective quality test data set. Assuming that H represents a preset vector matrix, the values of the vector matrix can be trained based on the subjective quality test data set as follows,

it should be further noted that, because the preset vector matrix is obtained by training based on a large amount of test data, the vector matrix shown in formula (11) can be used for different point clouds.

Thus, after the preset vector matrix is obtained, the two characteristic values are combined at the same time, and the model parameters can be determined. In some embodiments, for S503, the determining the model parameter according to the first eigenvalue, the second eigenvalue and a preset vector matrix may include:

constructing a feature vector based on a preset constant value, the first feature value and the second feature value;

performing multiplication operation on the characteristic vector and the preset vector matrix to obtain a model parameter vector; wherein the model parameter vector comprises a first model parameter, a second model parameter, and a third model parameter;

determining the first model parameter, the second model parameter, and the third model parameter as the model parameters.

Further, in some embodiments, the predetermined constant value is an integer. In general, the preset constant value may be equal to 1, but is not particularly limited.

That is to say, after the two characteristic values CFGD and CBMV are extracted from the point cloud to be evaluated, they are marked with the two variables f1 and f2 respectively; here, the first eigenvalue f1, the second eigenvalue f2 and the preset constant value 1 may constitute a 1 × 3 matrix, i.e. a row vector, denoted by the feature vector F = [1 f1 f2]. The model parameters p1, p2 and p3 may be combined into a 1 × 3 matrix, expressed by the model parameter vector P = [p1 p2 p3]. With the preset vector matrix denoted H, the model parameter vector P is calculated as follows,

P=F·H (12)

it should be further noted that, assuming that a specific point cloud quality test data set is known, the model parameters may be determined by a fitting method; then, two eigenvalues are extracted by using the formula (9) and the formula (10) to form an eigenvector, so that a preset vector matrix can be obtained according to the formula (12).

Therefore, after two characteristic values of CFGD and CBMV are extracted from the point cloud to be evaluated, as H is a preset vector matrix, the model parameters can be determined according to the formula (12), so that the MOS value of the point cloud to be evaluated can be calculated by using the quality evaluation model in the following process.
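The parameter-prediction step of equation (12) is a single vector-matrix product, sketched below. The numeric contents of H are hypothetical placeholders, since the trained matrix of equation (11) is not reproduced in this excerpt.

```python
import numpy as np

# Hypothetical stand-in for the preset vector matrix H of equation (11);
# the real values are trained on the subjective quality test data set.
H_EXAMPLE = np.array([[50.0, -0.80, -0.50],
                      [ 1.2, -0.02,  0.00],
                      [ 0.3,  0.00, -0.01]])

def predict_model_parameters(f1, f2, H):
    """Equation (12): P = F . H, where F = [1, f1, f2] is the 1x3 row
    vector built from the preset constant 1 and the CFGD/CBMV features."""
    F = np.array([1.0, f1, f2])   # feature vector
    p1, p2, p3 = F @ H            # model parameter vector
    return p1, p2, p3
```

With a 3 × 3 H, the product yields exactly the three model parameters p1, p2, p3 needed by the quality assessment model.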

S403: and determining a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated.

It should be noted that the quality evaluation model may be regarded as a corresponding relationship between the model parameters, the characteristic parameters and the subjective quality measurement value (i.e. the MOS value). Since the characteristic parameters may include QSg and QSc, the quality assessment model can also be regarded as a corresponding relationship between the model parameters, QSg, QSc and the subjective quality measurement value, and this corresponding relationship is shown in formula (8). Thus, after the model parameters, QSg and QSc are determined, the subjective quality measurement value of the point cloud to be evaluated can be determined according to the quality evaluation model.

In some embodiments, the quality assessment model may also be in a more complex form. At this time, the model parameters include four, which are respectively represented by a, b, c and d; the quality assessment model is shown below in the following,

MOSc = a·QSg·QSc + b·QSg + c·QSc + d    (13)

one more QS is increased compared to formula (8)gQScThe synchronization also adds a model parameter, thereby being more complex in form and possibly improved in accuracy, but introducing greater difficulty in determining the model parameter. Since the more complicated form has greater difficulty in practical application, the quality assessment model shown in formula (8) is preferably used in the embodiment of the present application.

In addition, the quality evaluation model provided by the embodiment of the application can be applied to the fields of code rate control, coding parameter optimization, data enhancement in the point cloud data post-processing process and the like.

Specifically, (1) in the field of Rate Control (RC), a Rate-Distortion (R-D) model, which includes a rate model and a distortion model, may be established by using the point cloud quality evaluation method, for example for distortion measurement. The calculation formula applied here is as follows,

wherein RT represents the target code rate, and the result obtained by expression (14) or expression (15) is a function of MOSC; this result being minimum indicates that the MOS value is maximum, i.e. the subjective quality is best. That is, if there are multiple code rate selections, the code rate used in rate control needs to be less than or equal to the target code rate while the best subjective quality is obtained.
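The constraint just described, a code rate no greater than the target rate with the best subjective quality among the feasible choices, can be sketched as a simple selection over candidate operating points; the candidate list below is hypothetical.

```python
def select_rate_point(candidates, target_rate):
    """Among candidate (rate, predicted MOS) operating points, keep those
    whose rate does not exceed the target rate and pick the one with the
    largest predicted subjective quality, as in the RC constraint of
    expressions (14)/(15)."""
    feasible = [(r, mos) for r, mos in candidates if r <= target_rate]
    if not feasible:
        return None            # no operating point satisfies the budget
    return max(feasible, key=lambda rm: rm[1])
```

A rate-control loop would call this once per decision with the MOS values predicted by the quality assessment model.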

(2) In the field of encoding parameter optimization, an appropriate geometric quantization parameter (QSg) and color quantization parameter (QSc) may be selected according to the quality evaluation model shown in equation (8), thereby obtaining the required point cloud quality.

(3) In the field of data enhancement in the point cloud data post-processing process, after the point cloud is processed by a certain data enhancement algorithm, the point cloud quality predicted by the quality evaluation model is compared with the required point cloud quality; when the predicted point cloud quality is not less than the required point cloud quality, the enhancement algorithm can stop further enhancement. Alternatively, in the process of constructing a subjective quality test data set and performing deep learning on a large amount of data, test sequences of three quality levels (high, medium and low) can be selected according to the quality evaluation model as target data of point cloud quality for enhancement processing, so as to obtain the required point cloud quality.
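The stop-when-good-enough enhancement strategy can be sketched as follows, with `predict_quality` and `enhance_step` standing in for the quality evaluation model and the enhancement algorithm (both are hypothetical placeholders).

```python
def enhance_until_target(cloud, predict_quality, enhance_step,
                         required_quality, max_iters=10):
    """Apply the enhancement algorithm iteratively and stop as soon as
    the quality predicted by the assessment model reaches the required
    level (or the iteration budget runs out)."""
    for _ in range(max_iters):
        if predict_quality(cloud) >= required_quality:
            break              # predicted quality >= required: stop
        cloud = enhance_step(cloud)
    return cloud
```

The same loop shape covers the data-set-construction use case: run it once per target quality level (high, medium, low) with a different `required_quality`.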

In addition, after the quality evaluation model provided in the embodiment of the present application obtains the subjective quality measurement value, the subjective quality measurement value may also be used for network optimization at the decoder side.

(1) And for the media data processing equipment in the network, optimizing network parameters according to the subjective quality measurement value. For example, the priority, routing table and the like of the transmission unit containing the point cloud code stream data are adjusted.

(2) For the playing device at the user side, for example, the subjective quality measure value (or the mapping value of the subjective quality measure value) may be fed back to the media data processing device in the network, and then the media data processing device performs network optimization on the transmission network according to the subjective quality measure value fed back by the user.

(3) For the playing device at the user side, for example, the post-processing unit parameter of the playing device may be adjusted according to the subjective quality measurement value (or the mapping value of the subjective quality measurement value), and the point cloud obtained after decoding the code stream is enhanced (for example, post-processing filtering, etc., may refer to an image/video post-processing process).

In summary, in order to obtain a more effective quality assessment model, a comprehensive and effective point cloud subjective test data set is first established based on the existing V-PCC encoder, and a more accurate and simple quality assessment model is then constructed on this data set. In order for the quality evaluation model to be widely usable in practical applications, two characteristic values, the color fluctuation over geometric distance and the color block mean variance, are extracted from the original point cloud, and a feature vector F is formed by these two characteristic values together with the preset constant value 1, so that the model parameters can be predicted by using P = F·H as shown in formula (12).

In the embodiment of the present application, a test is performed on the test sequences of the point cloud subjective test data set according to the quality assessment model shown in equation (8). The Pearson Linear Correlation Coefficient (PLCC) is used for measuring whether two data sets lie on one line, namely for measuring the linear relation between the variables; the Spearman Rank-order Correlation Coefficient (SRCC) is used for reflecting the closeness of the relation between two groups of variables, and is also called the rank difference method. That is, in the embodiment of the present application, both the PLCC and the SRCC are used to reflect the consistency between the MOS value predicted by the quality evaluation model and the actual MOS value.
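PLCC and SRCC as used here can be computed without an external statistics package: SRCC is simply the PLCC of the rank-transformed samples (this sketch assumes no ties in the data).

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient between predicted and
    actual MOS values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

def srcc(x, y):
    """Spearman rank-order correlation coefficient: the PLCC computed
    on the ranks of the two samples (no-ties case)."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(x), rank(y))
```

Both coefficients lie in [-1, 1]; values near 1 mean the predicted MOS ordering (SRCC) or linear trend (PLCC) matches the actual MOS closely, which is how table 2 should be read.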

Here, with the PC_error technique used by V-PCC in the related art, the PLCC and SRCC between the PSNR of the Y component and the MOS values are only 0.3956 and 0.3926; with the quality evaluation model shown in equation (8), the PLCC and SRCC reach 0.9167 and 0.9174 respectively, as shown in table 2; this data clearly demonstrates that the scheme of the present application improves the accuracy of subjective quality assessment. In addition, the PC_error technique in the related technical scheme needs both the original point cloud and the encoded/decoded distorted point cloud when calculating the PSNR of the Y component, so its demand for point cloud data is large; the quality evaluation model provided by the present application only needs the original point cloud and the two characteristic values extracted from it, and needs neither the distorted point cloud nor the matched points between the original and distorted point clouds, etc., so the calculation complexity of subjective quality evaluation is simplified.

TABLE 2

Model Type    Model                 PLCC      SRCC      RMSE
FR            PSNR (Y component)    0.3956    0.3926    20.2058
RR            proposed algorithm    0.9167    0.9174    8.7933

Wherein FR is a Full Reference (FR) method, and RR is a Reduced Reference (RR) method. Here, FR is a method adopted in the related art, and RR is a method adopted in the embodiment of the present application. From table 2, it is apparent that the PLCC and SRCC of the embodiment of the present application are much higher than those of the related art, and the RMSE of the embodiment of the present application is much lower than that of the related art, indicating that the accuracy of subjective quality assessment in the embodiment of the present application is high.

The embodiment of the application provides a point cloud quality evaluation method applied to a decoder or a media data processing device: the code stream is parsed to obtain the characteristic parameters of the point cloud to be evaluated; the model parameters of a quality evaluation model are determined; and a subjective quality measurement value of the point cloud to be evaluated is determined by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated. Therefore, the accuracy of subjective quality assessment can be improved by using the quality assessment model; moreover, the technical scheme of the application only needs the original point cloud and the characteristic values extracted from the original point cloud, and needs neither the distorted point cloud nor the matched points between the original and distorted point clouds, etc., so the calculation complexity of subjective quality evaluation is simplified.

The embodiment of the application provides a point cloud quality evaluation method applied to a point cloud encoder, namely an encoder. The functions implemented by the method may be implemented by the second processor in the encoder calling a computer program; the computer program may be stored in the second memory, and it is understood that the encoder comprises at least the second processor and the second memory.

Referring to fig. 8, a schematic flow chart of another point cloud quality evaluation method according to an embodiment of the present application is shown. As shown in fig. 8, the method may include:

S801: and determining the characteristic parameters of the point cloud to be evaluated.

It should be noted that the point cloud quality evaluation method according to the embodiment of the present application may be applied to an encoder, and may also be applied to a media data processing device. In practical applications, such a media data processing device in the network, for example a Content Delivery Network (CDN) node, may use the method for network optimization and quality evaluation; that is, the embodiments of the present application are not limited to a traditional user-side playback device containing an encoder, and may also apply to other devices containing an encoder, which is not limited herein.

It should be noted that in the point cloud, the points may be all points in the point cloud, or may be some points in the point cloud, and these points are relatively concentrated in space.

In some embodiments, the characteristic parameters of the point cloud to be evaluated may include quantization parameters of the point cloud to be evaluated; the quantization parameters can comprise geometric quantization parameters and color quantization parameters of the point cloud to be evaluated.

Here, the geometric quantization parameter may be denoted QSg and is used to indicate the quantization step size value of the geometry video sequence; the color quantization parameter may be denoted QSc and is used to indicate the quantization step size value of the color/attribute video sequence.

In addition, whether it is a geometric quantization parameter or a color quantization parameter, it is determined from the encoding parameter at the encoder side. In some embodiments, for S801, the determining the characteristic parameters of the point cloud to be evaluated may include:

acquiring a pre-coding parameter of the point cloud to be evaluated;

determining the characteristic parameters of the point cloud to be evaluated according to the pre-coding parameters and a preset lookup table; the preset lookup table is used for reflecting the corresponding relation between the coding parameters and the characteristic parameters.

Further, the obtaining of the pre-coding parameters of the point cloud to be evaluated may include:

carrying out pre-coding processing on the current block by utilizing multiple prediction modes to obtain the rate distortion cost value corresponding to each prediction mode; wherein different prediction modes correspond to different coding parameters;

And selecting a minimum rate distortion cost value from the obtained multiple rate distortion cost values, and determining a coding parameter corresponding to the minimum rate distortion cost value as the precoding parameter.

It should be noted that, at the encoder side, for the determination of the precoding parameter, a simple decision strategy may be adopted, for example, the determination is performed according to the size of the distortion value; a complex decision strategy, such as determination based on the result of Rate Distortion Optimization (RDO), may also be adopted, and the embodiment of the present application is not limited in any way. Generally, the precoding parameter of the current block can be determined in an RDO manner.

In this way, the encoder stores a preset lookup table, and the preset lookup table is used for reflecting the corresponding relationship between the encoding parameters and the characteristic parameters. The characteristic parameters are QSg and QSc as examples, and the preset lookup table may also be used to reflect the corresponding relationship between the encoding parameters and QSg and QSc; thus, after the precoding parameter of this time is determined, QSg and QSc corresponding to the precoding parameter can be searched from a preset lookup table, and QSg and QSc found at this time are the characteristic parameters of the point cloud to be evaluated, so that the subjective quality of the point cloud to be evaluated can be evaluated subsequently.
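The encoder-side flow above, picking the precoding parameter by minimum rate-distortion cost and then reading QSg/QSc from the preset lookup table, can be sketched as follows; the table contents and the use of a single QP index are hypothetical, since the actual table is encoder-specific.

```python
# Hypothetical lookup table: maps an encoder quantization parameter (QP)
# to the characteristic parameters (QSg, QSc); real contents are
# encoder-specific and not given in this excerpt.
QS_LOOKUP = {22: (0.5, 1.0), 27: (1.0, 2.0), 32: (2.0, 4.0), 37: (4.0, 8.0)}

def precode_and_lookup(rd_costs, lookup=QS_LOOKUP):
    """Pick the coding parameter with the minimum rate-distortion cost
    (the precoding parameter), then read the characteristic parameters
    (QSg, QSc) from the preset lookup table."""
    qp = min(rd_costs, key=rd_costs.get)   # minimum RD cost wins
    return qp, lookup[qp]
```

The returned (QSg, QSc) pair is what the encoder would both feed into the quality assessment model and write into the code stream for the decoder.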

It should be noted that, at the encoder side, the feature parameters (such as QSg and QSc) need to be written into the code stream and then transmitted from the encoder to the decoder, so as to obtain the feature parameters such as QSg and QSc at the decoder side later by parsing the code stream.

S802: model parameters of the quality assessment model are determined.

The model parameters refer to parameters used for constructing the quality evaluation model. In general, the model parameters may include a plurality of parameters, such as a first model parameter (denoted by p1), a second model parameter (denoted by p2) and a third model parameter (denoted by p3).

The determination of the model parameters can be described in several embodiments below.

In one possible embodiment, the determination may be determined using a fit to the subjective quality test data set. The determining of the model parameters of the quality assessment model may include:

acquiring a subjective quality test data set;

fitting a model parameter function based on the subjective quality test dataset; the model parameter function is used for reflecting the corresponding relation between the model parameters and the characteristic parameters;

and calculating the model parameters according to the acquired characteristic parameters and the model parameter function.

It should be noted that, the characteristic parameters are QSg and QSc as examples, the subjective quality test data set is pre-established, and the subjective quality test data set at least includes a plurality of distorted point clouds, QSg and QSc corresponding to each distorted point cloud, and an actual MOS value. After the subjective quality test data set is obtained, a model parameter function can be fitted; the model parameter function at this time reflects the correspondence between the model parameters and QSg and QSc. After learning QSg and QSc, the fitted model parameters, such as p1, p2, and p3 shown in table 1, can be obtained for a certain point cloud to be evaluated according to the model parameter function. Then, according to the quality evaluation model shown in the formula (8), the MOS value of the point cloud can also be predicted.

In another possible embodiment, the model parameters do not necessarily need to be calculated on site, but may be spare data obtained in advance. The determining of the model parameters of the quality assessment model may include:

and selecting a model parameter for the point cloud to be evaluated from one or more preset groups of candidate quality evaluation model parameters.

That is, in the decoder or the media data processing apparatus, one or more sets of candidate quality assessment model parameters are stored in advance. At this time, the model parameters for the point cloud to be evaluated can be directly selected from the preset one or more groups of candidate quality evaluation model parameters, so as to obtain the quality evaluation model.

Further, in order to enable the quality assessment model to be used in an actual encoder or decoder, the embodiment of the present application also provides a prediction method of the model parameter based on the feature.

In yet another possible implementation, the model parameters may be determined based on the point cloud extracted features. The determining of the model parameters of the quality assessment model may include:

performing feature extraction on the point cloud to be evaluated by using a first computation submodel to obtain a first feature value of the point cloud to be evaluated;

performing feature extraction on the point cloud to be evaluated by using a second computation submodel to obtain a second feature value of the point cloud to be evaluated;

and determining the model parameters according to the first eigenvalue, the second eigenvalue and a preset vector matrix.

It should be noted that the first computation submodel represents extracting, from the point cloud to be evaluated, a feature value related to the color fluctuation over geometric distance. Here, the average value of the color intensity differences per unit distance between a point and its N neighborhood points may be used as the first feature value of the point cloud, i.e. the CFGD value, where N is an integer greater than 0, for example N equal to 7, without specific limitation.

It should be further noted that the second computation submodel represents extracting, from the point cloud to be evaluated, a feature value related to the color block mean variance. Here, the mean value of the color standard deviations of all points within the non-empty voxel blocks may be used as the second feature value of the point cloud, i.e. the CBMV value.

The following is a detailed description of the extraction process of the two feature values.

For the extracting of the first feature value, in some embodiments, the extracting the feature of the point cloud to be evaluated by using the first computation submodel to obtain the first feature value of the point cloud to be evaluated may include:

calculating a first characteristic value corresponding to one or more points in the point cloud to be evaluated;

and performing weighted mean calculation on the first characteristic values corresponding to the one or more points, and determining the obtained weighted mean as the first characteristic value of the point cloud to be evaluated.

Further, the calculating a first feature value corresponding to one or more points in the point cloud to be evaluated may include:

aiming at a current point in the point cloud to be evaluated, determining a near-neighbor point set associated with the current point; wherein the set of neighbor points includes at least one neighbor point therein;

aiming at the adjacent point set, calculating the color intensity difference value of the current point and the at least one adjacent point on a unit distance to obtain the color intensity difference value on the at least one unit distance;

and calculating the weighted average value of the color intensity difference values on the at least one unit distance to obtain a first characteristic value corresponding to the current point.

Further, the calculating a color intensity difference value of the current point and the at least one neighboring point in a unit distance may include:

obtaining a first color intensity value of the first color component of the current point and a second color intensity value of the first color component of the at least one adjacent point;

calculating the absolute value of the difference between the first color intensity value of the current point and the second color intensity value of the at least one adjacent point to obtain the color intensity difference value of the current point and the at least one adjacent point;

and obtaining the color intensity difference value of the current point and the at least one near-adjacent point in unit distance according to the color intensity difference value of the current point and the at least one near-adjacent point and the distance value between the current point and the at least one near-adjacent point.

It should be noted that, in the embodiment of the present application, data of the point cloud to be evaluated, such as the first color intensity value of the current point and the second color intensity value of a neighboring point, may on the encoder side be obtained directly from the point cloud data, and at the same time needs to be written into the code stream by the encoder for transmission to the decoder.

It should be noted that, for the weighted average, the weighted values may be the same or different. In the case of the same weight value, it can be regarded as an average value, that is, the average value of the equal weight values belongs to a special weighted average value.

That is, as shown in fig. 6, p0 is the current point to be calculated, and p1, p2, …, p7 are the N nearest neighbor points of p0, where N is equal to 7. First, the average of the absolute color intensity differences between the current point and its N neighboring points, each divided by the corresponding geometric Euclidean distance, is calculated as the CFGD value corresponding to the current point; then the CFGD values corresponding to the points of the whole point cloud are averaged, and the obtained average value is determined as the CFGD value of the point cloud. Here, the specific calculation formula of the CFGD of the entire point cloud is as shown in the above formula (9), the details of which are described on the decoder side.

For the extraction of the second feature value, in some embodiments, the performing feature extraction on the point cloud to be evaluated by using the second computation submodel to obtain the second feature value of the point cloud to be evaluated may include:

calculating second characteristic values corresponding to one or more non-empty voxel blocks in the point cloud to be evaluated;

and performing weighted mean calculation on the second characteristic values corresponding to the one or more non-empty voxel blocks, and determining the obtained weighted mean value as the second characteristic value of the point cloud to be evaluated.

Further, the calculating a second feature value corresponding to one or more non-empty voxel blocks in the point cloud to be evaluated may include:

aiming at a current non-empty voxel block in the point cloud to be evaluated, obtaining a third color intensity value of the first color component of at least one point in the current non-empty voxel block;

calculating a weighted mean value of the third color intensity values of at least one point in the current non-empty voxel block to obtain a color intensity mean value of the current non-empty voxel block;

for at least one point in the current non-empty voxel block, determining a color standard deviation of the at least one point by using the third color intensity value and the color intensity mean value of the current non-empty voxel block;

and calculating the weighted mean value of the color standard deviation of the at least one point to obtain a second characteristic value corresponding to the current non-empty voxel block.

It should be noted that a non-empty voxel block is a voxel block containing at least one point. In addition, in the embodiment of the present application, data of the point cloud to be evaluated, such as the third color intensity value of at least one point in the current non-empty voxel block, may on the encoder side be obtained directly from the point cloud data, and at the same time needs to be written into the code stream by the encoder for transmission to the decoder.

That is, as shown in fig. 7, the entire point cloud is first divided into a plurality of H × W × L voxel blocks, where H, W and L are integers greater than 0, for example 8 × 8 × 8 voxel blocks. The ith non-empty voxel block is indicated within the white box, and p_i1, p_i2, …, p_ij represent all points within that non-empty voxel block. Then the specific formula for calculating the CBMV of the entire point cloud is as shown in the above equation (10), the details of which are described on the decoder side.

Thus, a first characteristic value (i.e., CFGD value) of the point cloud to be evaluated can be calculated according to the above equation (9), and a second characteristic value (i.e., CBMV value) of the point cloud to be evaluated can be calculated according to the above equation (10).

It should be noted that, before determining the model parameters, a preset vector matrix needs to be obtained. In some embodiments, the method may further comprise:

acquiring a subjective quality test data set;

and training the subjective quality test data set to obtain the preset vector matrix.

Here, the predetermined vector matrix is also obtained from the subjective quality test data set. Specifically, after obtaining the subjective quality test data set, the predetermined vector matrix may be obtained by training the subjective quality test data set. Assuming that H represents a preset vector matrix, the values of the vector matrix can be trained based on the subjective quality test data set as shown in equation (11) above.

It should be further noted that, because the preset vector matrix is obtained by training based on a large amount of test data, the vector matrix shown in formula (11) can be used for different point clouds.

Thus, after obtaining the predetermined vector matrix, the two eigenvalues can be combined to determine the model parameters. In some embodiments, the determining the model parameter according to the first eigenvalue, the second eigenvalue and a preset vector matrix may include:

constructing a feature vector based on a preset constant value, the first feature value and the second feature value;

performing multiplication operation on the characteristic vector and the preset vector matrix to obtain a model parameter vector; wherein the model parameter vector comprises a first model parameter, a second model parameter, and a third model parameter;

determining the first model parameter, the second model parameter, and the third model parameter as the model parameters.

Further, in some embodiments, the predetermined constant value is an integer. In general, the preset constant value may be equal to 1, but is not particularly limited.

Namely, after the two characteristic values CFGD and CBMV are extracted from the point cloud to be evaluated, they are marked with the two variables f1 and f2 respectively; here, the first eigenvalue f1, the second eigenvalue f2 and the preset constant value 1 may constitute a 1 × 3 matrix, i.e. a row vector, denoted by the feature vector F = [1 f1 f2]. The model parameters p1, p2 and p3 may be combined into a 1 × 3 matrix, expressed by the model parameter vector P = [p1 p2 p3]; with the preset vector matrix denoted H, the model parameter vector P is calculated as shown in the above equation (12).

Thus, after two characteristic values of CFGD and CBMV are extracted from the point cloud to be evaluated, since H is a preset vector matrix, the model parameters can be determined according to the above equation (12), so that the MOS value of the point cloud to be evaluated can be calculated by using the quality evaluation model in the following.

S803: and determining a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated.

It should be noted that the quality evaluation model can be regarded as embodying a correspondence between the model parameters, the characteristic parameters, and the subjective quality measurement value (i.e., the MOS value). Since the characteristic parameters may include QSg and QSc, the quality evaluation model can also be regarded as a correspondence between the model parameters, QSg, QSc, and the subjective quality measurement value, the correspondence being as shown in equation (8). Thus, after the model parameters, QSg, and QSc are determined, the subjective quality measurement value of the point cloud to be evaluated can be determined according to the quality evaluation model.
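As a minimal sketch of this step, the helper below evaluates a quality model of the general form MOS = g(p1, p2, p3, QSg, QSc). The exact form of equation (8) is not reproduced in this excerpt, so a simple linear combination is used purely for illustration; both the function name and the linear form are assumptions.

```python
def estimate_mos(p1, p2, p3, qs_g, qs_c):
    """Illustrative stand-in for equation (8): maps the model parameters
    (p1, p2, p3) and the quantization parameters (QSg, QSc) to a MOS value.
    The linear form below is an assumption, not the application's actual
    model."""
    return p1 + p2 * qs_g + p3 * qs_c
```

Whatever the actual form of equation (8), the inputs are the same: the model parameters determined above and the characteristic parameters QSg and QSc parsed from the code stream.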

In addition, after the subjective quality measurement value is obtained by the quality evaluation model provided in the embodiment of the present application, the subjective quality measurement value may also be used for encoder-side optimization and network optimization, for example:

(1) Transmitting the subjective quality measurement value (or a mapping value of the subjective quality measurement value) to an encoder optimization module, which determines the encoding parameters of the encoder according to the subjective quality measurement value, for example in a rate-distortion optimization process or a rate control process.

(2) From the point cloud code stream sending side, the subjective quality measurement value can also be sent to a media data processing device in the network for network optimization of the transmission network.

(3) The subjective quality measurement value can also be sent to the playing device at the user side, and then the playing device at the user side can compare the subjective quality measurement value with the local evaluation value thereof to evaluate the transmission quality of the network and feed the evaluation result back to the media data processing device in the network for network optimization.

The embodiment of the application provides a point cloud quality evaluation method applied to an encoder or a media data processing device. The characteristic parameters of the point cloud to be evaluated are determined; the model parameters of the quality evaluation model are determined; then, the subjective quality measurement value of the point cloud to be evaluated is determined by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated. In this way, the accuracy of subjective quality assessment can be improved by using the quality evaluation model; moreover, the technical scheme of the application only needs the original point cloud and the characteristic values extracted from the original point cloud, and does not need the distorted point cloud, the matching points between the original point cloud and the distorted point cloud, or the like, which simplifies the computational complexity of subjective quality evaluation.

Based on the same inventive concept as the foregoing embodiment, refer to fig. 9, which shows a schematic structural diagram of a decoder 90 provided in an embodiment of the present application. As shown in fig. 9, the decoder 90 may include: an analysis unit 901, a first determining unit 902, and a first calculating unit 903; wherein:

the analysis unit 901 is configured to analyze the code stream and obtain characteristic parameters of the point cloud to be evaluated;

a first determining unit 902 configured to determine model parameters of the quality assessment model;

a first calculating unit 903, configured to determine a subjective quality measure value of the point cloud to be evaluated by using the quality evaluation model according to the model parameter and the feature parameter of the point cloud to be evaluated.

In some embodiments, the characteristic parameters of the point cloud to be evaluated comprise quantization parameters of the point cloud to be evaluated; the quantization parameters comprise geometric quantization parameters and color quantization parameters of the point cloud to be evaluated.

In some embodiments, referring to fig. 9, the decoder 90 may further comprise a first fitting unit 904 configured to obtain a subjective quality test data set; fitting a model parameter function based on the subjective quality test dataset; the model parameter function is used for reflecting the corresponding relation between the model parameters and the characteristic parameters;

the first calculating unit 903 is further configured to calculate the model parameters according to the acquired feature parameters and the model parameter function.

In some embodiments, the first determining unit 902 is further configured to select a model parameter for the point cloud to be evaluated from one or more preset sets of candidate quality evaluation model parameters.

In some embodiments, the first calculating unit 903 is further configured to perform feature extraction on the point cloud to be evaluated by using a first calculating sub-model, so as to obtain a first feature value of the point cloud to be evaluated; performing feature extraction on the point cloud to be evaluated by using a second computation submodel to obtain a second feature value of the point cloud to be evaluated;

a first determining unit 902, further configured to determine the model parameter according to the first eigenvalue, the second eigenvalue, and a preset vector matrix;

the first computation submodel represents and extracts characteristic values related to color fluctuation on geometric distance for the point cloud to be evaluated, and the second computation submodel represents and extracts characteristic values related to color block average variance for the point cloud to be evaluated.

In some embodiments, the first calculating unit 903 is further configured to calculate a first feature value corresponding to one or more points in the point cloud to be evaluated; and performing weighted mean calculation on the first characteristic values corresponding to the one or more points, and determining the obtained weighted mean as the first characteristic value of the point cloud to be evaluated.

In some embodiments, the first determining unit 902 is further configured to determine, for a current point in the point cloud to be evaluated, a set of neighboring points associated with the current point; wherein the set of neighbor points includes at least one neighbor point therein;

a first calculating unit 903, further configured to calculate, for the set of neighboring points, a color intensity difference value of the current point and the at least one neighboring point in a unit distance, so as to obtain a color intensity difference value in at least one unit distance; and calculate a weighted average value of the color intensity difference values in the at least one unit distance to obtain a first characteristic value corresponding to the current point.

In some embodiments, the first calculating unit 903 is further configured to obtain a first color intensity value of the first color component of the current point and a second color intensity value of the first color component of the at least one neighboring point; calculating the absolute value of the difference between the first color intensity value of the current point and the second color intensity value of the at least one adjacent point to obtain the color intensity difference value of the current point and the at least one adjacent point;

the first determining unit 902 is further configured to determine a color intensity difference value of the current point and the at least one neighboring point in a unit distance according to the color intensity difference value of the current point and the at least one neighboring point and a distance value between the current point and the at least one neighboring point.
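The per-point computation described in the preceding paragraphs (absolute color intensity differences with neighboring points, normalized by geometric distance, then weighted averaging) can be sketched as follows. The function name, the equal-weight default, and the Euclidean distance are assumptions made for illustration; the application's exact weighting scheme is not reproduced here.

```python
import numpy as np

def first_feature_for_point(color, pos, nbr_colors, nbr_positions, weights=None):
    """Sketch of the first characteristic value (CFGD-style) for one point:
    the weighted average of per-unit-distance color intensity differences
    between the current point and its neighboring points."""
    nbr_colors = np.asarray(nbr_colors, dtype=float)
    nbr_positions = np.asarray(nbr_positions, dtype=float)
    pos = np.asarray(pos, dtype=float)

    # |difference| between the current point's first color component and
    # each neighbor's first color component
    diffs = np.abs(nbr_colors - color)
    # Euclidean distance to each neighbor (assumed distance measure)
    dists = np.linalg.norm(nbr_positions - pos, axis=1)
    # Color intensity difference per unit distance
    per_unit = diffs / dists

    if weights is None:  # equal weights assumed for the sketch
        weights = np.full(len(per_unit), 1.0 / len(per_unit))
    return float(np.dot(weights, per_unit))
```

The point-cloud-level first characteristic value is then the weighted mean of these per-point values over one or more points, as described above.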

In some embodiments, the first calculating unit 903 is further configured to calculate a second feature value corresponding to one or more non-empty voxel blocks in the point cloud to be evaluated; and performing weighted mean calculation on second characteristic values corresponding to the one or more non-empty voxel blocks, and determining the obtained weighted mean value as a second characteristic value of the point cloud to be evaluated.

In some embodiments, the first calculating unit 903 is further configured to obtain, for a current non-empty voxel block in the point cloud to be evaluated, a third color intensity value of a first color component of at least one point in the current non-empty voxel block; calculate a weighted mean value of the third color intensity values of the at least one point in the current non-empty voxel block to obtain a color intensity mean value of the current non-empty voxel block; and further configured to determine, for the at least one point within the current non-empty voxel block, a color standard deviation of the at least one point using the third color intensity value and the color intensity mean value of the current non-empty voxel block; and calculate a weighted mean value of the color standard deviation of the at least one point to obtain a second characteristic value corresponding to the current non-empty voxel block.
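The per-block computation described above can be sketched as below. The function name and the equal-weight default are assumptions, and the absolute deviation about the block mean is used as a stand-in for the per-point color standard deviation described in the text.

```python
import numpy as np

def second_feature_for_block(intensities, weights=None):
    """Sketch of the second characteristic value (CBMV-style) for one
    non-empty voxel block: the weighted mean of per-point deviations of the
    first color component about the block's mean color intensity."""
    intensities = np.asarray(intensities, dtype=float)
    n = len(intensities)
    if weights is None:  # equal weights assumed for the sketch
        weights = np.full(n, 1.0 / n)

    # Color intensity mean value of the block (weighted)
    block_mean = float(np.dot(weights, intensities))
    # Per-point deviation about the block mean
    deviations = np.abs(intensities - block_mean)
    # Weighted mean of the deviations -> second characteristic value
    return float(np.dot(weights, deviations))
```

The point-cloud-level second characteristic value is then the weighted mean of these per-block values over the one or more non-empty voxel blocks.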

In some embodiments, referring to fig. 9, the decoder 90 may further comprise a first training unit 905 configured to obtain a subjective quality test data set; and training the subjective quality test data set to obtain the preset vector matrix.

In some embodiments, the first determining unit 902 is further configured to select the preset vector matrix for determining the model parameter from one or more preset sets of candidate vector matrices.

In some embodiments, referring to fig. 9, the decoder 90 may further include a first constructing unit 906 configured to construct a feature vector based on a preset constant value, the first feature value, and the second feature value;

the first calculating unit 903 is further configured to perform multiplication operation on the feature vector and the preset vector matrix to obtain a model parameter vector; wherein the model parameter vector comprises a first model parameter, a second model parameter, and a third model parameter;

a first determining unit 902 further configured to determine the first model parameter, the second model parameter and the third model parameter as the model parameters.

In some embodiments, the predetermined constant value is an integer.

It is understood that in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.

Based on such understanding, the technical solution of the present embodiment, in essence (or the part contributing to the prior art), or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, the software product including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Therefore, the present embodiment provides a computer storage medium applied to the decoder 90; the computer storage medium stores a computer program which, when executed by the first processor, implements the method described on the decoder side in the foregoing embodiment.

Based on the above-mentioned composition of the decoder 90 and the computer storage medium, refer to fig. 10, which shows an example of a specific hardware structure of the decoder 90 provided in the embodiment of the present application. The decoder 90 may include: a first communication interface 1001, a first memory 1002, and a first processor 1003; the various components are coupled together by a first bus system 1004. It is understood that the first bus system 1004 is used to enable connection communication between these components. In addition to a data bus, the first bus system 1004 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are labeled as the first bus system 1004 in fig. 10. Wherein:

a first communication interface 1001, which is used for receiving and sending signals during the process of receiving and sending information with other external network elements;

a first memory 1002 for storing a computer program capable of running on the first processor 1003;

a first processor 1003 configured to, when running the computer program, perform:

analyzing the code stream to obtain characteristic parameters of the point cloud to be evaluated;

determining model parameters of a quality evaluation model;

and determining a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated.

It is to be appreciated that the first memory 1002 in the embodiment of the present application can be either volatile memory or non-volatile memory, or can include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced Synchronous DRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The first memory 1002 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.

The first processor 1003 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the above method may be implemented by integrated logic circuits of hardware in the first processor 1003, or by instructions in the form of software. The first processor 1003 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as a RAM, a flash memory, a ROM, a PROM or an EPROM, or a register. The storage medium is located in the first memory 1002, and the first processor 1003 reads the information in the first memory 1002 and completes the steps of the above method in combination with its hardware.

It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof. For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.

Optionally, as another embodiment, the first processor 1003 is further configured to execute the method in any one of the foregoing embodiments when running the computer program.

The present embodiment provides a decoder, which may include an analysis unit, a first determining unit, and a first calculating unit. In the decoder, the accuracy of subjective quality assessment can be improved by using the quality evaluation model; moreover, the technical scheme of the application only needs the original point cloud and the characteristic values extracted from the original point cloud, and does not need the distorted point cloud, the matching points between the original point cloud and the distorted point cloud, or the like, which simplifies the computational complexity of subjective quality evaluation.

Based on the same inventive concept as the foregoing embodiment, refer to fig. 11, which shows a schematic structural diagram of an encoder 110 according to an embodiment of the present application. As shown in fig. 11, the encoder 110 may include: a second determining unit 1101 and a second calculating unit 1102; wherein:

a second determining unit 1101 configured to determine a characteristic parameter of the point cloud to be evaluated;

a second determining unit 1101, further configured to determine model parameters of the quality assessment model;

a second calculating unit 1102 configured to determine a subjective quality measure value of the point cloud to be evaluated by using the quality evaluation model according to the model parameter and the characteristic parameter of the point cloud to be evaluated.

In some embodiments, referring to fig. 11, the encoder 110 may further include a lookup unit 1103 configured to obtain a pre-encoding parameter of the point cloud to be evaluated, and determine the characteristic parameters of the point cloud to be evaluated according to the pre-encoding parameter and a preset lookup table; the preset lookup table is used for reflecting the correspondence between encoding parameters and characteristic parameters.
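A minimal sketch of such a preset lookup table follows, mapping a pre-encoding parameter to the characteristic (quantization) parameters. All keys and values below are hypothetical placeholders, not values from the application.

```python
# Hypothetical preset lookup table: pre-encoding parameter (e.g. a rate
# point identifier) -> characteristic parameters (geometric quantization
# parameter qs_g and color quantization parameter qs_c). Every entry is a
# made-up placeholder for illustration only.
PRESET_LOOKUP_TABLE = {
    "r1": {"qs_g": 0.75, "qs_c": 38},
    "r2": {"qs_g": 0.50, "qs_c": 46},
}

def characteristic_params(pre_encoding_param):
    """Return the characteristic parameters for a given pre-encoding
    parameter, as the lookup unit would."""
    return PRESET_LOOKUP_TABLE[pre_encoding_param]
```

On the encoder side this avoids re-deriving the characteristic parameters: the correspondence between encoding configurations and characteristic parameters is fixed in advance and consulted at evaluation time.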

In some embodiments, the characteristic parameters of the point cloud to be evaluated comprise quantization parameters of the point cloud to be evaluated; the quantization parameters comprise geometric quantization parameters and color quantization parameters of the point cloud to be evaluated.

In some embodiments, referring to fig. 11, the encoder 110 may further comprise a second fitting unit 1104 configured to obtain a subjective quality test data set; fitting a model parameter function based on the subjective quality test dataset; the model parameter function is used for reflecting the corresponding relation between the model parameters and the characteristic parameters;

the second calculating unit 1102 is further configured to calculate the model parameters according to the acquired feature parameters and the model parameter function.

In some embodiments, the second determining unit 1101 is further configured to select a model parameter for the point cloud to be evaluated from one or more preset sets of candidate quality evaluation model parameters.

In some embodiments, the second calculating unit 1102 is further configured to perform feature extraction on the point cloud to be evaluated by using a first calculating sub-model, so as to obtain a first feature value of the point cloud to be evaluated; performing feature extraction on the point cloud to be evaluated by using a second computation submodel to obtain a second feature value of the point cloud to be evaluated;

a second determining unit 1101, further configured to determine the model parameter according to the first eigenvalue, the second eigenvalue, and a preset vector matrix;

the first computation submodel represents and extracts characteristic values related to color fluctuation on geometric distance for the point cloud to be evaluated, and the second computation submodel represents and extracts characteristic values related to color block average variance for the point cloud to be evaluated.

In some embodiments, the second calculating unit 1102 is further configured to calculate a first feature value corresponding to one or more points in the point cloud to be evaluated; and performing weighted mean calculation on the first characteristic values corresponding to the one or more points, and determining the obtained weighted mean as the first characteristic value of the point cloud to be evaluated.

In some embodiments, the second determining unit 1101 is further configured to determine, for a current point in the point cloud to be evaluated, a set of neighboring points associated with the current point; wherein the set of neighbor points includes at least one neighbor point therein;

a second calculating unit 1102, further configured to calculate, for the set of neighboring points, a color intensity difference value of the current point and the at least one neighboring point in a unit distance, so as to obtain a color intensity difference value in at least one unit distance; and calculate a weighted average value of the color intensity difference values in the at least one unit distance to obtain a first characteristic value corresponding to the current point.

In some embodiments, the second calculating unit 1102 is further configured to obtain a first color intensity value of the first color component of the current point and a second color intensity value of the first color component of the at least one neighboring point; calculating the absolute value of the difference between the first color intensity value of the current point and the second color intensity value of the at least one adjacent point to obtain the color intensity difference value of the current point and the at least one adjacent point;

a second determining unit 1101, further configured to determine a color intensity difference value of the current point and the at least one neighboring point in a unit distance according to the color intensity difference value of the current point and the at least one neighboring point and a distance value between the current point and the at least one neighboring point.

In some embodiments, the second calculating unit 1102 is further configured to calculate a second feature value corresponding to one or more non-empty voxel blocks in the point cloud to be evaluated; and perform weighted mean calculation on the second characteristic values corresponding to the one or more non-empty voxel blocks, and determine the obtained weighted mean value as the second characteristic value of the point cloud to be evaluated.

In some embodiments, the second calculating unit 1102 is further configured to obtain, for a current non-empty voxel block in the point cloud to be evaluated, a third color intensity value of the first color component of at least one point in the current non-empty voxel block; calculate a weighted mean value of the third color intensity values of the at least one point in the current non-empty voxel block to obtain a color intensity mean value of the current non-empty voxel block; and further configured to determine, for the at least one point within the current non-empty voxel block, a color standard deviation of the at least one point using the third color intensity value and the color intensity mean value of the current non-empty voxel block; and calculate a weighted mean value of the color standard deviation of the at least one point to obtain a second characteristic value corresponding to the current non-empty voxel block.

In some embodiments, referring to fig. 11, the encoder 110 may further comprise a second training unit 1105 configured to obtain a subjective quality test data set; and training the subjective quality test data set to obtain the preset vector matrix.

In some embodiments, the second determining unit 1101 is further configured to select the preset vector matrix for determining the model parameters from one or more preset sets of candidate vector matrices.

In some embodiments, referring to fig. 11, the encoder 110 may further include a second constructing unit 1106 configured to construct a feature vector based on a preset constant value, the first feature value, and the second feature value;

the second calculating unit 1102 is further configured to perform multiplication operation on the feature vector and the preset vector matrix to obtain a model parameter vector; wherein the model parameter vector comprises a first model parameter, a second model parameter, and a third model parameter;

a second determining unit 1101, further configured to determine the first model parameter, the second model parameter and the third model parameter as the model parameters.

In some embodiments, the predetermined constant value is an integer.

It is understood that in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.

If the integrated unit is implemented in the form of a software functional module and is not sold or used as a standalone product, it may be stored in a computer-readable storage medium. Based on such understanding, the present embodiment provides a computer storage medium applied to the encoder 110; the computer storage medium stores a computer program which, when executed by the second processor, implements the method described on the encoder side in the foregoing embodiments.

Based on the above-mentioned composition of the encoder 110 and the computer storage medium, refer to fig. 12, which shows an example of a specific hardware structure of the encoder 110 provided in the embodiment of the present application. The encoder 110 may include: a second communication interface 1201, a second memory 1202, and a second processor 1203; the various components are coupled together by a second bus system 1204. It is understood that the second bus system 1204 is used to enable connection communication between these components. In addition to a data bus, the second bus system 1204 includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are labeled as the second bus system 1204 in fig. 12. Wherein:

a second communication interface 1201, configured to receive and transmit signals during information transmission and reception with other external network elements;

a second memory 1202 for storing a computer program operable on the second processor 1203;

a second processor 1203, configured to, when executing the computer program, perform:

determining characteristic parameters of the point cloud to be evaluated;

determining model parameters of a quality evaluation model;

and determining a subjective quality measurement value of the point cloud to be evaluated by using the quality evaluation model according to the model parameters and the characteristic parameters of the point cloud to be evaluated.

Optionally, as another embodiment, the second processor 1203 is further configured to execute the method of any of the previous embodiments when the computer program is executed.

It is to be understood that the second memory 1202 is similar in hardware functionality to the first memory 1002, and the second processor 1203 is similar in hardware functionality to the first processor 1003; and will not be described in detail herein.

The present embodiment provides an encoder, which may include a second determining unit and a second calculating unit. In the encoder, the accuracy of subjective quality assessment can be improved by using the quality evaluation model; moreover, the technical scheme of the application only needs the original point cloud and the characteristic values extracted from the original point cloud, and does not need the distorted point cloud, the matching points between the original point cloud and the distorted point cloud, or the like, which simplifies the computational complexity of subjective quality evaluation.

It should be noted that, in the present application, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.

The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.

The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.

Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.

The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.

The above description covers only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
