Video quality evaluation method, device, equipment and medium

Document No.: 309232  Publication date: 2021-11-26

Reading note: This technology, "Video quality assessment method, apparatus, device, and medium", was created by Zhang Wuqiang on 2021-09-13. Its main content: the present disclosure relates to a video quality assessment method, apparatus, device, and medium, wherein the video quality assessment method includes: acquiring a video to be evaluated; extracting video color data corresponding to each color channel from the video to be evaluated; performing time-space domain analysis of multiple color channels on the video to be evaluated according to the video color data to obtain time-space domain evaluation parameters of the video to be evaluated; and determining the video quality of the video to be evaluated according to the time-space domain evaluation parameters. According to the embodiments of the present disclosure, the accuracy of video quality assessment can be improved, and the method can be applied to various video quality assessment scenarios.

1. A method for video quality assessment, comprising:

acquiring a video to be evaluated;

extracting video color data corresponding to each color channel from the video to be evaluated;

performing time-space domain analysis of a multi-color channel on the video to be evaluated according to the video color data to obtain time-space domain evaluation parameters of the video to be evaluated;

and determining the video quality of the video to be evaluated according to the time-space domain evaluation parameters.

2. The method according to claim 1, wherein the extracting video color data corresponding to each color channel from the video to be evaluated comprises:

extracting a plurality of video frames from the video to be evaluated;

performing color decomposition on each video frame to obtain image color data corresponding to each color channel;

and generating video color data corresponding to each color channel according to the image color data.

3. The method of claim 2, wherein the extracting the plurality of video frames from the video to be evaluated comprises:

determining a video decoding mode according to the video type of the video to be evaluated;

performing video decoding on the video to be evaluated according to the video decoding mode to obtain video decoding data;

and performing frame extraction processing on the video decoding data according to a preset frame extraction frequency to obtain the plurality of video frames.

4. The method of claim 2, wherein generating video color data corresponding to each of the color channels from the image color data comprises:

and according to the video frame sequence of the video frames, splicing the image color data corresponding to each color channel to obtain the video color data corresponding to each color channel.

5. The method according to claim 1, wherein the performing, according to the video color data, a spatio-temporal domain analysis of a plurality of color channels on the video to be evaluated to obtain spatio-temporal domain evaluation parameters of the video to be evaluated comprises:

for each color channel, performing time-space domain analysis on the video to be evaluated according to the video color data to obtain a single-channel time-space domain characteristic value corresponding to the color channel;

fusing the single-channel time-space domain characteristic values to obtain multi-channel time-space domain characteristic values;

and taking the multi-channel time-space domain characteristic value as the time-space domain evaluation parameter.

6. The method according to claim 5, wherein the performing, according to the video color data, a spatio-temporal analysis on the video to be evaluated to obtain a single-channel spatio-temporal eigenvalue corresponding to the color channel comprises:

for each pixel in a video picture of the video to be evaluated, performing time domain analysis on the pixel according to the video color data to obtain a time domain characteristic value of the pixel for the color channel;

and according to the time domain characteristic value, performing space domain analysis on the video to be evaluated aiming at the color channel to obtain a single-channel time-space domain characteristic value corresponding to the color channel.

7. The method of claim 6, wherein the temporally analyzing the pixel according to the video color data to obtain a temporal feature value of the pixel for the color channel comprises:

calculating, according to the video color data, a local time domain characteristic value and a global time domain characteristic value of the pixel for the color channel;

fusing the local temporal feature values and the global temporal feature values into temporal feature values of the pixels for the color channels.

8. The method of claim 7, wherein the local temporal feature value is used to represent a color channel value change rate between two consecutive video frames, and the global temporal feature value is used to represent a color channel value change rate between two video frames spaced by a preset number of frames.

9. The method of claim 7, wherein fusing the local temporal feature value and the global temporal feature value into a temporal feature value of the pixel for the color channel comprises:

and splicing the local time domain characteristic value and the global time domain characteristic value to obtain a time domain characteristic value of the pixel aiming at the color channel.

10. The method according to claim 6, wherein the performing, according to the temporal eigenvalue, spatial analysis on the video to be evaluated with respect to the color channel to obtain a single-channel temporal-spatial eigenvalue corresponding to the color channel comprises:

dividing the video picture into a plurality of image blocks;

for each image block, calculating a spatial domain characteristic value of the image block for the color channel according to the time domain characteristic values corresponding to the pixels contained in the image block;

and taking the space domain characteristic value corresponding to each image block as a single-channel time-space domain characteristic value corresponding to the color channel.

11. The method according to claim 10, wherein the fusing the single-channel time-space domain eigenvalues to obtain a multi-channel time-space domain eigenvalue comprises:

for each image block, calculating the difference between the maximum and the minimum of the spatial domain characteristic values that the image block has across the color channels;

and taking the difference value corresponding to each image block as the multi-channel time-space domain characteristic value.

12. The method according to claim 11, wherein the determining the video quality of the video to be evaluated according to the spatio-temporal domain evaluation parameters comprises:

and adding the difference values corresponding to the image blocks to obtain the video quality.

13. A video quality assessment apparatus, comprising:

the video acquisition module is configured to acquire a video to be evaluated;

the data extraction module is configured to extract video color data corresponding to each color channel from the video to be evaluated;

the parameter analysis module is configured to perform time-space domain analysis of a multi-color channel on the video to be evaluated according to the video color data to obtain time-space domain evaluation parameters of the video to be evaluated;

and the quality evaluation module is configured to determine the video quality of the video to be evaluated according to the time-space domain evaluation parameters.

14. A video quality evaluation apparatus characterized by comprising:

a processor;

a memory for storing executable instructions;

wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the video quality assessment method of any one of claims 1-12.

15. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, causes the processor to carry out a video quality assessment method according to any of the preceding claims 1 to 12.

Technical Field

The present disclosure relates to the field of video processing technologies, and in particular, to a method, an apparatus, a device, and a medium for video quality assessment.

Background

Video quality assessment plays a crucial role in autonomous driving scenarios, where it can be used to evaluate how visible the scene outside the vehicle is in a video. Methods for evaluating video quality generally fall into two categories: subjective methods and objective methods. Subjective methods rely on human subjects to view the video and provide quality ratings; objective methods use algorithms to predict how human viewers would rate the video.

Because objective methods do not require intensive human involvement, they are more practical for real-time assessment in autonomous driving scenarios. Depending on whether a reference video (or features of a reference video) is needed, objective methods can be further divided into three categories: Full Reference (FR) video quality evaluation methods, which evaluate video quality using the complete original video signal as comparison data; Reduced Reference (RR) video quality evaluation methods, which use extracted partial video features as comparison data; and No Reference (NR) video quality evaluation methods, which use only the actual data available to the user.

However, existing objective methods suffer from limited applicable scenarios and low accuracy.

Disclosure of Invention

To solve the above technical problem or at least partially solve the above technical problem, the present disclosure provides a video quality assessment method, apparatus, device, and medium.

In a first aspect, the present disclosure provides a video quality assessment method, including:

acquiring a video to be evaluated;

extracting video color data corresponding to each color channel from a video to be evaluated;

according to the video color data, performing time-space domain analysis of a multi-color channel on a video to be evaluated to obtain time-space domain evaluation parameters of the video to be evaluated;

and determining the video quality of the video to be evaluated according to the time-space domain evaluation parameters.

In a second aspect, the present disclosure provides a video quality assessment apparatus, including:

the video acquisition module is configured to acquire a video to be evaluated;

the data extraction module is configured to extract video color data corresponding to each color channel from a video to be evaluated;

the parameter analysis module is configured to perform time-space domain analysis of a multi-color channel on the video to be evaluated according to the video color data to obtain time-space domain evaluation parameters of the video to be evaluated;

and the quality evaluation module is configured to determine the video quality of the video to be evaluated according to the time-space domain evaluation parameters.

In a third aspect, the present disclosure provides a video quality evaluation apparatus, comprising:

a processor;

a memory for storing executable instructions;

wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the video quality assessment method of the first aspect.

In a fourth aspect, the present disclosure provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the video quality assessment method of the first aspect.

Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:

The video quality assessment method, device, equipment, and medium of the disclosed embodiments extract video color data corresponding to each color channel from a video to be assessed, perform time-space domain analysis of multiple color channels on the video according to the video color data to obtain time-space domain assessment parameters, and determine the video quality of the video according to those parameters. Because the assessment uses the video's own multi-color-channel time-space domain characteristics, it works even when no reference video exists. Moreover, since these characteristics represent the visibility of objects, the assessment focuses on the visibility of objects in the video rather than on content richness, and it remains suitable for scenes with clear images and little edge information. The video quality assessment method is therefore applicable to a variety of quality assessment scenarios, and the accuracy of video quality assessment is improved.

Drawings

The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.

Fig. 1 is a schematic flow chart illustrating a video quality assessment method according to an embodiment of the present disclosure;

fig. 2 is a schematic flow chart illustrating a video quality assessment process provided by an embodiment of the present disclosure;

fig. 3 is a schematic structural diagram illustrating a video quality assessment apparatus provided by an embodiment of the present disclosure;

fig. 4 shows a schematic structural diagram of a video quality assessment apparatus provided by an embodiment of the present disclosure.

Detailed Description

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.

It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.

The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.

It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.

It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.

The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.

In the related art, there are generally two types of methods for evaluating video quality: subjective methods and objective methods. Subjective methods require humans to evaluate video quality subjectively; they are inefficient, and it is difficult to formulate a uniform evaluation standard. Objective methods include the FR, RR, and NR video quality evaluation methods.

Because the FR and RR video quality evaluation methods require a reference video, they cannot perform evaluation in scenarios where only the video to be evaluated is available, so their applicable scenarios are limited.

NR video quality evaluation methods generally include methods based on the amount of video information and methods based on edge features of video images. Methods based on the amount of video information mostly analyze the richness of video content; however, in an autonomous driving scenario, what matters is the visibility of the scene outside the vehicle rather than the richness of the content, so such methods are unsuitable for autonomous driving. Methods based on edge features estimate quality from the edge features in video images, and their accuracy is low for scenes with clear images and little edge information.

Therefore, the video quality evaluation methods in the related art have low accuracy, and their applicable scenarios are relatively limited.

In view of this, the embodiments of the present disclosure provide a method, an apparatus, a device, and a medium for video quality assessment, which can improve accuracy of video quality assessment and can be applied to various video quality assessment scenarios. Next, a video quality evaluation method provided by the embodiment of the present disclosure is first explained.

Fig. 1 shows a schematic flow chart of a video quality assessment method provided by an embodiment of the present disclosure.

In some embodiments of the present disclosure, the method shown in fig. 1 may be applied to a video quality assessment apparatus of a vehicle.

As shown in fig. 1, the video quality assessment method may include the following steps.

And S110, acquiring a video to be evaluated.

Specifically, the video to be evaluated is a video that needs to be subjected to video quality evaluation.

The present disclosure does not limit the color mode of the video to be evaluated; the color mode may be, for example, the "red green blue" color mode (RGB color mode for short) or the printing color mode (CMYK color mode for short). In the RGB color mode, a variety of colors is obtained by varying the three color channels of red (R), green (G), and blue (B) and superimposing them on each other. In the CMYK color mode, a variety of colors is obtained by varying the four color channels of cyan (C), magenta (M), yellow (Y), and black (K) and superimposing them on each other.

The scene of the video to be evaluated is not limited in the present disclosure, and the video to be evaluated may be, for example, a video acquired in a scene such as automatic driving and live broadcasting. Optionally, in the embodiment of the present disclosure, the video to be evaluated may be a video of an external environment of the vehicle, which is collected during the driving of the vehicle. For example, vehicle ambient video captured by a vehicle in an autonomous driving scenario is used to assist autonomous driving.

The present disclosure does not limit the format of the video to be evaluated; the video may be, for example, in the RealMedia Variable Bitrate (RMVB), MP4, Moving Picture Experts Group (MPEG), or Audio Video Interleave (AVI) format.

And S120, extracting video color data corresponding to each color channel from the video to be evaluated.

Specifically, the video to be evaluated generally includes multiple frames of images (a frame of image is also referred to as a video frame). Each frame includes a plurality of pixels, and each pixel is composed of the color values of M different color channels, where M is the number of color channels of the video to be evaluated. The color values of the same color channel across all pixels of all frames in the video constitute the video color data corresponding to that color channel; by extracting the video color data corresponding to each color channel, the data of the different color channels in the video can be separated and handled independently.

Specifically, extracting the video color data corresponding to a given color channel means extracting, from each frame of the video to be evaluated, the color value of that channel for every pixel.

Exemplarily, for a video to be evaluated in the RGB color mode, the video color data corresponding to the R channel, the G channel, and the B channel must be extracted separately. Extracting the video color data corresponding to the R channel means extracting the R-channel color values of every pixel in every frame of the video to be evaluated; the video color data corresponding to the G and B channels are extracted in the same way and are not described again here.

For example, for a video in the CMYK color mode, the video color data corresponding to the C channel, the M channel, the Y channel, and the K channel must be extracted separately. Extracting the video color data corresponding to the C channel means extracting the C-channel color values of every pixel in every frame of the video to be evaluated; the video color data corresponding to the M, Y, and K channels are extracted in the same way and are not described again here.
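The following minimal Python sketch (not part of the original specification) illustrates the per-channel extraction for a single RGB frame; it assumes frames are NumPy arrays in OpenCV's BGR channel order, and the function name split_channels is a hypothetical helper:

```python
import numpy as np

def split_channels(frame: np.ndarray) -> dict:
    """Split an H x W x 3 BGR frame (OpenCV convention) into per-channel arrays."""
    # Each value is an H x W array holding that channel's color value for every pixel.
    return {"B": frame[:, :, 0], "G": frame[:, :, 1], "R": frame[:, :, 2]}
```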

S130, according to the video color data, performing time-space domain analysis of a multi-color channel on the video to be evaluated to obtain time-space domain evaluation parameters of the video to be evaluated.

Specifically, the time-space domain evaluation parameters represent factors that affect the video quality jointly from the two aspects of time domain and space domain. A time-space domain evaluation parameter may be one obtained by performing space domain analysis on the corresponding time domain characteristic values.

Optionally, the time-space domain evaluation parameters may include a time-space domain evaluation parameter corresponding to each of a plurality of color channels, and the parameter corresponding to each color channel may be the time-space domain characteristic value obtained after performing space domain analysis on the time domain characteristic values corresponding to that color channel.

For example, for a video to be evaluated in an RGB color mode, the time-space domain evaluation parameters may include an R channel time-space domain evaluation parameter, a G channel time-space domain evaluation parameter, and a B channel time-space domain evaluation parameter; for a video to be evaluated in a CMYK color mode, the time-space domain evaluation parameters may include C-channel time-space domain evaluation parameters, M-channel time-space domain evaluation parameters, Y-channel time-space domain evaluation parameters, and K-channel time-space domain evaluation parameters.

And S140, determining the video quality of the video to be evaluated according to the time-space domain evaluation parameters.

Specifically, the time-space domain evaluation parameter may be a quantization parameter, and the video quality determined according to the time-space domain evaluation parameter may also be a quantization parameter, so that the video quality of the video to be evaluated can be known more intuitively.

In particular, the present disclosure does not limit the specific implementation of determining the video quality from the time-space domain evaluation parameters.

Optionally, the video quality of the video to be evaluated may be calculated by performing a weighted summation or a direct summation on the time-space domain evaluation parameters of each color channel.

For example, for the video to be evaluated in the RGB color mode, the video quality of the video to be evaluated may be calculated by performing a weighted summation or a direct summation of the spatio-temporal domain evaluation parameters of the R channel, the spatio-temporal domain evaluation parameters of the G channel, and the spatio-temporal domain evaluation parameters of the B channel.

By the video quality assessment method of the embodiments of the present disclosure, video color data corresponding to each color channel can be extracted from the video to be assessed, time-space domain analysis of multiple color channels can then be performed on the video according to the video color data to obtain the time-space domain assessment parameters, and the video quality can be determined according to those parameters. Video quality assessment is thus achieved even without a reference video. Since the multi-color-channel time-space domain characteristics of the video can represent the visibility of objects, the assessment focuses on the visibility of objects in the video rather than on richness, and it also suits scenes with clear images but little edge information. The method can therefore be applied to various quality assessment scenarios, and the accuracy of video quality assessment is improved.

In another embodiment of the present disclosure, extracting video color data corresponding to each color channel from a video to be evaluated includes: extracting a plurality of video frames from a video to be evaluated; performing color decomposition on each video frame to obtain image color data corresponding to each color channel; and generating video color data corresponding to each color channel according to the image color data.

Specifically, the manner of extracting the video frames is not limited. Exemplarily, frames may be extracted from the video to be evaluated at a preset frame extraction frequency, so that the quality evaluated on the extracted frames represents the actual quality of the whole video; alternatively, the key video frames in the video may be extracted, so that the quality of key video segments can be evaluated with emphasis.

Optionally, extracting a plurality of video frames from the video to be evaluated includes: determining a video decoding mode according to the video type of a video to be evaluated; according to a video decoding mode, performing video decoding on a video to be evaluated to obtain video decoding data; and performing frame extraction processing on the video decoding data according to a preset frame extraction frequency to obtain a plurality of video frames.

Generally, a video that a user can directly view has been produced by subjecting the raw video data to video encoding. The video encoding mode can be determined from the video type of the video to be evaluated, and since decoding can be regarded as the inverse of encoding, the video decoding mode can be determined once the video encoding mode is known.

After the video to be evaluated is decoded, the resulting video decoding data includes the image frame decoding data corresponding to each frame of image. Performing frame extraction on the video decoding data at a preset frame extraction frequency extracts the image frame decoding data of multiple frames, yielding the plurality of video frames. Illustratively, n video frames are extracted from the video to be evaluated, denoted img_t1, img_t2, img_t3, …, img_tn, where img_ti represents the video frame extracted at timestamp ti.

It can be understood that, because video decoding is performed before frame extraction, the original video decoding data can be obtained through decoding regardless of the type of the video to be evaluated, which facilitates the subsequent frame extraction processing.
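As a hedged sketch of this decode-then-sample step (the disclosure does not prescribe an implementation), the following assumes OpenCV's VideoCapture as the decoder, with the codec selected automatically from the container, and keeps every k-th frame as the "preset frame extraction frequency":

```python
import cv2  # assumed dependency; any decoder that yields frames in order would do

def extract_frames(video_path: str, every_k: int = 5) -> list:
    """Decode a video and keep every k-th decoded frame as a sampled video frame."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream or decode failure
            break
        if index % every_k == 0:
            frames.append(frame)  # H x W x 3 BGR array
        index += 1
    cap.release()
    return frames
```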

Specifically, as described above, each frame image includes a plurality of pixels, each of which is synthesized from the color values of M different color channels. The color values of the same color channel in one frame image constitute the image color data corresponding to that color channel. Extracting the image color data corresponding to a given color channel (e.g., the S channel) means taking the decoded data of a single frame as the operation object and extracting the color value of that channel for every pixel in the frame. For example, extracting the image color data corresponding to the R channel means extracting the R-channel color value of each pixel in the same frame of image; the image color data corresponding to the other channels are extracted in the same way and are not described again here.

Specifically, for the n extracted video frames, splicing the image color data corresponding to the same color channel of each video frame in a preset splicing manner yields the video color data corresponding to that color channel. The present disclosure does not limit the preset splicing manner. Optionally, generating the video color data corresponding to each color channel according to the image color data includes: splicing the image color data corresponding to each color channel according to the video frame order of the plurality of video frames to obtain the video color data corresponding to each color channel. The video color data corresponding to a given color channel (e.g., the S channel) may take the form of a list, S_list = [S_t1, S_t2, …, S_tn], where S_ti represents the image color data corresponding to the S channel of the video frame extracted at timestamp ti.

Illustratively, for a video to be evaluated in the RGB color mode, the color mode of the n video frames extracted from it is also the RGB color mode. Extracting the color values of the R channel of all pixels in the same frame yields the image color data corresponding to the R channel of that video frame; splicing the image color data corresponding to the R channel of each video frame according to the video frame order of the frames to which they belong yields the video color data corresponding to the R channel, which may take the form of a list, R_list = [R_t1, R_t2, …, R_tn], where R_ti represents the image color data corresponding to the R channel of the video frame extracted at timestamp ti. The video color data corresponding to the G and B channels are obtained in the same manner and are not described again here: G_list = [G_t1, G_t2, …, G_tn], where G_ti represents the image color data corresponding to the G channel of the video frame extracted at timestamp ti; and B_list = [B_t1, B_t2, …, B_tn], where B_ti represents the image color data corresponding to the B channel of the video frame extracted at timestamp ti.

Illustratively, for a video to be evaluated in the CMYK color mode, the color mode of the n video frames extracted from it is also the CMYK color mode. Extracting the color values of the C channel of all pixels in the same frame yields the image color data corresponding to the C channel of that video frame; splicing the image color data corresponding to the C channel of each video frame according to the video frame order of the frames to which they belong yields the video color data corresponding to the C channel, which may take the form of a list, C_list = [C_t1, C_t2, …, C_tn], where C_ti represents the image color data corresponding to the C channel of the video frame extracted at timestamp ti. The video color data corresponding to the M, Y, and K channels are obtained in the same manner and are not described again here: M_list = [M_t1, M_t2, …, M_tn], where M_ti represents the image color data corresponding to the M channel of the video frame extracted at timestamp ti; Y_list = [Y_t1, Y_t2, …, Y_tn], where Y_ti represents the image color data corresponding to the Y channel of the video frame extracted at timestamp ti; and K_list = [K_t1, K_t2, …, K_tn], where K_ti represents the image color data corresponding to the K channel of the video frame extracted at timestamp ti.

It can be understood that, by splicing the image color data corresponding to the same color channel of each video frame according to the video frame order of the plurality of video frames, the ordering of each resulting per-channel data list matches the frame order of the video to be evaluated. This facilitates subsequent video quality evaluation based on the video color data and helps improve the accuracy of the evaluation result.
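A minimal sketch of the decompose-and-splice step, reusing the hypothetical split_channels and extract_frames helpers above; stacking along a leading time axis is one plausible realization of "splicing in video frame order":

```python
import numpy as np

def build_channel_lists(frames: list) -> dict:
    """Splice per-frame channel data, in frame order, into per-channel video color data."""
    channel_lists = {"R": [], "G": [], "B": []}
    for frame in frames:  # frames are already in video-frame order
        channels = split_channels(frame)
        for name in channel_lists:
            channel_lists[name].append(channels[name])
    # Stack into n x H x W arrays: axis 0 is the frame (time) axis, like S_list.
    return {name: np.stack(data) for name, data in channel_lists.items()}
```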

In another embodiment of the present disclosure, performing time-space domain analysis of multiple color channels on the video to be evaluated according to the video color data to obtain the time-space domain evaluation parameters of the video to be evaluated includes: for each color channel, performing time-space domain analysis on the video to be evaluated according to the video color data to obtain a single-channel time-space domain characteristic value corresponding to the color channel; fusing the single-channel time-space domain characteristic values to obtain a multi-channel time-space domain characteristic value; and taking the multi-channel time-space domain characteristic value as the time-space domain evaluation parameter.

Specifically, the single-channel time-space domain characteristic value corresponding to a given color channel (e.g., the S channel) reflects the factors that affect the video quality of that channel of the video to be evaluated from the two dimensions of time domain and space domain; it is obtained by performing time-space domain analysis of the video according to the video color data corresponding to that channel.

Optionally, for each color channel, performing time-space domain analysis on the video to be evaluated according to the video color data, and obtaining a single-channel time-space domain characteristic value corresponding to the color channel includes: for each color channel, performing time domain analysis on the video to be evaluated according to the video color data to obtain a time domain characteristic value of the video to be evaluated for the color channel; and performing space-domain analysis aiming at the color channel on the video to be evaluated according to the time-domain characteristic value to obtain a single-channel time-space domain characteristic value corresponding to the color channel.

Optionally, the time domain feature values include local time domain feature values and global time domain feature values. The local temporal feature value is used to characterize the color channel value variation rate between two consecutive video frames, where "two consecutive video frames" refers to two adjacent frames of a plurality of video frames extracted from the video to be evaluated. The global temporal feature value is used to characterize the color channel value variation rate between two video frames that are spaced by a preset number of frames.

Optionally, for each color channel, performing time domain analysis on the video to be evaluated according to the video color data to obtain the time domain characteristic value of the video for that color channel includes: performing time domain analysis on the video according to the video color data to obtain a local time domain characteristic value and a global time domain characteristic value of the video for that color channel; and fusing the local time domain characteristic value and the global time domain characteristic value into the time domain characteristic value of the video to be evaluated for that color channel.

It can be understood that the local time domain characteristic value better expresses the similarity between two consecutive frames, and hence the degree of video jitter, while the global time domain characteristic value better expresses the overall characteristics over the time sequence. Fusing the two into one time domain characteristic value lets it express both. The present disclosure does not limit the specific fusion manner of the local and global time domain characteristic values.

Specifically, the multi-channel time-space domain eigenvalue refers to a factor that affects the video quality of each color channel of the video to be evaluated from two dimensions of a time domain and a space domain, and is obtained by fusing the single-channel time-space domain eigenvalues.

In another implementation of the present disclosure, for each color channel, performing time-space domain analysis on a video to be evaluated according to video color data, and obtaining a single-channel time-space domain feature value corresponding to the color channel includes: performing time domain analysis on each pixel in a video picture of a video to be evaluated according to video color data to obtain a time domain characteristic value of the pixel for a color channel; and performing space-domain analysis aiming at the color channel on the video to be evaluated according to the time-domain characteristic value to obtain a single-channel time-space domain characteristic value corresponding to the color channel.

Specifically, the temporal feature value of a pixel for a color channel refers to a factor that affects the video quality of one pixel in the video to be evaluated from a temporal perspective. The present disclosure is not limited to the specific implementation of obtaining temporal feature values for color channels for pixels.

Optionally, performing time domain analysis on the pixel according to the video color data to obtain a time domain characteristic value of the pixel for the color channel includes: calculating, according to the video color data, a local time domain characteristic value and a global time domain characteristic value of the pixel for the color channel; and fusing the local time domain characteristic value and the global time domain characteristic value into the time domain characteristic value of the pixel for the color channel.

Specifically, the local temporal feature values of the pixels for the color channels are used to characterize the color channel value change rate of two consecutive video frames at the same pixel location. The present disclosure does not limit the specific manner in which the local temporal feature values for the color channels of the pixels are calculated.

Optionally, the local temporal feature value of a pixel for a color channel is calculated according to the following formula:

f_l(list) = (1/(n − 1)) · Σ_{i=1}^{n−1} |list[i+1] − list[i]|;

wherein f_l represents a time-sequential local feature extractor oriented to single-channel continuous-frame data, n represents the number of video frames extracted from the video to be evaluated, list[i] represents the data at this pixel of the video frame with timestamp t_i, and list[i+1] represents the data at this pixel of the video frame with timestamp t_{i+1}.

Inputting the video color data S_list corresponding to a given color channel (e.g., the S channel) into the time-sequential local feature extractor f_l yields the local temporal feature value of each pixel for the S channel: S_local = f_l(S_list).

Specifically, the global temporal feature value of a pixel for a color channel is used to characterize the color channel value change rate, at the same pixel position, between two video frames spaced a preset number of frames apart. The present disclosure does not limit the specific manner in which the global temporal feature values of the pixels for the color channels are calculated.

Optionally, the global temporal feature value of a pixel for a color channel is calculated according to the following formula:

f_g(list) = (1/(n − x)) · Σ_{i=1}^{n−x} |list[i+x] − list[i]|;

wherein f_g represents a time-sequential global feature extractor oriented to single-channel continuous-frame data, n represents the number of video frames extracted from the video to be evaluated, list[i] represents the data at this pixel of the video frame with timestamp t_i, list[i+x] represents the data at this pixel of the video frame with timestamp t_{i+x}, and x is the preset number of interval frames; the present disclosure does not limit the specific value of x, which is exemplarily 10.

Inputting the video color data S_list corresponding to a given color channel (e.g., the S channel) into the time-sequential global feature extractor f_g yields the global temporal feature value of each pixel for the S channel: S_global = f_g(S_list).
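A hedged NumPy sketch of both extractors, under my assumption (not stated in the source) that the "change rate" is the mean absolute per-pixel difference between the paired frames:

```python
import numpy as np

def f_local(channel_stack: np.ndarray) -> np.ndarray:
    """Local temporal feature: mean |frame[i+1] - frame[i]| per pixel (input n x H x W)."""
    diffs = np.abs(np.diff(channel_stack.astype(np.float64), axis=0))
    return diffs.mean(axis=0)  # H x W map of local temporal feature values

def f_global(channel_stack: np.ndarray, x: int = 10) -> np.ndarray:
    """Global temporal feature: mean |frame[i+x] - frame[i]| per pixel."""
    stack = channel_stack.astype(np.float64)
    diffs = np.abs(stack[x:] - stack[:-x])
    return diffs.mean(axis=0)
```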

Illustratively, when the color mode of the video to be evaluated is the RGB color mode, inputting the video color data R_list corresponding to the R channel into the time-sequential local feature extractor f_l and the time-sequential global feature extractor f_g respectively yields each pixel's local temporal feature value R_local and global temporal feature value R_global for the R channel: R_local = f_l(R_list), R_global = f_g(R_list). Similarly, each pixel's local and global temporal feature values for the G channel, G_local and G_global, and for the B channel, B_local and B_global, are obtained: G_local = f_l(G_list), G_global = f_g(G_list), B_local = f_l(B_list), B_global = f_g(B_list).

Exemplarily, when the color mode of the video to be evaluated is the CMYK color mode, inputting the video color data C_list corresponding to the C channel into the time-sequential local feature extractor f_l and the time-sequential global feature extractor f_g respectively yields each pixel's local temporal feature value C_local and global temporal feature value C_global for the C channel: C_local = f_l(C_list), C_global = f_g(C_list). Similarly, each pixel's local and global temporal feature values for the M, Y, and K channels are obtained: M_local = f_l(M_list), M_global = f_g(M_list), Y_local = f_l(Y_list), Y_global = f_g(Y_list), K_local = f_l(K_list), K_global = f_g(K_list).

Optionally, fusing the local time domain characteristic value and the global time domain characteristic value into a time domain characteristic value of the pixel for the color channel includes: splicing the local time domain characteristic value and the global time domain characteristic value to obtain the time domain characteristic value of the pixel for the color channel.

Specifically, the present disclosure does not limit the function used to splice the local temporal feature value and the global temporal feature value; for example, they may be spliced through a concatenation (concat) operation, which may be implemented by a CONCATENATE function. The temporal feature value of a pixel for a given color channel (e.g., the S channel) is thus S_temp = θ(S_local, S_global), where θ represents the concat operation.

Illustratively, when the color mode of the video to be evaluated is the RGB color mode, the temporal feature value of a pixel for the R channel is R_temp = θ(R_local, R_global), for the G channel G_temp = θ(G_local, G_global), and for the B channel B_temp = θ(B_local, B_global).

Illustratively, when the color mode of the video to be evaluated is the CMYK color mode, the temporal feature value of a pixel for the C channel is C_temp = θ(C_local, C_global), for the M channel M_temp = θ(M_local, M_global), for the Y channel Y_temp = θ(Y_local, Y_global), and for the K channel K_temp = θ(K_local, K_global).
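A one-line sketch of the fusion operator θ; stacking the two H x W maps along a new leading axis is one plausible reading of the concat operation:

```python
import numpy as np

def theta(local_map: np.ndarray, global_map: np.ndarray) -> np.ndarray:
    """Fuse local and global temporal feature maps into one 2 x H x W feature value."""
    return np.stack([local_map, global_map], axis=0)
```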

Optionally, performing, according to the time domain characteristic values, space domain analysis for the color channel on the video to be evaluated to obtain the single-channel time-space domain characteristic value corresponding to the color channel includes: dividing the video picture into a plurality of image blocks; for each image block, calculating a space domain characteristic value of the image block for the color channel according to the time domain characteristic values corresponding to the pixels contained in the image block; and taking the space domain characteristic values corresponding to the image blocks as the single-channel time-space domain characteristic value corresponding to the color channel.

Specifically, when dividing the video picture into a plurality of image blocks in the spatial domain, the picture may be divided into equal-sized blocks, which keeps the division simple; alternatively, at least two image blocks with different areas may be used, so that the blocks can be divided flexibly according to the actual situation.

Specifically, each image block includes a plurality of pixels, each color channel of each pixel corresponds to a time domain feature value, and the time domain feature values corresponding to the same color channel of each pixel in the same image block can be calculated according to a preset calculation mode to obtain a space domain feature value of the image block for the color channel. The present disclosure does not limit the specific implementation of the preset calculation manner.

Optionally, the time domain feature values corresponding to the same color channel of each pixel in the image block are summed to obtain a single-channel spatial domain feature value of the color channel of the image block.

In particular, suppose the video picture is divided into m1 × m2 image blocks. The single-channel space domain characteristic value of the z-th image block for a given color channel (the S channel) is

S_st[z] = Σ_{p = x_z}^{x_z + w_z − 1} Σ_{q = y_z}^{y_z + h_z − 1} S_temp(p, q);

wherein x_z, y_z respectively represent the horizontal and vertical coordinates of the upper-left corner of the z-th image block; h_z, w_z respectively represent the height and width of the z-th image block; and p, q respectively represent the horizontal and vertical coordinates of a pixel.
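A sketch of the equal-division variant, assuming the temporal feature map has first been reduced to one scalar per pixel (for instance by summing the two concatenated components), so each block's spatial feature value is a plain sum over its pixels:

```python
import numpy as np

def block_spatial_features(temp_map: np.ndarray, m1: int, m2: int) -> np.ndarray:
    """Divide an H x W temporal feature map into m1 x m2 blocks and sum within each."""
    h, w = temp_map.shape
    bh, bw = h // m1, w // m2  # equal-size blocks; any remainder rows/columns are dropped
    blocks = np.empty((m1, m2))
    for i in range(m1):
        for j in range(m2):
            blocks[i, j] = temp_map[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw].sum()
    return blocks  # entry (i, j) is that block's spatial feature value
```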

Illustratively, when the color mode of the video to be evaluated is the RGB color mode, the video picture may be equally divided into m1 × m2 image blocks, and the single-channel space domain characteristic values of the z-th image block for the R, G, and B channels are R_st[z], G_st[z], and B_st[z], respectively.

Illustratively, when the color mode of the video to be evaluated is the CMYK color mode, the video picture may be equally divided into m1 × m2 image blocks, and the single-channel space domain characteristic values of the z-th image block for the C, M, Y, and K channels are C_st[z], M_st[z], Y_st[z], and K_st[z], respectively.

Optionally, fusing the single-channel time-space domain characteristic values to obtain the multi-channel time-space domain characteristic value includes: for each image block, calculating the difference between the maximum and the minimum of the space domain characteristic values that the image block has across the color channels; and taking the difference value corresponding to each image block as the multi-channel time-space domain characteristic value.

Specifically, the multi-channel time-space domain characteristic value Mul_st(z) for the z-th image block is calculated according to the following formula:

Mul_st(z) = max(S1_st[z], S2_st[z], …, SM_st[z]) − min(S1_st[z], S2_st[z], …, SM_st[z]);

wherein Sm_st[z] is the single-channel space domain characteristic value of the z-th image block for the m-th color channel Sm, m is a positive integer with 1 ≤ m ≤ M, and M is the total number of color channels contained in the video to be evaluated.

Illustratively, when the color mode of the video to be evaluated is the RGB color mode, the multi-channel time-space domain characteristic value for the z-th image block is as follows:

Mul_st(z) = max(R_st[z], G_st[z], B_st[z]) − min(R_st[z], G_st[z], B_st[z]).

Illustratively, when the color mode of the video to be evaluated is the CMYK color mode, the multi-channel time-space domain characteristic value Mul_st(z) for the z-th image block is as follows:

Mul_st(z) = max(C_st[z], M_st[z], Y_st[z], K_st[z]) − min(C_st[z], M_st[z], Y_st[z], K_st[z]).

Optionally, determining the video quality of the video to be evaluated according to the time-space domain evaluation parameters includes: adding the difference values corresponding to the image blocks to obtain the video quality.

Specifically, the video quality may be expressed as the sum of the multi-channel time-space domain characteristic values over all image blocks: Quality = Σ_z Mul_st(z), where z ranges over the m1 × m2 image blocks.
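A sketch of these last two steps, taking one per-block spatial feature array per channel (e.g., the output of the hypothetical block_spatial_features above) and producing the scalar quality score:

```python
import numpy as np

def video_quality(channel_blocks: dict) -> float:
    """Max-min fuse per-channel block features, then sum the per-block differences."""
    stacked = np.stack(list(channel_blocks.values()))   # M x m1 x m2
    mul_st = stacked.max(axis=0) - stacked.min(axis=0)  # Mul_st per image block
    return float(mul_st.sum())                          # video quality score
```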

hereinafter, a vehicle control method provided by the embodiment of the present disclosure will be described in detail based on a specific example.

Fig. 2 shows a schematic flow chart of a video quality evaluation process provided by an embodiment of the present disclosure.

As shown in fig. 2, the video quality evaluation process may specifically include the following steps.

S210, according to a preset frame extraction frequency, performing frame extraction processing on a video to be evaluated to obtain n video frames.

As shown in FIG. 2, the n video frames are denoted img_t1, img_t2, …, img_tn.

S220, performing color decomposition on each video frame to obtain image color data corresponding to a red color channel, image color data corresponding to a green color channel and image color data corresponding to a blue color channel.

As shown in FIG. 2, after color decomposition, video frame img_t1 yields image color data R_t1 corresponding to the red channel, image color data G_t1 corresponding to the green channel, and image color data B_t1 corresponding to the blue channel; video frame img_t2 yields image color data R_t2, G_t2, and B_t2 correspondingly; and so on.

And S230, splicing the image color data corresponding to each color channel according to the video frame sequence of the n video frames to obtain video color data corresponding to a red color channel, video color data corresponding to a green color channel and video color data corresponding to a blue color channel.

As shown in FIG. 2, the image color data corresponding to the red channel (R_t1 to R_tn) are spliced to obtain the video color data corresponding to the red color channel; the image color data corresponding to the green channel (G_t1 to G_tn) are spliced to obtain the video color data corresponding to the green color channel; and the image color data corresponding to the blue channel (B_t1 to B_tn) are spliced to obtain the video color data corresponding to the blue color channel.

S240, performing time-space domain analysis on the video to be evaluated according to the video color data for each color channel to obtain a single-channel time-space domain characteristic value corresponding to the red channel, a single-channel time-space domain characteristic value corresponding to the green channel and a single-channel time-space domain characteristic value corresponding to the blue channel.

And S250, fusing the single-channel time-space domain characteristic value corresponding to the red channel, the single-channel time-space domain characteristic value corresponding to the green channel and the single-channel time-space domain characteristic value corresponding to the blue channel to obtain a multi-channel time-space domain characteristic value.

And S260, determining the video quality of the video to be evaluated according to the multi-channel time-space domain characteristic values.
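Chaining the hypothetical helpers sketched above reproduces steps S210 through S260 for an RGB video; every function name and parameter here is an assumption carried over from the earlier sketches, not an API defined by the disclosure:

```python
def assess(video_path: str) -> float:
    frames = extract_frames(video_path, every_k=5)        # S210: frame extraction
    channel_lists = build_channel_lists(frames)           # S220-S230: decompose and splice
    channel_blocks = {}
    for name, stack in channel_lists.items():             # S240: per-channel analysis
        temp_map = theta(f_local(stack), f_global(stack)).sum(axis=0)
        channel_blocks[name] = block_spatial_features(temp_map, m1=8, m2=8)
    return video_quality(channel_blocks)                  # S250-S260: fuse and score
```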

Fig. 3 shows a schematic structural diagram of a video quality evaluation apparatus 300 according to an embodiment of the present disclosure.

In some embodiments of the present disclosure, the apparatus shown in fig. 3 may be applied to a video quality assessment device of a vehicle, wherein the video quality assessment device of the vehicle may be an autopilot system controller of the vehicle.

As shown in fig. 3, the video quality assessment apparatus 300 may include: a video acquisition module 310, which may be configured to acquire a video to be assessed; a data extraction module 320, which may be configured to extract video color data corresponding to each color channel from the video to be evaluated; a parameter analysis module 330, which may be configured to perform time-space domain analysis of multiple color channels on the video to be evaluated according to the video color data to obtain time-space domain evaluation parameters of the video to be evaluated; and a quality evaluation module 340, which may be configured to determine the video quality of the video to be evaluated according to the time-space domain evaluation parameters.

The video quality assessment device of the embodiments of the present disclosure can extract video color data corresponding to each color channel from a video to be assessed, perform time-space domain analysis of multiple color channels on the video according to the video color data to obtain time-space domain assessment parameters, and assess the video quality according to those parameters. Video quality assessment is thus achieved even without a reference video. Because the multi-color-channel time-space domain characteristics of the video can represent the visibility of objects, the device focuses on the visibility of objects in the video rather than on richness when assessing quality, and it also suits scenes with clear images but little edge information. The device can therefore be applied to various quality assessment scenarios, and the accuracy of video quality assessment is improved.

In some embodiments of the present disclosure, the data extraction module 320 may include: a video frame extraction submodule, an image color data obtaining submodule, and a video color data generation submodule;

a video frame extraction submodule, which can be configured to extract a plurality of video frames from a video to be evaluated;

the image color data obtaining submodule may be configured to perform, for each video frame, color decomposition on the video frame to obtain image color data corresponding to each color channel;

and the video color data generation submodule can be configured to generate video color data corresponding to each color channel according to the image color data.

In some embodiments of the present disclosure, the video frame extraction submodule may include: a video decoding mode determining unit, a video decoding data obtaining unit, and a video frame obtaining unit;

the video decoding mode determining unit may be configured to determine a video decoding mode according to a video type of a video to be evaluated;

the video decoding data obtaining unit can be configured to perform video decoding on the video to be evaluated according to a video decoding mode to obtain video decoding data;

the video frame obtaining unit may be configured to perform frame extraction processing on the video decoded data according to a preset frame extraction frequency to obtain a plurality of video frames.
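By way of example, and not limitation, the decode-then-decimate behaviour of these three units can be sketched as follows. OpenCV (cv2) and the fixed sampling step are assumptions here, since the embodiment leaves the concrete decoder and the preset frame-extraction frequency open:

    import cv2

    def extract_frames(video_path, every_nth=5):
        # Decode the video (cv2 selects a decoder from the container and
        # codec, standing in for the video-type-dependent decoding mode)
        # and keep one frame out of every `every_nth` decoded frames as
        # the preset frame-extraction frequency.
        cap = cv2.VideoCapture(video_path)
        frames, index = [], 0
        while True:
            ok, frame_bgr = cap.read()
            if not ok:
                break
            if index % every_nth == 0:
                frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
            index += 1
        cap.release()
        return frames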

In some embodiments of the present disclosure, the video color data generation sub-module may be specifically configured to splice the image color data corresponding to each color channel according to a video frame sequence of a plurality of video frames, so as to obtain the video color data corresponding to each color channel.

In some embodiments of the present disclosure, the parameter analysis module 330 may include: a single-channel time-space domain characteristic value obtaining submodule, a multi-channel time-space domain characteristic value obtaining submodule, and a time-space domain evaluation parameter obtaining submodule;

the single-channel time-space domain characteristic value obtaining submodule can be configured to perform time-space domain analysis on a video to be evaluated according to video color data for each color channel to obtain a single-channel time-space domain characteristic value corresponding to the color channel;

the multi-channel time-space domain characteristic value obtaining submodule can be configured to fuse single-channel time-space domain characteristic values to obtain multi-channel time-space domain characteristic values;

and the time-space domain evaluation parameter obtaining submodule can be configured to take the multi-channel time-space domain characteristic value as a time-space domain evaluation parameter.

In some embodiments of the present disclosure, the single-channel time-space domain characteristic value obtaining submodule may include: a pixel time domain characteristic value obtaining unit and a single-channel time-space domain characteristic value obtaining unit;

a pixel time domain characteristic value obtaining unit, which may be configured to perform, for each pixel in a video picture of the video to be evaluated, time domain analysis on the pixel according to the video color data, so as to obtain a time domain characteristic value of the pixel for the color channel;

and the single-channel time-space domain characteristic value obtaining unit is used for carrying out space domain analysis aiming at the color channel on the video to be evaluated according to the time domain characteristic value to obtain a single-channel time-space domain characteristic value corresponding to the color channel.

In some embodiments of the present disclosure, the pixel time domain characteristic value obtaining unit may include: a time domain characteristic value calculating subunit and a pixel time domain characteristic value obtaining subunit;

the time domain characteristic value calculating subunit may be configured to calculate, according to the video color data, a local time domain characteristic value and a global time domain characteristic value of a pixel for the color channel;

the pixel temporal feature value obtaining subunit may be configured to fuse the local temporal feature value and the global temporal feature value into a temporal feature value of the pixel for the color channel.

In some embodiments of the present disclosure, the local temporal feature value is used to characterize a color channel value variation rate between two consecutive video frames, and the global temporal feature value is used to characterize a color channel value variation rate between two video frames spaced apart by a preset number of frames.

In some embodiments of the present disclosure, the pixel time domain feature value obtaining subunit may be specifically configured to splice the local time domain feature value and the global time domain feature value to obtain a time domain feature value of the pixel for the color channel.
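One plausible reading of the local and global time domain values and of their splicing, sketched in Python with numpy. The absolute frame difference and the parameter name gap are assumptions, since the embodiment does not fix the rate-of-change measure or the preset frame spacing:

    import numpy as np

    def temporal_features(channel, gap=10):
        # channel: n x H x W video color data of one channel, with n > gap.
        x = channel.astype(np.float32)
        local = np.abs(np.diff(x, axis=0))    # change between consecutive frames
        global_ = np.abs(x[gap:] - x[:-gap])  # change between frames `gap` apart
        # Splice the local and global values into one per-pixel time domain
        # feature, matching the splicing described above.
        k = min(len(local), len(global_))
        return np.concatenate([local[:k], global_[:k]], axis=0)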

In some embodiments of the present disclosure, the single-channel time-space domain feature value obtaining unit may include: a video picture dividing subunit, an image block spatial domain feature value calculating subunit, and a single-channel time-space domain feature value obtaining subunit;

a video picture dividing subunit, which may be configured to divide the video picture into a plurality of image blocks;

the image block spatial domain feature value calculating subunit may be configured to calculate, for each image block, a spatial domain feature value of the image block for the color channel according to the time domain feature values corresponding to the pixels included in the image block;

and the single-channel time-space domain characteristic value obtaining subunit can be configured to use the space domain characteristic value corresponding to each image block as a single-channel time-space domain characteristic value corresponding to the color channel.
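A minimal sketch of this block-wise spatial analysis, with the block size and the variance statistic as assumptions, since the embodiment specifies neither:

    import numpy as np

    def blockwise_spatial_features(temporal, block=16):
        # temporal: T x H x W time domain feature values of one color channel.
        t, h, w = temporal.shape
        hb, wb = h // block, w // block
        feats = np.empty((hb, wb), dtype=np.float32)
        for i in range(hb):
            for j in range(wb):
                patch = temporal[:, i * block:(i + 1) * block,
                                    j * block:(j + 1) * block]
                feats[i, j] = patch.var()  # assumed per-block statistic
        return feats  # one spatial domain feature value per image block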

In some embodiments of the present disclosure, the multi-channel time-space domain characteristic value obtaining submodule may include: a difference value calculation unit and a multi-channel time-space domain characteristic value obtaining unit;

a difference value calculation unit, which may be configured to calculate, for each image block, the difference between the maximum spatial domain feature value and the minimum spatial domain feature value that the image block has across the color channels;

and the multi-channel time-space domain characteristic value obtaining unit can be configured to take the difference value corresponding to each image block as a multi-channel time-space domain characteristic value.

The quality evaluation module 340 may be specifically configured to add the difference values corresponding to the image blocks to obtain the video quality.
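Taken together, the difference calculation and the summation can be sketched as follows (numpy assumed). Chained with the earlier sketches, this traces one plausible end-to-end pipeline: extract_frames, split_video_color_data, temporal_features per channel, blockwise_spatial_features per channel, then video_quality:

    import numpy as np

    def video_quality(channel_feats):
        # channel_feats: dict mapping a channel name to an hb x wb array of
        # spatial domain feature values for that channel.
        stacked = np.stack(list(channel_feats.values()), axis=0)  # C x hb x wb
        # Per image block: difference between the largest and smallest
        # spatial feature values across the color channels.
        diff = stacked.max(axis=0) - stacked.min(axis=0)
        # Sum the per-block differences into the video quality score.
        return float(diff.sum())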

It should be noted that the video quality assessment apparatus 300 shown in fig. 3 may perform each step in the method embodiments shown in fig. 1 and fig. 2, and implement each process and effect in the method embodiments shown in fig. 1 and fig. 2, which are not described herein again.

Fig. 4 shows a schematic structural diagram of a video quality assessment apparatus provided by an embodiment of the present disclosure.

In some embodiments of the present disclosure, the video quality assessment apparatus shown in fig. 4 may be an autonomous driving system of a vehicle.

As shown in fig. 4, the video quality assessment apparatus may include a processor 401 and a memory 402 storing computer program instructions.

Specifically, the processor 401 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing the embodiments of the present disclosure.

The memory 402 may include mass storage for data or instructions. By way of example, and not limitation, the memory 402 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. The memory 402 may include removable or non-removable (or fixed) media, where appropriate. The memory 402 may be internal or external to the integrated gateway device, where appropriate. In a particular embodiment, the memory 402 is a non-volatile solid-state memory. In a particular embodiment, the memory 402 includes read-only memory (ROM). Where appropriate, the ROM may be a mask-programmed ROM, a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), an electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.

The processor 401 reads and executes the computer program instructions stored in the memory 402 to perform the steps of the video quality assessment method provided by the embodiments of the present disclosure.

In one example, the video quality assessment device may further include a transceiver 403 and a bus 404. As shown in fig. 4, the processor 401, the memory 402, and the transceiver 403 are connected via the bus 404 and communicate with one another over it.

The bus 404 includes hardware, software, or both. By way of example, and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or another suitable bus, or a combination of two or more of these. The bus 404 may include one or more buses, where appropriate. Although specific buses are described and shown in the embodiments of the present disclosure, any suitable bus or interconnect is contemplated.

The disclosed embodiments also provide a computer-readable storage medium, which may store a computer program, and when the computer program is executed by a processor, the processor is enabled to implement the video quality assessment method provided by the disclosed embodiments.

The storage medium may include, for example, the memory 402 storing computer program instructions, which are executable by the processor 401 of the video quality assessment device to perform the video quality assessment method provided by the embodiments of the present disclosure. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, for example a ROM, a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, or the like.

It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises" and "comprising" are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus.

The foregoing are merely exemplary embodiments of the present disclosure, which enable those skilled in the art to understand or practice the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
