Video signal processing method and device

Document No.: 1630847  Publication date: 2020-01-14

Reading note: this technique, "Video signal processing method and device", was created by 王正, 袁乐, 吴仁坚 and 黄芳 on 2018-07-19. Its main content is as follows: the application discloses a video signal processing method and device, which can perform chrominance compensation on a video signal to be processed according to a saturation adjustment factor corresponding to the initial luminance value of the video signal to be processed, so that the color of the chrominance-compensated video signal, as perceived by the human eye, is closer to the color of the video signal before luminance mapping.

1. A video signal processing method, comprising:

performing brightness mapping on an initial brightness value of a video signal to be processed to obtain an adjusted brightness value;

determining a saturation mapping curve according to the ratio of the adjusted brightness value to the initial brightness value;

determining a saturation adjustment factor corresponding to the initial brightness value according to the saturation mapping curve;

and adjusting the chrominance value of the video signal to be processed based on the saturation adjustment factor.

2. The method of claim 1, wherein the saturation mapping curve is a function of the initial luminance value as an independent variable and the ratio as a dependent variable.

3. The method according to claim 1 or 2, wherein the saturation adjustment factor is determined by a mapping table comprising a correspondence of abscissa and ordinate values of at least one sample point on the saturation mapping curve.

4. The method according to any one of claims 1 to 3, wherein said adjusting chrominance values of said video signal to be processed comprises:

and adjusting the chrominance value of the video signal to be processed based on the product of a preset chrominance component gain coefficient and the saturation adjustment factor.

5. The method of claim 4, wherein the chrominance values comprise a first chrominance value of a first chrominance signal corresponding to the video signal to be processed and a second chrominance value of a second chrominance signal corresponding to the video signal to be processed, the preset chrominance component gain coefficients comprise a preset first chrominance component gain coefficient and a preset second chrominance component gain coefficient, and the adjusting the chrominance values of the video signal to be processed based on a product of the preset chrominance component gain coefficient and the saturation adjustment factor comprises:

adjusting the first chrominance value based on the product of the preset first chrominance component gain coefficient and the saturation adjustment factor;

and adjusting the second chrominance value based on the product of the preset second chrominance component gain coefficient and the saturation adjustment factor.

6. The method according to any of claims 1 to 5, wherein said luminance mapping an initial luminance value of the video signal to be processed to obtain an adjusted luminance value comprises:

and performing brightness mapping on the initial brightness value according to a brightness mapping curve to obtain the adjusted brightness value.

7. The method according to claim 6, wherein the brightness mapping curve is used to indicate a mapping relationship between the initial brightness value and the adjusted brightness value, an abscissa of the brightness mapping curve is the initial brightness value before brightness mapping, and an ordinate of the brightness mapping curve is the adjusted brightness value after brightness mapping.

8. The method according to claim 7, wherein the saturation mapping curve belongs to a target non-linear space, a preset first original brightness mapping curve is a non-linear curve, and before the brightness mapping of the initial brightness value according to the brightness mapping curve to obtain the adjusted brightness value, the method further comprises:

respectively converting a first abscissa value and a first ordinate value corresponding to at least one sampling point on the first original brightness mapping curve from a nonlinear space to a linear space to obtain a second abscissa value and a second ordinate value, wherein the abscissa of the first original brightness mapping curve is a brightness value before brightness mapping, and the ordinate of the first original brightness mapping curve is a brightness value after brightness mapping;

respectively converting the second abscissa value and the second ordinate value from a linear space to a non-linear space to obtain the initial brightness value and the adjusted brightness value;

and determining the brightness mapping curve according to the mapping relation between the initial brightness value and the adjusted brightness value, wherein the brightness mapping curve belongs to the target nonlinear space.

9. The method according to claim 7, wherein the saturation mapping curve belongs to a target non-linear space, and the preset second original brightness mapping curve is a linear curve, further comprising:

converting a third abscissa value and a third ordinate value corresponding to at least one sampling point on the second original brightness mapping curve from a linear space to a nonlinear space, respectively, to obtain the initial brightness value and the adjusted brightness value, wherein the abscissa of the second original brightness mapping curve is a brightness value before brightness mapping, and the ordinate of the second original brightness mapping curve is a brightness value after brightness mapping;

and determining the brightness mapping curve according to the mapping relation between the initial brightness value and the adjusted brightness value, wherein the brightness mapping curve belongs to the target nonlinear space.

10. A video signal processing apparatus, comprising: a processor and a transmission interface;

the transmission interface is used for receiving or transmitting a video signal;

the processor is used for calling the software instructions in the memory and executing the following steps:

carrying out brightness mapping on an initial brightness value of a video signal to be processed to obtain an adjusted brightness value;

determining a saturation mapping curve according to the ratio of the adjusted brightness value to the initial brightness value;

determining a saturation adjustment factor corresponding to the initial brightness value according to the saturation mapping curve;

and adjusting the chrominance value of the video signal to be processed based on the saturation adjustment factor.

11. The apparatus of claim 10, wherein the saturation mapping curve is a function of the initial luminance value as an independent variable and the ratio as a dependent variable.

12. The video signal processing apparatus of claim 10 or 11, wherein the processor is specifically configured to:

and determining the saturation adjustment factor by a mapping relation table, wherein the mapping relation table comprises an abscissa value and an ordinate value of at least one sampling point on the saturation mapping curve.

13. The video signal processing apparatus of any of claims 10 to 12, wherein the processor is specifically configured to:

and adjusting the chrominance value of the video signal to be processed based on the product of a preset chrominance component gain coefficient and the saturation adjustment factor.

14. The apparatus of claim 13, wherein the chrominance values comprise a first chrominance value of a first chrominance signal corresponding to the video signal to be processed and a second chrominance value of a second chrominance signal corresponding to the video signal to be processed, and wherein the predetermined chrominance component gain factors comprise a predetermined first chrominance component gain factor and a predetermined second chrominance component gain factor, the processor being configured to:

adjusting the first chrominance value based on the product of the preset first chrominance component gain coefficient and the saturation adjustment factor;

and adjusting the second chrominance value based on the product of the preset second chrominance component gain coefficient and the saturation adjustment factor.

15. The video signal processing apparatus of any of claims 10 to 14, wherein the processor is specifically configured to: and performing brightness mapping on the initial brightness value according to a brightness mapping curve to obtain the adjusted brightness value, wherein the brightness mapping curve is used for indicating the mapping relation between the initial brightness value and the adjusted brightness value, the abscissa of the brightness mapping curve is the initial brightness value before the brightness mapping, and the ordinate of the brightness mapping curve is the adjusted brightness value after the brightness mapping.

16. The apparatus of claim 15, wherein the saturation mapping curve belongs to a target non-linear space, a preset first original luminance mapping curve is a non-linear curve, and before luminance mapping the initial luminance values according to the luminance mapping curve to obtain the adjusted luminance values, the processor is further configured to:

respectively converting a first abscissa value and a first ordinate value corresponding to at least one sampling point on the first original brightness mapping curve from a nonlinear space to a linear space to obtain a second abscissa value and a second ordinate value, wherein the abscissa of the first original brightness mapping curve is a brightness value before brightness mapping, and the ordinate of the first original brightness mapping curve is a brightness value after brightness mapping;

respectively converting the second abscissa value and the second ordinate value from a linear space to a non-linear space to obtain the initial brightness value and the adjusted brightness value;

and determining the brightness mapping curve according to the mapping relation between the initial brightness value and the adjusted brightness value, wherein the brightness mapping curve belongs to the target nonlinear space.

17. The video signal processing apparatus of claim 15, wherein the saturation mapping curve belongs to a target non-linear space, the preset second original luminance mapping curve is a linear curve, and the processor is further configured to:

converting a third abscissa value and a third ordinate value corresponding to at least one sampling point on the second original brightness mapping curve from a linear space to a nonlinear space, respectively, to obtain the initial luminance value and the adjusted luminance value, wherein the abscissa of the second original brightness mapping curve is a brightness value before brightness mapping, and the ordinate of the second original brightness mapping curve is a brightness value after brightness mapping;

and determining the brightness mapping curve according to the mapping relation between the initial brightness value and the adjusted brightness value, wherein the brightness mapping curve belongs to the target nonlinear space.

18. A computer-readable storage medium having stored therein instructions which, when run on a computer or processor, cause the computer or processor to perform the method of any of claims 1 to 9.

Technical Field

The present application relates to the field of display technologies, and in particular, to a video signal processing method and apparatus.

Background

High Dynamic Range (HDR) video is a popular technology in the video industry in recent years and a direction for its future development. Compared with a conventional Standard Dynamic Range (SDR) video signal, an HDR video signal has a larger dynamic range and higher luminance. However, a large number of existing display devices cannot reach the luminance of an HDR video signal, so when an HDR video signal is displayed, luminance mapping must be performed on the HDR signal according to the capability of the display device, so that the signal is suitable for display on the current device. HDR signal luminance processing based on the red-green-blue (RGB) color space is a common approach and is widely used in practical display devices.

In the RGB-space HDR video signal luminance mapping method, a common processing approach is to use the formula C_out = ((C_in / L_in - 1) * s + 1) * L_out in place of the luminance mapping formula C_out = (L_out / L_in) * C_in, achieving luminance mapping by introducing a color saturation adjustment factor s, where L_in is the linear luminance of the HDR signal before luminance mapping, L_out is the linear luminance of the HDR signal after luminance mapping, C_in is a linear signal color component (R_in, G_in or B_in) of the HDR signal before luminance mapping, and C_out is the corresponding linear signal color component (R_out, G_out or B_out) after luminance mapping. However, according to the above formula, the change in color saturation of the adjusted R_out, G_out and B_out may cause a relatively severe hue shift (Hue Shift), i.e., a shift of the color of the luminance-mapped video signal perceived by the human eye from the color of the HDR video signal before luminance mapping.
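As an illustration, the two formulas can be compared numerically. The sketch below uses normalized linear values and hypothetical inputs; it is not the application's implementation, only a restatement of the background formulas:

```python
def map_component_scaled(c_in, l_in, l_out):
    # Plain luminance mapping: C_out = (L_out / L_in) * C_in
    return (l_out / l_in) * c_in

def map_component_saturation(c_in, l_in, l_out, s):
    # Saturation-adjusted mapping: C_out = ((C_in / L_in - 1) * s + 1) * L_out
    # s = 1 reproduces the plain scaling; s < 1 pulls the component toward
    # the mapped luminance, i.e. desaturates.
    return ((c_in / l_in - 1.0) * s + 1.0) * l_out

# Map a pixel component with linear luminance 0.8 down to 0.4.
print(map_component_scaled(0.9, 0.8, 0.4))           # 0.45
print(map_component_saturation(0.9, 0.8, 0.4, 1.0))  # 0.45 (same as plain scaling)
print(map_component_saturation(0.9, 0.8, 0.4, 0.5))  # 0.425 (less saturated)
```

With s below 1 the component moves toward L_out, which changes the R:G:B ratios and can produce the hue shift discussed above.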

Disclosure of Invention

The application provides a video signal processing method and a video signal processing device to solve the hue shift problem caused by adjusting color saturation while performing luminance mapping with an RGB-space HDR signal luminance mapping method based on a color saturation adjustment factor.

In a first aspect, an embodiment of the present application provides a video signal processing method, including the following steps: determining a saturation adjustment factor corresponding to an initial luminance value of a video signal to be processed, wherein the mapping relationship between the saturation adjustment factor and the initial luminance value is determined by a saturation mapping curve, the saturation mapping curve is determined by the ratio of an adjusted luminance value to the initial luminance value, and the adjusted luminance value is obtained by mapping the initial luminance value according to a preset luminance mapping curve; and adjusting a chrominance value of the video signal to be processed based on the saturation adjustment factor.
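The steps above can be sketched for a single pixel as follows. The YCbCr-style layout, the luminance mapping curve and the sample values are illustrative assumptions, not the claimed implementation:

```python
def process_pixel(y, cb, cr, luminance_map):
    """Sketch of the first-aspect method for one pixel.

    luminance_map: callable standing in for the preset luminance mapping
    curve. Chroma components are assumed centred on zero.
    """
    y_adj = luminance_map(y)               # luminance mapping
    sat = y_adj / y if y > 0 else 1.0      # saturation factor = adjusted / initial
    return y_adj, cb * sat, cr * sat       # chrominance compensation

# Hypothetical luminance mapping curve that compresses values above 0.5.
curve = lambda e: e if e <= 0.5 else 0.5 + 0.5 * (e - 0.5)

y_adj, cb_adj, cr_adj = process_pixel(0.9, 0.2, -0.1, curve)
print(y_adj, cb_adj, cr_adj)
```

Because the luminance is reduced, the saturation factor here is below 1 and both chroma components shrink with it, keeping the perceived color closer to the original.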

By adopting this method, the chrominance of the video signal to be processed can be adjusted; the chrominance compensation raises the color saturation of the adjusted video signal, so that the color of the chrominance-adjusted video signal, as perceived by the human eye, is closer to the color of the video signal before luminance mapping.

In one possible design, the saturation mapping curve is a function with the initial luminance value as the independent variable and the ratio of the adjusted luminance value to the initial luminance value as the dependent variable.

The saturation mapping curve may thus be represented by a function that maps the initial luminance value to the ratio of the adjusted luminance value to the initial luminance value.

In one possible design, the saturation adjustment factor is determined by the following equation:

f_sm_NLTF1(e_NLTF1) = f_tm_NLTF1(e_NLTF1) / e_NLTF1

where e_NLTF1 is the initial luminance value, f_tm_NLTF1( ) represents the luminance mapping curve, and f_sm_NLTF1( ) represents the saturation mapping curve; correspondingly, f_tm_NLTF1(e_NLTF1) represents the adjusted luminance value corresponding to the initial luminance value, and f_sm_NLTF1(e_NLTF1) represents the saturation adjustment factor corresponding to the initial luminance value.

When determining the saturation adjustment factor corresponding to the initial brightness value of the video signal to be processed, the initial brightness value of the video signal to be processed may be used as the independent variable of the above formula, and the dependent variable obtained by calculation may be used as the saturation adjustment factor corresponding to the initial brightness value of the video signal to be processed.

In one possible design, the saturation adjustment factor is determined from a mapping table that includes an abscissa value and an ordinate value of at least one sample point on the saturation mapping curve.

The saturation mapping curve can thus be represented by the mapping relation table; when determining the saturation adjustment factor corresponding to the initial luminance value of the video signal to be processed, table lookup combined with linear interpolation can be used.
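A minimal sketch of this table-lookup-plus-linear-interpolation step, using a hypothetical five-point table (the sample values are assumptions, not taken from the application):

```python
def interp_lookup(x, xs, ys):
    """Piecewise-linear lookup in a sorted sampled-curve table.

    xs, ys: abscissa/ordinate values of sample points on the saturation
    mapping curve. Inputs outside the table range are clamped.
    """
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])

# Hypothetical saturation mapping table: factor falls off at high luminance.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [1.0, 1.0, 0.9, 0.75, 0.6]
print(interp_lookup(0.6, xs, ys))  # 0.84
```

The same lookup scheme applies to any sampled curve in this document, including the luminance mapping curve itself.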

In one possible design, adjusting the chrominance value of the video signal to be processed includes: adjusting the chrominance value of the video signal to be processed based on the product of a preset chrominance component gain coefficient and the saturation adjustment factor.

In one possible design, the chrominance values include a first chrominance value of a first chrominance signal corresponding to the video signal to be processed and a second chrominance value of a second chrominance signal corresponding to the video signal to be processed, and the preset chrominance component gain coefficients include a preset first chrominance component gain coefficient and a preset second chrominance component gain coefficient, and the chrominance values of the video signal to be processed may be adjusted based on a product of the preset chrominance component gain coefficient and the saturation adjustment factor by: adjusting the first chrominance value based on a product of a preset first chrominance component gain coefficient and a saturation adjustment factor; and adjusting the second chrominance value based on the product of the preset second chrominance component gain coefficient and the saturation adjustment factor.
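A sketch of this per-component adjustment, with hypothetical gain coefficients and saturation factor:

```python
def adjust_chrominance(cb, cr, sat, gain_cb=1.0, gain_cr=1.0):
    # Each chroma component is scaled by the product of its preset gain
    # coefficient and the saturation adjustment factor.
    return cb * gain_cb * sat, cr * gain_cr * sat

# Illustrative values: saturation factor 0.8, slightly different gains.
cb_adj, cr_adj = adjust_chrominance(0.2, -0.1, sat=0.8, gain_cb=1.1, gain_cr=0.9)
print(cb_adj, cr_adj)
```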

In one possible design, if the saturation mapping curve belongs to the target non-linear space, the preset first original luminance mapping curve is a non-linear curve, and the method further includes: respectively converting a nonlinear space into a linear space for a first abscissa value and a first ordinate value corresponding to at least one sampling point on the first original brightness mapping curve so as to obtain a second abscissa value and a second ordinate value; respectively converting the second abscissa value and the second ordinate value from a linear space to a nonlinear space to obtain an initial brightness value and an adjusted brightness value; and determining a brightness mapping curve according to the mapping relation between the initial brightness value and the adjusted brightness value, wherein the brightness mapping curve belongs to a target nonlinear space.

The luminance mapping curve belonging to the target non-linear space, from which the saturation mapping curve is derived, can thus be determined from the non-linear first original luminance mapping curve.
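The two-step conversion above can be sketched as follows. Simple gamma curves stand in for the actual source and target non-linear transfer functions (e.g. PQ or HLG), which the text does not fix, so the specific functions here are assumptions:

```python
def convert_curve_samples(xs_nl, ys_nl, to_linear, to_nonlinear):
    """Re-express sample points of a luminance mapping curve in a target
    non-linear space: source non-linear -> linear -> target non-linear."""
    xs_lin = [to_linear(x) for x in xs_nl]      # first step: to linear space
    ys_lin = [to_linear(y) for y in ys_nl]
    xs_tgt = [to_nonlinear(x) for x in xs_lin]  # second step: to target space
    ys_tgt = [to_nonlinear(y) for y in ys_lin]
    return xs_tgt, ys_tgt  # initial / adjusted luminance values

# Assumed stand-in transfer functions:
src_to_linear = lambda e: e ** 2.4          # source "EOTF", gamma 2.4
tgt_to_nonlinear = lambda e: e ** (1 / 2.2)  # target "OETF", gamma 2.2

xs, ys = convert_curve_samples([0.0, 0.5, 1.0], [0.0, 0.4, 0.8],
                               src_to_linear, tgt_to_nonlinear)
```

The resulting (xs, ys) pairs define the luminance mapping curve in the target non-linear space, from which the saturation mapping curve is taken as the ratio ys/xs.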

In one possible design, if the saturation mapping curve belongs to the target non-linear space, the preset second original luminance mapping curve is a linear curve, and the method further includes: respectively converting a linear space to a nonlinear space for a third abscissa value and a third ordinate value corresponding to at least one sampling point on the second original brightness mapping curve so as to obtain an initial brightness value and an adjusted brightness value; and determining a brightness mapping curve according to the mapping relation between the initial brightness value and the adjusted brightness value, wherein the brightness mapping curve belongs to a target nonlinear space.

The luminance mapping curve belonging to the target non-linear space, from which the saturation mapping curve is derived, can thus be determined from the linear second original luminance mapping curve.
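For a linear original curve, only the single conversion into the target space is needed. Again, a gamma curve is an assumed stand-in for the target non-linear transfer function:

```python
def linear_curve_to_nonlinear(xs_lin, ys_lin, to_nonlinear):
    """Convert sample points of a linear luminance mapping curve directly
    into the target non-linear space."""
    xs_tgt = [to_nonlinear(x) for x in xs_lin]  # initial luminance values
    ys_tgt = [to_nonlinear(y) for y in ys_lin]  # adjusted luminance values
    return xs_tgt, ys_tgt

tgt_to_nonlinear = lambda e: e ** (1 / 2.2)  # assumed target "OETF"
xs, ys = linear_curve_to_nonlinear([0.0, 0.25, 1.0], [0.0, 0.2, 0.8],
                                   tgt_to_nonlinear)
```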

In one possible design, the method further includes: and adjusting the initial brightness value according to the brightness mapping curve to obtain an adjusted brightness value.

In one possible design, the initial luminance value may be adjusted according to the luminance mapping curve to obtain the adjusted luminance value as follows: according to a first target abscissa value corresponding to the initial luminance value, the first target ordinate value corresponding to that abscissa is determined as the adjusted luminance value.

In one possible design, the initial luminance value may be adjusted according to the luminance mapping curve to obtain the adjusted luminance value as follows: according to a third target abscissa value corresponding to the initial luminance value, the third target ordinate value corresponding to that abscissa is determined as the adjusted luminance value.

In a second aspect, the present application provides a video signal processing apparatus having the function of implementing the method provided in any one of the possible designs of the first aspect and the first aspect. The functions can be realized by hardware, or by hardware executing corresponding software, or by a combination of software and hardware. The hardware or software includes one or more modules corresponding to the above-described functions.

The video signal processing device provided by the embodiment of the application can comprise a first determining unit and an adjusting unit; the first determining unit is used for determining a saturation adjusting factor corresponding to an initial brightness value of a video signal to be processed, wherein a mapping relation between the saturation adjusting factor and the initial brightness value is determined by a saturation mapping curve, the saturation mapping curve is determined by a ratio of an adjusted brightness value to the initial brightness value, and the adjusted brightness value is obtained by mapping the initial brightness value according to a preset brightness mapping curve; and the adjusting unit is used for adjusting the chromatic value of the video signal to be processed based on the saturation adjusting factor.

With the above configuration, the first determining unit of the video signal processing apparatus may determine the saturation adjustment factor, and the adjusting unit of the video signal processing apparatus may adjust the chromaticity value of the video signal to be processed according to the saturation adjustment factor.

In one possible design, the saturation mapping curve is a function of the initial luminance value as an independent variable and the ratio as a dependent variable.

In one possible design, the saturation adjustment factor may be determined by the following equation: f_sm_NLTF1(e_NLTF1) = f_tm_NLTF1(e_NLTF1) / e_NLTF1, where e_NLTF1 is the initial luminance value, f_tm_NLTF1( ) represents the luminance mapping curve, and f_sm_NLTF1( ) represents the saturation mapping curve; correspondingly, f_tm_NLTF1(e_NLTF1) represents the adjusted luminance value corresponding to the initial luminance value, and f_sm_NLTF1(e_NLTF1) represents the saturation adjustment factor corresponding to the initial luminance value.

In one possible design, the saturation adjustment factor may be determined from a mapping table that includes an abscissa value and an ordinate value of at least one sampling point on the saturation mapping curve.

In one possible design, the adjusting unit may adjust the chrominance value of the video signal to be processed based on a product of a preset chrominance component gain coefficient and the saturation adjusting factor.

In one possible design, the chrominance values include a first chrominance value of a first chrominance signal corresponding to the video signal to be processed and a second chrominance value of a second chrominance signal corresponding to the video signal to be processed, the preset chrominance component gain coefficients include a preset first chrominance component gain coefficient and a preset second chrominance component gain coefficient, and the adjusting unit may be specifically configured to: adjusting the first chrominance value based on a product of a preset first chrominance component gain coefficient and the saturation adjustment factor; and adjusting the second chrominance value based on the product of a preset second chrominance component gain coefficient and the saturation adjusting factor.

In one possible design, the saturation mapping curve belongs to a target non-linear space, a preset first original brightness mapping curve is a non-linear curve, and the video signal processing apparatus may further include a first converting unit, a second converting unit, and a second determining unit; the first conversion unit is used for respectively converting a first abscissa value and a first ordinate value corresponding to at least one sampling point on the first original brightness mapping curve from a nonlinear space to a linear space so as to obtain a second abscissa value and a second ordinate value; a second conversion unit, configured to perform linear-to-nonlinear-space conversion on the second abscissa value and the second ordinate value, respectively, so as to obtain the initial luminance value and the adjusted luminance value; and the second determining unit is used for determining the brightness mapping curve according to the mapping relation between the initial brightness value and the adjusted brightness value, wherein the brightness mapping curve belongs to the target nonlinear space.

In one possible design, if the saturation mapping curve belongs to the target non-linear space and the preset second original luminance mapping curve is a linear curve, the video signal processing apparatus may further include a third converting unit and a third determining unit: the third conversion unit is configured to perform conversion from a linear space to a nonlinear space on a third abscissa value and a third ordinate value corresponding to at least one sampling point on the second original luminance mapping curve, so as to obtain the initial luminance value and the adjusted luminance value; and the third determining unit is used for determining the brightness mapping curve according to the mapping relation between the initial brightness value and the adjusted brightness value, wherein the brightness mapping curve belongs to the target nonlinear space.

In one possible design, the video signal processing apparatus may further include a brightness adjustment unit configured to adjust the initial brightness value according to the brightness mapping curve to obtain the adjusted brightness value.

In a possible design, the brightness adjusting unit is specifically configured to determine, according to a target first abscissa value corresponding to the initial brightness value, a target first ordinate value corresponding to the target first abscissa as the adjusted brightness value.

In a possible design, the brightness adjusting unit is specifically configured to determine, according to a target third abscissa value corresponding to the initial brightness value, a target third ordinate value corresponding to the target third abscissa as the adjusted brightness value.

In a third aspect, embodiments of the present application provide a video signal processing apparatus, which includes a processor and a memory, where the memory is used to store necessary instructions and data, and the processor calls the instructions in the memory to implement the functions involved in any one of the possible designs of the method embodiments and the method embodiments described in the first aspect.

In a fourth aspect, the present application provides a computer program product, which comprises a computer program that, when executed on a computer or processor, causes the computer or processor to implement the functions involved in any one of the possible designs of the method embodiments and method embodiments of the first aspect.

In a fifth aspect, embodiments of the present application provide a computer-readable storage medium for storing a program or instructions, which when invoked in a computer, can cause the computer to perform the functions involved in any one of the possible designs of the method embodiments and method embodiments of the first aspect.

Drawings

FIG. 1-a is a schematic diagram of an exemplary PQ EOTF curve provided in an embodiment of the present application;

FIG. 1-b is a schematic diagram of an exemplary PQ EOTF^-1 curve provided by an embodiment of the present application;

FIG. 2-a is a schematic diagram of an exemplary HLG OETF curve provided in an embodiment of the present application;

FIG. 2-b is a schematic diagram of an exemplary HLG OETF^-1 curve provided in an embodiment of the present application;

FIG. 3-a is a block diagram of an exemplary video signal processing system according to an embodiment of the present disclosure;

FIG. 3-b is a schematic diagram of an architecture of another exemplary video signal processing system provided by an embodiment of the present application;

FIG. 3-c is a schematic structural diagram of an exemplary video signal processing apparatus provided in an embodiment of the present application;

fig. 4 is a schematic diagram illustrating steps of an exemplary video signal processing method according to an embodiment of the present application;

FIG. 5 is a schematic diagram of an exemplary saturation mapping curve provided by an embodiment of the present application;

FIG. 6 is a diagram illustrating an exemplary luminance mapping curve provided by an embodiment of the present application;

FIG. 7 is a flow chart illustrating an exemplary luminance mapping provided by an embodiment of the present application;

fig. 8 is a flowchart illustrating an exemplary video signal processing method according to an embodiment of the present application;

fig. 9 is a schematic flowchart of another exemplary video signal processing method provided in an embodiment of the present application;

fig. 10 is a schematic flowchart of another exemplary video signal processing method provided in the embodiments of the present application;

fig. 11 is a schematic structural diagram of another exemplary video signal processing apparatus according to an embodiment of the present application;

FIG. 12-a is a schematic diagram of another exemplary video signal processing apparatus according to an embodiment of the present application;

FIG. 12-b is a schematic diagram of another exemplary video signal processing apparatus according to an embodiment of the present application;

FIG. 12-c is a schematic structural diagram of another exemplary video signal processing apparatus provided in an embodiment of the present application;

fig. 13 is a flowchart illustrating an exemplary color gamut conversion method provided in an embodiment of the present application;

fig. 14 is a flowchart illustrating an exemplary method for converting an HDR HLG signal for display on an HDR PQ TV according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings.

The term "at least one" as referred to herein means one, or more than one, i.e. including one, two, three and more; "plurality" means two, or more than two, i.e., including two, three, and more than two.

First, some concepts or terms referred to by the embodiments of the present application are explained in order to facilitate understanding of the embodiments of the present application.

Base color value (color value): a value corresponding to a particular image color component (e.g., R, G, B or Y).

Digital code value: a digital representation value of the image signal, the digital coding value being used to represent the non-linear base color value.

Linear color value (linear color value): linear primary color values, which are proportional to light intensity; in an optional case, the values should be normalized to [0,1], abbreviated E.

Nonlinear color value (nonlinear color value): the non-linear primary color values, which are normalized digital representation values of the image information, are proportional to the digitally encoded values, and in an alternative case, the values should be normalized to [0,1], abbreviated E'.

Electro-optical transfer function (EOTF): a conversion relationship from a non-linear base color value to a linear base color value.

Photoelectric transfer function (OETF): a conversion relationship from linear to non-linear base color values.

Metadata (Metadata): data describing the video source information is carried in the video signal.

Dynamic metadata (dynamic metadata): metadata associated with each frame of image, the metadata varying from picture to picture.

Static metadata (static metadata): metadata associated with the sequence of images, the metadata remaining unchanged within the sequence of images.

Luminance signal (luma): representing a combination of non-linear primary color signals, with the symbol Y'.

Luminance mapping (luminance mapping): the brightness of the source image is mapped to the brightness of the target system.

Color volume (colour volume): the display can present a volume of chrominance and luminance in the chrominance space.

Display adaptation (display adaptation): the video signal is processed to adapt the display characteristics of the target display.

Source image (source picture): the input image in the HDR pre-processing stage.

Master monitor (Mastering Display): the reference display is used when the video signal is edited and produced and is used for determining the effect of video editing and production;

Linear Scene Light (Linear Scene Light) signal: in the HDR video technology, an HDR video signal whose content is scene light, that is, the scene light captured by a camera/camera sensor; it is generally a relative value. An HLG signal is obtained after the linear scene light signal is HLG-encoded; the HLG signal is a scene light signal, and the HLG signal is nonlinear. A scene light signal generally needs to be converted into a display light signal through the OOTF before being displayed on a display device;

Linear Display Light (Linear Display Light) signal: in the HDR video technology, an HDR video signal whose content is display light, that is, the display light emitted by a display device; it is generally an absolute value, in units of nits. A PQ signal is obtained after the linear display light signal is PQ-encoded; the PQ signal is a display light signal, and the PQ signal is nonlinear. Under common standards, a display light signal is displayed on a display device according to its absolute luminance;

light-to-light conversion curve (OOTF): a curve for converting one optical signal into another optical signal in video technology;

dynamic Range (Dynamic Range): a ratio of maximum luminance to minimum luminance in the video signal;

Luminance-chrominance (Luma-Chroma, LCC): the three components of a video signal in which luminance and chrominance are separated;

Perceptual quantization (Perceptual Quantizer, PQ): an HDR standard and also an HDR conversion equation; PQ is determined by the visual capability of the human eye. A video signal displayed by a display device is typically a video signal in the PQ encoding format.

PQ EOTF curve: converts a PQ-encoded electrical signal into a linear optical signal, in nits; the conversion formula is:

PQ_EOTF(E') = 10000 × (max(E'^(1/m2) - c1, 0) / (c2 - c3 × E'^(1/m2)))^(1/m1)

wherein E' is the input electrical signal with a value range of [0,1]; the fixed parameter values are as follows:

m1 = 2610/16384 = 0.1593017578125;

m2 = 2523/4096 × 128 = 78.84375;

c1 = 3424/4096 = 0.8359375 = c3 - c2 + 1;

c2 = 2413/4096 × 32 = 18.8515625;

c3 = 2392/4096 × 32 = 18.6875;

the PQ EOTF curve is shown in FIG. 1-a: the input is an electrical signal in the range of [0,1], and the output is a linear optical signal of [0,10000] nits;

PQ EOTF^-1 curve: the inverse curve of the PQ EOTF; its physical meaning is to convert a [0,10000] nits linear optical signal into a PQ-encoded electrical signal; the conversion formula is:

PQ_EOTF^-1(E) = ((c1 + c2 × (E/10000)^m1) / (1 + c3 × (E/10000)^m1))^m2

The PQ EOTF^-1 curve is shown in FIG. 1-b: the input is a linear optical signal of [0,10000] nits, and the output is an electrical signal in the range [0,1];
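As a cross-check of the two formulas above, the PQ EOTF and its inverse can be sketched in Python as follows (a minimal sketch; the function names are illustrative, and the constants are the fixed parameter values listed above):

```python
# Fixed PQ parameters as listed above
m1 = 2610 / 16384            # 0.1593017578125
m2 = 2523 / 4096 * 128       # 78.84375
c1 = 3424 / 4096             # 0.8359375
c2 = 2413 / 4096 * 32        # 18.8515625
c3 = 2392 / 4096 * 32        # 18.6875

def pq_eotf(e_prime):
    """PQ EOTF: electrical signal E' in [0,1] -> linear light in [0,10000] nits."""
    p = e_prime ** (1 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)

def pq_eotf_inverse(fd):
    """Inverse PQ EOTF: linear light in [0,10000] nits -> electrical signal E' in [0,1]."""
    y = fd / 10000.0
    return ((c1 + c2 * y ** m1) / (1 + c3 * y ** m1)) ** m2
```

Round-tripping a value through pq_eotf and pq_eotf_inverse returns the original input, which is a quick way to validate an implementation of either curve.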

Color Gamut (Color Gamut): the range of colors that a certain color space contains; related color gamut standards include BT.709 and BT.2020.

Hybrid Log Gamma (Hybrid Log Gamma, HLG): an HDR standard; a video signal captured by a camera, camcorder, image sensor, or other kind of image capture device is a video signal in the HLG encoding format.

HLG OETF curve: a curve by which a linear scene light signal is HLG-encoded into a nonlinear electrical signal; the conversion formula is as follows:

HLG_OETF(E) = sqrt(3 × E), for 0 ≤ E ≤ 1/12;
HLG_OETF(E) = a × ln(12 × E - b) + c, for 1/12 < E ≤ 1;

where E is the input linear scene light signal, range [0,1]; E' is the output nonlinear electrical signal, range [0,1];

the fixed parameters are a = 0.17883277, b = 0.28466892, and c = 0.55991073. An example graph of the HLG OETF curve is shown in fig. 2-a.

HLG OETF^-1 curve: the inverse curve of the HLG OETF, which converts an HLG-encoded nonlinear electrical signal into a linear scene light signal; the conversion formula is, for example, as follows:

HLG_OETF^-1(E') = E'^2 / 3, for 0 ≤ E' ≤ 1/2;
HLG_OETF^-1(E') = (exp((E' - c) / a) + b) / 12, for 1/2 < E' ≤ 1;

An example graph of the HLG OETF^-1 curve is shown in fig. 2-b, where E' is the input nonlinear electrical signal, range [0,1], and E is the output linear scene light signal, range [0,1].
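Similarly, the HLG OETF and its inverse can be sketched in Python (a minimal sketch using the fixed parameters above; function names are illustrative):

```python
import math

# Fixed HLG parameters as listed above
a = 0.17883277
b = 0.28466892   # = 1 - 4a
c = 0.55991073   # = 0.5 - a * ln(4a)

def hlg_oetf(e):
    """HLG OETF: linear scene light E in [0,1] -> nonlinear electrical signal E' in [0,1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return a * math.log(12 * e - b) + c

def hlg_oetf_inverse(e_prime):
    """Inverse HLG OETF: nonlinear E' in [0,1] -> linear scene light E in [0,1]."""
    if e_prime <= 0.5:
        return e_prime ** 2 / 3
    return (math.exp((e_prime - c) / a) + b) / 12
```

Note that the two branches meet at E = 1/12 (where E' = 0.5), so the curve is continuous across the square-root and logarithmic segments.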

Linear space: the linear space in the present application refers to a space where a linear optical signal is located;

Nonlinear space: the nonlinear space in this application refers to the space in which a linear optical signal resides after conversion by a nonlinear curve; common HDR nonlinear curves include the PQ EOTF^-1 curve, the HLG OETF curve and the like, and a common SDR nonlinear curve is the gamma curve. A linear optical signal is generally considered visually linear to the human eye after being encoded by one of the above nonlinear curves. It should be understood that a nonlinear space may be regarded as a visually linear space.

Gamma Correction (Gamma Correction): gamma correction is a method of performing nonlinear tone editing on an image; it can identify the dark and light portions of an image signal and increase the ratio between the two, thereby improving the contrast of the image. The photoelectric conversion characteristics of current display screens, photographic film, and many electronic cameras are all nonlinear. The relationship between the output and the input of these nonlinear components can be expressed as a power function, namely: output = (input)^γ.

The reason the color values output by a device undergo nonlinear conversion is that the human visual system is not linear; humans perceive visual stimuli by comparison. When the outside world strengthens a stimulus in a certain proportion, the stimulus appears to increase uniformly to humans; thus, physical quantities increasing in a geometric progression are perceived as uniform by humans. In order to display input colors according to the laws of human vision, linear color values need to be converted into nonlinear color values through the nonlinear conversion in the power-function form above. The gamma value γ can be determined according to the photoelectric conversion curve of the color space.

Color Space (Color Space): color may be the eye's differing perception of light of different frequencies, or may objectively denote the presence of light of different frequencies. A color space is the range of colors defined by a coordinate system that people establish to represent colors. Together with a color model, the color gamut defines a color space. The color model is an abstract mathematical model that represents colors with a set of color components; it may include, for example, the red green blue (RGB) mode and the cyan magenta yellow key (CMYK) print mode. Color gamut refers to the aggregate of colors that a system is capable of producing. Illustratively, Adobe RGB and sRGB are two different color spaces based on the RGB model.

Each device, such as a display or printer, has its own color space and can only generate colors within its gamut. When moving an image from one device to another, the colors of the image may change on different devices as each device converts and displays RGB or CMYK according to its own color space.

The RGB space according to the embodiment of the present application is a space in which the luminance of three primary colors, red, green, and blue, is used to quantitatively represent a video signal; the YCC space is a color space representing luminance-chrominance separation in the present application, three components of a YCC space video signal represent luminance-chrominance, respectively, and common YCC space video signals are YUV, YCbCr, ICtCp, and the like.

The linear space referred to in the embodiments of the present application refers to a space in which a linear optical signal is located;

the nonlinear space referred to in the embodiments of the present application refers to a space where a linear optical signal is converted by a nonlinear curve; the nonlinear curve commonly used for HDR is PQ EOTF-1 curve, HLG OETF curve and the like, and the nonlinear curve commonly used for SDR is gamma curve.

According to the method of the present application, the chrominance value of the video signal to be processed can be adjusted according to the saturation adjustment factor corresponding to the initial luminance value of the video signal to be processed, so that chrominance compensation is performed on the video signal to be processed, the saturation change caused by RGB-space luminance mapping of the video signal to be processed is compensated, and the hue drift phenomenon is alleviated.

Hereinafter, embodiments of the present application will be described in detail with reference to the drawings. First, a video signal processing system provided in the embodiment of the present application is introduced, then video signal processing apparatuses provided in the embodiment of the present application are respectively introduced, and finally a specific implementation manner of the video signal processing method provided in the embodiment of the present application is introduced.

As shown in fig. 3-a, a video signal processing system 100 provided by an embodiment of the present application may include a signal source 101 and a video signal processing apparatus 102 provided by an embodiment of the present application. The signal source 101 is configured to input a video signal to be processed into the video signal processing apparatus 102, and the video signal processing apparatus 102 is configured to process the video signal to be processed according to the video signal processing method provided in the embodiment of the present application. In an alternative case, the video signal processing apparatus 102 shown in fig. 3-a may have a display function, so the video signal processing system 100 provided in the embodiment of the present application may further display the processed video signal; in this case, the processed video signal does not need to be output to a separate display device, and the video signal processing apparatus 102 may be a display device with a video signal processing function, such as a television or a monitor.

In another structure of the video signal processing system 100 shown in fig. 3-b, the system 100 further includes a display device 103, where the display device 103 may be a device with a display function, such as a television, a monitor, or a display screen, and the display device 103 is used for receiving the video signal transmitted by the video signal processing apparatus 102 and displaying the received video signal. The video signal processing apparatus 102 may be a playing device, such as a set-top box.

In the exemplary video signal processing system 100, if the video signal to be processed generated by the video signal source 101 is an HDR signal that has not undergone RGB-space luminance mapping, the signal can be processed in the video signal processing apparatus 102 by the video signal processing method provided in the embodiment of the present application; in this case, the video signal processing apparatus 102 may have an RGB-space luminance mapping function for the HDR signal. If the video signal to be processed generated by the video signal source 101 is a video signal that has already undergone RGB-space luminance mapping, for example a video signal that has undergone RGB-space luminance mapping and been converted to the nonlinear NLTF1 space in the embodiment of the present application, the video signal is subjected to color saturation compensation by the video signal processing apparatus 102. In the implementation of the present application, the conversion of a video signal from YUV space to RGB space, or from RGB space to YUV space, can be performed by standard conversion procedures in the prior art.
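As one illustration of such a standard conversion, a full-range BT.709 conversion between RGB and YCbCr might look as follows. This is only a sketch: broadcast signals typically use limited-range quantization, and BT.2020 content uses different luma coefficients.

```python
# Full-range BT.709 luma coefficients (Kr, Kg, Kb)
KR, KG, KB = 0.2126, 0.7152, 0.0722

def rgb_to_ycbcr(r, g, b):
    """Full-range BT.709 RGB -> YCbCr (Y in [0,1], Cb/Cr nominally in [-0.5,0.5])."""
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Exact inverse of the conversion above."""
    r = y + 2 * (1 - KR) * cr
    b = y + 2 * (1 - KB) * cb
    g = (y - KR * r - KB * b) / KG   # solve Y = Kr*R + Kg*G + Kb*B for G
    return r, g, b
```

A neutral gray (R = G = B) maps to Cb = Cr = 0, and the pair of functions round-trips any RGB triple, which is a convenient sanity check.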

Specifically, the video signal processing apparatus 102 provided in the embodiment of the present application may have a structure as shown in fig. 3-c, and it can be seen that the video signal processing apparatus 102 may include a processing unit 301, and the processing unit 301 may be configured to implement the steps involved in the video signal processing method provided in the embodiment of the present application, for example, determine a saturation adjustment factor corresponding to an initial luminance value of the video signal to be processed, and adjust a chrominance value of the video signal to be processed based on the saturation adjustment factor, and the like.

Illustratively, the video signal processing apparatus 102 may further include a storage unit 302, in which computer programs, instructions and data are stored, the storage unit 302 may be coupled to the processing unit 301, and is used for supporting the processing unit 301 to call the computer programs and instructions in the storage unit 302 to implement the steps involved in the video signal processing method provided by the embodiment of the present application, and in addition, the storage unit 302 may also be used for storing data. In various embodiments of the present application, coupled refers to being interconnected in a particular way, including being directly connected or being indirectly connected through other devices. For example, may be coupled by various types of interfaces, transmission lines, or buses, etc.

Illustratively, the video signal processing apparatus 102 may further include a transmitting unit 303 and/or a receiving unit 304, where the transmitting unit 303 may be configured to output the processed video signal, and the receiving unit 304 may receive the video signal to be processed generated by the video signal source 101. Illustratively, the transmitting unit 303 and/or the receiving unit 304 may be a video signal interface, such as a High Definition Multimedia Interface (HDMI).

Illustratively, the video signal processing apparatus 102 may further include a display unit 305, such as a display screen, for displaying the processed video signal.

A video signal processing method provided by the embodiment of the present application is described below with reference to fig. 4, where the method includes the following steps:

step S101: determining a saturation adjustment factor corresponding to an initial brightness value of a video signal to be processed; in an optional case, the mapping relationship between the saturation adjustment factor and the initial brightness value is determined by a saturation mapping curve, the saturation mapping curve is determined by a ratio of the adjusted brightness value to the initial brightness value, and the adjusted brightness value is obtained by mapping the initial brightness value according to a preset brightness mapping curve;

step S102: and adjusting the chromatic value of the video signal to be processed based on the saturation adjusting factor.

By adopting the method, chrominance compensation can be performed on the video signal to be processed according to the saturation adjustment factor; the chrominance compensation improves the color saturation of the video signal after the chrominance value adjustment, so that the color of the chrominance-adjusted video signal perceived by human eyes is closer to the color of the video signal before luminance mapping.

In the above method, if the video signal to be processed is a video signal obtained by performing RGB-space luminance mapping on an HDR signal according to the color saturation adjustment factor s and the formula Cout = ((Cin/Lin - 1) × s + 1) × Lout, or if the video signal to be processed is an HDR signal that is to undergo RGB-space luminance mapping according to the color saturation adjustment factor s and the formula Cout = ((Cin/Lin - 1) × s + 1) × Lout, the video signal processing method provided in the embodiment of the present application can alleviate the hue shift of the HDR signal caused by the RGB-space luminance mapping.
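As a minimal sketch of the compensation formula Cout = ((Cin/Lin - 1) × s + 1) × Lout (the function name and scalar interface are illustrative assumptions; in practice the formula is applied per pixel to each chrominance-related component):

```python
def adjust_chroma(c_in, l_in, l_out, s):
    """Sketch of the compensation formula Cout = ((Cin/Lin - 1) * s + 1) * Lout.

    c_in  -- chrominance-related component before luminance mapping
    l_in  -- initial luminance value (before luminance mapping), l_in != 0
    l_out -- adjusted luminance value (after luminance mapping)
    s     -- color saturation adjustment factor for this initial luminance
    """
    return ((c_in / l_in - 1) * s + 1) * l_out
```

With s = 1 the formula reduces to scaling the chrominance in proportion to the luminance change, Cout = Cin × Lout/Lin; with s = 0 the input chrominance is ignored and the result is simply Lout.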

Specifically, the video signal to be processed according to the embodiment of the present application may be an HDR signal, or may be a video signal obtained by performing luminance mapping and/or space conversion on the HDR signal. Here, the HDR signal may be an HDR HLG signal; alternatively, the HDR signal may be an HDR PQ signal.

It should be understood that the initial luminance value of the video signal to be processed according to the embodiments of the present application relates to a linear luminance value of the video signal to be processed before luminance mapping. In a possible implementation manner, if the saturation mapping curve belongs to the target non-linear space, the linear luminance value of the video signal to be processed before the luminance mapping may be converted from the linear space to the target non-linear space, and the obtained luminance value is used as the initial luminance value of the video signal to be processed.

For example, the saturation mapping curve according to the embodiment of the present application may be a function in which the initial luminance value is the independent variable and the ratio of the adjusted luminance value to the initial luminance value is the dependent variable. For example, the saturation mapping curve may be the curve shown in fig. 5, wherein the abscissa of the saturation mapping curve represents the initial luminance value of the video signal to be processed, and the ordinate represents the saturation adjustment factor, which in the embodiment of the present application is, for example, the ratio of the adjusted luminance value to the initial luminance value. When the saturation adjustment factor corresponding to the initial luminance value is determined, according to the saturation mapping curve, the ratio of the adjusted luminance value corresponding to the initial luminance value to the initial luminance value may be used as the saturation adjustment factor corresponding to the initial luminance value.

In one possible embodiment, the saturation adjustment factor may be determined by the following equation:

f_sm_NLTF1(e_NLTF1) = f_tm_NLTF1(e_NLTF1) / e_NLTF1 (5);

wherein e_NLTF1 is the initial luminance value of the video signal to be processed, f_tm_NLTF1() represents the luminance mapping curve, and f_sm_NLTF1() represents the saturation mapping curve; correspondingly, f_tm_NLTF1(e_NLTF1) indicates the adjusted luminance value corresponding to the initial luminance value, and f_sm_NLTF1(e_NLTF1) represents the saturation adjustment factor corresponding to the initial luminance value.

Illustratively, f_tm_NLTF1() may represent a luminance mapping curve belonging to the nonlinear space NLTF1, which is nonlinear; f_sm_NLTF1() represents the saturation mapping curve belonging to the nonlinear space NLTF1; e_NLTF1 may be the initial luminance value, in the nonlinear space NLTF1, of the video signal to be processed; and f_sm_NLTF1(e_NLTF1) indicates the saturation adjustment factor used to adjust the video signal to be processed that belongs to the nonlinear space NLTF1 and has the initial luminance value e_NLTF1.

In implementation, the initial luminance value of the video signal to be processed may be used as the independent variable (i.e., the input) of equation (5) above, and the dependent variable of equation (5) (i.e., the output of equation (5)) may be used as the saturation adjustment factor corresponding to the initial luminance value.
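Evaluating equation (5) in code is direct. The sketch below uses a hypothetical power-law tone_map as a stand-in for f_tm_NLTF1, which is implementation-specific and not defined in this application:

```python
def saturation_factor(e, luminance_map):
    """Equation (5): f_sm(e) = f_tm(e) / e, i.e., the saturation adjustment
    factor is the ratio of the adjusted luminance value f_tm(e) to the
    initial luminance value e (e > 0), evaluated in the target nonlinear space."""
    return luminance_map(e) / e

# Hypothetical luminance mapping curve, used only for illustration.
tone_map = lambda e: e ** 1.2
```

For any power-law mapping, an initial luminance of 1.0 yields a factor of exactly 1, i.e., no saturation adjustment at peak luminance.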

In another possible embodiment, the saturation adjustment factor may be determined from a mapping table that includes an abscissa value and an ordinate value of at least one sampling point on the saturation mapping curve. Specifically, the saturation adjustment factor may be determined according to a one-dimensional mapping relation table as shown in table 1, where table 1 is generated according to the saturation mapping Curve SM _ Curve, and an abscissa and an ordinate located in the same row in table 1 represent an abscissa value and an ordinate value of one sampling point on the saturation mapping Curve SM _ Curve.

Abscissa value of sampling point    Ordinate value of sampling point
SM_Curve_x1                         SM_Curve_y1
SM_Curve_x2                         SM_Curve_y2
...                                 ...
SM_Curve_xn                         SM_Curve_yn

TABLE 1 one-dimensional mapping relationship table generated from the saturation mapping Curve SM _ Curve

As shown in Table 1, SM_Curve_x1, SM_Curve_x2, ..., SM_Curve_xn respectively represent the abscissa values of the 1st, 2nd, ..., nth sampling points on the saturation mapping curve, and SM_Curve_y1, SM_Curve_y2, ..., SM_Curve_yn respectively represent the ordinate values of the 1st, 2nd, ..., nth sampling points on the saturation mapping curve. When determining the saturation adjustment factor corresponding to the initial luminance value of the video signal to be processed according to the mapping table shown in Table 1, the initial luminance value of the video signal to be processed may be matched to the abscissa value of a sampling point, and the ordinate value of that sampling point may be used as the determined saturation adjustment factor.

In addition, in the implementation, a saturation adjustment factor corresponding to the initial brightness value of the video signal to be processed may also be determined by linear interpolation or other interpolation methods. For example, the saturation adjustment factor may be determined by a linear interpolation method according to an initial brightness value of the video signal to be processed, an abscissa value of p sampling points greater than the initial brightness value, an ordinate value of sampling points corresponding to the abscissa values of the p sampling points, an abscissa value of q sampling points less than the initial brightness value, and an ordinate value of sampling points corresponding to the abscissa values of the q sampling points, where p and q are positive integers.
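One possible realization of this table lookup with linear interpolation between the two neighboring sampling points is sketched below; the clamping behavior outside the table range and the function name are assumptions:

```python
import bisect

def saturation_factor_lut(e, xs, ys):
    """Look up the saturation adjustment factor for initial luminance value e in a
    one-dimensional mapping table (abscissas xs in ascending order, ordinates ys),
    interpolating linearly between the two nearest sampling points as in Table 1."""
    if e <= xs[0]:
        return ys[0]          # clamp below the table range
    if e >= xs[-1]:
        return ys[-1]         # clamp above the table range
    i = bisect.bisect_right(xs, e)        # first index with xs[i] > e
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (e - x0) / (x1 - x0) * (y1 - y0)
```

Between two adjacent table entries the result varies linearly, and an input that coincides with a sampling point's abscissa returns that point's ordinate exactly.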

For example, there are various ways to determine the luminance mapping curve involved in step S101 in the embodiment of the present application, and several alternative ways are described below as examples:

First, determining a luminance mapping curve belonging to the target nonlinear space according to a preset nonlinear first original luminance mapping curve

It should be understood that the first original luminance mapping curve according to the embodiment of the present application is a characteristic curve used in a luminance mapping process of a video signal (e.g., an HDR signal) in a non-linear space, and is used to characterize a correspondence between luminance values of the video signal before the luminance mapping and luminance values after the luminance mapping in the non-linear space. The first original luminance mapping curve may be generated in a non-linear space, or may be converted to the non-linear space after being generated in a linear space.

FIG. 6 is a diagram of a first original luminance mapping curve. The curve belongs to a nonlinear space, namely the PQ EOTF^-1 space (the inverse of the PQ EOTF). The abscissa of the first original luminance mapping curve represents the nonlinearly encoded luminance signal of the HDR PQ signal before luminance mapping, that is, the signal obtained by nonlinear PQ encoding of the luminance values of the HDR PQ signal before luminance mapping; the ordinate represents the nonlinearly encoded luminance signal of the HDR PQ signal after luminance mapping, that is, the signal obtained by nonlinear PQ encoding of the luminance values of the HDR PQ signal after luminance mapping. The abscissa value range of the first original luminance mapping curve is [0,1], and the ordinate value range is [0,1].

In one manner of determining a luminance mapping curve provided in the embodiment of the present application, if the saturation mapping curve belongs to a target nonlinear space and the preset first original luminance mapping curve is a nonlinear curve, the first abscissa value and first ordinate value corresponding to at least one sampling point on the first original luminance mapping curve may be converted from the nonlinear space to a linear space to obtain a second abscissa value and a second ordinate value, respectively. Thereafter, the second abscissa value and the second ordinate value are each converted from the linear space to the target nonlinear space to obtain an initial luminance value and an adjusted luminance value having a mapping relationship with the initial luminance value. A luminance mapping curve can therefore be determined according to the mapping relationship between the initial luminance value and the adjusted luminance value, and the determined luminance mapping curve belongs to the target nonlinear space. The luminance mapping curve may be used to determine a saturation mapping curve belonging to the target nonlinear space.

In an optional case, the luminance mapping may be performed on an initial luminance value of the video signal to be processed according to a luminance mapping curve, and an adjusted luminance value obtained by the luminance mapping may be used as a luminance value of the signal to be processed after the luminance mapping, where the specific method is as follows: according to the brightness mapping curve, a target first ordinate value corresponding to a target first abscissa value corresponding to an initial brightness value of the signal to be processed is determined, and the target first ordinate value is used as an adjustment brightness value.

Second, determining a luminance mapping curve belonging to the target nonlinear space according to a preset linear second original luminance mapping curve

It should be understood that the second original luminance mapping curve according to the embodiment of the present application is a characteristic curve used in the luminance mapping process of a video signal (e.g., an HDR signal) in a linear space, and is used to characterize the correspondence, in the linear space, between the linear luminance values of the video signal before luminance mapping and the linear luminance values after luminance mapping. The second original luminance mapping curve may be generated in a nonlinear space and then converted into the linear space, or may be generated directly in a linear space.

In the method for determining a brightness mapping curve provided in the embodiment of the present application, if the saturation mapping curve belongs to a target non-linear space and the preset second original brightness mapping curve is a linear curve, a conversion from the linear space to the non-linear space may be performed on a third abscissa value and a third ordinate value corresponding to at least one sampling point on the second original brightness mapping curve, so as to obtain an initial brightness value and an adjusted brightness value, and then the brightness mapping curve may be determined according to a mapping relationship between the initial brightness value and the adjusted brightness value, where the brightness mapping curve belongs to the target non-linear space. In an implementation, the luminance mapping curve may be used to determine a saturation mapping curve belonging to a target non-linear space.

In addition, in the implementation, the luminance mapping may be performed on the initial luminance value of the video signal to be processed according to the luminance mapping curve, and the adjusted luminance value obtained by the luminance mapping is used as the luminance value of the signal to be processed after the luminance mapping, and the specific method is as follows: according to the brightness mapping curve, a target third ordinate value corresponding to a target third abscissa value corresponding to the initial brightness value of the signal to be processed is determined, and the target third ordinate value is used as an adjustment brightness value.

The manner in which the saturation mapping curve is determined provided by the embodiments of the present application is described below.

If the first original luminance mapping curve TM_Curve belongs to the nonlinear space, the set of abscissa and ordinate values of the sampling points on the first original luminance mapping curve can be expressed as:

TM_Curve = {TM_Curve_xn, TM_Curve_yn} (6);

wherein TM_Curve_xn is the first abscissa value of the nth sampling point on the first original luminance mapping curve, TM_Curve_yn is the first ordinate value of the nth sampling point on the first original luminance mapping curve, and n is a positive integer;

assuming that the space to which the first original brightness mapping curve belongs is a non-linear space PQ EOTF-1,PQ EOTF-1And for the inverse curve of PQEOTF, converting the first abscissa from a nonlinear space to a linear space to obtain a second abscissa value:

TM_Curve_L_xn=PQ_EOTF(TM_Curve_xn) (7);

wherein PQ _ EOTF () is an expression of a PQ EOTF Curve, TM _ Curve _ L _ xnA second abscissa value, TM _ Curve _ x, representing the nth sample pointnA first ordinate value representing an nth sample point;

and the first ordinate value is converted from the nonlinear space to the linear space to obtain the second ordinate value:

TM_Curve_L_yn = PQ_EOTF(TM_Curve_yn) (8);

wherein TM_Curve_L_yn represents the second ordinate value of the nth sampling point, and TM_Curve_yn represents the first ordinate value of the nth sampling point;

If the target nonlinear space is the nonlinear NLTF1 space, where NLTF1 is a gamma curve with gamma coefficient Gmm = 2.4, the conversion expression for converting any linear luminance value into the nonlinear NLTF1 space is:

NLTF1(E) = (E/MaxL)^(1/Gmm) (9);

In equation (9), E is a linear luminance value in the linear space, the luminance range is [0,10000] nits, and MaxL is the normalization maximum luminance, which may be taken as MaxL = 10000 in this embodiment;

The second abscissa is converted from the linear space to the target non-linear space to obtain the initial luminance value:

TM_Curve_NLTF1_xn = NLTF1(TM_Curve_L_xn)    (10);

where TM_Curve_NLTF1_xn is the initial luminance value, NLTF1(TM_Curve_L_xn) denotes the luminance value obtained after converting the linear luminance value TM_Curve_L_xn into the non-linear NLTF1 space, and TM_Curve_L_xn is the second abscissa.

The second ordinate is converted from the linear space to the target non-linear space to obtain the adjusted luminance value:

TM_Curve_NLTF1_yn = NLTF1(TM_Curve_L_yn)    (11);

where TM_Curve_NLTF1_yn is the adjusted luminance value, NLTF1(TM_Curve_L_yn) denotes the luminance value obtained after converting the linear luminance value TM_Curve_L_yn into the non-linear NLTF1 space, and TM_Curve_L_yn is the second ordinate.

It should be noted that there is a mapping relationship between the initial luminance value determined from any sampling point on the first original luminance mapping curve and the adjusted luminance value determined from the same sampling point. Therefore, the luminance mapping curve can be obtained by taking the initial luminance value of each sampling point as the abscissa value and the corresponding adjusted luminance value as the ordinate value, and fitting a curve through these sampling points.

The luminance mapping curve TM_Curve_NLTF1, represented by the abscissa and ordinate values of its sampling points, is:

TM_Curve_NLTF1 = {TM_Curve_NLTF1_xn, TM_Curve_NLTF1_yn}    (12);

where TM_Curve_NLTF1_xn represents the initial luminance value, TM_Curve_NLTF1_yn represents the adjusted luminance value corresponding to that initial luminance value, and n is a positive integer.

It should be noted that the luminance mapping Curve TM _ Curve _ NLTF1, determined according to the above method, belongs to the nonlinear NLTF1 space.

From the luminance mapping curve TM_Curve_NLTF1 represented by equation (12), the expression of the saturation mapping curve SM_Curve belonging to the non-linear NLTF1 space can be determined as follows.

The saturation mapping curve SM_Curve can be expressed as:

SM_Curve = {SM_Curve_NLTF1_xn, SM_Curve_NLTF1_yn}    (13);

where:

SM_Curve_NLTF1_xn = TM_Curve_NLTF1_xn    (14);

SM_Curve_NLTF1_yn = TM_Curve_NLTF1_yn / TM_Curve_NLTF1_xn    (15);

in equations (13) to (15), SM_Curve_NLTF1_xn is the abscissa of the nth sampling point on the saturation mapping curve, and TM_Curve_NLTF1_xn is the abscissa of the nth sampling point on the luminance mapping curve TM_Curve_NLTF1; SM_Curve_NLTF1_yn is the ordinate of the nth sampling point on the saturation mapping curve, and TM_Curve_NLTF1_yn is the ordinate of the nth sampling point on the luminance mapping curve TM_Curve_NLTF1.
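The conversion chain of equations (7) to (11) and the ratio of equations (14) and (15) can be sketched as follows. This is a minimal sketch, not the implementation from the application: the PQ constants follow the definitions given later in this document, Gmm = 2.4 and MaxL = 10000 follow equation (9), and the identity TM_Curve used at the end is a made-up example.

```python
# Minimal sketch: build saturation-mapping-curve sample points (eq. (13)-(15))
# from sample points of a first original luminance mapping curve that lives
# in the non-linear PQ space (eq. (6)-(11)). The identity TM_Curve below is
# a made-up example, not a curve taken from this application.

m1 = 2610 / 16384
m2 = 2523 / 4096 * 128
c1 = 3424 / 4096
c2 = 2413 / 4096 * 32
c3 = 2392 / 4096 * 32

def pq_eotf(e):
    """Non-linear PQ value in [0,1] -> linear luminance in nits."""
    p = e ** (1.0 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1.0 / m1)

GMM, MAXL = 2.4, 10000.0

def nltf1(lum):
    """Linear luminance -> target non-linear NLTF1 space, eq. (9)."""
    return (lum / MAXL) ** (1.0 / GMM)

def saturation_curve(tm_x, tm_y):
    """tm_x, tm_y: abscissa/ordinate samples of TM_Curve in PQ space."""
    curve = []
    for x, y in zip(tm_x, tm_y):
        x_n = nltf1(pq_eotf(x))   # initial luminance value, eq. (7) + (10)
        y_n = nltf1(pq_eotf(y))   # adjusted luminance value, eq. (8) + (11)
        curve.append((x_n, y_n / x_n))   # eq. (14) and (15)
    return curve

# With an identity luminance mapping, every saturation factor is 1.
samples = [0.2, 0.4, 0.6, 0.8]
sm_curve = saturation_curve(samples, samples)
```

As a sanity check, an identity mapping leaves the saturation adjustment factor at 1 for every sampling point, which matches the intent of equation (15).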

Another method of determining a saturation mapping curve is provided below.

If the expression of the non-linear first original luminance mapping curve TM_Curve is the piecewise function ftm(e) given as an image (equation (16)) in the original, where e represents the input of the first original luminance mapping curve, i.e. the first abscissa value of a sampling point on the first original luminance mapping curve, and ftm(e) represents the first ordinate value of that sampling point, then the function hmt() used by ftm() is defined as follows:

hmt(x) = 0.2643 × α0(x) + 0.5081 × α1(x) + β0(x)    (17);

where the basis functions α0(x), α1(x) and β0(x) are given as an image in the original.

The first abscissa value e of the sampling point is converted to the linear space; the resulting second abscissa value of the sampling point in the linear space is denoted eL.

The second ordinate value ftmL(eL), obtained after converting the first ordinate value ftm(e) into the linear space, can be expressed as:

ftmL(eL) = PQ_EOTF(ftm(e)) = PQ_EOTF(ftm(PQ_EOTF^-1(eL)))    (18);

where PQ_EOTF() is the expression of the PQ EOTF curve.

If the target non-linear space is the non-linear NLTF1 space, where NLTF1 is a gamma curve with gamma coefficient Gmm = 2.4, and the conversion expression for converting any linear luminance value into the non-linear NLTF1 space is as given in equation (9) above, then converting the second abscissa value eL from the linear space to the target non-linear space gives an initial luminance value that can be denoted eNLTF1.

Converting the second ordinate value ftmL(eL) from the linear space to the target non-linear space gives an adjusted luminance value ftmNLTF1(eNLTF1), which can be expressed as:

ftmNLTF1(eNLTF1) = NLTF1(ftmL(eL)) = NLTF1(PQ_EOTF(ftm(PQ_EOTF^-1(eL)))) = NLTF1(PQ_EOTF(ftm(PQ_EOTF^-1(NLTF1^-1(eNLTF1)))))    (19);

where NLTF1() represents the conversion expression for converting any linear luminance value into the non-linear NLTF1 space, and NLTF1^-1() represents the inverse expression of NLTF1().

Thus, the luminance mapping curve TM_Curve_NLTF1, which belongs to the non-linear NLTF1 space, can be expressed according to equation (19) above.

Determining the saturation mapping curve from the above luminance mapping curve TM_Curve_NLTF1, the saturation mapping curve SM_Curve can be represented by the following formula:

fsmNLTF1(eNLTF1) = ftmNLTF1(eNLTF1) / eNLTF1    (20);

where eNLTF1 represents the initial luminance value and fsmNLTF1(eNLTF1) represents the saturation adjustment factor corresponding to the initial luminance value eNLTF1.

Another method of determining a saturation mapping curve is provided below. If the second original luminance mapping curve TM_Curve belongs to the linear space, the set of abscissa and ordinate values of the sampling points on the second original luminance mapping curve is expressed as:

TM_Curve = {TM_Curve_xn, TM_Curve_yn}    (21);

where TM_Curve_xn is the third abscissa value of the nth sampling point on the second original luminance mapping curve, TM_Curve_yn is the third ordinate value of the nth sampling point on the second original luminance mapping curve, and n is a positive integer.

If the target non-linear space is the non-linear NLTF1 space, where NLTF1 is a gamma curve with gamma coefficient Gmm = 2.4, the conversion expression for converting any linear luminance value into the non-linear NLTF1 space is as given in equation (9).

The third abscissa is converted from the linear space to the target non-linear space to obtain the initial luminance value:

TM_Curve_NLTF1_xn = NLTF1(TM_Curve_xn)    (22);

where TM_Curve_NLTF1_xn is the initial luminance value, and NLTF1(TM_Curve_xn) denotes the luminance value obtained after converting the third abscissa value TM_Curve_xn into the non-linear NLTF1 space.

The third ordinate is converted from the linear space to the target non-linear space to obtain the adjusted luminance value:

TM_Curve_NLTF1_yn = NLTF1(TM_Curve_yn)    (23);

where TM_Curve_NLTF1_yn is the adjusted luminance value, and NLTF1(TM_Curve_yn) denotes the luminance value obtained after converting the third ordinate value TM_Curve_yn into the non-linear NLTF1 space.

It should be noted that there is a mapping relationship between the initial luminance value determined from any sampling point on the second original luminance mapping curve and the adjusted luminance value determined from the same sampling point. Therefore, the luminance mapping curve can be obtained by taking the initial luminance value of each sampling point as the abscissa value and the corresponding adjusted luminance value as the ordinate value, and fitting a curve through these sampling points.

The luminance mapping curve TM_Curve_NLTF1, represented by the abscissa and ordinate values of its sampling points, is:

TM_Curve_NLTF1 = {TM_Curve_NLTF1_xn, TM_Curve_NLTF1_yn}    (24);

where TM_Curve_NLTF1_xn represents the initial luminance value, TM_Curve_NLTF1_yn represents the adjusted luminance value corresponding to that initial luminance value, and n is a positive integer.

It should be noted that the luminance mapping curve TM_Curve_NLTF1 determined according to the above method belongs to the non-linear NLTF1 space.

From the luminance mapping curve TM_Curve_NLTF1 represented by equation (24), the expression of the saturation mapping curve SM_Curve belonging to the non-linear NLTF1 space can be determined as follows.

The saturation mapping curve SM_Curve can be expressed as:

SM_Curve = {SM_Curve_NLTF1_xn, SM_Curve_NLTF1_yn}    (25);

where:

SM_Curve_NLTF1_xn = TM_Curve_NLTF1_xn    (26);

SM_Curve_NLTF1_yn = TM_Curve_NLTF1_yn / TM_Curve_NLTF1_xn    (27);

in equations (25) to (27), SM_Curve_NLTF1_xn is the abscissa of the nth sampling point on the saturation mapping curve, and TM_Curve_NLTF1_xn is the abscissa of the nth sampling point on the luminance mapping curve TM_Curve_NLTF1; SM_Curve_NLTF1_yn is the ordinate of the nth sampling point on the saturation mapping curve, and TM_Curve_NLTF1_yn is the ordinate of the nth sampling point on the luminance mapping curve TM_Curve_NLTF1.

Another method of determining a saturation mapping curve provided by the present application is described below.

Suppose the third abscissa value of any sampling point on the second original luminance mapping curve is e, and the third ordinate value of that sampling point is ftm(e); the second original luminance mapping curve is a luminance mapping curve generated in the linear space.

If the target non-linear space is the non-linear NLTF1 space, where NLTF1 is a gamma curve with gamma coefficient Gmm = 2.4, the conversion expression for converting any linear luminance value into the non-linear NLTF1 space is as given in equation (9) above.

The third abscissa value e is converted from the linear space to the target non-linear space, and the resulting initial luminance value is denoted eNLTF1.

The third ordinate value ftm(e) is converted from the linear space to the target non-linear space, and the resulting adjusted luminance value ftmNLTF1(eNLTF1) can be expressed as:

ftmNLTF1(eNLTF1) = NLTF1(ftm(e))    (28);

where NLTF1() represents the conversion expression for converting any linear luminance value into the non-linear NLTF1 space.

Thus, a luminance mapping curve TM_Curve_NLTF1 can be expressed according to equation (28) above, and this luminance mapping curve TM_Curve_NLTF1 belongs to the non-linear NLTF1 space.

Determining the saturation mapping curve from the above luminance mapping curve TM_Curve_NLTF1, the saturation mapping curve SM_Curve can be represented by the following formula:

fsmNLTF1(eNLTF1) = ftmNLTF1(eNLTF1) / eNLTF1    (29);

where eNLTF1 represents the initial luminance value and fsmNLTF1(eNLTF1) represents the saturation adjustment factor corresponding to the initial luminance value eNLTF1.
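For this linear-curve case, equations (28) and (29) can be sketched directly; because NLTF1 is a pure power law, the factor reduces to (ftm(e)/e)^(1/Gmm). The tone-mapping function ftm used here is a made-up placeholder, not a curve from this application.

```python
# Sketch of eq. (28)-(29) for a second original luminance mapping curve
# defined in linear space. The linear tone mapping ftm below is a made-up
# placeholder (a flat 1000-nit -> 100-nit scaling), not the application's curve.

GMM, MAXL = 2.4, 10000.0

def nltf1(lum):                       # eq. (9)
    return (lum / MAXL) ** (1.0 / GMM)

def ftm_linear(e):                    # hypothetical linear tone mapping
    return 0.1 * e

def saturation_factor(e):
    e_nltf1 = nltf1(e)                # initial luminance value eNLTF1
    y_nltf1 = nltf1(ftm_linear(e))    # adjusted luminance value, eq. (28)
    return y_nltf1 / e_nltf1          # eq. (29)

# For a purely linear ftm the factor is constant: 0.1 ** (1 / 2.4).
factor = saturation_factor(500.0)
```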

In the implementation of step S102, after the saturation adjustment factor is determined, the chrominance value of the video signal to be processed may be adjusted based on the product of a preset chrominance component gain coefficient and the saturation adjustment factor. Specifically, the mapping relationship between the chrominance signals in the video signal to be processed and their chrominance component gain coefficients may be predetermined; when the video signal to be processed is adjusted based on the video signal processing method provided in the embodiments of the present application, each chrominance signal in the video signal to be processed is adjusted based on the product of the chrominance component gain coefficient corresponding to that chrominance signal and the saturation adjustment factor.

In a specific implementation, if the video signal to be processed includes two or more chrominance signals, the chrominance value of each chrominance signal may be adjusted according to the product of the chrominance component gain coefficient corresponding to that signal and the saturation adjustment factor. Specifically, suppose the video signal to be processed is a YCC signal that includes a first chrominance signal and a second chrominance signal, the predetermined chrominance component gain coefficients include a predetermined first chrominance component gain coefficient and a predetermined second chrominance component gain coefficient, the first chrominance signal corresponds to a first chrominance value, and the second chrominance signal corresponds to a second chrominance value. When the chrominance values of the YCC signal are adjusted, the first chrominance value corresponding to the first chrominance signal may be adjusted according to the product of the first chrominance component gain coefficient and the saturation adjustment factor, and the second chrominance value corresponding to the second chrominance signal may be adjusted according to the product of the predetermined second chrominance component gain coefficient and the saturation adjustment factor.

For example, if the video signal to be processed is a YUV signal YUV0, where the saturation adjustment factor determined according to the initial luminance value of YUV0 is SMCoef, the first chrominance component gain coefficient corresponding to the first chrominance component U of YUV0 is Ka, the second chrominance component gain coefficient corresponding to the second chrominance component V of YUV0 is Kb, the luminance component value of YUV0 is Y0, the chrominance value of the first chrominance component U is U0, and the chrominance value of the second chrominance component V is V0, the process of adjusting the chrominance values of the YUV signal may be:

taking the product of the first chrominance component gain coefficient Ka and SMCoef as the first chrominance component adjustment factor SMCoefa, and the product of the second chrominance component gain coefficient Kb and SMCoef as the second chrominance component adjustment factor SMCoefb, such that:

SMCoefa=SMCoef*Ka (30);

SMCoefb=SMCoef*Kb (31).

Thereafter, the product U0' of the first chrominance component adjustment factor SMCoefa and U0 may be used as the adjusted chrominance value of the first chrominance component, and the product V0' of the second chrominance component adjustment factor SMCoefb and V0 may be used as the adjusted chrominance value of the second chrominance component.
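A numeric sketch of equations (30) and (31) and the adjustment that follows them; the gain coefficients, the saturation factor and the chroma values below are made-up example numbers, and the chroma components are assumed to be normalized and centred on zero.

```python
# Sketch of eq. (30)-(31) and the subsequent chroma scaling. Ka, Kb,
# SMCoef, U0 and V0 are made-up example values.

Ka, Kb = 1.0, 1.0        # preset chrominance component gain coefficients
SMCoef = 0.8             # saturation adjustment factor from the curve

SMCoefa = SMCoef * Ka    # eq. (30)
SMCoefb = SMCoef * Kb    # eq. (31)

# Chroma assumed normalized and centred on 0, so scaling pulls the
# signal toward the neutral (grey) axis.
U0, V0 = 0.25, -0.10
U0_adj = SMCoefa * U0
V0_adj = SMCoefb * V0
```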

The following describes a process, provided by the embodiments of the present application, of processing a YsCbsCrs signal, where YsCbsCrs is a 4:4:4 YCbCr non-linear video signal restored by the terminal through decoding reconstruction and chroma upsampling under the second-generation audio video coding standard (2nd audio video coding standard, AVS2). Each component of YsCbsCrs is a 10-bit digitally encoded value.

(1) Calculate the non-linear R`sG`sB`s signal from the YsCbsCrs signal.

Equations (32) and (33), shown as images in the original, normalize the 10-bit limited-range code values and convert YCbCr to non-linear R`G`B` under the ITU-R BT.2020 convention; written out, they are:

Y` = (Ys − 64) / 876, Cb` = (Cbs − 512) / 896, Cr` = (Crs − 512) / 896    (32);

R`s = Y` + 1.4746 × Cr`
G`s = Y` − 0.1645 × Cb` − 0.5713 × Cr`    (33);
B`s = Y` + 1.8814 × Cb`

where the YsCbsCrs signal is a 10-bit limited-range digital code value, the R`sG`sB`s obtained by this processing is a floating-point non-linear primary color value, and the value of each component of R`sG`sB`s is clipped to the [0, 1] interval.

1) Calculate the linear RsGsBs signal from the R`sG`sB`s signal, and calculate the linear luminance Ys of the input signal RsGsBs:

Es = HLG_OETF^-1(E`s)    (34);

in the equation, Es represents any component of the RsGsBs signal, with a value in the [0, 1] interval; E`s refers to the corresponding component of the R`sG`sB`s signal. The function HLG_OETF^-1() is defined according to ITU-R BT.2100 as follows:

HLG_OETF^-1(E`) = E`^2 / 3, for 0 ≤ E` ≤ 1/2
HLG_OETF^-1(E`) = (exp((E` − c) / a) + b) / 12, for 1/2 < E` ≤ 1    (35);

where a = 0.17883277, b = 1 − 4a, and c = 0.5 − a·ln(4a).

The linear luminance Ys of RsGsBs is calculated as:

Ys = 0.2627Rs + 0.6780Gs + 0.0593Bs    (36);

in the formula, Ys is a real number with a value in the [0, 1] interval.
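Equations (34) to (36) can be checked with a short sketch; the HLG inverse OETF follows ITU-R BT.2100 as quoted above, and only the sample input values are made up.

```python
# Sketch of eq. (34)-(36): HLG inverse OETF (ITU-R BT.2100) and the
# BT.2020 linear-luminance weighting. Sample inputs are examples.
import math

a = 0.17883277
b = 1 - 4 * a
c = 0.5 - a * math.log(4 * a)

def hlg_oetf_inv(ep):
    """E`s in [0,1] -> Es in [0,1], eq. (35)."""
    if ep <= 0.5:
        return ep * ep / 3.0
    return (math.exp((ep - c) / a) + b) / 12.0

def linear_luma(rs, gs, bs):          # eq. (36)
    return 0.2627 * rs + 0.6780 * gs + 0.0593 * bs

es = hlg_oetf_inv(0.5)                # the midpoint maps to 1/12
ys = linear_luma(es, es, es)          # grey: luma equals the component value
```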

(2) Calculate the Yt signal from the linear luminance Ys.

The display luminance Yd is calculated from the linear luminance Ys:

Yd = 1000(Ys)^1.2    (37);

The visually linear luminance YdPQ is calculated from the display luminance Yd:

YdPQ = PQ_EOTF^-1(Yd)    (38);

where:

PQ_EOTF^-1(E) = ((c1 + c2·(E/10000)^m1) / (1 + c3·(E/10000)^m1))^m2

m1 = 2610/16384 = 0.1593017578125;

m2 = 2523/4096 × 128 = 78.84375;

c1 = 3424/4096 = 0.8359375 = c3 − c2 + 1;

c2 = 2413/4096 × 32 = 18.8515625;

c3 = 2392/4096 × 32 = 18.6875;
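The PQ inverse EOTF of equation (38), with the constants above, and the forward PQ EOTF used later in equation (43) can be sketched and cross-checked as follows; this is a verification sketch, not production code.

```python
# Sketch of the PQ inverse EOTF (eq. (38)) and the forward PQ EOTF
# (used in eq. (43)), with the constants m1, m2, c1, c2, c3 listed above.

m1 = 2610 / 16384        # 0.1593017578125
m2 = 2523 / 4096 * 128   # 78.84375
c1 = 3424 / 4096         # 0.8359375 = c3 - c2 + 1
c2 = 2413 / 4096 * 32    # 18.8515625
c3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf_inv(y):
    """Linear luminance in nits -> PQ non-linear value in [0,1]."""
    p = (y / 10000.0) ** m1
    return ((c1 + c2 * p) / (1.0 + c3 * p)) ** m2

def pq_eotf(e):
    """PQ non-linear value in [0,1] -> linear luminance in nits."""
    p = e ** (1.0 / m2)
    return 10000.0 * (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1.0 / m1)

e_100 = pq_eotf_inv(100.0)    # about 0.508 for 100 nits
```

The round trip pq_eotf(pq_eotf_inv(·)) recovers the input luminance, and 10000 nits maps exactly to 1.0 because c1 = c3 − c2 + 1.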

Luminance mapping is performed on YdPQ to obtain YtPQ:

YtPQ = ftm(YdPQ)    (39);

the function ftm() in the equation is defined piecewise; its expression, equation (40), is given as an image in the original. The function hmt() used by ftm() is defined as follows:

hmt(x) = 0.2643 × α0(x) + 0.5081 × α1(x) + β0(x)    (41);

the basis functions α0(x), α1(x) and β0(x), equation (42), are likewise given as an image in the original.

The normalized luminance-mapped linear luminance Yt is calculated from YtPQ:

Yt = PQ_EOTF(YtPQ)    (43);

where:

PQ_EOTF(E`) = 10000 × (max(E`^(1/m2) − c1, 0) / (c2 − c3·E`^(1/m2)))^(1/m1)

Thus, the calculation formula of Yt is:

Yt = PQ_EOTF(ftm(PQ_EOTF^-1(1000(Ys)^1.2)))    (44);

in the formula, Yt is a real number with a value in the [0, 100] interval.

(3) Calculate the luminance mapping gain TmGain from Yt and Ys.

The luminance mapping gain TmGain, whose expression (equation (45)) is given as an image in the original, is the ratio of the mapped to the unmapped linear luminance:

TmGain = Yt / Ys when Ys ≠ 0, and TmGain = 0 when Ys = 0    (45);

(4) Calculate the saturation mapping gain SmGain from the luminance mapping gain TmGain.

a. Calculate the non-linear display luminance value before luminance mapping:

YdGMM = (Yd/1000)^(1/γ) = (1000(Ys)^1.2 / 1000)^(1/γ)    (46);

b. Calculate the non-linear display luminance value after luminance mapping:

YtGMM = (Yt/1000)^(1/γ)    (47);

c. Calculate the saturation mapping gain SmGain; following the ratio pattern used elsewhere in this document (for example SMCoef = NL1_Yt/NL1_Yd in step 803), this is:

SmGain = YtGMM / YdGMM    (48);
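Steps (4)a to (4)c can be sketched as below. The formula SmGain = YtGMM / YdGMM is an assumption inferred from the ratio pattern this document uses elsewhere (for example SMCoef = NL1_Yt / NL1_Yd in step 803); gamma and the sample luminances are example values.

```python
# Sketch of steps (4)a-c. The final ratio SmGain = YtGMM / YdGMM is an
# assumption inferred from this document's other ratio definitions, and
# gamma / input luminances are example values.

gamma = 2.4

def sm_gain(ys, yt):
    yd = 1000.0 * ys ** 1.2                   # display luminance, eq. (37)
    yd_gmm = (yd / 1000.0) ** (1.0 / gamma)   # eq. (46)
    yt_gmm = (yt / 1000.0) ** (1.0 / gamma)   # eq. (47)
    return yt_gmm / yd_gmm                    # assumed ratio for step c

gain = sm_gain(0.5, 100.0)   # scene luma 0.5, mapped luminance 100 nits
```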

(5) Calculate the RtmGtmBtm signal:

Etm = Es × TmGain    (49);

in the formula, Es represents any component of the RsGsBs signal, and Etm represents the corresponding component of the RtmGtmBtm signal.

(6) Calculate the RtGtBt signal (gamut mapping); the gamut mapping matrix, equation (50), is given as an image in the original.

(7) Calculate the R`tG`tB`t signal from the RtGtBt signal:

E`t = (Et/100)^(1/γ)    (51);

(8) Calculate the YtCbtCrt signal from the R`tG`tB`t signal.

Equations (52) and (53), shown as images in the original, convert non-linear R`G`B` to YCbCr under the ITU-R BT.2020 convention and re-quantize to 10-bit limited-range code values; written out, they are:

Y` = 0.2627R`t + 0.6780G`t + 0.0593B`t, Cb` = (B`t − Y`)/1.8814, Cr` = (R`t − Y`)/1.4746    (52);

Yt = Round(876 × Y` + 64), Cbt = Round(896 × Cb` + 512), Crt = Round(896 × Cr` + 512)    (53);

where R`tG`tB`t is a non-linear primary color value in the [0, 1] interval, and the YtCbtCrt signal obtained by this processing is a 10-bit limited-range digital code value. For example, γ in this embodiment may be 2.2 or 2.4, or may take other values; the value of γ may be selected according to the actual situation, which is not limited in the embodiments of the present application.

(9) Calculate the YoCboCro signal (saturation mapping).

The YoCboCro signal is the video signal obtained by adjusting chrominance values according to the video signal processing method provided by the embodiments of the present application; the YoCboCro signal is a 10-bit limited-range digital code value.

For example, in an implementation of the video signal processing method provided in the embodiment of the present application, luminance mapping in RGB space may also be performed on the video signal YUV0 according to the method shown in fig. 7:

step 701: performing color space conversion on the video signal YUV0 to obtain a linear display optical signal RdGdBd in an RGB space; rd, Gd and Bd respectively represent brightness values of three components of the linear display optical signal RdGdBd, and the value ranges of Rd, Gd and Bd are [0,10000 ];

step 702: calculate the display luminance value Yd of the RdGdBd signal according to the color gamut of the linear display optical signal RdGdBd, where Yd = cr × Rd + cg × Gd + cb × Bd; when the color gamut of the RdGdBd signal is BT.2020, the parameters may be cr = 0.2627, cg = 0.6780 and cb = 0.0593; when the color gamut of the RdGdBd signal is another color gamut, cr, cg and cb respectively take the linear luminance calculation parameters of that color gamut;

step 703: convert the display luminance value Yd into the visually linear space using the PQ_EOTF^-1 curve to obtain NL_Yd, where NL_Yd = PQ_EOTF^-1(Yd) and PQ_EOTF^-1() is the expression of the inverse curve of PQ_EOTF;

step 704: perform luminance mapping on NL_Yd using the non-linear first original luminance mapping curve to obtain the mapped luminance value NL_Yt, where the first original luminance mapping curve is generated in the PQ_EOTF^-1 space;

step 705: convert the mapped luminance value into the linear space to obtain the linear-space luminance value Yt, where Yt = PQ_EOTF(NL_Yt);

step 706: calculate the linear luminance gain K, where K is the ratio of the linear-space luminance value Yt to the display luminance value Yd;

step 707: determine the luminance-mapped linear display optical signal RtGtBt from K and the linear display optical signal RdGdBd, where (Rt, Gt, Bt) = K × (Rd, Gd, Bd) + (BLoffset, BLoffset, BLoffset), BLoffset is the black level of the display device, that is, the minimum value of the display luminance, and Rd, Gd and Bd are respectively the three components of the linear display optical signal RdGdBd.
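Steps 706 and 707 amount to one scalar gain applied per component; a minimal sketch, with made-up component values and a zero black level:

```python
# Sketch of steps 706-707: linear luminance gain K = Yt / Yd applied to
# the linear display optical signal, plus the black-level offset BLoffset.
# All numeric values are examples.

def apply_luma_gain(rd, gd, bd, yd, yt, bloffset=0.0):
    k = yt / yd                                                  # step 706
    return tuple(k * comp + bloffset for comp in (rd, gd, bd))   # step 707

rt, gt, bt = apply_luma_gain(200.0, 400.0, 100.0, yd=300.0, yt=150.0)
```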

In the implementation of step 704, if the abscissa and ordinate values of the sampling points on the first original luminance mapping curve are represented by the mapping relationship table shown in Table 2, NL_Yt may be calculated from NL_Yd by table look-up with linear interpolation, or by other interpolation methods. The abscissa values x0, x1, ..., xn of the sampling points shown in Table 2 are respectively the abscissa values of a number of sampling points on the first original luminance mapping curve, and the ordinate values y0, y1, ..., yn are respectively the ordinate values of those sampling points.

Abscissa value of sampling point    Ordinate value of sampling point
x0                                  y0
x1                                  y1
……                                  ……
xn                                  yn

TABLE 2 One-dimensional mapping relationship table generated from the first original luminance mapping curve

Illustratively, the NL_Yt corresponding to NL_Yd can be determined by the following linear interpolation method:

if it is determined by looking up the table that x0 < NL_Yd < x1, then NL_Yt is determined from the abscissa value x0 and ordinate value y0 of the sampling point (x0, y0) and the abscissa value x1 and ordinate value y1 of the sampling point (x1, y1) in Table 2;

by linear interpolation, for any abscissa x of the first original luminance mapping curve lying between x0 and x1, the corresponding ordinate y can be expressed as:

y = y0 + (x − x0) × (y1 − y0) / (x1 − x0)

Letting x in this formula be NL_Yd, the resulting y is the NL_Yt corresponding to NL_Yd.
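The table look-up with linear interpolation described above can be sketched as follows; the sample table is a made-up stand-in for Table 2.

```python
# Sketch of table look-up linear interpolation over (abscissa, ordinate)
# sample points, as used to compute NL_Yt from NL_Yd. The table below is
# a made-up stand-in for Table 2.

def interp_lookup(table, x):
    """table: (x, y) sample points sorted by x; clamps outside the range."""
    if x <= table[0][0]:
        return table[0][1]
    if x >= table[-1][0]:
        return table[-1][1]
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return (y1 - y0) / (x1 - x0) * (x - x0) + y0

tm_table = [(0.0, 0.0), (0.5, 0.4), (1.0, 0.6)]  # hypothetical TM samples
nl_yt = interp_lookup(tm_table, 0.25)
```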

If the video signal to be processed is a YUV signal that has been luminance-mapped according to the method shown in fig. 7 and converted to the non-linear NLTF1 space, and the display luminance value Yd of the linear display optical signal RdGdBd is known from step 702 while the mapped linear-space luminance value Yt is known from step 705, the saturation adjustment factor can be determined from Yd and Yt and the chrominance of the video signal to be processed can be adjusted; the specific method is shown in fig. 8:

step 801: calculate the non-linear display luminance value NL1_Yd of the non-linear NLTF1 space before luminance mapping from the display luminance value Yd of the linear display optical signal RdGdBd, where NL1_Yd = NLTF1(Yd) and NLTF1() represents the non-linear conversion expression of the NLTF1 space, which can be found in equation (9) above;

step 802: calculate the non-linear display luminance value NL1_Yt of the non-linear NLTF1 space after luminance mapping from the mapped linear luminance value Yt, where NL1_Yt = NLTF1(Yt);

step 803: determine the saturation mapping factor SMCoef from the non-linear display luminance values NL1_Yd and NL1_Yt, where SMCoef = NL1_Yt / NL1_Yd;

step 804: determine the product of the first chrominance component gain coefficient Ka corresponding to the first chrominance component U of the YUV signal and SMCoef as the first chrominance component adjustment factor SMCoefa, and determine the product of the second chrominance component gain coefficient Kb corresponding to the second chrominance component V of the YUV signal and SMCoef as the second chrominance component adjustment factor SMCoefb;

step 805: keep the luminance value of the luminance component of the YUV signal unchanged, take the product U' of the first chrominance component adjustment factor SMCoefa and the chrominance value U of the first chrominance component as the adjusted chrominance value of the first chrominance component, and take the product V' of the second chrominance component adjustment factor SMCoefb and the chrominance value V of the second chrominance component as the adjusted chrominance value of the second chrominance component; the flow then ends.
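Steps 801 to 805 can be sketched end to end; NLTF1 follows equation (9) with Gmm = 2.4 and MaxL = 10000, and the luminances, chroma values and gain coefficients are example numbers.

```python
# Sketch of steps 801-805: saturation mapping factor from pre/post-mapping
# display luminances in the NLTF1 space, then chroma scaling. Inputs are
# example values; chroma is assumed normalized and centred on 0.

GMM, MAXL = 2.4, 10000.0

def nltf1(lum):                              # eq. (9)
    return (lum / MAXL) ** (1.0 / GMM)

def adjust_chroma(yd, yt, u, v, ka=1.0, kb=1.0):
    nl1_yd = nltf1(yd)                       # step 801
    nl1_yt = nltf1(yt)                       # step 802
    smcoef = nl1_yt / nl1_yd                 # step 803
    smcoefa, smcoefb = smcoef * ka, smcoef * kb   # step 804
    return smcoefa * u, smcoefb * v          # step 805; luma is unchanged

u_adj, v_adj = adjust_chroma(yd=1000.0, yt=100.0, u=0.2, v=-0.3)
```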

As shown in fig. 9, if the video signal to be processed is a YUV signal that has been luminance-mapped in the RGB space using an original luminance mapping curve and converted into the non-linear NLTF1 space, the video signal processing method according to the embodiments of the present application includes the following steps:

step 901: determining a saturation mapping curve belonging to a nonlinear NLTF1 space according to the original brightness mapping curve; the original luminance mapping curve may be a non-linear first original luminance mapping curve provided by the embodiment of the present application, or may be a linear second original luminance mapping curve provided by the embodiment of the present application; the implementation of step 901 may refer to the implementation of embodiments one to four in this application;

step 902: determining a saturation adjustment factor corresponding to the initial brightness value of the video signal to be processed according to the saturation mapping curve; if the saturation mapping curve is represented by the mapping relation table, determining a saturation adjusting factor corresponding to the initial brightness value by a linear interpolation method according to the horizontal coordinate value and the vertical coordinate value of the sampling point in the mapping relation table; if the saturation mapping curve is represented by a curve expression, the initial brightness value of the video signal to be processed can be used as the input of the expression, and the output of the expression is used as a saturation adjusting factor corresponding to the initial brightness value;

step 903: determining a chroma component adjusting factor for adjusting the video signal to be processed based on the saturation adjusting factor and a preset chroma component gain coefficient;

step 904: and adjusting the chromatic value of the video signal to be processed based on the chromatic component adjusting factor, and then ending the flow.

With this method, the saturation mapping curve belonging to the non-linear NLTF1 space can be determined according to the original luminance mapping curve used when luminance mapping is performed on the video signal in the RGB space, and the saturation adjustment factor of the video signal converted into the non-linear NLTF1 space after luminance mapping can be determined according to that saturation mapping curve, so as to adjust the chrominance of the video signal; in this way, the color of the chrominance-adjusted video signal perceived by human eyes is closer to the color of the video signal before luminance mapping. In implementation, the video signal to be processed in the method shown in fig. 9 may be a video signal that has undergone RGB-space luminance mapping by the luminance mapping method shown in fig. 7, or a video signal that has undergone RGB-space luminance mapping by another method.

As shown in fig. 10, if the video signal to be processed is an HDR signal YUV0 that needs to be luminance-mapped by an original luminance mapping curve in the RGB space and, after luminance mapping, converted into a non-linear NLTF1 YUV signal for display, an embodiment of the present application provides a video signal processing method including the following steps:

step 1001: determining a saturation mapping curve belonging to a nonlinear NLTF1 space according to the original brightness mapping curve; the original luminance mapping curve may be a non-linear first original luminance mapping curve provided by the embodiment of the present application, or may be a linear second original luminance mapping curve provided by the embodiment of the present application; the implementation of step 1001 may refer to the implementation of embodiments one to four in the present application;

step 1002: determining a saturation adjustment factor corresponding to the initial brightness value of the video signal to be processed according to the saturation mapping curve; if the saturation mapping curve is represented by the mapping relation table, determining a saturation adjusting factor corresponding to the initial brightness value by a linear interpolation method according to the horizontal coordinate value and the vertical coordinate value of the sampling point in the mapping relation table; if the saturation mapping curve is represented by a curve expression, the initial brightness value of the video signal to be processed can be used as the input of the expression, and the output of the expression is used as a saturation adjusting factor corresponding to the initial brightness value;

step 1003: determine, based on the saturation adjustment factor and a preset chrominance component gain coefficient, the chrominance component adjustment factor for adjusting the to-be-processed HDR signal YUV0;

step 1004: based on the chroma component adjustment factor, adjusting the chroma value of the HDR signal YUV0 of the video signal to be processed to obtain a video signal YUV1 with the chroma value adjusted;

step 1005: performing color space conversion on the video signal YUV1 to obtain a video signal RGB1 in an RGB space;

step 1006: perform luminance mapping on the video signal RGB1 in the RGB space according to the original luminance mapping curve to obtain the luminance-mapped video signal RGB2;

step 1007: and performing color space conversion on the video signal RGB2 after brightness mapping to obtain a YUV signal YUV2 in a nonlinear NLTF1 space.

With this method, the chrominance values of the two chrominance components of the HDR signal are adjusted separately in the YCC space, and luminance mapping in the RGB space is then performed on the resulting video signal. Because the chrominance of the video signal is adjusted before the RGB-space luminance mapping, the color of the video signal YUV2 perceived by human eyes is closer to the color of the HDR signal YUV0 before luminance mapping.

In the specific implementation of step 1002, the saturation mapping factor SMCoef may be calculated using the luminance component Y0 of the to-be-processed video signal YUV0 as the initial luminance value. If the luminance component Y0 in YUV0 is already in the non-linear space NLTF1 (that is, SM_Curve has been converted to the non-linear space NLTF1 in which the HDR signal YUV0 lies), the normalized luminance Y0_Norm of the luminance component Y0 of the HDR signal YUV0 can be used as the input of the saturation mapping curve, so that the saturation mapping factor SMCoef can be obtained by table look-up and linear interpolation.

Alternatively, if the expression of the saturation mapping curve is fsmNLTF1(eNLTF1) = ftmNLTF1(eNLTF1)/eNLTF1, the saturation mapping factor SMCoef can be calculated using the luminance Y0_Norm as the argument: SMCoef = fsmNLTF1(Y0_Norm).

In the above example, the normalized luminance Y0_Norm = (Y0 − minValueY)/(maxValueY − minValueY); for a 10-bit limited-range YUV signal, minValueY = 64 and maxValueY = 940; for a 10-bit full-range YUV signal, minValueY = 0 and maxValueY = 1023.
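The normalization in the paragraph above can be sketched for both 10-bit ranges; the input code value is an example.

```python
# Sketch of Y0_Norm = (Y0 - minValueY) / (maxValueY - minValueY) for
# 10-bit limited-range and full-range signals. The input code value
# is an example.

def normalize_luma(y0, full_range=False):
    lo, hi = (0, 1023) if full_range else (64, 940)
    return (y0 - lo) / (hi - lo)

y_norm = normalize_luma(502)            # limited range: (502-64)/876 = 0.5
```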

Based on the same inventive concept, embodiments of the present application provide a video signal processing apparatus having a function of implementing the video signal processing method provided by any of the above method embodiments. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above-described function.

A video signal processing apparatus provided in the embodiment of the present application may have a structure as shown in fig. 3-c, wherein the processing unit 301 may be configured to perform steps S101 and S102 shown in the embodiment of the method side of the present application; illustratively, the processing unit 301 may also be configured to perform the steps illustrated in fig. 7, 8, 9, and 10 in the method-side embodiment.

In an embodiment, a structure of a video signal processing apparatus 102 provided in an embodiment of the present application is shown in fig. 11; the video signal processing apparatus 102 may include a first determining unit 1101 and an adjusting unit 1102. The first determining unit 1101 may be configured to perform step S101 of the method in the embodiment of the present application; the adjusting unit 1102 may be configured to perform step S102 of the method in the embodiment of the present application.

With the above configuration, the first determining unit of the video signal processing apparatus 102 may determine the saturation adjustment factor, and the adjusting unit of the video signal processing apparatus 102 may adjust the chromaticity value of the video signal to be processed according to the saturation adjustment factor.

In one possible design, the saturation mapping curve is a function of the initial luminance value as an independent variable and the ratio as a dependent variable.

In one possible design, the saturation adjustment factor may be determined according to the foregoing equation (29), where eNLTF1 is the initial luminance value, ftmNLTF1() represents the luminance mapping curve, and fsmNLTF1() represents the saturation mapping curve; correspondingly, ftmNLTF1(eNLTF1) indicates the adjusted luminance value corresponding to the initial luminance value, and fsmNLTF1(eNLTF1) represents the saturation adjustment factor corresponding to the initial luminance value.

In one possible design, the saturation adjustment factor may be determined from a mapping table that includes an abscissa value and an ordinate value of at least one sample point on the saturation mapping curve.

In one possible design, the adjusting unit may adjust the chrominance value of the video signal to be processed based on the product of a preset chrominance component gain coefficient and the saturation adjustment factor.

In one possible design, the chrominance values include a first chrominance value of a first chrominance signal corresponding to the video signal to be processed and a second chrominance value of a second chrominance signal corresponding to the video signal to be processed, the preset chrominance component gain coefficients include a preset first chrominance component gain coefficient and a preset second chrominance component gain coefficient, and the adjusting unit 1102 may be specifically configured to: adjusting the first chrominance value based on a product of a preset first chrominance component gain coefficient and a saturation adjustment factor; and adjusting the second chrominance value based on the product of the preset second chrominance component gain coefficient and the saturation adjustment factor.
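A minimal sketch of the per-component adjustment described above, assuming 10-bit limited-range chroma centered at 512 and hypothetical default gain coefficients:

```python
# Sketch of adjusting the first and second chrominance values by the
# product of a preset per-component gain coefficient and the saturation
# adjustment factor. The 512 neutral point and the [64, 960] clip range
# are assumptions matching 10-bit limited-range chroma.

def adjust_chroma(cb: int, cr: int, sm_coef: float,
                  gain_cb: float = 1.0, gain_cr: float = 1.0):
    """Scale both chroma components about the 512 neutral point."""
    cb_out = (cb - 512) * gain_cb * sm_coef + 512
    cr_out = (cr - 512) * gain_cr * sm_coef + 512
    clip = lambda v: min(max(v, 64), 960)  # 10-bit limited-range chroma
    return clip(round(cb_out)), clip(round(cr_out))
```

With a saturation factor of 1.0 and unit gains, the chroma values pass through unchanged, which matches the intent that only the luminance-mapped signal needs compensation.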

In one possible design, the saturation mapping curve belongs to a target non-linear space, the preset first original luminance mapping curve is a non-linear curve, and the video signal processing apparatus 102 may further include a first converting unit 1103, a second converting unit 1104 and a second determining unit 1105; the first conversion unit 1103 is configured to perform conversion from a nonlinear space to a linear space on a first abscissa value and a first ordinate value corresponding to at least one sampling point on the first original luminance mapping curve, respectively, so as to obtain a second abscissa value and a second ordinate value; a second conversion unit 1104 for performing linear-to-nonlinear-space conversion on the second abscissa value and the second ordinate value, respectively, to obtain an initial luminance value and an adjusted luminance value; a second determining unit 1105, configured to determine a luminance mapping curve according to a mapping relationship between the initial luminance value and the adjusted luminance value, where the luminance mapping curve belongs to the target nonlinear space.

In one possible design, if the saturation mapping curve belongs to the target non-linear space and the preset second original luminance mapping curve is a linear curve, the video signal processing apparatus 102 may further include a third converting unit 1106 and a third determining unit 1107: the third converting unit 1106 is configured to perform conversion from a linear space to a nonlinear space on a third abscissa value and a third ordinate value corresponding to at least one sampling point on the second original luminance mapping curve, so as to obtain an initial luminance value and an adjusted luminance value; a third determining unit 1107, configured to determine a luminance mapping curve according to the mapping relationship between the initial luminance value and the adjusted luminance value, where the luminance mapping curve belongs to the target nonlinear space.
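The sample-point conversions performed by the conversion units above can be sketched as follows; the two gamma exponents stand in for the actual source and target nonlinear spaces and are assumptions for illustration only:

```python
# Sketch of converting sample points of an original luminance mapping
# curve into a target nonlinear space via a round trip through linear
# light. The gamma transfer functions below are assumed placeholders
# for the actual nonlinear spaces of the curves.

SRC_GAMMA = 2.4  # assumed exponent of the source nonlinear space
DST_GAMMA = 2.2  # assumed exponent of the target nonlinear space

def convert_curve_samples(samples):
    """samples: (abscissa, ordinate) pairs of a curve defined in the
    source nonlinear space; returns the pairs re-expressed in the
    target nonlinear space."""
    out = []
    for x, y in samples:
        x_lin, y_lin = x ** SRC_GAMMA, y ** SRC_GAMMA          # nonlinear -> linear
        out.append((x_lin ** (1 / DST_GAMMA),
                    y_lin ** (1 / DST_GAMMA)))                  # linear -> target space
    return out
```

The mapping relationship between the converted abscissa and ordinate values then defines the luminance mapping curve in the target nonlinear space.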

In one possible design, the video signal processing apparatus 102 may further include a brightness adjustment unit 1108 for adjusting the initial brightness value according to the brightness mapping curve to obtain an adjusted brightness value.

In one possible design, the brightness adjusting unit 1108 is specifically configured to determine, according to the target first abscissa value corresponding to the initial brightness value, the target first ordinate value corresponding to the target first abscissa as the adjusted brightness value.

In one possible design, the brightness adjusting unit 1108 is specifically configured to determine, according to the target third abscissa value corresponding to the initial brightness value, the target third ordinate value corresponding to the target third abscissa as the adjusted brightness value.

For example, the video signal processing apparatus 102 shown in fig. 11 may further include a storage unit 1109 for storing a computer program, instructions, and related data to support the first determination unit 1101, the adjustment unit 1102, the first conversion unit 1103, the second conversion unit 1104, the second determination unit 1105, the third conversion unit 1106, the third determination unit 1107, and the brightness adjustment unit 1108 to implement the functions of the above example.

It should be understood that the first determining unit 1101, the adjusting unit 1102, the first converting unit 1103, the second converting unit 1104, the second determining unit 1105, the third converting unit 1106, the third determining unit 1107, and the brightness adjusting unit 1108 in the video signal processing apparatus 102 shown in fig. 11 may be a central processing unit, a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, which can implement or execute various exemplary logic blocks, modules, and circuits described in connection with the disclosure of the embodiments of the present application. The processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, a digital signal processor and a microprocessor, or the like. In addition, the video signal processing apparatus 102 may include a storage unit, which may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories.

Illustratively, as shown in fig. 12-a, another possible structure of the video signal processing apparatus 102 provided by the embodiment of the present application includes a main processor 1201, a memory 1202, and a video processor 1203. The main processor 1201 may be configured to support the video signal processing apparatus 102 in implementing related functions other than video signal processing; for example, the main processor 1201 may be configured to determine a saturation adjustment factor corresponding to an initial brightness value of a video signal to be processed, and for the steps executed by the main processor 1201, reference may be made to step S101 on the method side. The main processor 1201 may also be configured to determine a saturation mapping curve according to a brightness mapping curve and/or an original brightness mapping curve, where the brightness mapping curve and/or the original brightness mapping curve may be stored in the memory 1202. The video processor 1203 may be configured to support the video signal processing apparatus 102 in implementing the related functions of video signal processing; for example, the video processor 1203 may be configured to adjust a chroma value of the video signal to be processed according to the saturation adjustment factor, and may also be configured to support the video signal processing apparatus 102 in performing color space conversion and RGB-space luminance mapping on the video signal. For example, the video signal processing apparatus 102 may perform the method shown in fig. 7 by means of the video processor 1203; for the steps performed by the video processor 1203, reference may be made to step S102 on the method side.

For example, as shown in fig. 12-b, in the RGB spatial luminance mapping of the HDR signal and the adjustment of the chrominance values of the resultant YCC spatial video signal after the luminance mapping, the video signal processing apparatus 102 may be configured to: luminance mapping is performed on the HDR signal in an RGB space according to an original luminance mapping curve (e.g., a non-linear first original luminance mapping curve) stored in the memory 1202, and the luminance-mapped video signal is converted into a YCC space required for display, and chrominance values of chrominance components of the luminance-mapped video signal converted into the YCC space are adjusted according to a saturation mapping curve stored in the memory 1202, and the obtained video signal in the chrominance-adjusted YCC space can be used for display; the main processor 1201 may be configured to generate an original luminance mapping curve required by the video processor 1203 to perform RGB spatial luminance mapping on the HDR signal, and may be configured to generate a saturation mapping curve required by the video processor 1203 to perform chrominance value adjustment on the video signal in the YCC space according to the original luminance mapping curve; the memory 1202 may be used to store the raw brightness mapping curve and/or the saturation mapping curve.

For example, as shown in fig. 12-c, in the process of adjusting the chrominance of the HDR signal and performing RGB spatial luminance mapping and color space conversion on the HDR signal after the chrominance adjustment to obtain the YCC spatial video signal, the video signal processing apparatus 102 may be configured to: adjusting chrominance values of chrominance components of the HDR signal according to a saturation mapping curve stored in the memory 1202, and performing RGB spatial luminance mapping on the HDR signal after chrominance value adjustment according to an original luminance mapping curve (e.g., a non-linear first original luminance mapping curve) stored in the memory 1202, and converting the video signal after luminance mapping into a YCC space, so that the video signal of the YCC space after chrominance adjustment can be used for display; the main processor 1201 may be configured to generate a saturation mapping curve required by the video processor 1203 to perform chrominance value adjustment on the HDR signal, and may be configured to generate an original luminance mapping curve required by the video processor 1203 to perform RGB spatial luminance mapping on the HDR signal; the memory 1202 may be used to store the raw brightness mapping curve and/or the saturation mapping curve.

It should be understood that the video signal processing apparatus 102 shown in fig. 12 only exemplarily embodies the structure required by the video signal processing apparatus 102 to perform the above-mentioned video signal processing method according to the embodiment of the present application, and the embodiment of the present application does not exclude that the video signal processing apparatus 102 has other structures, for example, the video signal processing apparatus 102 may further include a display device for displaying the video signal of the chrominance-adjusted YCC space obtained after the HDR signal is processed by the video processor 1203; as another example, the video signal processing apparatus 102 may further include necessary interfaces to realize input of the video signal to be processed and output of the processed video signal.

It should be further understood that all steps performed by the video signal processing apparatus 102 shown in fig. 12 may be performed by the main processor 1201, and in this case, the video signal processing apparatus 102 may include only the main processor 1201 and the memory 1202.

In particular implementations, the main processor 1201, the video processor 1203 may be a central processing unit, a general purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, transistor logic, hardware components, or any combination thereof that may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with the embodiments disclosed herein. The processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, a digital signal processor and a microprocessor, or the like. In addition, in an implementation, all functions of the video processor 1203 may be realized by software by the main processor 1201.

For example, the video signal processing apparatus 102 provided in the embodiment of the present application may be applied to smart devices such as set top boxes, televisions, mobile phones, and other display devices and image processing devices, and is used to support the above devices to implement the video signal processing method provided in the embodiment of the present application.

Based on the same inventive concept, embodiments of the present application provide a computer program product, which includes a computer program, when the computer program is executed on a computer, the computer will implement the functions of any of the video signal processing method embodiments described above.

Based on the same inventive concept, embodiments of the present application provide a computer program, which when executed on a computer will enable the computer to implement the functions involved in any of the above embodiments of the video signal processing method.

Based on the same inventive concept, embodiments of the present application provide a computer-readable storage medium for storing programs and instructions, which when invoked to execute in a computer, can cause the computer to perform the functions involved in any of the above embodiments of the video signal processing method.

It should be understood that the first original luminance mapping curve provided by the embodiments of the present application may be a 100-nit, 150-nit, 200-nit, 250-nit, 300-nit, 350-nit, or 400-nit luminance mapping curve. The first original luminance mapping curve can be used to map the video signal YdPQ to obtain a mapped video signal YtPQ; for the mapping formula, reference may be made to formula (39) above.

Specifically, if the first original luminance mapping curve is a 100nits luminance mapping curve, the first original luminance mapping curve may have an expression as shown in equation (9).

If the luminance range before luminance mapping is 0 to 1000nits and the luminance range after luminance mapping is 0 to 150nits, the first original luminance mapping curve may have the following expression:

[equation rendered as an image in the original; not reproduced]

the function hmt () may be defined as follows:

hmt(x)=0.3468×α0(x)+0.5493×α1(x)+β0(x) (57);

wherein:

[equation rendered as an image in the original; not reproduced]

if the luminance range before luminance mapping is 0 to 1000nits and the luminance range after luminance mapping is 0 to 200nits, the first original luminance mapping curve may have the following expression:

the function hmt () may be defined as follows:

hmt(x)=0.4064×α0(x)+0.5791×α1(x)+β0(x) (59);

wherein:

[equation rendered as an image in the original; not reproduced]

if the luminance range before luminance mapping is 0 to 1000nits and the luminance range after luminance mapping is 0 to 250nits, the first original luminance mapping curve may have the following expression:

[equation rendered as an image in the original; not reproduced]

the function hmt () may be defined as follows:

hmt(x)=0.4533×α0(x)+0.6026×α1(x)+β0(x) (61);

wherein:

[equation rendered as an image in the original; not reproduced]

if the luminance range before luminance mapping is 0 to 1000nits and the luminance range after luminance mapping is 0 to 300nits, the first original luminance mapping curve may have the following expression:

[equation rendered as an image in the original; not reproduced]

the function hmt () may be defined as follows:

hmt(x)=0.4919×α0(x)+0.6219×α1(x)+β0(x) (63);

wherein:

[equation rendered as an image in the original; not reproduced]

if the luminance range before luminance mapping is 0 to 1000nits and the luminance range after luminance mapping is 0 to 350nits, the first original luminance mapping curve may have the following expression:

[equation rendered as an image in the original; not reproduced]

the function hmt () may be defined as follows:

hmt(x)=0.5247×α0(x)+0.6383×α1(x)+β0(x) (65);

wherein α0(x), α1(x), and β0(x) are defined by an equation image not reproduced here.

if the luminance range before luminance mapping is 0 to 1000nits and the luminance range after luminance mapping is 0 to 400nits, the first original luminance mapping curve may have the following expression:

[equation rendered as an image in the original; not reproduced]

the function hmt () may be defined as follows:

hmt(x)=0.5533×α0(x)+0.6526×α1(x)+β0(x) (67);

wherein α0(x), α1(x), and β0(x) are defined by an equation image not reproduced here.

Illustratively, processing of a Y'sCbsCrs signal is exemplified below. Assume the Y'sCbsCrs video signal is a 4:4:4 YCbCr nonlinear video signal recovered by a terminal through AVS2 decoding, reconstruction, and chroma up-sampling, and each component of the signal is a 10-bit digital code value:

(1) calculating the YiCbiCri signal, where YiCbiCri is the video signal processed by the chrominance processing method provided in this embodiment of the application:

a) the normalized raw luminance is calculated according to the following formula:

Ynorm=(Y-64)/(940-64) (68);

Ynorm should be clipped to the [0, 1] range;

b) the saturation mapping gain SmGain is calculated according to the following formula:

SmGain=fsm(Ynorm) (69);

where fsm() is the saturation mapping curve, which is calculated from the luminance mapping curve ftm() as follows:

i. transforming the luminance mapping curve ftm() to a linear space to obtain a linear luminance mapping curve:

ftmL(L)=PQ_EOTF(ftm(PQ_EOTF^-1(L))) (70);

where L is the input linear luminance, in nits, and the result of ftmL(L) is linear luminance in nits;

ii. converting the linear luminance mapping curve ftmL() to the HLG space to obtain a luminance mapping curve for the HLG signal:

[equation (71) rendered as an image in the original; not reproduced]

where e is the normalized HLG signal luminance, and the result ftmHLG(e) is a normalized HLG signal luminance;

iii. calculating the saturation mapping curve fsm():

fsm(e)=ftmHLG(e)/e (72);

where e is the input to the saturation mapping curve, and fsm(e) is the saturation mapping gain in the HLG space;

c) calculating saturation mapped signals:

Yi=Y's, Cbi=(Cbs-512)×SmGain+512, Cri=(Crs-512)×SmGain+512 (73);

The YiCbiCri signal is a 10-bit limited-range digital code value, where the Yi value should be within the [64, 940] interval, and the Cbi and Cri values should be within the [64, 960] interval.
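Step (1) above can be summarized in a short runnable sketch. The saturation mapping curve fsm is passed in as a callable because its construction (steps i-iii) depends on the luminance mapping curve, and the centered-at-512 chroma scaling follows the adjustment described earlier in this document:

```python
# Runnable sketch of step (1): normalize the luma (eq. 68), obtain
# SmGain from the saturation mapping curve (eq. 69), and scale the
# chroma components about the 512 neutral point. fsm is any callable
# implementing the saturation mapping curve.

def saturation_map(y, cb, cr, fsm):
    y_norm = min(max((y - 64) / (940 - 64), 0.0), 1.0)  # eq. (68), clipped to [0, 1]
    sm_gain = fsm(y_norm)                                # eq. (69)
    cb_i = min(max(round((cb - 512) * sm_gain + 512), 64), 960)
    cr_i = min(max(round((cr - 512) * sm_gain + 512), 64), 960)
    return y, cb_i, cr_i  # Yi stays in [64, 940]; Cbi, Cri in [64, 960]
```

A constant fsm is used in the test only to exercise the arithmetic; a real curve would come from steps i-iii.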

(2) calculating the nonlinear R'sG'sB's signal;

[equation (74) rendered as an image in the original; not reproduced]

[equation (75) rendered as an image in the original; not reproduced]

where the Y'sCbsCrs signal is a 10-bit limited-range digital code value; the R'sG'sB's obtained by this processing are floating-point nonlinear primary color values, whose values shall be clipped to the [0, 1] interval.

(3) calculating the linear RsGsBs signal, and calculating the linear luminance Ys of the input signal;

Es=HLG_OETF^-1(E's) (76);

In the formula, Es represents the linear primary color value of any component of the RsGsBs signal, with a value in the [0, 1] interval; E's refers to the nonlinear primary color value of the corresponding component of the R'sG'sB's signal. The function HLG_OETF^-1() is defined according to ITU BT.2100 as follows:

HLG_OETF^-1(E')=E'^2/3 for 0≤E'≤1/2, and HLG_OETF^-1(E')=(exp((E'-c)/a)+b)/12 for 1/2<E'≤1 (77);

where a = 0.17883277, b = 1 - 4a, and c = 0.5 - a·ln(4a);
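With the constants above, the inverse HLG OETF of ITU BT.2100 can be written directly:

```python
import math

# The HLG inverse OETF of equation (77), per ITU BT.2100, with the
# constants a, b, c given above.
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)

def hlg_oetf_inv(e_prime: float) -> float:
    """Map a normalized nonlinear HLG value E' in [0,1] to linear light in [0,1]."""
    if e_prime <= 0.5:
        return (e_prime ** 2) / 3.0
    return (math.exp((e_prime - C) / A) + B) / 12.0
```

The two branches meet at E' = 1/2 (both give 1/12), and E' = 1 maps to (approximately) 1.0.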

the linear luminance Ys is calculated as follows:

Ys=0.2627Rs+0.6780Gs+0.0593Bs (78);

Ys is a real number with a value in the [0, 1] interval.

(4) calculating the Yt signal;

a. calculating the display luminance Yd:

Yd=1000(Ys)^1.2 (79);

b. calculating the visually linear luminance YdPQ:

YdPQ=PQ_EOTF^-1(Yd) (80);

wherein:

PQ_EOTF^-1(E)=((c1+c2·(E/10000)^m1)/(1+c3·(E/10000)^m1))^m2 (81);

m1=2610/16384=0.1593017578125;

m2=2523/4096*128=78.84375;

c1=3424/4096=0.8359375=c3-c2+1;

c2=2413/4096*32=18.8515625;

c3=2392/4096*32=18.6875;
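Using the constants m1, m2, c1, c2, c3 listed above, the PQ inverse EOTF of equation (81), together with its forward counterpart used later in equation (86), can be implemented as follows (per ITU BT.2100 Table 4):

```python
# PQ transfer functions per ITU BT.2100 Table 4, using the constants
# m1, m2, c1, c2, c3 listed above.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf_inv(fd: float) -> float:
    """Absolute luminance in nits (0..10000) -> nonlinear PQ value in [0, 1]."""
    y = fd / 10000.0
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

def pq_eotf(e_prime: float) -> float:
    """Nonlinear PQ value in [0, 1] -> absolute luminance in nits."""
    p = e_prime ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)
```

Because c1 = c3 - c2 + 1, the two functions are exact inverses of each other, and 10000 nits maps to a PQ value of exactly 1.0.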

c. performing luminance mapping to obtain YtPQ:

YtPQ=ftm(YdPQ) (82);

In the formula, ftm() is defined as follows:

[equation (83) rendered as an image in the original; not reproduced]

where function hmt () is defined as follows:

hmt(x)=0.4064×α0(x)+0.5791×α1(x)+β0(x) (84);

wherein α0(x), α1(x), and β0(x) are defined by an equation image not reproduced here.

d. calculating the normalized linear luminance Yt after luminance mapping:

Yt=PQ_EOTF(YtPQ) (85);

wherein:

PQ_EOTF(E')=10000×(max[(E'^(1/m2)-c1),0]/(c2-c3×E'^(1/m2)))^(1/m1) (86);

therefore, the calculation formula for Yt is:

Yt=PQ_EOTF(ftm(PQ_EOTF^-1(1000(Ys)^1.2))) (87);

Yt is a real number whose value should be clipped to the [0, 200] interval.

(5) calculating the luminance mapping gain TmGain;

the luminance mapping gain TmGain is calculated as shown in the following equation:

TmGain=Yt/Ys when Ys≠0; TmGain=0 when Ys=0 (88);

(6) calculating the RtmGtmBtm signal;

Etm=Es×TmGain (89);

In the formula, Es represents any component of the RsGsBs signal, and Etm represents the corresponding component of the RtmGtmBtm signal.
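Steps (5) and (6) can be sketched together. The gain formula TmGain = Yt/Ys with a zero guard is an assumption consistent with equation (89), since the image carrying equation (88) is not reproduced above:

```python
# Sketch of steps (5)-(6): the luminance mapping gain is applied
# multiplicatively to every linear RGB component (Etm = Es x TmGain,
# eq. 89). TmGain = Yt / Ys with a divide-by-zero guard is an assumed
# reading of eq. (88), which is rendered as an image in the original.

def apply_tm_gain(rgb, ys: float, yt: float):
    tm_gain = yt / ys if ys > 0 else 0.0     # luminance mapping gain
    return tuple(e * tm_gain for e in rgb)   # per-component scaling
```

Scaling all three components by the same gain preserves the ratios between primaries, so only luminance (not hue) is altered by this step.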

(7) calculating the RtGtBt signal (gamut mapping);

[equation (90) rendered as an image in the original; not reproduced]

The RtGtBt obtained by this processing are floating-point linear primary color values, whose values should be clipped to the [0, 200] interval.

(8) calculating the R'tG'tB't signal;

E't=(Et/200)^(1/γ) (91);

(9) calculating the Y'tCbtCrt signal;

[equation (92) rendered as an image in the original; not reproduced]

where R'tG'tB't are nonlinear primary color values in the [0, 1] interval. The Y'tCbtCrt signal obtained by this processing is a 10-bit limited-range digital code value, where the Y't value should be within the [64, 940] interval, and the Cbt and Crt values should be within the [64, 960] interval. For example, γ in this embodiment may be 2.2 or 2.4, or may take other values; the value of γ may be selected according to the actual situation, which is not limited in this embodiment of the application.

As an example, the present application provides a color gamut conversion method that can be used for conversion from the BT.2020 color gamut to the BT.709 color gamut, which is one link in the process of compatibility adaptation of an HLG signal to an SDR signal. Since this process is described conceptually in the BT.2407 report, this section refers to the content of that International Telecommunication Union (ITU) report for its data.

According to section 2 of the BT.2407-0 report, the conversion of a BT.2020 wide color gamut signal to a BT.709 signal can be achieved using a linear-matrix-based conversion method. Except that the output signal is hard-clipped, the method is exactly the inverse process of ITU standard BT.2087. The conversion process is shown in fig. 13 and specifically includes the following steps:

1) conversion of nonlinear signals to linear signals (NtoL)

Assume the normalized BT.2020 nonlinear RGB signal is (E'R, E'G, E'B). Each component signal is converted to the linear signal (ER, EG, EB) via a transfer function. In this proposal, the transfer function is the HLG EOTF function (defined with reference to the HLG EOTF in ITU BT.2100-1 Table 5).

2) Matrix (M)

The bt.2020 gamut linear RGB signal is converted to a bt.709 gamut linear RGB signal, which can be calculated by the following matrix:
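The matrix itself is rendered as an image in the original; the coefficients below are the standard BT.2020-to-BT.709 linear conversion matrix (the inverse of the BT.2087 matrix, as given in the BT.2407 report), applied here with the hard-clip mentioned above:

```python
# BT.2020 -> BT.709 linear-light gamut conversion with hard-clip.
# The coefficients are the standard ones from ITU report BT.2407
# (inverse of the BT.2087 709->2020 matrix), quoted here because the
# matrix image is not reproduced above.

M_2020_TO_709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def bt2020_to_bt709_linear(rgb):
    out = []
    for row in M_2020_TO_709:
        v = sum(c * e for c, e in zip(row, rgb))
        out.append(min(max(v, 0.0), 1.0))  # hard-clip to [0, 1]
    return tuple(out)
```

Each matrix row sums to (approximately) 1, so neutral colors stay neutral; out-of-gamut results are simply hard-clipped, as the report describes.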

3) conversion of linear signals into non-linear signals (Lton)

According to the ITU BT.2087-0 standard, for a BT.709 color gamut linear RGB signal (ER, EG, EB) to be used on a BT.709 display device, the OETF defined by ITU BT.1886 should be used to convert it into a BT.709 color gamut nonlinear RGB signal (E'R, E'G, E'B). This proposal suggests using 2.2 as the conversion curve from linear to nonlinear signals. The formula is as follows:

E'=(E)^(1/γ), 0≤E≤1 (95);

It should be understood that γ in formula (95) may be 2.2 or 2.4, or may take other values; the value of γ may be selected according to the actual situation, which is not limited in this embodiment of the application.

Illustratively, the embodiment of the application provides a compatible adaptation process from an HDR HLG signal to an HDR PQ TV.

According to section 7.2 of ITU report BT.2390-4, the reference peak luminance Lw for conversion from the HLG signal to the PQ signal is first agreed to be 1000 nits, and the black level Lb is 0 nits.

According to the report, the process shown in fig. 14 is adopted; when the HDR content is within a color volume below 1000 nits, a PQ image identical to the HLG image can be generated. The process specifically includes the following steps:

(1) generating a linear brightness source signal by passing 1000nit of HLG source signal through OETF inverse function of HLG;

(2) the linear brightness source signal can generate a linear brightness display signal through an OOTF function of HLG;

(3) the linear brightness display signal can generate 1000nit PQ display signal through the EOTF inverse function of PQ;
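The three steps above can be sketched for a single pixel using the BT.2100 reference functions (inverse HLG OETF, the HLG OOTF with system gamma 1.2 and a 1000-nit display, and the inverse PQ EOTF); treating the OOTF as a per-pixel luminance gain is an implementation choice for this sketch:

```python
import math

# Compact single-pixel sketch of the three-step HLG -> PQ chain above.
# Constants are the BT.2100 HLG and PQ reference values.
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def hlg_oetf_inv(e):
    return e * e / 3.0 if e <= 0.5 else (math.exp((e - C) / A) + B) / 12.0

def pq_eotf_inv(fd):
    y = fd / 10000.0
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

def hlg_pixel_to_pq(rgb_hlg):
    rgb_s = [hlg_oetf_inv(e) for e in rgb_hlg]               # step (1): scene-linear light
    ys = 0.2627 * rgb_s[0] + 0.6780 * rgb_s[1] + 0.0593 * rgb_s[2]
    gain = 1000.0 * ys ** 0.2 if ys > 0 else 0.0             # step (2): OOTF as a gain,
    rgb_d = [e * gain for e in rgb_s]                        # i.e. Yd = 1000 * Ys ** 1.2
    return [pq_eotf_inv(e) for e in rgb_d]                   # step (3): 1000-nit PQ signal
```

HLG peak white then lands at 1000 nits, i.e. a PQ code value of about 0.7518, matching the agreed reference peak luminance.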

the complete processing flow in this scenario is as follows:

Let Y'sCbsCrs be the 4:4:4 YCbCr nonlinear video signal restored by the terminal through AVS2 decoding, reconstruction, and chroma up-sampling. Each component is a 10-bit digital code value.

1) calculating the nonlinear R'sG'sB's signal;

[equation (96) rendered as an image in the original; not reproduced]

[equation (97) rendered as an image in the original; not reproduced]

where the Y'sCbsCrs signal is a 10-bit limited-range digital code value; the R'sG'sB's obtained by this processing are floating-point nonlinear primary color values, whose values shall be clipped to the [0, 1] interval.

2) calculating the linear RsGsBs signal, and calculating the linear luminance Ys of the input signal;

Es=HLG_OETF^-1(E's) (98);

In the formula, Es represents any component of the RsGsBs signal; E's refers to the corresponding component of the R'sG'sB's signal. The function HLG_OETF^-1() is defined according to ITU BT.2100 as follows:

HLG_OETF^-1(E')=E'^2/3 for 0≤E'≤1/2, and HLG_OETF^-1(E')=(exp((E'-c)/a)+b)/12 for 1/2<E'≤1 (99);

where a = 0.17883277, b = 1 - 4a, and c = 0.5 - a·ln(4a).

The linear luminance Ys is calculated as follows:

Ys=0.2627Rs+0.6780Gs+0.0593Bs (100);

3) calculating the Yd signal;

Yd=1000(Ys)^1.2 (101);

4) calculating the luminance mapping gain TmGain

The luminance mapping gain TmGain is calculated as shown in the following equation:

TmGain=Yd/Ys when Ys≠0; TmGain=0 when Ys=0 (102);

5) calculating the RtGtBt signal;

Et=Es×TmGain (103);

In the formula, Es represents any component of the RsGsBs signal, and Et represents the corresponding component of the RtGtBt signal.

6) calculating the R'tG'tB't signal;

E't=PQ_EOTF^-1(Et) (104);

The function PQ_EOTF^-1() in the formula is defined with reference to ITU BT.2100 Table 4 as follows:

wherein:

PQ_EOTF^-1(E)=((c1+c2·(E/10000)^m1)/(1+c3·(E/10000)^m1))^m2 (105);

m1=2610/16384=0.1593017578125;

m2=2523/4096*128=78.84375;

c1=3424/4096=0.8359375=c3-c2+1;

c2=2413/4096*32=18.8515625;

c3=2392/4096*32=18.6875。

7) calculating the Y'tCbtCrt signal;

[equation (106) rendered as an image in the original; not reproduced]

where R'tG'tB't are floating-point nonlinear primary color values in the [0, 1] interval. The Y'tCbtCrt signal obtained by this processing is a 10-bit limited-range digital code value, where the Y't value should be within the [64, 940] interval, and the Cbt and Crt values should be within the [64, 960] interval.
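Since the matrix of equation (106) is rendered as an image, the sketch below uses the standard-form BT.2020 non-constant-luminance encode (KR = 0.2627, KB = 0.0593) with 10-bit limited-range quantization as an assumed equivalent:

```python
# Assumed sketch of the final quantization step: nonlinear R't G't B't
# in [0, 1] to a 10-bit limited-range Y'CbCr code value, using the
# BT.2020 non-constant-luminance coefficients. The exact matrix of
# eq. (106) is an image in the original, so this is a standard-form
# stand-in, not a verbatim reproduction.

KR, KG, KB = 0.2627, 0.6780, 0.0593

def rgb_to_ycbcr_10bit_limited(r, g, b):
    y  = KR * r + KG * g + KB * b          # nonlinear luma
    cb = (b - y) / (2 * (1 - KB))          # chroma difference signals
    cr = (r - y) / (2 * (1 - KR))
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    return (clip(round(y * 876 + 64), 64, 940),    # luma: [64, 940]
            clip(round(cb * 896 + 512), 64, 960),  # chroma: [64, 960]
            clip(round(cr * 896 + 512), 64, 960))
```

Neutral gray therefore encodes with both chroma components at the 512 neutral point, consistent with the code-value ranges stated above.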

It should be understood that the processor mentioned in the embodiments of the present Application may be a Central Processing Unit (CPU), and may also be other general purpose processors, Digital Signal Processors (DSP), Application Specific Integrated Circuits (ASIC), Field Programmable Gate Arrays (FPGA) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

It will also be appreciated that the memory referred to in the embodiments of the application may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile Memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. Volatile Memory can be Random Access Memory (RAM), which acts as external cache Memory. By way of example, but not limitation, many forms of RAM are available, such as Static random access memory (Static RAM, SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic random access memory (Synchronous DRAM, SDRAM), Double data rate Synchronous Dynamic random access memory (DDR SDRAM), Enhanced Synchronous SDRAM (ESDRAM), Synchronous link SDRAM (SLDRAM), and Direct Rambus RAM (DR RAM).

It should be noted that the memory, storage units described herein include, but are not limited to, these and any other suitable types of memory.

It should also be understood that the terms "first", "second", and the various numerical designations herein are merely distinctions made for convenience of description and are not intended to limit the scope of the present application.

In the present application, "and/or" describes an association relationship of associated objects, which means that there may be three relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.

In the present application, "at least one" means one or more, "a plurality" means two or more. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, "at least one (a), b, or c", or "at least one (a), b, and c", may each represent: a, b, c, a-b (i.e., a and b), a-c, b-c, or a-b-c, wherein a, b, and c may be single or plural, respectively.

It should be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not indicate an execution order; some or all of the steps may be executed in parallel or sequentially. The execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application, or the part thereof contributing to the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, a terminal device, or the like) to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.

The method embodiments of the present application may be cross-referenced with one another for relevant parts; the apparatus provided in each apparatus embodiment is adapted to perform the method provided in the corresponding method embodiment, so each apparatus embodiment may be understood with reference to the relevant parts of the corresponding method embodiment.

The device structure diagrams given in the apparatus embodiments of the present application show only simplified designs of the corresponding devices. In practical applications, an apparatus may include any number of transmitters, receivers, processors, memories, and the like to implement the functions or operations performed by the apparatus in the apparatus embodiments of the present application, and all apparatuses capable of implementing the present application fall within its protection scope.

The names of the messages/frames/indication information, modules or units, etc. provided in the embodiments of the present application are only examples, and other names may be used as long as the roles of the messages/frames/indication information, modules or units, etc. are the same.

The terminology used in the embodiments of the present application is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the present application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship; if the character "/" appears in a formula herein, it generally means that the object before the "/" is divided by the object after the "/"; if the character "^" appears in a formula herein, it generally represents a power operation.

The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "upon determining" or "in response to determining" or "upon detecting (the stated condition or event)" or "in response to detecting (the stated condition or event)", depending on the context.

It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a device-readable storage medium, such as a FLASH memory or an EEPROM, and, when executed, performs all or part of the above steps.

The above embodiments further describe the objectives, technical solutions, and advantages of the present invention in detail. It should be understood that the various embodiments may be combined, and that the above embodiments are merely examples of the present invention and are not intended to limit its protection scope; any combination, modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
