Image processing apparatus, image processing method, and medium
Note: This technology, "Image processing apparatus, image processing method, and medium," was created by Hidetsugu Kagawa, Shuhei Ogawa, Maya Yazawa, Kodai Murasawa, and Tetsuya Suwa on 2019-06-28. Its main content is as follows: The invention provides an image processing apparatus, an image processing method, and a medium. The image processing apparatus includes: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus; a conversion unit configured to perform, on the input image, conversion processing for obtaining values included in the color reproduction range of the printing apparatus, and to obtain a luminance of the converted image; and a correction unit configured to correct the luminance of the input image, wherein the correction unit corrects the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.
1. An image processing apparatus comprising:
an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus;
a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and
a correction unit configured to correct luminance of the input image,
wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.
2. The image processing apparatus according to claim 1, further comprising an extraction unit configured to extract a high-frequency component from luminance of an image,
wherein the extraction unit extracts a first high-frequency component from the luminance obtained by the obtaining unit and a second high-frequency component from the luminance obtained by the conversion unit, and
the correction unit corrects the luminance of the input image based on a conversion characteristic between the first high-frequency component and the second high-frequency component.
3. The image processing apparatus according to claim 2, wherein the extraction unit is configured to:
extract the first high-frequency component by causing a filtering unit to generate a first low-frequency component from the luminance of the input image and subtracting the first low-frequency component from the luminance of the input image, and
extract the second high-frequency component by causing the filtering unit to generate a second low-frequency component from the luminance of the image obtained by the conversion unit and subtracting the second low-frequency component from the luminance of the image obtained by the conversion unit.
4. The image processing apparatus according to claim 3, wherein the correction unit decides the strength of the correction by subtracting the second high-frequency component from the first high-frequency component.
5. The image processing apparatus according to claim 2, wherein the extraction unit is configured to:
extract the first high-frequency component by causing a filtering unit to generate a first low-frequency component from the luminance of the input image and dividing the luminance of the input image by the first low-frequency component, and
extract the second high-frequency component by causing the filtering unit to generate a second low-frequency component from the luminance of the image obtained by the conversion unit and dividing the luminance of the image obtained by the conversion unit by the second low-frequency component.
6. The image processing apparatus according to claim 5, wherein the correction unit decides the strength of the correction by dividing the first high-frequency component by the second high-frequency component.
7. The image processing apparatus according to claim 2, wherein the reflected light component is used as a high-frequency component of luminance, and the illumination light component is used as a low-frequency component of luminance.
8. The image processing apparatus according to claim 1, wherein the correction by the correction unit is performed on the input image, and the same conversion processing as that by the conversion unit is applied to the corrected image.
9. The apparatus according to claim 1, wherein the same conversion processing as that of said conversion unit is applied to said input image, and correction by said correction unit is performed on the converted image.
10. The image processing apparatus according to any one of claims 1 to 9, wherein the conversion processing by the conversion unit includes dynamic range compression processing and gamut mapping processing.
11. The image processing apparatus according to claim 1, further comprising:
an input unit configured to input information on an observation condition when observing an image printed on a sheet by the printing apparatus based on data representing the input image; and
a decision unit configured to decide a contrast characteristic relating to a degree of expression of contrast in the printed image based on the information relating to the observation condition input by the input unit,
wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit and the contrast characteristic decided by the decision unit so that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.
12. The apparatus according to claim 1, wherein said correction unit corrects the high-frequency component of the image based on the luminance of the low-frequency component of the image such that the luminance of the high-frequency component of the image after the conversion processing is included in a luminance range of the image after the correction by said correction unit.
13. The apparatus according to claim 1, further comprising a determination unit configured to determine whether or not to perform the correction, based on a luminance of a low-frequency component of the image after the conversion processing and a luminance of a high-frequency component of the image after the conversion processing.
14. An image processing apparatus comprising:
an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus;
a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and
a correction unit configured to correct luminance of the input image to suppress a decrease in contrast of the input image,
wherein the correction unit corrects the high-frequency component of the image based on the luminance of the low-frequency component of the image so that the luminance of the high-frequency component of the image after the conversion processing is included in the luminance range of the image after the correction by the correction unit.
15. An image processing method comprising:
obtaining a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus;
performing conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtaining a luminance of the converted image; and
correcting the luminance of the input image,
wherein in the correction, correction of the luminance of the input image is performed based on a conversion characteristic between the luminance obtained in the obtaining and the luminance obtained when the conversion processing is performed, so that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.
16. A medium storing a program that causes a computer to function as:
an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus;
a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and
a correction unit configured to correct luminance of the input image,
wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.
17. An image processing method comprising:
obtaining a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus;
performing conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtaining a luminance of the converted image; and
correcting the luminance of the input image to suppress a decrease in contrast of the input image,
wherein in the correction, the high-frequency component of the image is corrected based on the luminance of the low-frequency component of the image so that the luminance of the high-frequency component of the image after the conversion processing is included in the luminance range of the image after the correction in the correction.
18. A medium storing a program that causes a computer to function as:
an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus;
a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and
a correction unit configured to correct luminance of the input image to suppress a decrease in contrast of the input image,
wherein the correction unit corrects the high-frequency component of the image based on the luminance of the low-frequency component of the image so that the luminance of the high-frequency component of the image after the conversion processing is included in the luminance range of the image after the correction by the correction unit.
Technical Field
The invention relates to an image processing apparatus, an image processing method, and a medium.
Background
In recent years, HDR (high dynamic range) content having high luminance and a wide color gamut reproduction range has become popular. HDR content is represented using the color gamut of BT.2020 (Rec.2020) and a peak luminance of 1,000 nits (1,000 cd/m²) or higher. When a printing apparatus performs printing using HDR image data, it is necessary to compress the dynamic range of luminance (hereinafter referred to as the "D range") into a dynamic range that can be reproduced by the printing apparatus, by D-range compression using a tone curve or the like. For example, as shown in Fig. 1, D-range compression is performed by reducing the contrast of regions having high luminance. Japanese Patent Application Laid-Open No. 2011-86976, for example, discloses image processing for correcting the contrast reduction that occurs when D-range compression is performed.
Gamut mapping to the color gamut of the printing apparatus must then be performed on the image data whose D range has been compressed to the luminance range of the printing apparatus. Fig. 2A shows the color gamut of BT.2020 within a luminance range of 1,000 nits. Fig. 2B shows the color gamut of the printing apparatus. In Figs. 2A and 2B, the horizontal axis represents y of xy chromaticity, and the vertical axis represents luminance. Comparing the color gamut of BT.2020 with that of the printing apparatus, the gamut shapes are not similar because the color materials used are different. Therefore, when HDR content is printed by the printing apparatus, instead of uniformly compressing the D range, the degree of luminance compression must be changed according to the chromaticity.
In this case, if the shape of the gamut of the input image data and the shape of the gamut of the printing apparatus differ greatly, the contrast may still decrease due to the gamut mapping even when contrast correction is performed using the method of Japanese Patent Application Laid-Open No. 2011-86976.
Disclosure of Invention
According to an aspect of the present invention, there is provided an image processing apparatus including: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus; a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and a correction unit configured to correct the luminance of the input image, wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.
According to another aspect of the present invention, there is provided an image processing apparatus comprising: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus; a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and a correction unit configured to correct luminance of the input image to suppress a decrease in contrast of the input image, wherein the correction unit corrects the high-frequency component of the image based on luminance of the low-frequency component of the image so that luminance of the high-frequency component of the image after the conversion processing is included in a luminance range of the image after the correction by the correction unit.
According to another aspect of the present invention, there is provided an image processing method including: obtaining a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus; performing conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtaining a luminance of the converted image; and correcting the luminance of the input image, wherein in the correction, the correction of the luminance of the input image is performed based on a conversion characteristic between the luminance obtained in the obtaining and the luminance obtained when the conversion processing is performed, so that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.
According to another aspect of the present invention, there is provided a medium storing a program for causing a computer to function as: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus; a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and a correction unit configured to correct the luminance of the input image, wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.
According to another aspect of the present invention, there is provided an image processing method including: obtaining a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus; performing conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtaining a luminance of the converted image; and correcting the luminance of the input image to suppress a decrease in contrast of the input image, wherein in the correction, the high-frequency component of the image is corrected based on the luminance of the low-frequency component of the image so that the luminance of the high-frequency component of the image after the conversion processing is included in the luminance range of the image after the correction in the correction.
According to another aspect of the present invention, there is provided a medium storing a program for causing a computer to function as: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus; a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and a correction unit configured to correct luminance of the input image to suppress a decrease in contrast of the input image, wherein the correction unit corrects the high-frequency component of the image based on luminance of the low-frequency component of the image so that luminance of the high-frequency component of the image after the conversion processing is included in a luminance range of the image after the correction by the correction unit.
According to the present invention, it is possible to provide contrast correction that takes into account a decrease in contrast due to a difference in color reproduction range between input and output.
Other features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).
Drawings
Fig. 1 is a diagram for explaining D range conversion;
fig. 2A, 2B, 2C, and 2D are diagrams for explaining a difference in color gamut between bt.2020 and a printing apparatus;
FIG. 3 is a block diagram showing an example of a hardware structure of a system according to the present invention;
fig. 4 is a block diagram showing an example of a software structure related to contrast correction according to the present invention;
fig. 5 is a diagram for explaining gamut mapping according to the present invention;
fig. 6 is a diagram for explaining a gaussian filter;
FIG. 7 is a diagram for explaining a visual transfer function according to the present invention;
FIG. 8 is a flow chart illustrating the processing of the output image characteristics acquisition module according to the present invention;
FIG. 9 is a flow chart illustrating the processing of the contrast correction module according to the present invention;
fig. 10 is a flowchart showing a contrast correction method according to the first embodiment;
fig. 11 is a flowchart showing a contrast correction method according to the second embodiment;
fig. 12 is a flowchart showing a contrast correction method according to the third embodiment;
fig. 13 is a flowchart showing a contrast correction method according to the fourth embodiment;
fig. 14 is a diagram for explaining a correction intensity generating method according to the fifth embodiment;
fig. 15 is a schematic diagram of an example of a UI configuration screen according to the sixth embodiment;
fig. 16 is a block diagram showing an example of a software structure relating to contrast correction according to the sixth embodiment;
fig. 17 is a diagram showing an example of a luminance-high sensitivity frequency conversion table according to the sixth embodiment;
fig. 18 is a diagram showing a table of high-sensitivity frequencies for respective luminances according to the sixth embodiment;
fig. 19 is a flowchart showing a processing procedure according to the eighth embodiment;
fig. 20 is an explanatory diagram of correction judgment in the process according to the eighth embodiment;
fig. 21 is a flowchart showing a processing procedure according to the ninth embodiment;
fig. 22 is a diagram for illustrating a modeling method of contrast sensitivity as used in the ninth embodiment; and
fig. 23 is a flowchart showing a processing procedure according to the tenth embodiment.
Detailed Description
< first embodiment >
[ System Structure ]
Fig. 3 is a block diagram showing an example of the structure of a system to which the present invention can be applied. The system includes an
The
For example, the
The
The
The connection method of the data transfer I/F306 of the
[ correction of contrast ]
The contrast correction according to the present embodiment will be described in detail below. The contrast correction according to the present embodiment is processing for performing predetermined image processing when the
Fig. 4 is a block diagram showing an example of a software configuration for performing image processing related to contrast correction when the
The
The D-
For the image data input to the
The
In addition, the out-of-
The input image characteristic obtaining
Y=0.299·R+0.587·G+0.114·B ...(1)
Cb=-0.1687·R-0.3313·G+0.5·B ...(2)
Cr=0.5·R-0.4187·G-0.0813·B ...(3)
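As a minimal illustration of equations (1)-(3), the luminance conversion can be sketched as follows. The function name and per-pixel scalar interface are illustrative assumptions, not taken from the patent.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one RGB pixel to (Y, Cb, Cr) with the coefficients of
    equations (1)-(3); Y is the luminance used in the later processing."""
    y = 0.299 * r + 0.587 * g + 0.114 * b        # equation (1)
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b      # equation (2)
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b       # equation (3)
    return y, cb, cr
```

For a neutral gray (r = g = b), Cb and Cr are essentially zero and Y equals the input value, which is a quick sanity check on the coefficients.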
Further, the input image characteristic obtaining
p'(x,y)={1/Σf(x,y)}·Σ{f(x,y)×p(x,y)} ...(4)
In the present embodiment, a Gaussian shape has been exemplified as the filter characteristic. However, the present invention is not limited thereto. For example, an edge-preserving filter such as a bilateral filter may be used. When an edge-preserving filter is used, halo artifacts occurring at edge portions during contrast correction can be reduced.
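The normalized filtering of equation (4) can be sketched as follows, using a Gaussian kernel as the filter characteristic f(x, y). The function name, kernel radius, and sigma are illustrative assumptions; a real implementation would use an optimized convolution.

```python
import numpy as np

def gaussian_low_frequency(lum, sigma=3.0, radius=9):
    """Compute the low-frequency image per equation (4):
    p'(x,y) = {1/Σf(x,y)} · Σ{f(x,y) × p(x,y)}."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    f = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    f /= f.sum()                      # the 1/Σf(x,y) normalization
    pad = np.pad(lum, radius, mode="edge")
    out = np.empty(lum.shape, dtype=float)
    for i in range(lum.shape[0]):
        for j in range(lum.shape[1]):
            window = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = float((f * window).sum())
    return out
```

Because the kernel is normalized, a uniform luminance image passes through unchanged; replacing the Gaussian with a bilateral filter gives the edge-preserving variant mentioned above.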
Fig. 7 is a diagram showing a visual transfer function VTF for spatial frequencies. The visual transfer function VTF shown in Fig. 7 represents the change in visual sensitivity (vertical axis) as the spatial frequency (horizontal axis) changes; the higher the visual sensitivity, the higher the transfer characteristic. As can be seen from the visual transfer function VTF, a high transfer characteristic of about 0.8 or more is obtained at spatial frequencies of 0.5 cycles/mm or more. Note that, in the example shown in Fig. 7, the visual sensitivity falls below 0.8 when the spatial frequency is 2 cycles/mm or more. The frequencies targeted by contrast correction are preferably those with high visual sensitivity. That is, "high frequency" here means 0.5 cycles/mm or more, a range that includes the peak sensitivity, and "low frequency" means less than 0.5 cycles/mm. In the present embodiment, the high-frequency component and the low-frequency component are obtained from the luminance on this premise.
Let I be the luminance, H the high frequency value, and L the low frequency value for each pixel. The high frequency value H is calculated by the following formula.
H=I/L ...(5)
In the present embodiment, the high-frequency value H and the low-frequency value L of the luminance I will be described as values equal to the value Re of the reflected light and the value Li of the illumination light, respectively. Here, the illumination light refers to an illumination light component included in the luminance component, and the reflected light refers to a reflected light component included in the luminance component. That is, description will be made using the high frequency value H as a value representing the intensity of the high frequency component and also using the low frequency value L as a value representing the intensity of the low frequency component.
As with the low-frequency value, the value of the illumination light can be generated by filter processing. In addition, when an edge-preserving filter is used, the value of the illumination light at edge portions can be generated more accurately. The value Re of the reflected light and the value Li of the illumination light are related by the following equation:
Re=I/Li ...(6)
As shown by equation (5), the high-frequency value H is generated by dividing the luminance of the input image by the low-frequency value. However, the present invention is not limited thereto. For example, as shown in equation (7), the high-frequency value H may be generated by subtracting the low-frequency value from the luminance of the input image. This also applies to the case where the value of the reflected light and the value of the illumination light are used.
H=I-L ...(7)
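The two decompositions in equations (5) and (7) can be sketched as follows; the helper names are hypothetical, and the inputs may be per-pixel scalars or whole arrays.

```python
def high_frequency_divide(I, L):
    # Equation (5): H = I / L, the multiplicative (reflected light /
    # illumination light) form. Assumes the low-frequency value L is nonzero.
    return I / L

def high_frequency_subtract(I, L):
    # Equation (7): H = I - L, the additive alternative.
    return I - L
```

Whichever form is chosen here must also be used consistently in the later correction and recombination steps (equations (8)-(13)).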
The output image characteristic obtaining
The
The
(high frequency Generation treatment)
Details of the processing performed by the output image characteristic obtaining
In step S101, the output image characteristic obtaining
In step S102, the output image characteristic obtaining
In step S103, the output image characteristic obtaining
The D-range compression processing and the gamut mapping processing here have the same contents as those of the D-range conversion and the gamut mapping processing performed in the processing shown in fig. 10 described later, but are performed for different purposes. Note that in the following description, the D-range compression process and the gamut mapping process will sometimes be collectively referred to as a conversion process.
(contrast correction processing)
Details of the contrast processing of the
In step S201, the
In step S202, the
In step S203, the
Hm=Ht/H' ...(8)
Equation (8) represents the reverse deviation of the change in intensity of the high-frequency value from the input image to the output image.
The value obtained here is the reverse deviation before and after the conversion. Therefore, in the example shown in fig. 5, the correction strength in the out-of-
Note that, when the high-frequency value Ht and the high-frequency value H' are generated using equation (7), the correction strength Hm can be given by the following equation.
Hm=Ht-H' ...(9)
Equation (9) represents the difference when the intensity of the high frequency value changes from the input image to the output image.
In step S204, the
Hc=Hm×H ...(10)
Note that when the high-frequency value H is generated using equation (7), contrast correction is performed by adding the correction intensity Hm to the high-frequency value H generated in step S202. The contrast-corrected high-frequency value Hc is given by the following equation.
Hc=Hm+H ...(11)
As shown in equations (8) and (9), the reverse deviation amount of the decrease in contrast from the input image to the output image, that is, of the decrease in the intensity of the high-frequency value, is set as the correction strength Hm. When correction is performed by multiplying by the reverse deviation amount as in equation (10), or by adding it as in equation (11), the intensity of the high-frequency value of the input image can be maintained in the output image, or a value close to it can be obtained in the output image.
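Assuming H, Ht, and H' are the high-frequency values named in steps S202-S204, the correction of equations (8)-(11) can be sketched as follows; the function name and keyword flag are hypothetical.

```python
def correct_high_frequency(H, Ht, H_prime, multiplicative=True):
    """Return the contrast-corrected high-frequency value Hc.

    H       : high-frequency value of the image being corrected (step S202)
    Ht      : high-frequency value of the input image
    H_prime : high-frequency value H' after the conversion processing
    """
    if multiplicative:
        Hm = Ht / H_prime          # equation (8): reverse deviation (ratio)
        return Hm * H              # equation (10)
    Hm = Ht - H_prime              # equation (9): reverse deviation (difference)
    return Hm + H                  # equation (11)
```

For example, when the conversion halves the high-frequency value (H' = Ht/2), Hm = 2 in the multiplicative form, so the corrected value restores the lost contrast.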
In step S205, the
I'=Hc×L ...(12)
Note that when equation (7) is used to generate the high frequency value Hc and the low frequency value L, the luminance I' can be given by the following equation.
I'=Hc+L ...(13)
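Equations (12) and (13) then recombine the corrected high-frequency value with the low-frequency value. A one-line sketch, assuming the same multiplicative/additive convention chosen for the decomposition:

```python
def recombine_luminance(Hc, L, multiplicative=True):
    # Equation (12): I' = Hc × L (when H was obtained by division), or
    # equation (13): I' = Hc + L (when H was obtained by subtraction).
    return Hc * L if multiplicative else Hc + L
```

The round trip recombine_luminance(I / L, L) returns the original luminance I, confirming that decomposition and recombination are inverses when no correction is applied.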
The
[ treatment Process ]
A flowchart of the overall process according to the present embodiment will be described with reference to fig. 10. This processing procedure is realized, for example, when the
In step S301, the
In step S302, the
In step S303, the
In step S304, the D-
In step S305, the
In step S306, the
In the present embodiment, using the high frequency value of the input image and the high frequency value of the output image after gamut mapping, contrast correction is performed by setting the reverse deviation amount corresponding to the decrease amount of the high frequency value as the correction strength. Therefore, even in the case where the high frequency value is reduced due to the D range conversion in step S304 and the gamut mapping in step S305 performed after the correction intensity is set, the reduction amount can be corrected by the contrast correction in advance. As a result, the contrast of the input image can be maintained or can be made close to the contrast of the input image even after the gamut mapping.
In addition, when the high frequency value of the output image after the gamut mapping is used in generating the contrast correction strength, the correction strength may be decided in a state in which the reduction amount of the contrast due to the compression of the gamut mapping is included. Therefore, as the compression ratio of the gamut mapping increases, the contrast correction strength can be set high. In addition, the high-frequency value subjected to contrast correction is close to the high-frequency value of the input image, and the low-frequency value not yet subjected to contrast correction is close to the low-frequency value after gamut mapping.
As is apparent from the above description, according to the present embodiment, a decrease in contrast caused by a difference in color reproduction range between input and output can be suppressed.
Note that in the present embodiment, an example of using the YCbCr color space as luminance is described. However, an xyz color space representing luminance and chrominance may be used.
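Putting the first embodiment together, an end-to-end sketch of steps S301-S306 follows. All callables passed in are hypothetical stand-ins for the D-range conversion, gamut mapping, and filtering modules described above, not the patent's actual implementation.

```python
import numpy as np

def print_with_contrast_correction(lum_in, d_range_compress, gamut_map, low_pass):
    """Derive the correction strength from the input image and a trial
    conversion, correct the input luminance first, and only then apply
    the real D-range conversion and gamut mapping."""
    L_in = low_pass(lum_in)
    H_in = lum_in / L_in                            # equation (5)
    trial = gamut_map(d_range_compress(lum_in))     # output image characteristics
    H_out = trial / low_pass(trial)                 # high frequency after conversion
    Hm = H_in / H_out                               # equation (8)
    Hc = Hm * H_in                                  # equation (10)
    corrected = Hc * L_in                           # equation (12)
    return gamut_map(d_range_compress(corrected))   # steps S304-S305
```

With an identity gamut map and a halving D-range compression, a uniform input is simply halved, since a uniform image has no high-frequency content to correct.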
< second embodiment >
A second embodiment of the present invention will be described with reference to the flowchart of fig. 11. Descriptions of parts overlapping with the first embodiment will be omitted, and only differences will be described. In the present embodiment, unlike the procedure of fig. 10 described in the first embodiment, contrast correction is performed after D-range conversion. That is, the order of the processing steps is different from that of the first embodiment.
In step S401, the
In step S402, the D-
In step S403, the
In step S404, the
In step S405, the
In step S406, the
In the present embodiment, using the high frequency value of the input image and the high frequency value of the output image after gamut mapping, contrast correction is performed by setting the reverse deviation amount corresponding to the decrease amount of the high frequency value as the correction strength. Therefore, even in the case where the high frequency value is reduced due to the D range conversion in step S402 and the gamut mapping in step S405, the reduction amount is corrected by the contrast correction. As a result, even after the gamut mapping, the contrast can be maintained at, or made close to, the contrast of the input image.
In addition, when the high frequency value of the output image after the gamut mapping is used in generating the contrast correction strength, the correction strength may be decided in a state in which the reduction amount of the contrast due to the compression of the gamut mapping is included. Therefore, as the compression ratio of the gamut mapping increases, the contrast correction strength can be set high. In addition, the high-frequency value subjected to contrast correction is close to the high-frequency value of the input image, and the low-frequency value not yet subjected to contrast correction is close to the low-frequency value after gamut mapping.
Further, since the contrast correction is performed after the D range conversion, the D range to be handled is smaller than in the case where the correction is performed before the D range conversion, so the memory used for the processing can be made smaller.
< third embodiment >
A third embodiment of the present invention will be described with reference to the flowchart of fig. 12. Descriptions of parts overlapping with the first embodiment will be omitted, and only differences will be described. In the present embodiment, contrast correction is performed after D-range conversion and gamut mapping, which is different from fig. 10 described in the first embodiment. That is, the order of the processing steps is different from that of the first embodiment.
In step S501, the
In step S502, the D-
In step S503, the
In step S504, the
In step S505, the
In step S506, the
In the present embodiment, using the high frequency value of the input image and the high frequency value of the output image after gamut mapping, contrast correction is performed by setting the reverse deviation amount corresponding to the decrease amount of the high frequency value as the correction strength. Therefore, even in the case where the high frequency value is reduced due to the D range conversion in step S502 and the gamut mapping in step S503, the reduction amount is corrected by the contrast correction. As a result, even after the gamut mapping, the contrast can be maintained at, or made close to, the contrast of the input image.
In addition, when the high frequency value of the output image after the gamut mapping is used in generating the contrast correction strength, the correction strength may be decided in a state in which the reduction amount of the contrast due to the compression of the gamut mapping is included. Therefore, as the compression ratio of the gamut mapping increases, the contrast correction strength can be set high. In addition, the high-frequency value subjected to contrast correction is close to the high-frequency value of the input image, and the low-frequency value not yet subjected to contrast correction is close to the low-frequency value after gamut mapping.
Further, since the contrast correction is performed after the gamut mapping, the D range to be handled is smaller than in the case where the correction is performed before the D range conversion, so the memory used for the processing can be made smaller.
< fourth embodiment >
A fourth embodiment of the present invention will be described with reference to the flowchart of fig. 13. Descriptions of parts overlapping with the first embodiment will be omitted, and only differences will be described. In the present embodiment, the D range conversion is performed twice, unlike fig. 10 described in the first embodiment.
In step S601, the
In step S602, the D-
In step S603, the
In step S604, the
In step S605, the D-
In step S606, the
In step S607, the
In the present embodiment, using the high frequency value of the input image and the high frequency value of the output image after gamut mapping, contrast correction is performed by setting the reverse deviation amount corresponding to the decrease amount of the high frequency value as the correction strength. Therefore, even in the case where the high frequency value is reduced due to the D range conversion into the standard color space in step S602, the D range conversion in step S605, and the gamut mapping in step S606, the reduction amount is corrected by the contrast correction. As a result, even after the gamut mapping, the contrast can be maintained at, or made close to, the contrast of the input image.
In addition, when the high frequency value of the output image after the gamut mapping is used in generating the contrast correction strength, the correction strength may be decided in a state in which the reduction amount of the contrast due to the compression of the gamut mapping is included. Therefore, as the compression ratio of the gamut mapping increases, the contrast correction strength can be set high. In addition, the high-frequency value subjected to contrast correction is close to the high-frequency value of the input image, and the low-frequency value not yet subjected to contrast correction is close to the low-frequency value after gamut mapping.
Further, since the D range is temporarily converted into the D range of the standard color space, an editing operation such as decoration or the like can be performed while confirming an image in an environment independent of the printing apparatus (for example, on an HDR monitor).
< fifth embodiment >
In the above-described embodiments, the description has been made using an example in which the contrast correction strength is generated from the high-frequency value of the input image and the high-frequency value of the output image. In the present embodiment, an example of generating the correction intensity information by the 3D LUT method will be described. Fig. 14 is a diagram for explaining generation of correction intensity information according to the present embodiment.
In the present embodiment, the correction intensity information sets the amount of reduction in contrast between the input image and the output image as the inverse deviation. Assume that the output image is in a state where the input image has undergone D-range compression and gamut mapping. In fig. 14, the input reference color (224,0,0) and contrast target color (232,8,8) are changed to (220,8,8) and (216,12,12) by D-range compression and gamut mapping, respectively. The difference values ΔRGB representing the contrast between the reference color and the contrast target color in the input and output are 13.9 and 6.9, and the inverse deviation of the contrast ratio is calculated by equation (14). In addition, the inverse deviation of the contrast difference can be calculated by equation (15).
13.9/6.9=2.0 ......(14)
13.9-6.9=7.0 ......(15)
By the above method, the correction intensity for the input color is generated. This is calculated for each grid value of the 3D LUT, thereby generating a 3D LUT representing the correction strength Hm of the output for the input (R, G, B). In this way, correction intensity information having the following characteristic can be generated: the correction intensity Hm is larger for out-of-gamut colors, which are greatly compressed by gamut mapping, than for in-gamut colors, which are compressed only slightly.
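The numeric example above can be checked directly; the values reproduce equations (14) and (15) (the helper name is ours):

```python
import numpy as np

def delta_rgb(c1, c2):
    """Euclidean distance between two RGB triples."""
    return float(np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float)))

# reference color and contrast target color, before and after
# D-range compression and gamut mapping (values from the text)
d_in = delta_rgb((224, 0, 0), (232, 8, 8))     # ~13.9
d_out = delta_rgb((220, 8, 8), (216, 12, 12))  # ~6.9
ratio_strength = d_in / d_out  # equation (14): inverse deviation of the contrast ratio
diff_strength = d_in - d_out   # equation (15): inverse deviation of the contrast difference
```

Evaluating this at every grid point of the 3D LUT yields the correction strength Hm for each input (R, G, B).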
A method of performing contrast correction using the correction intensity information will be described. The
In the present embodiment, using the input image and the output image after gamut mapping, contrast correction is performed using the correction intensity information of the 3D LUT, which sets the reverse deviation amount corresponding to the reduction amount of contrast as the correction intensity. Therefore, even if the contrast is reduced by the D range conversion and the gamut mapping, the reduction amount is corrected. Thus, even after the gamut mapping, the contrast can be maintained at, or made close to, the contrast of the input image. In addition, since the correction intensity Hm is generated by the 3D LUT method, it is not necessary to calculate the high frequency value of the input image and the high frequency value of the output image, and the contrast correction can be performed with a small memory footprint.
< sixth embodiment >
As a sixth embodiment of the present invention, a form for maintaining the effect of contrast correction in consideration of the observation condition will be described. Note that description of components overlapping with the above-described embodiment will be omitted as appropriate, and description will be made focusing on differences.
As described above, due to compression by gamut mapping, the contrast intensity is reduced at the time of printing by the printing apparatus. In addition, since the contrast sensitivity characteristic changes depending on the observation condition, it is difficult to maintain the effect of contrast correction. The present embodiment aims to solve this problem.
[ Picture Structure ]
Fig. 15 shows a
[ software Structure ]
Fig. 16 is a block diagram showing an example of a software structure according to the present embodiment. Unlike the configuration shown in fig. 4 described in the first embodiment, the software configuration further includes a contrast
[ Filter treatment ]
In the first embodiment, the filtering process has been described with reference to fig. 6. In the present embodiment, the filter to be used in the above-described filtering process can be set in the following manner by the contrast expression characteristic obtaining
First, the number of pixels PDppd at a predetermined angle of view is calculated from the obtained observation conditions (output sheet size, observation distance). Here, the predetermined angle of view is set to 1 °.
To do so, the number of pixels per inch PDppi is first calculated by the following equation.
PDppi=√(Hp²+Vp²)/S ...(16)
where Hp is the number of pixels of the image in the horizontal direction, Vp is the number of pixels of the image in the vertical direction, and S is the diagonal output sheet size in inches.
Next, the pixel number PDppd of the viewing angle of 1 ° can be calculated using the following equation.
PDppd=1/tan⁻¹((25.4/PDppi)/D) ...(17)
Wherein D is the observation distance [ mm ].
The filter condition is set using the pixel number PDppd of the viewing angle of 1 ° calculated by equation (17). Here, the filtering condition indicates the size of the filter. When the number of pixels PDppd with a viewing angle of 1 ° is used, the angular resolution PDcpd can be calculated by the following equation.
PDcpd=PDppd/2 ...(18)
The calculated angular resolution PDcpd is set to the filter size of the gaussian filter and the filter is defined as filter M. Note that here, the PDcpd is directly set to the filter size of the gaussian filter. However, the present invention is not limited thereto. For example, a table indicating the correspondence between the PDcpd and the filter size may be held in advance, and the filter size may be set by referring to the table. Alternatively, in the case of the above-described edge-hold filter, the filtering process is performed by judging an edge portion and a portion other than the edge. Therefore, in addition to the setting value relating to the filter size, a setting value (for example, luminance difference) relating to whether or not the image is the subject of the filtering process is required. Therefore, in addition to the filter size, a setting value regarding whether or not the image is an object of the filtering process can be set based on the observation condition.
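The viewing-condition-dependent filter setting above can be sketched as follows. The per-inch density is assumed to be the diagonal pixel count divided by the diagonal sheet size, matching the variable definitions of equation (16); the example numbers are illustrative only.

```python
import math

def pixels_per_degree(hp, vp, diag_inch, distance_mm):
    """Number of pixels PDppd subtended by a 1-degree viewing angle."""
    ppi = math.hypot(hp, vp) / diag_inch  # pixels per inch (assumed form of eq. (16))
    pitch_mm = 25.4 / ppi                 # physical pitch of one pixel
    deg_per_px = math.degrees(math.atan(pitch_mm / distance_mm))
    return 1.0 / deg_per_px               # eq. (17)

# e.g. a 4K image on a 10-inch (diagonal) print viewed at 300 mm
pd_ppd = pixels_per_degree(3840, 2160, 10, 300)
pd_cpd = pd_ppd / 2  # eq. (18): angular resolution, used as the filter size
```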
[ contrast correction processing ]
In the first embodiment, the contrast process of the
As a calculation method of ur, the Barten model is used. According to the Barten model, the contrast sensitivity can be calculated by equation (19).
Here, the following is assumed: k is 3.3, T is 0.1, η is 0.025, h is 357 × 3600, corresponding to the contrast change Φ of the
Note that σ, Mopt(u), (1-F(u))², d, and IL are calculated by equations (20) to (24).
d=4.6-2.8·tanh(0.4·log10(0.625·L)) ...(20)
Mopt(u)=e^(-π²σ²u²) ...(22)
IL=(π/4)·d²·L ...(23)
In equations (19) to (24), when the target luminance value is set to L and the spatial frequency is set to u, the contrast sensitivity of the spatial frequency u at the target luminance L can be calculated. Fig. 17 is a graph depicting contrast sensitivity calculated by Barten's model for each brightness. As the luminance becomes higher, the frequency of the high contrast sensitivity shifts to the high frequency side. In contrast, as can be seen, as the luminance becomes low, the frequency of the high contrast sensitivity shifts to the low frequency side. The contrast sensitivities for a plurality of spatial frequencies can be calculated in advance corresponding to a plurality of luminance values using equations (19) to (24), and a luminance-high sensitivity frequency conversion table in which the spatial frequency of the maximum value is associated with the luminance value can be held. Fig. 18 shows an example of a luminance-high sensitivity frequency conversion table. In the case of luminance values not described in the setting table, as shown in fig. 17, the high-sensitivity frequency can be calculated by defining an approximation function for connecting the high-sensitivity frequencies for the respective luminances.
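The luminance-to-peak-frequency lookup with an interpolating approximation function can be sketched as follows. The table entries below are purely illustrative placeholders, not values computed from the Barten model; a real table would be precomputed offline from equations (19) to (24).

```python
import bisect

# hypothetical luminance -> high-sensitivity (peak) frequency table
LUMS = [1.0, 10.0, 100.0, 1000.0]  # cd/m^2
PEAKS = [1.0, 2.0, 4.0, 6.0]       # cycles/degree (illustrative only)

def peak_frequency(lum):
    """Piecewise-linear approximation connecting the tabulated
    high-sensitivity frequencies for the respective luminances."""
    if lum <= LUMS[0]:
        return PEAKS[0]
    if lum >= LUMS[-1]:
        return PEAKS[-1]
    i = bisect.bisect_right(LUMS, lum)
    t = (lum - LUMS[i - 1]) / (LUMS[i] - LUMS[i - 1])
    return PEAKS[i - 1] + t * (PEAKS[i] - PEAKS[i - 1])
```

The table is monotonic, reflecting the shift of the high-sensitivity frequency toward higher frequencies as luminance increases.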
S(ur, Ls) and S(ur, Lr) are calculated according to equation (19) above. Using the calculated S(ur, Ls) and S(ur, Lr), the contrast sensitivity ratio can be calculated by the following formula.
Sr=S(ur,Lr)/S(ur,Ls) ...(25)
The
Hm=Sr×(Hta/H') ...(26)
In addition, in the case where high frequency values are generated by the input image characteristic obtaining
Hm=Sr×(Hta-H') ...(27)
Next, for the contrast ratio calculation process, the contrast sensitivity S (ur, Lr) at the luminance value of the illumination light serving as a reference is calculated, and the contrast sensitivity S (ur, Ls) at the luminance value of the illumination light in the observation environment is calculated. Then, the contrast sensitivity ratio Sr is calculated using the contrast sensitivity S (ur, Lr) of the illumination light used as a reference and the contrast sensitivity S (ur, Ls) of the illumination light in the observation environment.
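The ratio computation can be written compactly; here `csf` stands for any contrast sensitivity function S(u, L) (for example an implementation of the Barten model), and all function names are ours:

```python
def sensitivity_ratio(csf, ur, l_ref, l_env):
    """Sr = S(ur, Lr) / S(ur, Ls), eq. (25): sensitivity under the
    reference illumination over sensitivity in the observation environment."""
    return csf(ur, l_ref) / csf(ur, l_env)

def strength_ratio(sr, h_target, h_printed):
    return sr * (h_target / h_printed)  # eq. (26), multiplicative decomposition

def strength_diff(sr, h_target, h_printed):
    return sr * (h_target - h_printed)  # eq. (27), additive decomposition
```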
When the contrast correction processing is performed using the above-described method, the effect of the contrast correction in consideration of the observation condition can be maintained. In the above-described embodiment, the contrast expression characteristic obtaining
[ modified examples ]
In the sixth embodiment described above, as in steps S101 to S103, the high frequency value H 'is generated from the image data subjected to the D range compression and the gamut mapping, and the
In addition, in the sixth embodiment, when contrast correction is performed, instead of obtaining the correction strength Hm from the input image data Ht and the image data subjected to D range compression and gamut mapping, contrast correction may be performed by setting the value Hm to the above-described contrast sensitivity ratio Sr, that is, by setting Hm = Sr. In this case, the low frequency value L and the high frequency value H may be obtained using the filter M generated based on the observation conditions. However, a filter that is not generated based on the observation conditions may be used instead of the filter M.
< seventh embodiment >
As a seventh embodiment of the present invention, a form in which highlight detail loss or shadow detail loss at the time of dynamic range compression is considered will be described. Note that description of components overlapping with the above-described embodiment will be omitted as appropriate, and description will be made focusing on differences.
As image processing for correcting the contrast reduction caused when the D range compression is performed as described above, Retinex processing is used. In the Retinex process, first, an image is separated into an illumination light component and a reflected light component. When the illumination light component is compressed by the D range and the reflected light component is held, the D range compression can be performed while maintaining the contrast of the original image.
It can be said that the illumination light component is substantially a low-frequency component, and the reflected light component is substantially a high-frequency component. In the present embodiment, hereinafter, the low-frequency component or the illumination light component will be referred to as a first component, and the high-frequency component or the reflected light component will be referred to as a second component.
At this time, in a case where the shape of the gamut of the input image data and the shape of the gamut of the printing device are greatly different, even if contrast correction is performed using a conventional method, the contrast obtained at the time of printing may differ from the desired contrast due to compression by gamut mapping. Further, if the pixel value of the second component is large on the high luminance side or the low luminance side, the output image may exceed the output D range, and highlight detail loss or shadow detail loss occurs. Fig. 2C and 2D show the principle of occurrence of highlight detail loss/shadow detail loss. In fig. 2C and 2D, the vertical axis represents a pixel value, and the horizontal axis represents a coordinate value of the image. Fig. 2C and 2D respectively show the first component of an image before and after D-range compression, and the pixel values obtained by adding the second component to the first component. After the first component of the image is D-range compressed, the second component keeps its value from before the D-range compression. In this case, as shown by the pixel values obtained by adding the second component, the values on the high luminance side and the low luminance side are limited by the upper limit and the lower limit of the D range (broken lines in fig. 2D), and highlight detail loss or shadow detail loss occurs. That is, if the value of the low-frequency component is compressed by the D range toward the high luminance side or the low luminance side, highlight detail loss/shadow detail loss easily occurs.
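The clipping mechanism of fig. 2C/2D can be reproduced with a tiny 1-D sketch (additive decomposition, hypothetical numbers): the first component is compressed, the second component is kept as-is, and their sum overflows the output D range at the bright end.

```python
# input luminance samples, input D range 0..100
signal = [80.0, 90.0, 100.0, 95.0, 85.0]
first = [sum(signal) / len(signal)] * len(signal)  # crude illumination (first) component
second = [s - f for s, f in zip(signal, first)]    # reflected-light (second) component

out_max = 50.0                                  # output D range 0..50
first_c = [f * out_max / 100.0 for f in first]  # linear D-range compression of first component
recombined = [f + h for f, h in zip(first_c, second)]
printed = [min(max(v, 0.0), out_max) for v in recombined]
# the 100 and 95 inputs both print at the upper limit: highlight detail is lost
```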
In view of the above, the present embodiment aims to suppress highlight detail loss and shadow detail loss at the time of dynamic range compression.
[ contrast correction processing ]
In the first embodiment, the contrast process of the
When Hc > 1
When Hc >1, loss of highlight detail may occur on the highlight side. Therefore, correction is made so that the second component becomes close to 1 as the value of the first component L' becomes larger. Here, the second component is corrected using the following correction coefficient P.
Hcb=(1-P(L',L'max,L'min))H+P(L',L'max,L'min)·1 ...(28)
When Hc < 1
When Hc <1, shadow detail loss may occur on the low luminance side. Therefore, correction is performed so that the second component becomes close to 1 as the value L becomes smaller. The second component is corrected using the following correction coefficient Q.
Hcb=Q(L',L'max,L'min)·H+(1-Q(L',L'max,L'min))·1 ...(29)
When Hc is 1
When Hc is 1, the second component is not corrected because the addition of the second component does not cause highlight detail loss/shadow detail loss.
The correction coefficients P and Q are calculated in the following manner.
where α, β, t1, and t2 are predetermined constants. If the first component after the D range compression is in the halftone range, the second component is not suppressed. The second component is suppressed only when the first component after the D range compression is on the high luminance side or the low luminance side.
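A sketch of this blend-toward-1 correction follows, assuming Sigmoid-shaped coefficients P and Q; the exact forms of the patent's coefficient equations are not reproduced here, and α=β=10, t1=0.8, t2=0.2 are placeholder constants of our choosing.

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def coeff_p(lp, lmax, lmin, alpha=10.0, t1=0.8):
    """Approaches 1 near the high-luminance end, ~0 in the halftones."""
    x = (lp - lmin) / (lmax - lmin)
    return _sigmoid(alpha * (x - t1))

def coeff_q(lp, lmax, lmin, beta=10.0, t2=0.2):
    """Approaches 0 near the low-luminance end, ~1 in the halftones."""
    x = (lp - lmin) / (lmax - lmin)
    return _sigmoid(beta * (x - t2))

def correct_second(hc, lp, lmax=100.0, lmin=0.0):
    if hc > 1.0:  # eq. (28): pull toward 1 near the highlight end
        p = coeff_p(lp, lmax, lmin)
        return (1.0 - p) * hc + p * 1.0
    if hc < 1.0:  # eq. (29): pull toward 1 near the shadow end
        q = coeff_q(lp, lmax, lmin)
        return q * hc + (1.0 - q) * 1.0
    return hc     # Hc == 1: no correction needed
```

In the halftones both coefficients leave the second component essentially untouched; suppression grows toward the ends of the D range.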
In step S205, the
I'=Hcb×L ...(32)
Note that when the high frequency value Hc and the low frequency value L are generated using equation (7) described in the first embodiment, the second component is corrected in such a manner as to prevent the second component corrected by the
When Hc > 0
When Hc >0, loss of highlight detail may occur on the highlight side. Therefore, correction is performed such that the absolute value of the second component becomes smaller as the value of the first component L becomes larger. Here, the second component is corrected using the correction coefficient W.
Hcb=W(L,Lmax,Lmin)Hc ...(33)
When Hc < 0
When Hc <0, shadow detail loss may occur on the low luminance side. Therefore, correction is performed so that the absolute value of the second component becomes smaller as the value L becomes smaller. Here, the second component is corrected using the correction coefficient S.
Hcb=S(L,Lmax,Lmin)Hc ...(34)
Here, the correction coefficients W and S are calculated by the following equation.
When Hc is 0
When Hc is 0, no operation is performed because the addition of the value of the second component Hc does not cause highlight detail loss/shadow detail loss.
Here, α, β, t1, and t2 are predetermined constants. If the first component is in the halftone range, the second component is not suppressed. The second component is suppressed only when the value of the first component is on the high luminance side or the low luminance side.
In addition, Lmax and Lmin are the maximum and minimum values of the input D range, respectively. Note that the correction coefficients W and S need not always be Sigmoid-type functions as described above. The function is not particularly limited as long as it makes the absolute value of the second component Hcb after correction smaller than the absolute value of the second component Hc before correction.
In addition, equations (35) and (36) may be evaluated by obtaining W(L') and S(L') from LUTs calculated in advance for each value L'. When LUTs prepared in advance are used, the processing load required for the operation can be reduced, and the processing speed can be increased.
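For this additive decomposition, the shrink-toward-0 variant can be sketched similarly, again with Sigmoid-shaped W and S and placeholder constants of our choosing (the patent's own coefficient equations are not reproduced):

```python
import math

def _sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def coeff_w(l, lmax, lmin, alpha=10.0, t1=0.8):
    """Falls toward 0 near the high-luminance end (damps positive Hc)."""
    x = (l - lmin) / (lmax - lmin)
    return 1.0 - _sigmoid(alpha * (x - t1))

def coeff_s(l, lmax, lmin, beta=10.0, t2=0.2):
    """Falls toward 0 near the low-luminance end (damps negative Hc)."""
    x = (l - lmin) / (lmax - lmin)
    return _sigmoid(beta * (x - t2))

def correct_second_add(hc, l, lmax=100.0, lmin=0.0):
    if hc > 0.0:
        return coeff_w(l, lmax, lmin) * hc  # eq. (33)
    if hc < 0.0:
        return coeff_s(l, lmax, lmin) * hc  # eq. (34)
    return hc                               # Hc == 0: nothing to do
```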
In this case, the luminance I' may be represented by the following formula.
I'=Hcb+L ...(37)
Then, the
This processing procedure is the same as that described with reference to fig. 9 in the first embodiment, and a description thereof will be omitted.
As described above, in the present embodiment, the second component of the HDR image is corrected in advance in consideration of the contrast reduction caused by the D range conversion and the gamut mapping. In addition, after the second component correction, processing is performed to prevent occurrence of highlight detail loss/shadow detail loss. When contrast correction considering contrast reduction caused by gamut mapping is performed in advance for an HDR image, the contrast can be easily maintained even after the gamut mapping.
(eighth embodiment)
The eighth embodiment will be described with reference to the flowchart of fig. 19.
Fig. 19 is a flowchart showing a processing procedure of highlight detail loss/shadow detail loss judgment. In this judgment, whether to perform highlight detail loss/shadow detail loss correction is determined based on the value of the first component L' after D-range compression and the value of the second component H before D-range compression. When the highlight detail loss/shadow detail loss judgment is performed based on the values of both the first component L' after D-range compression and the second component H, the pixels causing highlight detail loss/shadow detail loss can be specified more correctly. Further, by correcting only the pixels causing highlight detail loss/shadow detail loss, it is possible to prevent the contrast of the pixels not causing highlight detail loss/shadow detail loss from being lowered. The rest is the same as in the seventh embodiment.
In step S1001, the
More specifically, whether to perform highlight detail loss/shadow detail loss correction is determined according to the sum of the first component L' after D-range compression and the second component H. Fig. 20 shows an outline of this determination. The highlight detail loss/shadow detail loss correction judgment D range in fig. 20 is a D range determined in advance for the highlight detail loss/shadow detail loss judgment, and the buffer areas ΔW and ΔS indicate the luminance intervals between the compressed D range and the highlight detail loss/shadow detail loss correction judgment D range.
(1) When L' + H falls within the highlight detail loss/shadow detail loss correction judgment D range (pixel 20)
In this case, since no highlight detail loss/shadow detail loss occurs, no correction is performed.
(2) When L' + H falls outside the highlight detail loss/shadow detail loss correction judgment D range but within the compressed D range (pixel 21)
In this case, no highlight detail loss/shadow detail loss occurs. However, in order to prevent tone inversion caused as a result of highlight detail loss/shadow detail loss correction, the pixel is set as a target pixel of highlight detail loss/shadow detail loss correction.
(3) When L' + H falls outside the compressed D range (pixel 23)
In this case, highlight detail loss/shadow detail loss occurs. The pixel is set as a target pixel of highlight detail loss/shadow detail loss correction.
A second component correction module (not shown) in the
In the case of highlight detail loss (L' + H > Thmax)
In step S1003, the second component correction module corrects the second component to suppress highlight detail loss.
Hcb=Lmax-ΔW·exp(-αH)-L' ...(38)
In the case of shadow detail loss (L' + H < Thmin)
In step S1003, the second component correction module corrects the second component to suppress the shadow detail loss.
Hcb=Lmin+ΔS·exp(αH)-L' ...(39)
In the case other than the above case, no highlight detail loss/shadow detail loss occurs, and in step S1002, the second component correction module does not correct the second component (equation (40)).
Hcb=H ...(40)
Note that the buffer areas ΔW and ΔS in equations (38) and (39) are calculated from the following equations.
ΔW=Lmax-Thmax ...(41)
ΔS=Thmin-Lmin ...(42)
Note that a constant Hmax much larger than H may be set, and the calculation may be performed as follows.
In the case of highlight detail loss (L' + H > Thmax)
In the case of shadow detail loss (L' + H < Thmin)
In the case other than the above case, the second component is not corrected as in equation (40).
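The judgment and exponential correction of equations (38) to (42) can be sketched as follows. The shadow-side sign is assumed so that the corrected sum stays inside (Lmin, Thmin], α is an arbitrary constant, and all numeric defaults are illustrative.

```python
import math

def correct_detail_loss(h, lp, lmax=100.0, lmin=0.0,
                        th_max=90.0, th_min=10.0, alpha=0.1):
    """h: second component before D-range compression,
    lp: first component L' after D-range compression."""
    dw = lmax - th_max  # eq. (41)
    ds = th_min - lmin  # eq. (42)
    s = lp + h
    if s > th_max:      # highlight side: map the sum into [th_max, lmax)
        return lmax - dw * math.exp(-alpha * h) - lp  # eq. (38)
    if s < th_min:      # shadow side: map the sum into (lmin, th_min]
        return lmin + ds * math.exp(alpha * h) - lp   # eq. (39), sign assumed
    return h            # eq. (40): inside the judgment D range
```

The single comparison against Thmax/Thmin covers both the pixel-21 and pixel-23 cases of the judgment, since both lie outside the judgment D range.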
As described above, in the present embodiment, the highlight detail loss/shadow detail loss correction judgment is made, and only the second component requiring highlight detail loss/shadow detail loss correction is corrected. Therefore, in the case where the first component is on the high luminance side or the low luminance side, a decrease in contrast can be suppressed, while highlight detail loss/shadow detail loss, which depends on the value of the second component, hardly occurs.
(ninth embodiment)
The ninth embodiment will be described with reference to the flowchart of fig. 21. Fig. 21 shows details of highlight detail loss/shadow detail loss correction by the
In the present embodiment, in the highlight detail loss/shadow detail loss judgment described in the eighth embodiment, whether to perform highlight detail loss/shadow detail loss correction is determined based on the first component L' after D-range compression, the second component H before D-range compression, and the Just Noticeable Difference (JND) decided in step S900 (S901).
When JND is considered in the highlight detail loss/shadow detail loss correction judgment, the contrast after the second component correction can be easily perceived. For example, in fig. 20, if the widths of the buffer areas ΔW and ΔS are smaller than JND, it is difficult to perceive the luminance difference between the pixels 21 and 22 after the second component correction. This is because even if the second component is corrected so that it falls into the buffer area to prevent highlight detail loss/shadow detail loss, the contrast is still lost visually when the width of the buffer area is smaller than JND. Thus, the widths of the buffer areas ΔW and ΔS are preferably greater than JND.
The
Note that the value of JND may be calculated at the start of the program and held in memory (the
JND is a threshold that enables a person to recognize a difference. Luminance differences smaller than JND are hardly perceived.
JNDs are obtained, for example, from Barten's model as shown in fig. 22. Barten's model is a physiological model of the visual system expressed by a mathematical description. The horizontal axis in fig. 22 represents the luminance value, and the vertical axis represents the minimum contrast step perceivable by a human at that luminance value. Here, let Lj be a specific luminance value, and Lj+1 be the luminance value obtained by adding JND to Lj. The minimum contrast step mt is then defined by, for example, the following formula.
Based on equation (45), the JND at the luminance value Lj is represented by the following formula.
This indicates that a human can perceive the luminance difference when the luminance difference is equal to or greater than JND. As a model representing visual characteristics, various mathematical models such as a Weber model and a DeVries-Rose model have been proposed in addition to the Barten model. The JND may be a value obtained experimentally or empirically by sensory evaluation or the like.
In the highlight detail loss/shadow detail loss judgment, the widths of the buffer areas ΔW and ΔS in fig. 20 are determined to be equal to or greater than JND, thereby reducing the loss of visual contrast after highlight detail loss/shadow detail loss correction. That is, if the width of the buffer area is smaller than JND, it is difficult to perceive the luminance difference in the buffer area. When the width of the buffer area is equal to or greater than JND, the contrast in the buffer area is easily perceived even after the second component is corrected.
Fig. 21 shows a procedure of highlight detail loss/shadow detail loss correction processing according to the ninth embodiment. According to the eighth embodiment, step S900 for deciding the highlight detail loss/shadow detail loss correction judgment D range is added to the process shown in fig. 19, and judgment is made using the decided D range.
In step S900, the
ΔW=Lmax-Thmax≧JND(Lmax) ...(47)
ΔS=Thmin-Lmin≧JND(Lmin) ...(48)
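The JND-constrained buffer widths of equations (47) and (48) can be sketched as follows. The constant-Weber-fraction JND here is a hypothetical stand-in for the Barten-model threshold described in the text, and both function names are ours.

```python
def jnd(luminance, weber_fraction=0.02):
    """Hypothetical just-noticeable difference: a constant Weber
    fraction in place of the Barten-model value."""
    return weber_fraction * luminance

def decide_judgment_range(lmax, lmin, th_max, th_min):
    """Widen the buffer areas until eqs. (47)/(48) hold, i.e. each
    buffer is at least one JND wide; returns the adjusted Thmax, Thmin."""
    dw = max(lmax - th_max, jnd(lmax))
    ds = max(th_min - lmin, jnd(max(lmin, 1.0)))
    return lmax - dw, lmin + ds
```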
The processing after highlight detail loss/shadow detail loss correction judgment is the same as that in the eighth embodiment except that the judgment is made using the determined luminance range D1.
As described above, in the ninth embodiment, at the time of highlight detail loss/shadow detail loss correction judgment, the width of the buffer area for correcting the second component is decided in consideration of the visual characteristic JND. When the width of the buffer area is equal to or greater than JND, loss of contrast after the second component correction can be reduced.
(tenth embodiment)
In the seventh to ninth embodiments, highlight detail loss/shadow detail loss correction is performed after contrast correction is performed by the
Fig. 23 is a flowchart showing a procedure of image processing according to the present embodiment. Unlike the process of fig. 9 shown in the first embodiment, steps S203 and S204 in fig. 9 are replaced with the second component correction of step S1201 in fig. 23.
Steps S201 and S202 are the same as in fig. 9 described in the first embodiment.
Next, in step S1201, the second component is corrected in the following manner.
When H > 0
When H >0, loss of highlight detail may occur on the highlight side. Therefore, correction is performed such that the absolute value of the second component H becomes smaller as the value of the first component L' after D-range compression becomes larger. Here, the second component H is corrected using the following correction coefficient W, thereby obtaining Hcb.
Hcb=W(L',L'max,L'min)H ...(49)
When H < 0
When H <0, shadow detail loss may occur on the low luminance side. Therefore, correction is performed so that the absolute value of the second component becomes smaller as the value L' becomes smaller. The second component H is corrected using the following correction coefficient S.
Hcb=S(L',L'max,L'min)H ...(50)
When H = 0
When H is 0, the second component is not corrected since the addition of the second component does not cause highlight detail loss/shadow detail loss.
Here, the correction coefficients W and S are calculated as Sigmoid-type functions, where α, β, t1, and t2 are predetermined constants. As the position of the first component moves to the high luminance side or the low luminance side, the second component is suppressed.
Further, L'max and L'min are the maximum and minimum values, respectively, of the luminance of the compressed D range.
When a nonlinear function is applied to the correction coefficient in this way, the second component can be strongly suppressed as the position of the first component moves to the high luminance side or the low luminance side.
Note that the correction coefficients W and S are not necessarily Sigmoid-type functions as described above. Any function may be used as long as it strongly suppresses the second component as the position of the first component moves to the high luminance side or the low luminance side.
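The suppression of equations (49) and (50) can be sketched as follows. The exact Sigmoid forms of W and S and the constants α, β, t1, and t2 are not reproduced in this text, so the functions below are hypothetical stand-ins operating on luminance normalized to [0, 1], with assumed thresholds t1 = 0.8 and t2 = 0.2.

```python
import math

def w_coeff(lp, lp_max, lp_min, alpha=1.0, beta=10.0, t1=0.8):
    """Hypothetical Sigmoid-type W: ~1 in the midtones, -> 0 on the highlight side."""
    x = (lp - lp_min) / (lp_max - lp_min)            # normalize to [0, 1]
    return alpha / (1.0 + math.exp(beta * (x - t1)))

def s_coeff(lp, lp_max, lp_min, alpha=1.0, beta=10.0, t2=0.2):
    """Hypothetical Sigmoid-type S: ~1 in the midtones, -> 0 on the shadow side."""
    x = (lp - lp_min) / (lp_max - lp_min)
    return alpha / (1.0 + math.exp(-beta * (x - t2)))

def correct_second_component(h, lp, lp_max=100.0, lp_min=0.0):
    """Equations (49)/(50): shrink |H| near the ends of the compressed D range."""
    if h > 0:
        return w_coeff(lp, lp_max, lp_min) * h       # avoid highlight detail loss
    if h < 0:
        return s_coeff(lp, lp_max, lp_min) * h       # avoid shadow detail loss
    return h                                         # H == 0: no correction

# Positive detail is suppressed more at high luminance than in the midtones.
print(correct_second_component(10.0, 95.0) < correct_second_component(10.0, 50.0))
```

Any monotone pair with the same end-of-range behavior would serve, as the text notes.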
Note that equations (49) and (50) can be evaluated by obtaining W(L') and S(L') from a LUT calculated in advance for each value L'. When a LUT prepared in advance is used, the processing load required for the operation can be reduced, and the processing speed can be increased.
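The LUT shortcut can be sketched as follows. The 256-entry table, the compressed-range bounds, and the Sigmoid form of W are assumptions for illustration.

```python
import math

LP_MIN, LP_MAX, BINS = 0.0, 100.0, 256   # assumed range and quantization

def w_coeff(lp, alpha=1.0, beta=10.0, t1=0.8):
    """Hypothetical Sigmoid-type coefficient; ~1 in midtones, -> 0 at highlights."""
    x = (lp - LP_MIN) / (LP_MAX - LP_MIN)
    return alpha / (1.0 + math.exp(beta * (x - t1)))

# Precompute the table once, before per-pixel processing.
w_lut = [w_coeff(LP_MIN + (LP_MAX - LP_MIN) * i / (BINS - 1)) for i in range(BINS)]

def w_from_lut(lp):
    """Replace the per-pixel exponential with a table lookup."""
    i = round((lp - LP_MIN) / (LP_MAX - LP_MIN) * (BINS - 1))
    return w_lut[min(max(i, 0), BINS - 1)]

# The lookup closely matches direct evaluation at midtone luminance.
print(abs(w_from_lut(50.0) - w_coeff(50.0)) < 1e-2)
```

The same table construction applies to S(L'); finer quantization trades memory for accuracy.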
Note that the correction coefficients W and S may be calculated using the value of the first component L before D-range compression. When the first component L before D-range compression is used, the D-range compression and the second component correction processing can be performed in parallel, improving calculation efficiency. Letting Lmax and Lmin be the maximum and minimum values of the luminance of the D range before compression, the correction of the second component in this case is performed in the following manner.
When H > 0
Hcb=W(L,Lmax,Lmin)H ...(53)
When H < 0
Hcb=S(L,Lmax,Lmin)H ...(54)
When H = 0
No operation is performed.
Step S205 is performed as in the first embodiment.
(eleventh embodiment)
An eleventh embodiment of the present invention will be described. The processing procedure is the same as that in fig. 11 described in the second embodiment and will be described with reference to it. Descriptions of parts overlapping the above-described embodiments are omitted, and only differences are described. The processing of steps S401 to S403 is the same as in the second embodiment.
In step S404, the second component is corrected in the following manner.
Note that, in the case where the high frequency value Hc and the low frequency value L are generated using equation (6), the second component is corrected in the following manner.
When Hc > 1
When Hc >1, loss of highlight detail may occur on the highlight side. Therefore, correction is made so that the second component becomes close to 1 as the value of the first component L' becomes larger. Here, the second component is corrected using the following correction coefficient P.
Hcb = (1 - P(L', L'max, L'min)) × Hc + P(L', L'max, L'min) × 1 ...(55)
When Hc < 1
When Hc < 1, shadow detail loss may occur on the low luminance side. Therefore, correction is performed so that the second component becomes close to 1 as the value L' becomes smaller. The second component is corrected using the following correction coefficient Q.
Hcb = Q(L', L'max, L'min) × Hc + (1 - Q(L', L'max, L'min)) × 1 ...(56)
When Hc = 1
When Hc is 1, the second component is not corrected because the addition of the second component does not cause highlight detail loss/shadow detail loss.
The correction coefficients P and Q are calculated in the following manner.
Here, α, β, t1, and t2 are predetermined constants. If the first component after the D range compression has a halftone value, the second component is not suppressed; the second component is suppressed only when the first component after the D range compression is on the high luminance side or the low luminance side.
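Equations (55) and (56) pull the ratio-type second component toward its neutral value 1 near the ends of the D range. A sketch follows; the sigmoid forms of P and Q and the constants β, t1, and t2 are hypothetical, chosen so that P rises toward 1 only on the highlight side and Q falls toward 0 only on the shadow side.

```python
import math

def p_coeff(lp, lp_max, lp_min, beta=10.0, t1=0.8):
    """Hypothetical blend weight for (55): ~0 in midtones, -> 1 at highlights."""
    x = (lp - lp_min) / (lp_max - lp_min)
    return 1.0 / (1.0 + math.exp(-beta * (x - t1)))

def q_coeff(lp, lp_max, lp_min, beta=10.0, t2=0.2):
    """Hypothetical blend weight for (56): ~1 in midtones, -> 0 at shadows."""
    x = (lp - lp_min) / (lp_max - lp_min)
    return 1.0 / (1.0 + math.exp(-beta * (x - t2)))

def correct_ratio_component(hc, lp, lp_max=100.0, lp_min=0.0):
    """Pull Hc toward the neutral value 1 near the ends of the D range."""
    if hc > 1:
        p = p_coeff(lp, lp_max, lp_min)
        return (1.0 - p) * hc + p * 1.0       # equation (55)
    if hc < 1:
        q = q_coeff(lp, lp_max, lp_min)
        return q * hc + (1.0 - q) * 1.0       # equation (56)
    return hc                                  # Hc == 1: nothing to do

# Midtone detail is kept almost intact; highlight detail is pulled toward 1.
print(correct_ratio_component(1.5, 95.0) < correct_ratio_component(1.5, 50.0))
```

Because the neutral value of a ratio-type component is 1 rather than 0, the correction blends toward 1 instead of shrinking an absolute value.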
In step S405, the corrected second component and the first component are combined to obtain the luminance I':
I'=Hcb×L ...(59)
Note that when equation (7) is used to generate the high frequency value Hc and the low frequency value L, the second component is corrected in the following manner so that the corrected second component does not cause highlight detail loss/shadow detail loss.
When Hc > 0
When Hc >0, loss of highlight detail may occur on the highlight side. Therefore, correction is performed such that the absolute value of the second component becomes smaller as the value of the first component L becomes larger. Here, the second component is corrected using the correction coefficient W.
Hcb=W(L,Lmax,Lmin)Hc ...(60)
When Hc < 0
When Hc <0, shadow detail loss may occur on the low luminance side. Therefore, correction is performed so that the absolute value of the second component becomes smaller as the value L becomes smaller. Here, the second component is corrected using the correction coefficient S.
Hcb=S(L,Lmax,Lmin)Hc ...(61)
When Hc = 0
In this case, since adding the value of the second component Hc does not cause highlight detail loss/shadow detail loss, no operation is performed.
Here, the correction coefficients W and S are calculated as Sigmoid-type functions, where α, β, t1, and t2 are predetermined constants. If the first component has a halftone value, the second component is not suppressed; the second component is suppressed only when the value of the first component is on the high luminance side or the low luminance side.
In addition, Lmax and Lmin are the maximum and minimum values of the input D range, respectively. Note that the correction coefficients W and S are not necessarily Sigmoid-type functions as described above. The function is not particularly limited as long as it makes the absolute value of the corrected second component Hcb smaller than the absolute value of the second component Hc before correction.
In addition, equations (62) and (63) can be evaluated by obtaining W(L) and S(L) from a LUT calculated in advance for each value L. When a LUT prepared in advance is used, the processing load required for the operation can be reduced, and the processing speed can be increased.
In this case, the luminance I' may be represented by the following formula.
I'=Hcb+L ...(64)
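The difference-type branch (equations (60), (61), and (64)) can be sketched end to end: the input luminance is split as I = L + Hc, Hc is suppressed near the ends of the input D range, and the output is recombined as I' = Hcb + L. The D-range bounds and the sigmoid constants below are assumptions.

```python
import math

L_MIN, L_MAX = 0.0, 1000.0   # assumed input D range

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def w_coeff(l, beta=10.0, t1=0.8):
    """Hypothetical W of equation (62): -> 0 on the highlight side."""
    x = (l - L_MIN) / (L_MAX - L_MIN)
    return 1.0 - sigmoid(beta * (x - t1))

def s_coeff(l, beta=10.0, t2=0.2):
    """Hypothetical S of equation (63): -> 0 on the shadow side."""
    x = (l - L_MIN) / (L_MAX - L_MIN)
    return sigmoid(beta * (x - t2))

def recombine(l, hc):
    """Equations (60)/(61) to correct Hc, then (64) to recombine: I' = Hcb + L."""
    if hc > 0:
        hcb = w_coeff(l) * hc    # damp positive detail near the highlights
    elif hc < 0:
        hcb = s_coeff(l) * hc    # damp negative detail near the shadows
    else:
        hcb = hc                 # Hc == 0: no operation
    return hcb + l

# Near the top of the range, positive detail is damped, so the output
# stays below the uncorrected sum L + Hc.
print(recombine(950.0, 40.0) < 950.0 + 40.0)
```

In the midtones both coefficients are close to 1, so I' is close to the uncorrected I.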
Then, the subsequent processing is performed as in the second embodiment.
OTHER EMBODIMENTS
The embodiments of the present invention can also be realized by supplying software (programs) that implement the functions of the above-described embodiments to a system or an apparatus via a network or various storage media, and causing a computer, central processing unit (CPU), or micro processing unit (MPU) of the system or the apparatus to read out and execute the programs.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.