Image processing apparatus, image processing method, and medium

Document No.: 1601551    Publication date: 2020-01-07

Note: This technology, "Image processing apparatus, image processing method, and medium" (图像处理设备、图像处理方法和介质), was devised by 香川英嗣, 小川修平, 矢泽真耶, 村泽孝大, and 诹访徹哉 on 2019-06-28. Abstract: The invention provides an image processing apparatus, an image processing method and a medium. The image processing apparatus includes: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus; a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and a correction unit that corrects the luminance of the input image, wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that intensity of the correction becomes higher for colors that are not included in the color reproduction range of the printing apparatus than for colors that are included in the color reproduction range of the printing apparatus.

1. An image processing apparatus comprising:

an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus;

a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and

a correction unit configured to correct luminance of the input image,

wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.

2. The image processing apparatus according to claim 1, further comprising an extraction unit configured to extract a high-frequency component from the luminance of an image,

wherein the extraction unit extracts a first high-frequency component from the luminance obtained by the obtaining unit and a second high-frequency component from the luminance obtained by the conversion unit, and

the correction unit corrects the luminance of the input image based on a conversion characteristic between the first high-frequency component and the second high-frequency component.

3. The image processing apparatus according to claim 2, wherein the extraction unit is configured to:

extracting the first high-frequency component by generating, by a filtering unit, a first low-frequency component from the luminance of the input image and subtracting the first low-frequency component from the luminance of the input image, and

extracting the second high-frequency component by generating, by the filtering unit, a second low-frequency component from the luminance of the image obtained by the conversion unit and subtracting the second low-frequency component from the luminance of the image obtained by the conversion unit.

4. The image processing apparatus according to claim 3, wherein the correction unit decides the strength of the correction by subtracting the second high-frequency component from the first high-frequency component.

5. The image processing apparatus according to claim 2, wherein the extraction unit is configured to:

extracting the first high-frequency component by generating, by a filtering unit, a first low-frequency component from the luminance of the input image and dividing the luminance of the input image by the first low-frequency component, and

extracting the second high-frequency component by generating, by the filtering unit, a second low-frequency component from the luminance of the image obtained by the conversion unit and dividing the luminance of the image obtained by the conversion unit by the second low-frequency component.

6. The image processing apparatus according to claim 5, wherein the correction unit decides the strength of the correction by dividing the first high-frequency component by the second high-frequency component.

7. The image processing apparatus according to claim 2, wherein a reflected light component is used as the high-frequency component of the luminance, and an illumination light component is used as the low-frequency component of the luminance.

8. The image processing apparatus according to claim 1, wherein the correction by the correction unit is performed on the input image, and the same conversion processing as that by the conversion unit is applied to the corrected image.

9. The image processing apparatus according to claim 1, wherein the same conversion processing as that by the conversion unit is applied to the input image, and the correction by the correction unit is performed on the converted image.

10. The image processing apparatus according to any one of claims 1 to 9, wherein the conversion processing by the conversion unit includes dynamic range compression processing and gamut mapping processing.

11. The image processing apparatus according to claim 1, further comprising:

an input unit configured to input information on an observation condition when observing an image printed on a sheet by the printing apparatus based on data representing the input image; and

a decision unit configured to decide a contrast characteristic relating to a degree of expression of contrast in the printed image based on the information relating to the observation condition input by the input unit,

wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit and the contrast characteristic decided by the decision unit so that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.

12. The image processing apparatus according to claim 1, wherein the correction unit corrects the high-frequency component of the image based on the luminance of the low-frequency component of the image such that the luminance of the high-frequency component of the image after the conversion processing is included in a luminance range of the image after the correction by the correction unit.

13. The image processing apparatus according to claim 1, further comprising a determination unit configured to determine whether or not to perform the correction by the correction unit, based on a luminance of a low-frequency component of the image after the conversion processing and a luminance of a high-frequency component of the image after the conversion processing.

14. An image processing apparatus comprising:

an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus;

a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and

a correction unit configured to correct luminance of the input image to suppress a decrease in contrast of the input image,

wherein the correction unit corrects the high-frequency component of the image based on the luminance of the low-frequency component of the image so that the luminance of the high-frequency component of the image after the conversion processing is included in the luminance range of the image after the correction by the correction unit.

15. An image processing method comprising:

obtaining a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus;

performing conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtaining a luminance of the converted image; and

the luminance of the input image is corrected,

wherein in the correction, correction of the luminance of the input image is performed based on a conversion characteristic between the luminance obtained in the obtaining and the luminance obtained when the conversion processing is performed, so that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.

16. A medium storing a program that causes a computer to function as:

an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus;

a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and

a correction unit configured to correct luminance of the input image,

wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.

17. An image processing method comprising:

obtaining a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus;

performing conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtaining a luminance of the converted image; and

correcting the luminance of the input image to suppress a decrease in contrast of the input image,

wherein in the correction, the high-frequency component of the image is corrected based on the luminance of the low-frequency component of the image so that the luminance of the high-frequency component of the image after the conversion processing is included in the luminance range of the image after the correction in the correction.

18. A medium storing a program that causes a computer to function as:

an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus;

a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and

a correction unit configured to correct luminance of the input image to suppress a decrease in contrast of the input image,

wherein the correction unit corrects the high-frequency component of the image based on the luminance of the low-frequency component of the image so that the luminance of the high-frequency component of the image after the conversion processing is included in the luminance range of the image after the correction by the correction unit.

Technical Field

The invention relates to an image processing apparatus, an image processing method, and a medium.

Background

In recent years, HDR (high dynamic range) content having high luminance and a wide color-gamut reproduction range has become popular. HDR content is represented using the color gamut of BT.2020 (Rec. 2020) and a peak luminance of 1,000 nits (1,000 cd/m²) or higher. When a printing apparatus prints HDR image data, the dynamic range of luminance (hereinafter referred to as the "D range") must be compressed, by D-range compression using a tone curve or the like, into a dynamic range that the printing apparatus can reproduce. For example, as shown in Fig. 1, the contrast of a high-luminance region is reduced, thereby performing D-range compression. Japanese Patent Application Laid-Open No. 2011-86976, for example, discloses image processing for correcting the contrast reduction that occurs when D-range compression is performed.

Gamut mapping to the color gamut of the printing apparatus must then be performed on the image data that has been D-range compressed to the luminance range of the printing apparatus. Fig. 2A shows the color gamut of BT.2020 within a luminance range of 1,000 nits, and Fig. 2B shows the color gamut of the printing apparatus. In Figs. 2A and 2B, the horizontal axis represents the y value of xy chromaticity, and the vertical axis represents luminance. Comparing the color gamut of BT.2020 with that of the printing apparatus, the gamut shapes are not similar because the color materials used are different. Therefore, when HDR content is printed by the printing apparatus, the degree of luminance compression must be changed according to chromaticity, rather than compressing the D range uniformly.

In a case where the shape of the gamut of the input image data and the shape of the gamut of the printing apparatus differ greatly, the decrease in contrast caused by the difference between the color reproduction ranges cannot be sufficiently corrected even when contrast correction is performed using the method of Japanese Patent Laid-Open No. 2011-86976.

Disclosure of Invention

According to an aspect of the present invention, there is provided an image processing apparatus including: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus; a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and a correction unit configured to correct the luminance of the input image, wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.

According to another aspect of the present invention, there is provided an image processing apparatus comprising: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus; a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and a correction unit configured to correct luminance of the input image to suppress a decrease in contrast of the input image, wherein the correction unit corrects the high-frequency component of the image based on luminance of the low-frequency component of the image so that luminance of the high-frequency component of the image after the conversion processing is included in a luminance range of the image after the correction by the correction unit.

According to another aspect of the present invention, there is provided an image processing method including: obtaining a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus; performing conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtaining a luminance of the converted image; and correcting the luminance of the input image, wherein in the correction, the correction of the luminance of the input image is performed based on a conversion characteristic between the luminance obtained in the obtaining and the luminance obtained when the conversion processing is performed, so that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.

According to another aspect of the present invention, there is provided a medium storing a program for causing a computer to function as: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus; a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and a correction unit configured to correct the luminance of the input image, wherein the correction unit performs correction of the luminance of the input image based on a conversion characteristic between the luminance obtained by the obtaining unit and the luminance obtained by the conversion unit such that the intensity of the correction becomes higher for colors not included in the color reproduction range of the printing apparatus than for colors included in the color reproduction range of the printing apparatus.

According to another aspect of the present invention, there is provided an image processing method including: obtaining a luminance of an input image having a color reproduction range wider than a color reproduction range of a printing apparatus; performing conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtaining a luminance of the converted image; and correcting the luminance of the input image to suppress a decrease in contrast of the input image, wherein in the correction, the high-frequency component of the image is corrected based on the luminance of the low-frequency component of the image so that the luminance of the high-frequency component of the image after the conversion processing is included in the luminance range of the image after the correction in the correction.

According to another aspect of the present invention, there is provided a medium storing a program for causing a computer to function as: an obtaining unit configured to obtain a luminance of an input image having a color reproduction range wider than a color reproduction range of the printing apparatus; a conversion unit configured to perform conversion processing for obtaining a value included in a color reproduction range of the printing apparatus on the input image, and obtain a luminance of the converted image; and a correction unit configured to correct luminance of the input image to suppress a decrease in contrast of the input image, wherein the correction unit corrects the high-frequency component of the image based on luminance of the low-frequency component of the image so that luminance of the high-frequency component of the image after the conversion processing is included in a luminance range of the image after the correction by the correction unit.

According to the present invention, it is possible to provide contrast correction that takes into account a decrease in contrast due to a difference in color reproduction range between input and output.

Other features of the present invention will become apparent from the following description of exemplary embodiments (with reference to the attached drawings).

Drawings

Fig. 1 is a diagram for explaining D range conversion;

fig. 2A, 2B, 2C, and 2D are diagrams for explaining a difference in color gamut between BT.2020 and a printing apparatus;

fig. 3 is a block diagram showing an example of a hardware structure of a system according to the present invention;

fig. 4 is a block diagram showing an example of a software structure related to contrast correction according to the present invention;

fig. 5 is a diagram for explaining gamut mapping according to the present invention;

fig. 6 is a diagram for explaining a gaussian filter;

fig. 7 is a diagram for explaining a visual transfer function according to the present invention;

fig. 8 is a flowchart illustrating the processing of the output image characteristic obtaining module according to the present invention;

fig. 9 is a flowchart illustrating the processing of the contrast correction module according to the present invention;

fig. 10 is a flowchart showing a contrast correction method according to the first embodiment;

fig. 11 is a flowchart showing a contrast correction method according to the second embodiment;

fig. 12 is a flowchart showing a contrast correction method according to the third embodiment;

fig. 13 is a flowchart showing a contrast correction method according to the fourth embodiment;

fig. 14 is a diagram for explaining a correction intensity generating method according to the fifth embodiment;

fig. 15 is a schematic diagram of an example of a UI configuration screen according to the sixth embodiment;

fig. 16 is a block diagram showing an example of a software structure relating to contrast correction according to the sixth embodiment;

fig. 17 is a diagram showing an example of a luminance-high sensitivity frequency conversion table according to the sixth embodiment;

fig. 18 is a diagram showing a table of high-sensitivity frequencies for respective luminances according to the sixth embodiment;

fig. 19 is a flowchart showing a processing procedure according to the eighth embodiment;

fig. 20 is an explanatory diagram of correction judgment in the process according to the eighth embodiment;

fig. 21 is a flowchart showing a processing procedure according to the ninth embodiment;

fig. 22 is a diagram for illustrating a modeling method of contrast sensitivity as used in the ninth embodiment; and

fig. 23 is a flowchart showing a processing procedure according to the tenth embodiment.

Detailed Description

< first embodiment >

[ System Structure ]

Fig. 3 is a block diagram showing an example of the structure of a system to which the present invention can be applied. The system includes an image processing apparatus 300 and a printing apparatus 310. The image processing apparatus 300 is formed of a host PC or the like serving as an information processing apparatus. The image processing apparatus 300 includes a CPU 301, a RAM 302, an HDD 303, a display I/F304, an operation unit I/F305, and a data transfer I/F306, and these components are communicably connected through an internal bus.

The CPU 301 executes various processes using the RAM 302 as a work area according to programs held by the HDD 303. The RAM 302 is a volatile storage area, and is used as a work memory or the like. The HDD 303 is a nonvolatile storage area, and holds a program and an OS (operating system) and the like according to the present embodiment. The display I/F304 is an interface configured to perform data transmission/reception between the display 307 and the main body of the image processing apparatus 300. The operation unit I/F305 is an interface configured to input an instruction input using an operation unit 308 such as a keyboard or a mouse to the main body of the image processing apparatus 300. The data transfer I/F306 is an interface configured to transmit/receive data with respect to an external device.

For example, the CPU 301 generates image data printable by the printing apparatus 310 according to an instruction (command or the like) input by the user using the operation unit 308 or a program held by the HDD 303, and transfers the image data to the printing apparatus 310. In addition, the CPU 301 performs predetermined processing on image data received from the printing apparatus 310 via the data transfer I/F306 according to a program stored in the HDD 303, and displays the result or various information on the display 307.

The printing apparatus 310 includes an image processing accelerator 311, a data transfer I/F 312, a CPU 313, a RAM 314, a ROM 315, and a printing unit 316, and these components are communicably connected via an internal bus. Note that the printing method of the printing apparatus 310 is not particularly limited. For example, an inkjet printing apparatus or an electrophotographic printing apparatus may be used. The following description uses an inkjet printing apparatus as an example.

The CPU 313 executes various processes using the RAM 314 as a work area, according to programs held in the ROM 315. The RAM 314 is a volatile storage area and is used as a work memory or the like. The ROM 315 is a nonvolatile storage area and holds the programs according to the present embodiment, an OS (operating system), and the like. The data transfer I/F 312 is an interface configured to transmit/receive data to/from an external device. The image processing accelerator 311 is hardware capable of performing image processing at a higher speed than the CPU 313. The image processing accelerator 311 is started when the CPU 313 writes the parameters and data necessary for image processing to a predetermined address of the RAM 314; after the parameters and data are loaded, it performs predetermined image processing on the data. However, the image processing accelerator 311 is not an indispensable element, and the CPU 313 may perform equivalent processing instead. The printing unit 316 executes a printing operation based on instructions from the image processing apparatus 300.

The connection method of the data transfer I/F306 of the image processing apparatus 300 and the data transfer I/F312 of the printing apparatus 310 is not particularly limited. For example, USB (universal serial bus), IEEE 1394, or the like can be used. Further, the connection may be wired or wireless.

[ correction of contrast ]

The contrast correction according to the present embodiment will be described in detail below. It is predetermined image processing performed when the printing apparatus 310 prints HDR image data. As described above, in the present embodiment the color reproduction range of the input image (e.g., HDR image data) differs from the color reproduction range of the printing apparatus 310 used for printing, and the range of reproducible colors of the input image is wider.

Fig. 4 is a block diagram showing an example of a software configuration for performing image processing related to contrast correction when the printing apparatus 310 prints HDR image data. In the present embodiment, when the CPU 301 reads out a program stored in the HDD 303 and executes the program, the respective modules illustrated in fig. 4 are realized. The image processing apparatus 300 includes an image input module 401, a D range conversion module 402, a gamut mapping module 403, an image output module 404, an input image characteristic obtaining module 405, an output image characteristic obtaining module 406, and a contrast correction module 407. Note that the modules shown here represent modules regarding processing related to contrast correction, and the image processing apparatus 300 may also include modules configured to perform other image processing.

The image input module 401 obtains HDR image data. The data may be obtained from the HDD 303, or from an external apparatus via the data transfer I/F 306. In the present embodiment, RGB data whose D range has a peak luminance of 1,000 nits and whose color space is BT.2020 will be described as an example of HDR image data.

The D-range conversion module 402 performs D-range compression of the luminance of the image data input to it into a predetermined luminance range, using a means such as a one-dimensional lookup table (hereinafter referred to as a 1DLUT). In the present embodiment, the D-range compression is performed using the curve shown in Fig. 1, where the horizontal axis represents the input luminance to be compressed and the vertical axis represents the luminance after compression. Based on the compression characteristic shown in Fig. 1, HDR image data having a luminance range of 1,000 nits is compressed into the 100-nit luminance range that the printing apparatus 310 can handle.

For the image data input to it, the gamut mapping module 403 performs gamut mapping to the gamut of the printing apparatus 310 using a method such as a three-dimensional LUT (hereinafter referred to as a 3DLUT). Fig. 5 is a diagram for explaining gamut mapping according to the present embodiment. In Fig. 5, the horizontal axis represents Cr of the YCbCr color space, and the vertical axis represents luminance Y. The input gamut 501 of the image data input to the gamut mapping module 403 undergoes gamut mapping to the output gamut 502, which is the gamut of the printing apparatus 310. An input color (Y, Cb, Cr) is converted into (Y', Cb', Cr'). If the input color is in a color space other than YCbCr, it is first converted into the YCbCr color space, and gamut mapping is then performed. In the example shown in Fig. 5, the input gamut 501 and the output gamut 502 do not have similar shapes.

The primaries 503 and 506 of the input gamut 501 are mapped to the primaries 504 and 505, respectively, of the output gamut 502. Although the primary colors 503 and 506 have the same luminance value, the primary colors 504 and 505 after gamut mapping have different luminance values. In this way, in the case where the input and output gamuts in gamut mapping do not have similar shapes, even if the input luminance values are the same, the colors are mapped to different output luminance values according to the hues.

In addition, the out-of-gamut region 507 indicated by hatching in Fig. 5 is the part of the gamut that cannot be expressed by the printing apparatus 310: it is included in the input gamut 501 but not in the output gamut 502. The in-gamut region 508, on the other hand, is included in both the input gamut 501 and the output gamut 502. The out-of-gamut region 507 is compressed more strongly than the in-gamut region 508 when mapped into the output gamut 502. For example, among the input colors, the two-color contrast 509 is mapped to the contrast 511, whereas the contrast 510 remains the same before and after mapping. That is, the change caused by mapping is smaller for the contrast 510 than for the contrast 511. In other words, the conversion characteristic differs between conversion within the in-gamut region 508 and conversion from the out-of-gamut region 507 into the in-gamut region 508. Since colors outside the output gamut are compressed and mapped more strongly than colors inside it, the contrast of colors outside the output gamut becomes lower.
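The 3DLUT lookup described above can be sketched as follows. The LUT contents here are a deliberately simplified stand-in (the table merely clips luminance to 100 nits and passes chroma through); a real printer LUT would also compress out-of-gamut chroma more strongly than in-gamut chroma, as Fig. 5 illustrates.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy 3DLUT over (Y, Cb, Cr): each node stores a mapped (Y', Cb', Cr').
# As a stand-in for a measured printer gamut, the table only clips Y to
# 0-100 nits and passes Cb/Cr through unchanged.
N = 17
y_axis = np.linspace(0.0, 1000.0, N)
c_axis = np.linspace(-0.5, 0.5, N)
grid = np.stack(np.meshgrid(y_axis, c_axis, c_axis, indexing="ij"), axis=-1)
nodes = grid.copy()
nodes[..., 0] = np.clip(grid[..., 0], 0.0, 100.0)  # clip luminance only

lut3d = RegularGridInterpolator((y_axis, c_axis, c_axis), nodes)

def gamut_map(ycbcr):
    # Trilinear interpolation between the 17x17x17 LUT nodes.
    return lut3d(ycbcr)

mapped = gamut_map(np.array([[500.0, 0.1, -0.2], [50.0, 0.0, 0.3]]))
```

A 17-node-per-axis grid is a common LUT resolution choice; between nodes, the interpolator blends the eight surrounding node values, which is how a 3DLUT realizes a chromaticity-dependent compression with a compact table.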

The input image characteristic obtaining module 405 generates (extracts) high-frequency values of the image data input to the image input module 401. First, the module calculates the luminance of the input image data. If the input image data is RGB data (R: red, G: green, B: blue), it can be converted into YCbCr by equations (1) to (3). Note that the RGB-to-YCbCr conversion formulas shown below are merely examples, and other conversion formulas may be used. In the following formulas, "·" represents multiplication.

Y=0.299·R+0.587·G+0.114·B ...(1)

Cb=-0.1687·R-0.3313·G+0.5·B ...(2)

Cr=0.5·R-0.4187·G-0.0813·B ...(3)
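Equations (1) to (3) can be sketched per pixel as follows. This is a minimal illustration assuming normalized floating-point RGB values; the function name is not part of the embodiment.

```python
# Sketch of the RGB-to-YCbCr conversion of equations (1)-(3).
# Inputs are assumed to be normalized floats in [0, 1].
def rgb_to_ycbcr(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b     # equation (1)
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b     # equation (2)
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b     # equation (3)
    return y, cb, cr
```

For an achromatic input such as white (1, 1, 1), the luminance Y is 1 and both color-difference values are approximately 0, consistent with the coefficients summing to 1, 0, and 0.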

Further, the input image characteristic obtaining module 405 generates a high frequency value from the calculated luminance (Y value). To generate the high frequency value, the low frequency value is first calculated. The low frequency value is generated by performing a filtering process on the luminance. The filtering process will be described using a gaussian filter for performing the smoothing process as an example with reference to fig. 6. In fig. 6, the filter size is 5 × 5, and a coefficient 601 is set for each of 25 pixels. Let x be the horizontal direction of the image and y be the vertical direction. The pixel value at the coordinates (x, y) is p (x, y), and the filter coefficient is f (x, y). For each pixel of interest p' (x, y), the filtering process is performed by the method represented by equation (4). The calculation of equation (4) is performed every time the filter scans the image data with the pixel of interest 602 as the center. When the scanning of all the pixels is completed, a low frequency value is obtained.

p'(x,y)={1/Σf(x,y)}·Σ{f(x,y)×p(x,y)} ...(4)

In the present embodiment, a Gaussian shape has been exemplified as the filter characteristic. However, the present invention is not limited thereto. For example, an edge-preserving filter such as a bilateral filter may be used. When an edge-preserving filter is used, halo artifacts occurring at edge portions at the time of contrast correction can be reduced.
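The filtering process of equation (4) can be sketched as follows. This is an illustrative implementation under assumptions not stated in the text: the kernel weights stand in for the coefficients 601 of fig. 6, and image borders are handled by clamping coordinates.

```python
# Sketch of equation (4): a smoothing kernel scanned over the luminance plane,
# normalized by the sum of the filter coefficients f(x, y).
def smooth(lum, kernel):
    h, w = len(lum), len(lum[0])
    k = len(kernel) // 2
    norm = sum(sum(row) for row in kernel)      # sum of f(x, y)
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    # clamp coordinates at the image border (an assumption)
                    sy = min(max(y + dy, 0), h - 1)
                    sx = min(max(x + dx, 0), w - 1)
                    acc += kernel[dy + k][dx + k] * lum[sy][sx]
            out[y][x] = acc / norm              # equation (4)
    return out
```

On a uniform luminance plane, the output equals the input, since the weighted average of a constant is that constant.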

Fig. 7 is a diagram showing a visual transfer function VTF for spatial frequencies. The visual transfer function VTF shown in fig. 7 represents the change in visual sensitivity, represented by the vertical axis, as the spatial frequency represented by the horizontal axis changes: the higher the visual sensitivity, the higher the transfer characteristic. As can be seen from the visual transfer function VTF, high transfer characteristics of about 0.8 or more are obtained at spatial frequencies of 0.5 cycles/mm or more. Note that, in the example shown in fig. 7, the visual sensitivity falls below 0.8 when the spatial frequency is 2 cycles/mm or more. The frequency targeted by contrast correction is preferably a frequency of high visual sensitivity. That is, high frequency here means 0.5 cycles/mm or more, the range including the peak sensitivity, and low frequency means less than 0.5 cycles/mm. In the present embodiment, the high-frequency component and the low-frequency component are obtained from the luminance based on this premise.

Let I be the luminance, H the high frequency value, and L the low frequency value for each pixel. The high frequency value H is calculated by the following formula.

H=I/L ......(5)

In the present embodiment, the high-frequency value H and the low-frequency value L of the luminance I will be described as values equal to the value Re of the reflected light and the value Li of the illumination light, respectively. Here, the illumination light refers to an illumination light component included in the luminance component, and the reflected light refers to a reflected light component included in the luminance component. That is, description will be made using the high frequency value H as a value representing the intensity of the high frequency component and also using the low frequency value L as a value representing the intensity of the low frequency component.

As with the low frequency value, the value of the illumination light can be generated by performing filter processing. In addition, when an edge-preserving filter is used, the value of the illumination light at edge portions can be generated more accurately. The value Re of the reflected light and the value Li of the illumination light are related by the following equation.

Re=I/Li ......(6)

As shown by equation (5), the high frequency value H is generated by dividing the input image by the low frequency value. However, the present invention is not limited thereto. For example, as shown in equation (7), the high frequency value H may be generated by subtracting the low frequency value from the input image. This also applies to the case where the value of the reflected light and the value of the illumination light are used.

H=I-L ......(7)
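The two ways of generating the high frequency value, the ratio form of equation (5) and the difference form of equation (7), can be sketched per pixel as follows. The eps guard on the division is an implementation assumption, not part of the embodiment.

```python
# Sketch of equations (5) and (7): the high-frequency value H derived from
# the luminance I and its low-frequency value L.
def high_frequency(i, l, use_ratio=True, eps=1e-6):
    if use_ratio:
        return i / max(l, eps)   # equation (5): H = I / L
    return i - l                 # equation (7): H = I - L
```

The same pair of forms applies when the value of the reflected light and the value of the illumination light (equation (6)) are used in place of H and L.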

The output image characteristic obtaining module 406 generates a high frequency value of the color system to be output by the printing apparatus 310. That is, the output image characteristic obtaining module 406 obtains high frequency values within the color gamut reproducible by the printing apparatus 310. The generation method will be described later with reference to the flowchart of fig. 8.

The contrast correction module 407 decides contrast correction intensity based on the high frequency values generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406, and performs contrast correction processing on the high frequency value of the image data input to the contrast correction module 407. In the present embodiment, description will be made assuming that the contrast of an image is corrected by correcting the intensity of a high-frequency value. The correction method will be described later with reference to the flowchart of fig. 9.

The image output module 404 performs image processing for output by the printing apparatus 310. The image data subjected to gamut mapping by the gamut mapping module 403 is separated into the ink colors to be printed by the printing device 310. The image output module 404 also performs the image processing required for output by the printing apparatus 310, such as quantization processing for converting the image data into binary data representing ink discharge/non-discharge using dithering or error diffusion processing.

(high frequency value generation processing)

Details of the processing performed by the output image characteristic obtaining module 406 for generating a high frequency of the color system to be output by the printing apparatus 310 will be described with reference to fig. 8.

In step S101, the output image characteristic obtaining module 406 causes the D range conversion module 402 to perform D range conversion on the image data input to the image input module 401.

In step S102, the output image characteristic obtaining module 406 causes the gamut mapping module 403 to perform gamut mapping on the image data subjected to the D range compression in step S101.

In step S103, the output image characteristic obtaining module 406 generates a high frequency value H' from the image data subjected to the gamut mapping in step S102. To generate the high frequency value, the output image characteristic obtaining module 406 calculates luminance, and further calculates a low frequency value of the calculated luminance in the same manner as the input image characteristic obtaining module 405. The output image characteristic obtaining module 406 then calculates the high frequency value according to equation (5) based on the low frequency value and the luminance. The process then ends.
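The flow of steps S101 to S103 can be sketched structurally as follows. The helper functions passed in are placeholders for the D-range conversion, gamut mapping, and frequency separation described above; their names and signatures are assumptions for illustration only.

```python
# Structural sketch of steps S101-S103: the output-side high-frequency value
# H' is computed from the image after D-range conversion and gamut mapping.
# All callables are placeholders, not APIs of the embodiment.
def output_high_frequency(image, d_range_convert, gamut_map,
                          luminance_of, low_frequency_of):
    converted = gamut_map(d_range_convert(image))   # S101, S102
    i = luminance_of(converted)                     # luminance of converted image
    l = low_frequency_of(i)                         # low-frequency value
    return [iv / lv for iv, lv in zip(i, l)]        # S103: equation (5)
```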

The D-range compression processing and the gamut mapping processing here have the same contents as those of the D-range conversion and the gamut mapping processing performed in the processing shown in fig. 10 described later, but are performed for different purposes. Note that in the following description, the D-range compression process and the gamut mapping process will sometimes be collectively referred to as a conversion process.

(contrast correction processing)

Details of the contrast correction processing of the contrast correction module 407 will be described with reference to fig. 9.

In step S201, the contrast correction module 407 converts the input image data into a YCbCr color space. If the input color space is an RGB color space, the RGB color space is converted into a YCbCr color space according to equations (1) to (3).

In step S202, the contrast correction module 407 obtains a luminance value I from the data of the YCbCr color space generated in step S201, and calculates a high-frequency value H and a low-frequency value L based on the luminance value. Here, the calculation methods of the high frequency value H and the low frequency value L are similar to those of the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406 described above. That is, the contrast correction module 407 calculates a low frequency value L of luminance, and calculates a high frequency value H according to equation (5) based on the calculated low frequency value L and the input luminance value I.

In step S203, the contrast correction module 407 generates a contrast correction intensity based on the high frequency values generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406. Here, the high frequency value of the input image is used as the target value of the contrast intensity. Let Hm be the correction intensity calculated as the correction coefficient used when performing contrast correction, Ht be the high frequency value generated by the input image characteristic obtaining module 405, and H' be the high frequency value generated by the output image characteristic obtaining module 406. The correction intensity can then be calculated by the following formula.

Hm=Ht/H' ...(8)

Equation (8) represents the reverse deviation of the change in the intensity of the high frequency value from the input image to the output image.

The value obtained here is the reverse deviation before and after the conversion. Therefore, in the example shown in fig. 5, the correction intensity in the out-of-gamut region 507 is set higher than the correction intensity in the in-gamut region 508. This is because the degree of change (degree of compression) in the conversion differs, as described using the contrast 510 and the contrasts 509 and 511.

Note that, when the high-frequency value Ht and the high-frequency value H' are generated using equation (7), the correction strength Hm can be given by the following equation.

Hm=Ht-H' ...(9)

Equation (9) represents the difference when the intensity of the high frequency value changes from the input image to the output image.

In step S204, the contrast correction module 407 performs contrast correction by multiplying the high-frequency value H generated in step S202 by the correction intensity Hm. That is, contrast correction is performed on the high frequency value of the input image data. Let Hc be the high frequency value after the contrast correction, the contrast correction can be represented by the following equation.

Hc=Hm×H ...(10)

Note that when the high-frequency value H is generated using equation (7), contrast correction is performed by adding the correction intensity Hm to the high-frequency value H generated in step S202. The contrast-corrected high-frequency value Hc is given by the following equation.

Hc=Hm+H ...(11)

As shown in equations (8) and (9), the reverse deviation amount of the decrease in contrast from the input image to the output image, that is, the decrease in the intensity of the high frequency value, is set as the correction intensity Hm. When the correction is performed by multiplication by the reverse deviation amount in equation (10) or addition of the reverse deviation amount in equation (11), the intensity of the high frequency value of the input image is maintained, or a value close to it is obtained, in the output image.
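Equations (8) to (11) pair up into two consistent forms, multiplicative and additive, which can be sketched as follows. The eps guard on the division is an implementation assumption.

```python
# Sketch of equations (8)-(11): the correction intensity Hm is the reverse
# deviation between the input high-frequency value Ht and the output
# high-frequency value h_out (H'), applied to the high-frequency value h.
def correct_high_frequency(ht, h_out, h, use_ratio=True, eps=1e-6):
    if use_ratio:
        hm = ht / max(h_out, eps)   # equation (8): Hm = Ht / H'
        return hm * h               # equation (10): Hc = Hm x H
    hm = ht - h_out                 # equation (9): Hm = Ht - H'
    return hm + h                   # equation (11): Hc = Hm + H
```

For example, if the conversion halves the high frequency value (Ht = 2, H' = 1), the ratio form yields Hm = 2, restoring the corrected value Hc to twice the current H.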

In step S205, the contrast correction module 407 synthesizes the high frequency value Hc after the contrast correction in step S204, the low frequency value L calculated in step S202, and the values Cb and Cr generated in step S201 to return the data to the original RGB form. First, the contrast correction module 407 integrates the contrast-corrected high frequency value Hc and the low frequency value L by equation (12), thereby synthesizing the frequency values into the contrast-corrected luminance I'.

I'=Hc×L ...(12)

Note that when equation (7) is used to generate the high frequency value Hc and the low frequency value L, the luminance I' can be given by the following equation.

I'=Hc+L ...(13)

The contrast correction module 407 then performs plane synthesis on the luminance I 'and the color difference values (Cb, Cr) to generate color image values (I', Cb, Cr). An image subjected to contrast correction according to the present embodiment is thereby obtained. The process then ends.
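Step S205 can be sketched as follows. The YCbCr-to-RGB coefficients used here are the standard BT.601 inverse of equations (1) to (3); they are an assumption, since the text does not spell out the inverse conversion.

```python
# Sketch of step S205: synthesize the corrected luminance I' from Hc and L
# (equation (12) for the ratio form, (13) for the difference form), then
# recombine with the color-difference planes (Cb, Cr) and convert to RGB.
# BT.601 inverse coefficients are an assumption.
def synthesize(hc, l, cb, cr, use_ratio=True):
    i2 = hc * l if use_ratio else hc + l   # equations (12)/(13)
    r = i2 + 1.402 * cr
    g = i2 - 0.344136 * cb - 0.714136 * cr
    b = i2 + 1.772 * cb
    return r, g, b
```

For an achromatic pixel (Cb = Cr = 0), the three RGB channels all equal the synthesized luminance I', as expected.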

[ processing procedure ]

A flowchart of the overall process according to the present embodiment will be described with reference to fig. 10. This processing procedure is realized, for example, when the CPU 301 reads out and executes a program stored in the HDD 303 and thus functions as each processing unit shown in fig. 4.

In step S301, the image input module 401 obtains HDR image data. As the obtaining method, image data held in the HDD 303 may be obtained, or image data may be obtained from an external apparatus via the data transfer I/F 306. In addition, the HDR image data to be obtained may be decided based on a selection or an instruction of the user.

In step S302, the contrast correction module 407 generates a contrast correction intensity Hm by the method described above with reference to fig. 9 using the high-frequency values generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406.

In step S303, the contrast correction module 407 performs contrast correction by the method described above with reference to fig. 9, using the contrast correction intensity Hm generated in step S302, on the high frequency value of the image data input in step S301. That is, steps S302 and S303 of this processing procedure correspond to the processing shown in fig. 9.

In step S304, the D-range conversion module 402 performs D-range conversion (dynamic range compression processing) on the image data subjected to contrast correction in step S303 by the method described above with reference to fig. 1 and the like. In the present embodiment, the D range conversion module 402 converts the D range from 1,000 nits of the input image to 100 nits as a D range for gamut mapping.

In step S305, the gamut mapping module 403 performs gamut mapping processing on the image data subjected to D range conversion in step S304 by the method described above with reference to fig. 5 and the like.

In step S306, the image output module 404 performs output processing for output of the printing device 310 on the image data subjected to the gamut mapping in step S305 by the above-described method. The process then ends.

In the present embodiment, using the high frequency value of the input image and the high frequency value of the output image after gamut mapping, contrast correction is performed by setting the reverse deviation amount corresponding to the decrease amount of the high frequency value as the correction strength. Therefore, even in the case where the high frequency value is reduced due to the D range conversion in step S304 and the gamut mapping in step S305 performed after the correction intensity is set, the reduction amount can be corrected by the contrast correction in advance. As a result, the contrast of the input image can be maintained or can be made close to the contrast of the input image even after the gamut mapping.

In addition, when the high frequency value of the output image after the gamut mapping is used in generating the contrast correction strength, the correction strength may be decided in a state in which the reduction amount of the contrast due to the compression of the gamut mapping is included. Therefore, as the compression ratio of the gamut mapping increases, the contrast correction strength can be set high. In addition, the high-frequency value subjected to contrast correction is close to the high-frequency value of the input image, and the low-frequency value not yet subjected to contrast correction is close to the low-frequency value after gamut mapping.

As is apparent from the above description, according to the present embodiment, a decrease in contrast caused by a difference in color reproduction range between input and output can be suppressed.

Note that in the present embodiment, an example of using the YCbCr color space to handle luminance has been described. However, a color space such as xyY that represents luminance and chromaticity may also be used.

< second embodiment >

A second embodiment of the present invention will be described with reference to the flowchart of fig. 11. Descriptions of parts overlapping the first embodiment will be omitted, and only the differences will be described. In the present embodiment, unlike the procedure of fig. 10 described in the first embodiment, contrast correction is performed after D-range conversion. That is, the order of the processing steps differs from that of the first embodiment.

In step S401, the image input module 401 obtains HDR image data. As the obtaining method, image data held in the HDD 303 may be obtained, or image data may be obtained from an external apparatus via the data transfer I/F 306. In addition, the HDR image data to be obtained may be decided based on a selection or an instruction of the user.

In step S402, the D-range conversion module 402 performs D-range conversion on the image data input in step S401 by the method described above with reference to fig. 1 and the like. In the present embodiment, the D range conversion module 402 converts the D range from 1,000 nits of the input image to 100 nits as a D range for gamut mapping.

In step S403, the contrast correction module 407 generates a contrast correction intensity Hm by the method described above with reference to fig. 9 using the high-frequency values generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406.

In step S404, the contrast correction module 407 performs contrast correction by the method described above with reference to fig. 9, using the contrast correction intensity Hm generated in step S403, for the high-frequency value of the image data subjected to D range conversion in step S402. That is, steps S403 and S404 of this processing procedure correspond to the processing shown in fig. 9 described in the first embodiment.

In step S405, the gamut mapping module 403 performs gamut mapping on the image data subjected to contrast correction in step S404 by the method described above with reference to fig. 5 and the like.

In step S406, the image output module 404 performs output processing for output of the printing device 310 on the image data subjected to the gamut mapping in step S405 by the above-described method. The process then ends.

In the present embodiment, using the high frequency value of the input image and the high frequency value of the output image after gamut mapping, contrast correction is performed by setting the reverse deviation amount corresponding to the decrease amount of the high frequency value as the correction strength. Therefore, even in the case where the high frequency value is reduced due to the D range conversion in step S402 and the gamut mapping in step S405, the reduction amount is corrected by the contrast correction. As a result, the contrast of the input image can be maintained or can be made close to the contrast of the input image even after the gamut mapping.

In addition, when the high frequency value of the output image after the gamut mapping is used in generating the contrast correction strength, the correction strength may be decided in a state in which the reduction amount of the contrast due to the compression of the gamut mapping is included. Therefore, as the compression ratio of the gamut mapping increases, the contrast correction strength can be set high. In addition, the high-frequency value subjected to contrast correction is close to the high-frequency value of the input image, and the low-frequency value not yet subjected to contrast correction is close to the low-frequency value after gamut mapping.

Further, since the contrast correction is performed after the D range conversion, the D range of the data being processed is small, and therefore the memory required for the processing can be made smaller than in the case where the correction is performed before the D range conversion.

< third embodiment >

A third embodiment of the present invention will be described with reference to the flowchart of fig. 12. Descriptions of parts overlapping with the first embodiment will be omitted, and only differences will be described. In the present embodiment, contrast correction is performed after D-range conversion and gamut mapping, which is different from fig. 10 described in the first embodiment. That is, the order of the processing steps is different from that of the first embodiment.

In step S501, the image input module 401 obtains HDR image data. As the obtaining method, image data held in the HDD 303 may be obtained, or image data may be obtained from an external apparatus via the data transfer I/F 306. In addition, the HDR image data to be obtained may be decided based on a selection or an instruction of the user.

In step S502, the D-range conversion module 402 performs D-range conversion on the image data input in step S501 by the method described above with reference to fig. 1 and the like. In the present embodiment, the D range conversion module 402 converts the D range from 1,000 nits of the input image to 100 nits as a D range for gamut mapping.

In step S503, the gamut mapping module 403 performs gamut mapping on the image data subjected to D range conversion in step S502 by the method described above with reference to fig. 5 and the like.

In step S504, the contrast correction module 407 generates a contrast correction intensity Hm by the method described above with reference to fig. 9 using the high-frequency values generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406.

In step S505, the contrast correction module 407 performs contrast correction by the method described above with reference to fig. 9, using the contrast correction intensity Hm generated in step S504, for the high frequency value of the image data subjected to the gamut mapping in step S503. That is, steps S504 and S505 of the processing procedure correspond to the processing shown in fig. 9 described in the first embodiment.

In step S506, the image output module 404 performs output processing for output by the printing apparatus 310 on the image data subjected to the contrast correction in step S505 by the above-described method. The process then ends.

In the present embodiment, using the high frequency value of the input image and the high frequency value of the output image after gamut mapping, contrast correction is performed by setting the reverse deviation amount corresponding to the decrease amount of the high frequency value as the correction strength. Therefore, even in the case where the high frequency value is reduced due to the D range conversion in step S502 and the gamut mapping in step S503, the reduction amount is corrected by the contrast correction. As a result, the contrast of the input image can be maintained or can be made close to the contrast of the input image even after the gamut mapping.

In addition, when the high frequency value of the output image after the gamut mapping is used in generating the contrast correction strength, the correction strength may be decided in a state in which the reduction amount of the contrast due to the compression of the gamut mapping is included. Therefore, as the compression ratio of the gamut mapping increases, the contrast correction strength can be set high. In addition, the high-frequency value subjected to contrast correction is close to the high-frequency value of the input image, and the low-frequency value not yet subjected to contrast correction is close to the low-frequency value after gamut mapping.

Further, since the contrast correction is performed after the gamut mapping, the D range of the data being processed is small, and therefore the memory required for the processing can be made smaller than in the case where the correction is performed before the D range conversion.

< fourth embodiment >

A fourth embodiment of the present invention will be described with reference to the flowchart of fig. 13. Descriptions of parts overlapping with the first embodiment will be omitted, and only differences will be described. In the present embodiment, the D range conversion is performed twice, unlike fig. 10 described in the first embodiment.

In step S601, the image input module 401 obtains HDR image data. As the obtaining method, image data held in the HDD 303 may be obtained, or image data may be obtained from an external apparatus via the data transfer I/F 306. In addition, the HDR image data to be obtained may be decided based on a selection or an instruction of the user.

In step S602, the D-range conversion module 402 performs D-range conversion on the image data input in step S601 by the method described above with reference to fig. 1 and the like. In the present embodiment, the D range conversion module 402 converts the D range from 1,000 nits of the input image to a D range of a color space serving as a standard. For example, in the case of AdobeRGB, the D range of the input image is converted to 120 nits.

In step S603, the contrast correction module 407 generates a contrast correction intensity Hm by the method described above with reference to fig. 9 using the high-frequency values generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406.

In step S604, the contrast correction module 407 performs contrast correction by the method described above with reference to fig. 9, using the contrast correction intensity Hm generated in step S603, for the high-frequency value of the image data converted into the D range of the standard color space in step S602. That is, steps S603 and S604 of the processing procedure correspond to the processing shown in fig. 9 described in the first embodiment.

In step S605, the D-range conversion module 402 performs D-range conversion on the image data subjected to contrast correction in step S604 by the method described above with reference to fig. 1 and the like. In the present embodiment, the D range of the image is converted from 120 nits of the standard color space converted in step S602 to 100 nits which is the D range for the color gamut mapping.

In step S606, the gamut mapping module 403 performs gamut mapping on the image data subjected to D range conversion in step S605 by the method described above with reference to fig. 5 and the like.

In step S607, the image output module 404 performs output processing for output of the printing device 310 on the image data subjected to the gamut mapping in step S606 by the above-described method. The process then ends.

In the present embodiment, using the high frequency value of the input image and the high frequency value of the output image after gamut mapping, contrast correction is performed by setting the reverse deviation amount corresponding to the decrease amount of the high frequency value as the correction strength. Therefore, even in the case where the high frequency value is reduced due to the D range conversion into the standard color space in step S602, the D range conversion in step S605, and the gamut mapping in step S606, the reduction amount is corrected by the contrast correction. As a result, the contrast of the input image can be maintained or can be made close to the contrast of the input image even after the gamut mapping.

In addition, when the high frequency value of the output image after the gamut mapping is used in generating the contrast correction strength, the correction strength may be decided in a state in which the reduction amount of the contrast due to the compression of the gamut mapping is included. Therefore, as the compression ratio of the gamut mapping increases, the contrast correction strength can be set high. In addition, the high-frequency value subjected to contrast correction is close to the high-frequency value of the input image, and the low-frequency value not yet subjected to contrast correction is close to the low-frequency value after gamut mapping.

Further, since the D range is temporarily converted into the D range of the standard color space, an editing operation such as retouching can be performed while checking the image in an environment independent of the printing apparatus (for example, on an HDR monitor).

< fifth embodiment >

In the above-described embodiments, the description has been made using an example in which the contrast correction strength is generated from the high-frequency value of the input image and the high-frequency value of the output image. In the present embodiment, an example of generating the correction intensity information by the 3D LUT method will be described. Fig. 14 is a diagram for explaining generation of correction intensity information according to the present embodiment.

In the present embodiment, the correction intensity information sets, as the reverse deviation, the amount of reduction in contrast between the input image and the output image. Assume that the output image is in a state in which the input image has undergone D-range compression and gamut mapping. In fig. 14, the input reference color (224, 0, 0) and the contrast object color (232, 8, 8) are changed by D-range compression and gamut mapping to (220, 8, 8) and (216, 12, 12), respectively. The difference values ΔRGB representing the contrast between the reference color and the contrast object color are 13.9 in the input and 6.9 in the output, and the reverse deviation of the contrast ratio is calculated by equation (14). Alternatively, the reverse deviation of the contrast difference can be calculated by equation (15).

13.9/6.9=2.0 ......(14)

13.9-6.9=7.0 ......(15)

By the above method, the correction intensity for an input color is generated. This calculation is performed for each grid value of the 3D LUT, thereby generating a 3D LUT that outputs the correction intensity Hm for an input (R, G, B). In this way, correction intensity information having the following characteristic can be generated: the correction intensity Hm is larger for out-of-gamut colors, which are greatly compressed by gamut mapping, than for in-gamut colors, which are compressed only slightly.
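The per-grid calculation of the correction intensity can be sketched as follows, using the worked values from fig. 14. The helper names are illustrative; ΔRGB is taken here as the Euclidean distance between two RGB colors, which is consistent with the values 13.9 and 6.9 in the text.

```python
import math

# delta-RGB between two colors: Euclidean distance in RGB (an assumption
# consistent with the 13.9 / 6.9 values in the text).
def delta_rgb(c1, c2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

# Sketch of equations (14)/(15): reverse deviation of the contrast between a
# reference color and a contrast object color, before and after conversion.
def correction_strength(in_ref, in_obj, out_ref, out_obj, use_ratio=True):
    d_in = delta_rgb(in_ref, in_obj)     # contrast in the input
    d_out = delta_rgb(out_ref, out_obj)  # contrast after D-range compression
                                         # and gamut mapping
    return d_in / d_out if use_ratio else d_in - d_out

# Worked example from fig. 14: (224,0,0)/(232,8,8) map to (220,8,8)/(216,12,12)
hm = correction_strength((224, 0, 0), (232, 8, 8), (220, 8, 8), (216, 12, 12))
```

With these grid values the ratio form of equation (14) yields a correction intensity of 2.0. Repeating this for every grid point of the 3D LUT produces the correction intensity information.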

A method of performing contrast correction using the correction intensity information will now be described. The contrast correction module 407 looks up the 3D LUT of correction intensity information using the RGB values of the input image data, thereby obtaining the correction intensity Hm for the input color. The contrast correction module 407 then performs contrast correction using the obtained correction intensity Hm.

In the present embodiment, using the input image and the output image after gamut mapping, contrast correction is performed using the correction intensity information of the 3D LUT, in which the reverse deviation amount corresponding to the reduction amount of contrast is set as the correction intensity. Therefore, even if the contrast is reduced by the D range conversion and the gamut mapping, the reduction amount is corrected. Hence, even after the gamut mapping, the contrast of the input image can be maintained or brought close to that of the input image. In addition, since the correction intensity Hm is generated by the 3D LUT method, the high frequency values of the input image and the output image need not be calculated, and the contrast correction can be performed with a small amount of memory.

< sixth embodiment >

As a sixth embodiment of the present invention, a form for maintaining the effect of contrast correction in consideration of the observation condition will be described. Note that description of components overlapping with the above-described embodiment will be omitted as appropriate, and description will be made focusing on differences.

As described above, compression by gamut mapping reduces the contrast intensity at the time of printing by the printing apparatus. In addition, since the contrast sensitivity characteristic of human vision changes depending on the observation condition, it is difficult to maintain the effect of contrast correction. The present embodiment aims to solve this problem.

[ Screen structure ]

Fig. 15 shows a UI configuration screen 1301 provided by the contrast correction application according to the present embodiment, which is displayed on the display 307. The user can set contrast correction conditions, to be described later, via the UI configuration screen 1301 as a display screen. The user specifies the storage location (path) of the image to be subjected to contrast correction in a path box 1302 of the UI configuration screen 1301. The image specified in the path box 1302 is displayed in the input image display section 1303. In the output device setting box 1304, the device for outputting the image specified in the path box 1302 is selected from a pull-down menu and set. In the output paper size setting box 1305, the paper size to be output is selected from a pull-down menu and set. Note that, in addition to the predetermined sizes, the user can input and set an arbitrary size from the operation unit 308. In the observation distance setting box 1306, the distance at which the output printed product will be observed is input from the operation unit 308 and set. An appropriate observation distance may be automatically calculated and set based on the output paper size set in the output paper size setting box 1305. Conversely, an appropriate output paper size may be automatically calculated and set based on the observation distance set in the observation distance setting box 1306. In the illumination light setting box 1307, the luminance value of the illumination light with which the output printed product is irradiated is selected from a pull-down menu and set. A luminance value can also be input from the operation unit 308.

[ Software structure ]

Fig. 16 is a block diagram showing an example of a software structure according to the present embodiment. Unlike the configuration shown in fig. 4 described in the first embodiment, the software configuration further includes a contrast expression characteristic obtaining module 408. The image input module 401 according to the present embodiment also obtains the output device (a printer in the present embodiment) specified in the output device setting box 1304 of the UI configuration screen 1301 and the output paper size specified in the output paper size setting box 1305. The image input module 401 obtains the observation distance specified in the observation distance setting box 1306 and the luminance value of the illumination light set in the illumination light setting box 1307. The image input module 401 also obtains the HDR image data specified in the path box 1302 of the UI configuration screen 1301.

[ Filtering process ]

In the first embodiment, the filtering process has been described with reference to fig. 6. In the present embodiment, the filter used in that filtering process can be set by the contrast expression characteristic obtaining module 408 shown in fig. 16 in the following manner, taking the observation condition into consideration.

First, the number of pixels PDppd within a predetermined view angle is calculated from the obtained observation conditions (output paper size and observation distance). Here, the predetermined view angle is set to 1°.

The number of pixels per inch PDppi is calculated by the following equation.

PDppi=√(Hp²+Vp²)/S ...(16)

where Hp is the number of pixels of the image in the horizontal direction, Vp is the number of pixels of the image in the vertical direction, and S is the diagonal output paper size in inches.

Next, the pixel number PDppd of the viewing angle of 1 ° can be calculated using the following equation.

PDppd=1/tan⁻¹((25.4/PDppi)/D) ...(17)

where D is the observation distance [mm].

The filter condition is set using the number of pixels PDppd within a 1° view angle calculated by equation (17). Here, the filter condition indicates the size of the filter. From the number of pixels PDppd within a 1° view angle, the angular resolution PDcpd can be calculated by the following equation.

PDcpd=PDppd/2 ...(18)

The calculated angular resolution PDcpd is set as the filter size of a Gaussian filter, and this filter is defined as filter M. Note that here, PDcpd is directly set as the filter size of the Gaussian filter. However, the present invention is not limited thereto. For example, a table indicating the correspondence between PDcpd and the filter size may be held in advance, and the filter size may be set by referring to the table. Alternatively, in the case of the above-described edge-preserving filter, the filtering process is performed by distinguishing edge portions from non-edge portions. Therefore, in addition to the setting value relating to the filter size, a setting value (for example, a luminance difference) relating to whether a pixel is the subject of the filtering process is required. Accordingly, in addition to the filter size, the setting value regarding whether a pixel is an object of the filtering process can also be set based on the observation condition.
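The chain from viewing conditions to filter size, equations (16) to (18), can be sketched as follows; the paper-size and distance numbers used below are illustrative assumptions (roughly A4 at 300 dpi viewed from 300 mm), not values from the embodiment.

```python
import math

def filter_size_from_viewing(hp, vp, s_inch, d_mm):
    """Return (PDppi, PDppd, PDcpd) from the viewing conditions.

    hp, vp : image size in pixels (horizontal, vertical)
    s_inch : diagonal output paper size in inches
    d_mm   : observation distance in mm
    """
    # Equation (16): pixels per inch from the diagonal pixel count.
    pd_ppi = math.sqrt(hp ** 2 + vp ** 2) / s_inch
    # Equation (17): one pixel is 25.4/PDppi mm wide and subtends
    # atan((25.4/PDppi)/D) degrees; its inverse is pixels per 1 degree.
    pd_ppd = 1.0 / math.degrees(math.atan((25.4 / pd_ppi) / d_mm))
    # Equation (18): angular resolution used as the Gaussian filter size.
    pd_cpd = pd_ppd / 2.0
    return pd_ppi, pd_ppd, pd_cpd
```

For an A4-sized print of a 3508×2480-pixel image observed from 300 mm, this gives roughly 300 ppi and about 60 pixels per degree of view angle.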

[ contrast correction processing ]

In the first embodiment, the contrast correction processing of the contrast correction module 407 has been described with reference to fig. 9. The above-described correction intensity Hm may likewise be calculated based on the observation condition. Using the luminance value of the illumination light obtained by the image input module 401, the contrast expression characteristic obtaining module 408 calculates, in the following manner, the ratio Sr with respect to the contrast sensitivity value at the luminance value of the illumination light serving as a reference. The calculated ratio Sr is then used to obtain the correction intensity Hm. Here, the luminance value of the illumination light serving as a reference is the luminance value relative to which the effect of the contrast correction is to be kept uniform. It may be set by the user as a setting value (not shown) in the UI configuration screen 1301 via the image input module 401, or may be held internally as a predetermined value. The contrast sensitivity ratio Sr is calculated using the contrast sensitivity value S(ur, Ls) at the luminance value Ls of the illumination light in the observation environment and the contrast sensitivity value S(ur, Lr) at the luminance value Lr of the illumination light serving as a reference. Note that ur is the high-sensitivity frequency at the luminance value of the illumination light serving as a reference.

As a calculation method of ur, the Barten model is used. According to Barten's model, the contrast sensitivity can be calculated by equation (19).

Here, the following values are assumed: k = 3.3, T = 0.1 [sec], η = 0.025, and h = 357 × 3600, with the contrast variation Φext(u) = 0 corresponding to external noise and the contrast variation Φ0 = 3 × 10⁻⁸ [sec·deg²] corresponding to neural noise. In addition, XE = 12 [deg] and NE = 15 [cycles] (for orientations of 0 and 90 [deg] at frequencies of 2 [c/deg] or more; for 45 [deg], NE = 7.5 [cycles]). Further, σ0 = 0.0133 [deg] and Csph = 0.0001 [deg/mm³].

S(u)=(Mopt(u)/k)/√((2/T)·(1/XE²+u²/NE²)·(1/(η·h·IL)+Φext(u)+Φ0/(1−F(u))²)) ...(19)

Note that σ, Mopt(u), (1−F(u))², d, and IL are calculated by equations (20) to (24).

d=4.6-2.8·tanh(0.4·log10(0.625·L)) ...(20)

σ=√(σ0²+(Csph·d³)²) ...(21)

Mopt(u)=e^(-π²σ²u²) ...(22)

IL=(π/4)·d²·L ...(23)

(1-F(u))²=1-e^(-(u/u0)²) ...(24)

where u0 is a constant of Barten's model (approximately 7 [c/deg]).

In equations (19) to (24), when the target luminance value is set to L and the spatial frequency is set to u, the contrast sensitivity of the spatial frequency u at the target luminance L can be calculated. Fig. 17 is a graph depicting the contrast sensitivity calculated by Barten's model for each luminance. As the luminance becomes higher, the frequency of high contrast sensitivity shifts to the high frequency side; conversely, as the luminance becomes lower, it shifts to the low frequency side. The contrast sensitivities for a plurality of spatial frequencies can be calculated in advance for a plurality of luminance values using equations (19) to (24), and a luminance/high-sensitivity frequency conversion table, in which the spatial frequency giving the maximum sensitivity is associated with each luminance value, can be held. Fig. 18 shows an example of such a luminance/high-sensitivity frequency conversion table. For luminance values not listed in the table, as shown in fig. 17, the high-sensitivity frequency can be obtained by defining an approximation function connecting the high-sensitivity frequencies of the listed luminances.
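A minimal sketch of looking up the high-sensitivity frequency from such a table; the table entries below are placeholder values, since the actual entries of fig. 18 come from evaluating equations (19) to (24) in advance. Piecewise-linear interpolation stands in for the approximation function mentioned above.

```python
import bisect

# Hypothetical (luminance [cd/m^2], high-sensitivity frequency [c/deg]) pairs
# standing in for the conversion table of fig. 18.
LUMINANCE_TO_FREQ = [(10.0, 2.0), (100.0, 3.0), (1000.0, 5.0)]

def high_sensitivity_freq(luminance, table=LUMINANCE_TO_FREQ):
    """Return ur for a given luminance, interpolating linearly between
    table entries and clamping outside the table range."""
    xs = [l for l, _ in table]
    ys = [u for _, u in table]
    if luminance <= xs[0]:
        return ys[0]
    if luminance >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, luminance)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (luminance - x0) / (x1 - x0)
```

A lookup for a luminance between two table entries returns the linearly interpolated frequency; values outside the table are clamped to the nearest entry.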

S(ur, Ls) and S(ur, Lr) are calculated according to equation (19) above. Using the calculated S(ur, Ls) and S(ur, Lr), the contrast sensitivity ratio can be calculated by the following formula.

Sr=S(ur,Lr)/S(ur,Ls) ...(25)

The contrast correction module 407 generates the contrast correction intensity. Using the contrast sensitivity ratio Sr calculated by the contrast expression characteristic obtaining module 408, the target high frequency value Hta serving as the correction target, and the output high frequency value H' after gamut mapping, the contrast correction intensity Hm can be represented by the following expression.

Hm=Sr×(Hta/H') ...(26)

In addition, in the case where high frequency values are generated by the input image characteristic obtaining module 405 and the output image characteristic obtaining module 406 by the method of equation (7), the contrast correction intensity Hm may be represented by the following equation.

Hm=Sr×(Hta-H') ...(27)
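The two variants of the correction intensity, equations (26) and (27), can be sketched in one helper; the function and argument names are illustrative, and equation numbers refer to the text.

```python
def correction_strength(s_ref, s_obs, h_target, h_out, multiplicative=True):
    """Correction intensity Hm per equations (25)-(27).

    s_ref    : contrast sensitivity S(ur, Lr) at the reference illumination
    s_obs    : contrast sensitivity S(ur, Ls) in the observation environment
    h_target : target high frequency value Hta
    h_out    : output high frequency value H' after gamut mapping
    """
    sr = s_ref / s_obs                  # equation (25)
    if multiplicative:
        return sr * (h_target / h_out)  # equation (26)
    return sr * (h_target - h_out)      # equation (27)
```

With identical reference and observation sensitivities (Sr = 1), Hm reduces to the plain ratio or difference of the high frequency values; a dimmer observation environment (s_obs < s_ref) raises Sr and thus strengthens the correction.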

Next, as the contrast sensitivity ratio calculation processing, the contrast sensitivity S(ur, Lr) at the luminance value of the illumination light serving as a reference and the contrast sensitivity S(ur, Ls) at the luminance value of the illumination light in the observation environment are calculated. Then, the contrast sensitivity ratio Sr is calculated from these two values.

When the contrast correction processing is performed using the above-described method, the effect of the contrast correction can be maintained in consideration of the observation condition. In the above-described embodiment, the contrast expression characteristic obtaining module 408 sets the filter M in consideration of the observation condition, obtains the low-frequency component L using the filter M, and sets the contrast correction intensity using the contrast sensitivity value calculated based on the observation condition. However, performing only one of the two is also sufficient.

[ modified examples ]

In the sixth embodiment described above, as in steps S101 to S103, the high frequency value H' is generated from the image data having undergone D-range compression and gamut mapping, and the contrast correction module 407 obtains the contrast correction intensity Hm using H' and the high frequency value Ht of the input image data, and corrects the high frequency value using the contrast correction intensity Hm. Instead of correcting the high-frequency component H using the correction intensity Hm, however, the following processing may be performed. That is, in step S202 of the sixth embodiment, the low-frequency value L and the high-frequency value H are obtained using the filter M generated by the contrast expression characteristic obtaining module 408 based on the observation condition, and the obtained low-frequency value L undergoes D-range compression to generate the low-frequency value L'. Then, the luminance I' can be obtained by integrating the high frequency value H and the low frequency value L'.

In addition, in the sixth embodiment, when contrast correction is performed, the correction intensity may simply be set to the above-described contrast sensitivity ratio, that is, Hm = Sr, instead of obtaining Hm from the input image data and the image data having undergone D-range compression and gamut mapping. In this case, the low frequency value L and the high frequency value H may be obtained using the filter M generated based on the observation condition; alternatively, a filter prepared without reference to the observation condition may be used instead of the filter M.

< seventh embodiment >

As a seventh embodiment of the present invention, a form in which highlight detail loss or shadow detail loss at the time of dynamic range compression is considered will be described. Note that description of components overlapping with the above-described embodiment will be omitted as appropriate, and description will be made focusing on differences.

As image processing for correcting the contrast reduction caused when D-range compression is performed as described above, Retinex processing is used. In Retinex processing, an image is first separated into an illumination light component and a reflected light component. When the illumination light component undergoes D-range compression while the reflected light component is held, the D range can be compressed while the contrast of the original image is maintained.

It can be said that the illumination light component is substantially a low-frequency component, and the reflected light component is substantially a high-frequency component. In the present embodiment, hereinafter, the low-frequency component or the illumination light component will be referred to as a first component, and the high-frequency component or the reflected light component will be referred to as a second component.

At this time, in a case where the shape of the gamut of the input image data and the shape of the gamut of the printing apparatus differ greatly, even if contrast correction is performed using the conventional method, the contrast obtained at the time of printing may differ from the desired contrast due to compression by gamut mapping. Further, if the pixel value of the second component is large on the high luminance side or the low luminance side, the output image may exceed the D range of the output, and highlight detail loss or shadow detail loss occurs. Figs. 2C and 2D show the principle of occurrence of highlight detail loss/shadow detail loss. In figs. 2C and 2D, the vertical axis represents the pixel value, and the horizontal axis represents the coordinate value in the image. Figs. 2C and 2D show the first component of an image before and after D-range compression, respectively, together with the pixel values obtained by adding the second component to the first component. After the first component of the image is D-range compressed, the second component retains its value from before D-range compression. In this case, as shown by the pixel values obtained by adding the second component, the values on the high luminance side and the low luminance side are limited by the upper and lower limits of the D range (broken lines in fig. 2D), and highlight detail loss or shadow detail loss occurs. That is, if the value of the low-frequency component is compressed toward the high luminance side or the low luminance side by the D-range compression, highlight detail loss/shadow detail loss easily occurs.

In view of the above, the present embodiment aims to suppress highlight detail loss and shadow detail loss at the time of dynamic range compression.

[ contrast correction processing ]

In the first embodiment, the contrast correction processing of the contrast correction module 407 has been described with reference to fig. 9. In the present embodiment, steps S204 and S205 of fig. 9 are performed in the following manner; the processing of steps S201 to S203 is the same as in the first embodiment. In step S204, in addition to the processing of step S204 described in the first embodiment, a second component correction module (not shown) in the contrast correction module 407 corrects the second component so that the high-frequency component (i.e., the second component) corrected by the contrast correction module 407 does not exceed the D range of the input (the input luminance range) and cause highlight detail loss/shadow detail loss. Note that since the output from the contrast correction module 407 has the same D range as the input, the second component corrected in this way does not exceed the luminance range of the output either. Here, the second component is corrected in the following manner based on the value of the first component L before D-range conversion. When L is on the high luminance side or the low luminance side, highlight detail loss/shadow detail loss easily occurs. Therefore, the larger or smaller the value L becomes, the stronger the correction of the second component becomes.

When Hc > 1

When Hc > 1, highlight detail loss may occur on the high luminance side. Therefore, correction is performed so that the second component approaches 1 as the value of the first component L' becomes larger. Here, the second component is corrected using the following correction coefficient P.

Hcb=(1-P(L',L'max,L'min))·H+P(L',L'max,L'min)·1 ...(28)

When Hc < 1

When Hc < 1, shadow detail loss may occur on the low luminance side. Therefore, correction is performed so that the second component approaches 1 as the value L' becomes smaller. The second component is corrected using the following correction coefficient Q.

Hcb=Q(L',L'max,L'min)·H+(1-Q(L',L'max,L'min))·1 ...(29)

When Hc = 1

When Hc = 1, the second component is not corrected, because adding the second component causes neither highlight detail loss nor shadow detail loss.

The correction coefficients P and Q are calculated in the following manner.

P(L',L'max,L'min)=1/(1+e^(-α(L'-t1))) ...(30)

Q(L',L'max,L'min)=1/(1+e^(-β(L'-t2))) ...(31)

where α, β, t1, and t2 are predetermined constants. If the first component after D-range compression has a halftone value, the second component is not suppressed; the second component is suppressed only when the first component after D-range compression is on the high luminance side or the low luminance side.
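A sketch of the corrections of equations (28) and (29); the Sigmoid form of P and Q, the gains, and the pivot points t1/t2 below are illustrative assumptions consistent with the description (no suppression at halftone, stronger blending toward 1 near the range ends), not values from the embodiment.

```python
import math

def sigmoid(x, gain, center):
    """Sigmoid-type weighting function used for P and Q (assumed form)."""
    return 1.0 / (1.0 + math.exp(-gain * (x - center)))

def correct_second_component(h, l_prime, l_max=1.0, l_min=0.0,
                             alpha=20.0, beta=20.0):
    """Blend the multiplicative second component toward 1 near the ends
    of the compressed D range, per equations (28) and (29)."""
    t1 = l_min + 0.9 * (l_max - l_min)  # pivot near the high-luminance end
    t2 = l_min + 0.1 * (l_max - l_min)  # pivot near the low-luminance end
    if h > 1.0:   # highlight side, equation (28)
        p = sigmoid(l_prime, alpha, t1)
        return (1.0 - p) * h + p * 1.0
    if h < 1.0:   # shadow side, equation (29)
        q = sigmoid(l_prime, beta, t2)
        return q * h + (1.0 - q) * 1.0
    return h      # h == 1 needs no correction
```

At halftone luminances the component passes through almost unchanged; near the top of the range a component greater than 1 is pulled toward 1, and near the bottom a component smaller than 1 is pulled toward 1.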

In step S205, the contrast correction module 407 combines the second-component high-frequency value Hcb corrected in step S204, the low-frequency value L calculated in step S202, and the values Cb and Cr generated in step S201 to restore RGB data. First, the contrast correction module 407 integrates the corrected high frequency value Hcb and the low frequency value L by equation (32), thereby obtaining the contrast-corrected luminance I'.

I'=Hcb×L ...(32)

Note that when the high frequency value Hc and the low frequency value L are generated using equation (7) described in the first embodiment, the second component is corrected as follows so that the second component corrected by the contrast correction module 407 does not exceed the input D range and cause highlight detail loss/shadow detail loss.

When Hc > 0

When Hc > 0, highlight detail loss may occur on the high luminance side. Therefore, correction is performed so that the absolute value of the second component becomes smaller as the value of the first component L becomes larger. Here, the second component is corrected using the correction coefficient W.

Hcb=W(L,Lmax,Lmin)Hc ...(33)

When Hc < 0

When Hc < 0, shadow detail loss may occur on the low luminance side. Therefore, correction is performed so that the absolute value of the second component becomes smaller as the value L becomes smaller. Here, the second component is corrected using the correction coefficient S.

Hcb=S(L,Lmax,Lmin)Hc ...(34)

Here, the correction coefficients W and S are calculated by the following equations.

W(L,Lmax,Lmin)=1/(1+e^(α(L-t1))) ...(35)

S(L,Lmax,Lmin)=1/(1+e^(-β(L-t2))) ...(36)

When Hc = 0

When Hc = 0, no correction is performed, because adding the second component Hc causes neither highlight detail loss nor shadow detail loss.

Here, α, β, t1, and t2 are predetermined constants. If the first component has a halftone value, the second component is not suppressed; the second component is suppressed only when the value of the first component is on the high luminance side or the low luminance side.

In addition, Lmax and Lmin are the maximum and minimum values of the input D range, respectively. Note that the correction coefficients W and S need not be Sigmoid-type functions as described above. The function is not particularly limited as long as it makes the absolute value of the corrected second component Hcb smaller than the absolute value of the second component Hc before correction.

In addition, equations (35) and (36) may be evaluated by obtaining W(L) and S(L) from LUTs calculated in advance for each value L. Using LUTs prepared in advance reduces the processing load of the computation and increases the processing speed.
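A sketch of this LUT approach, assuming a Sigmoid-type W as described above; the table length, gain, and pivot values are illustrative, not from the embodiment.

```python
import math

def make_w_lut(l_max, l_min, gain, pivot, steps=256):
    """Precompute W(L) at `steps` evenly spaced luminance values so the
    per-pixel cost becomes a table lookup instead of an exp() call."""
    lut = []
    for i in range(steps):
        l = l_min + (l_max - l_min) * i / (steps - 1)
        # Sigmoid-type W: near 1 at halftone, falling toward 0 as L grows.
        lut.append(1.0 / (1.0 + math.exp(gain * (l - pivot))))
    return lut

def lookup(lut, l, l_max, l_min):
    """Nearest-entry lookup, clamped to the table range."""
    i = round((l - l_min) / (l_max - l_min) * (len(lut) - 1))
    return lut[min(max(i, 0), len(lut) - 1)]
```

Per pixel, the exponential is replaced by an index computation and a table read, which is the speedup the text refers to.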

In this case, the luminance I' may be represented by the following formula.

I'=Hcb+L ...(37)

Then, the contrast correction module 407 performs plane synthesis on the luminance I 'and the color difference values (Cb, Cr) to generate color image values (I', Cb, Cr). An image subjected to contrast correction according to the present embodiment is thus obtained.

This processing procedure is the same as that described with reference to fig. 9 in the first embodiment, and a description thereof will be omitted.

As described above, in the present embodiment, the second component of the HDR image is corrected in advance in consideration of the contrast reduction caused by the D range conversion and the gamut mapping. In addition, after the second component correction, processing is performed to prevent occurrence of highlight detail loss/shadow detail loss. When contrast correction considering contrast reduction caused by gamut mapping is performed in advance for an HDR image, the contrast can be easily maintained even after the gamut mapping.

< eighth embodiment >

The eighth embodiment will be described with reference to the flowchart of fig. 19.

Fig. 19 is a flowchart showing the processing procedure of the highlight detail loss/shadow detail loss judgment. In this judgment, whether to perform highlight detail loss/shadow detail loss correction is determined based on the value of the first component L' after D-range compression and the value of the second component H before D-range compression. When the judgment is based on both the first component L' after D-range compression and the second component H, the pixels causing highlight detail loss/shadow detail loss can be specified more correctly. Further, by correcting only the pixels causing highlight detail loss/shadow detail loss, it is possible to prevent the contrast of pixels not causing highlight detail loss/shadow detail loss from being lowered. The rest is the same as in the seventh embodiment.

In step S1001, the contrast correction module 407 determines whether to correct highlight detail loss/shadow detail loss based on the second component H before the D-range compression and the first component L' after the D-range compression.

More specifically, whether to perform highlight detail loss/shadow detail loss correction is determined according to the result of adding the first component L' after D-range compression and the second component H. Fig. 20 shows an outline of this judgment. The highlight detail loss/shadow detail loss correction judgment D range in fig. 20 is a D range determined in advance for this judgment, and the buffer areas ΔW and ΔS indicate the luminance intervals between the compressed D range and the highlight detail loss/shadow detail loss correction judgment D range.

(1) When L' + H falls within the highlight detail loss/shadow detail loss correction judgment D range (pixel 20)

In this case, since no highlight detail loss/shadow detail loss occurs, no correction is performed.

(2) When L' + H falls outside the highlight detail loss/shadow detail loss correction judgment D range but within the compressed D range (pixel 21)

In this case, no highlight detail loss/shadow detail loss occurs. However, in order to prevent tone inversion as a result of the highlight detail loss/shadow detail loss correction, the pixel is set as a target pixel of the correction.

(3) When L' + H falls outside the compressed D range (pixel 23)

In this case, highlight detail loss/shadow detail loss occurs. The pixel is set as the target pixel of highlight detail loss/shadow detail loss correction.
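The three-way judgment of fig. 20 can be sketched as follows; the function name and the return labels are illustrative, and the threshold values supplied by a caller would come from the predetermined judgment D range.

```python
def classify_pixel(l_prime, h, th_max, th_min, l_max, l_min):
    """Classify a pixel by L' + H against the judgment D range
    [th_min, th_max] and the compressed D range [l_min, l_max]."""
    v = l_prime + h
    if th_min <= v <= th_max:
        return "no_correction"               # case (1): inside judgment D range
    if l_min <= v <= l_max:
        return "correct_to_avoid_inversion"  # case (2): in buffer area only
    return "correct_detail_loss"             # case (3): outside compressed D range
```

Cases (2) and (3) are both targets of the correction; only case (1) is left untouched.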

A second component correction module (not shown) in the contrast correction module 407 corrects the second component according to the following equations based on the result of the above-described highlight detail loss/shadow detail loss correction judgment. In the equations, α is a predetermined constant, Thmax is the maximum value and Thmin is the minimum value of the predetermined D range for highlight detail loss/shadow detail loss judgment; these values are determined in advance so as to prevent adverse effects in the image after correction of the second component.

In the case of highlight detail loss (L' + H > Thmax)

In step S1003, the second component correction module corrects the second component to suppress the highlight detail loss.

Hcb=Lmax-ΔW·exp(-αH)-L' ...(38)

In the case of shadow detail loss (L' + H < Thmin)

In step S1003, the second component correction module corrects the second component to suppress the shadow detail loss.

Hcb=Lmin+ΔS·exp(αH)-L' ...(39)

In cases other than the above, no highlight detail loss/shadow detail loss occurs, and in step S1002 the second component correction module does not correct the second component (equation (40)).

Hcb=H ...(40)

Note that the buffer areas ΔW and ΔS in equations (38) and (39) are calculated from the following equations.

ΔW=Lmax-Thmax ...(41)

ΔS=Thmin-Lmin ...(42)
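A sketch combining equations (38) to (42) as read here (the exponential forms of (38) and (39) are reconstructed from the text); α and the range values passed in are illustrative.

```python
import math

def correct_h(h, l_prime, th_max, th_min, l_max, l_min, alpha=1.0):
    """Compress an out-of-range second component into the buffer areas:
    highlights approach Lmax and shadows approach Lmin asymptotically."""
    d_w = l_max - th_max   # equation (41)
    d_s = th_min - l_min   # equation (42)
    v = l_prime + h
    if v > th_max:         # highlight detail loss, equation (38)
        return l_max - d_w * math.exp(-alpha * h) - l_prime
    if v < th_min:         # shadow detail loss, equation (39)
        return l_min + d_s * math.exp(alpha * h) - l_prime
    return h               # equation (40): no correction
```

However large H grows, L' + Hcb stays below Lmax, and however negative H becomes, L' + Hcb stays above Lmin, so the corrected sum never leaves the compressed D range.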

Note that a constant Hmax much larger than H may be set, and the calculation may be performed as follows.

In the case of highlight detail loss (L' + H > Thmax)

Hcb=Thmax+ΔW·(H/Hmax)-L' ...(43)

In the case of shadow detail loss (L' + H < Thmin)

Hcb=Thmin+ΔS·(H/Hmax)-L' ...(44)

In the case other than the above case, the second component is not corrected as in equation (40).

As described above, in the present embodiment, the highlight detail loss/shadow detail loss correction judgment is made, and only the second components that require highlight detail loss/shadow detail loss correction are corrected. Therefore, a decrease in contrast can be suppressed in the case where the first component is on the high luminance side or the low luminance side, while highlight detail loss/shadow detail loss hardly occurs whatever the value of the second component.

< ninth embodiment >

The ninth embodiment will be described with reference to the flowchart of fig. 21. Fig. 21 shows details of highlight detail loss/shadow detail loss correction by the contrast correction module 407.

In the present embodiment, in the highlight detail loss/shadow detail loss judgment described in the eighth embodiment, whether to perform the highlight detail loss/shadow detail loss correction is determined based on the first component L' after D-range compression, the second component H before D-range compression, and the just noticeable difference (JND) decided in step S900 (step S901).

When the JND is taken into account in the highlight detail loss/shadow detail loss correction judgment, the contrast after the second component correction remains easy to perceive. For example, in fig. 20, if the width of the buffer area ΔW or ΔS is smaller than the JND, it is difficult to perceive the luminance difference between the pixels 21 and 22 after the second component correction. This is because even if the second component is corrected so that it falls within the buffer area to prevent highlight detail loss/shadow detail loss, the contrast is still lost visually when the width of the buffer area is smaller than the JND. The widths of the buffer areas ΔW and ΔS are therefore preferably greater than the JND.

The contrast correction module 407 holds the value of the just noticeable difference (JND) for each luminance in a JND holding module (not shown).

Note that the value of JND may be calculated at the start of the program and held in memory (the RAM 302 or the RAM314 shown in fig. 3) until the end of the program, or the LUT may be held in an external file and loaded as needed. Alternatively, the value of JND may be calculated each time.

The JND is a threshold at which a person can just recognize a difference; luminance differences smaller than the JND are hardly perceived.

The JND is obtained, for example, from Barten's model, as shown in fig. 22. Barten's model is a physiological model of the visual system expressed as a mathematical description. The horizontal axis in fig. 22 represents the luminance value, and the vertical axis represents the minimum contrast step perceivable by a human at that luminance value. Here, let Lj be a specific luminance value, and let Lj+1 be the luminance value obtained by adding the JND to Lj. The minimum contrast step mt is then defined by, for example, the following formula.

mt=(Lj+1-Lj)/(Lj+1+Lj) ...(45)

Based on equation (45), the JND at luminance value Lj is represented by the following formula.

JND(Lj)=Lj+1-Lj=2·mt·Lj/(1-mt) ...(46)

This indicates that a human can perceive a luminance difference when it is equal to or greater than the JND. As models representing visual characteristics, various mathematical models such as the Weber model and the DeVries-Rose model have been proposed in addition to Barten's model. The JND may also be a value obtained experimentally or empirically by sensory evaluation or the like.
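A sketch of deriving the JND from the minimum contrast step, following equation (46) (which follows from solving equation (45) for Lj+1 − Lj); the contrast step mt would be supplied by the visual model or by measurement.

```python
def jnd(l_j, m_t):
    """JND at luminance L_j, given the minimum perceivable contrast
    step m_t at that luminance (equation (46))."""
    return 2.0 * m_t * l_j / (1.0 - m_t)
```

By construction, adding the returned JND to L_j yields an L_{j+1} whose Michelson contrast against L_j equals m_t, matching equation (45).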

In the highlight detail loss/shadow detail loss judgment, the widths of the buffer areas ΔW and ΔS in fig. 20 are determined to be equal to or greater than the JND, thereby reducing the loss of visual contrast after the highlight detail loss/shadow detail loss correction. That is, if the width of a buffer area is smaller than the JND, the luminance differences inside it are difficult to perceive; when the width is equal to or greater than the JND, the contrast inside the buffer area remains easy to perceive even after the second component is corrected.

Fig. 21 shows the procedure of the highlight detail loss/shadow detail loss correction processing according to the ninth embodiment. Compared with the eighth embodiment, step S900 for deciding the highlight detail loss/shadow detail loss correction judgment D range is added to the processing shown in fig. 19, and the judgment is made using the decided D range.

In step S900, the contrast correction module 407 determines the maximum value Thmax and the minimum value Thmin of the highlight detail loss/shadow detail loss correction judgment D range so as to satisfy the following equations.

ΔW = Lmax - Thmax ≧ JND(Lmax) ...(47)

ΔS = Thmin - Lmin ≧ JND(Lmin) ...(48)

The processing after the highlight detail loss/shadow detail loss correction judgment is the same as that in the eighth embodiment, except that the judgment is made using the decided D range.

As described above, in the ninth embodiment, the width of the buffer area used in the highlight detail loss/shadow detail loss correction judgment is decided in consideration of the visual characteristic JND. When the width of the buffer area is equal to or greater than the JND, the loss of contrast after the second component correction can be reduced.
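The decision of step S900 can be sketched as follows. This is an illustrative sketch only: the JND helper reuses the Michelson-contrast assumption above, and choosing the minimum admissible buffer widths is one possible policy satisfying equations (47) and (48), not necessarily the patent's.

```python
def jnd(luminance: float, m_t: float = 0.01) -> float:
    # Illustrative JND model: 2*m_t*L/(1-m_t) from a Michelson contrast step.
    return 2.0 * m_t * luminance / (1.0 - m_t)

def decide_judgment_range(l_max: float, l_min: float) -> tuple:
    """Pick Th_max and Th_min so that the buffer widths satisfy
    l_max - th_max >= JND(l_max) and th_min - l_min >= JND(l_min)
    (equations (47) and (48)); the minimum admissible widths are used here."""
    th_max = l_max - jnd(l_max)
    th_min = l_min + jnd(l_min)
    return th_max, th_min

th_max, th_min = decide_judgment_range(1000.0, 5.0)
```

Because the JND grows with luminance, the highlight-side buffer comes out wider than the shadow-side buffer for the same contrast-step parameter.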

(tenth embodiment)

In the seventh to ninth embodiments, highlight detail loss/shadow detail loss correction is performed after contrast correction by the contrast correction module 407. In the tenth embodiment, the second component is corrected to suppress highlight detail loss or shadow detail loss without performing contrast correction. Note that in the present embodiment the image processing apparatus 300 need not include the contrast correction module 407, and instead has a correction module configured to correct the second component in the following manner. The rest is the same as in the seventh embodiment.

Fig. 23 is a flowchart showing the procedure of image processing according to the present embodiment. Unlike the process of fig. 9 described in the first embodiment, steps S203 and S204 of fig. 9 are replaced with the second component correction of step S1201.

Steps S201 and S202 are the same as in fig. 9 described in the first embodiment.

Next, in step S1201, the second component is corrected in the following manner.

When H > 0

When H > 0, loss of highlight detail may occur on the highlight side. Therefore, correction is performed such that the absolute value of the second component H becomes smaller as the value of the first component L' after D-range compression becomes larger. Here, the second component H is corrected using the following correction coefficient W, thereby obtaining the corrected component Hcb.

Hcb = W(L', L'max, L'min)·H ...(49)

When H < 0

When H < 0, shadow detail loss may occur on the low-luminance side. Therefore, correction is performed so that the absolute value of the second component becomes smaller as the value L' becomes smaller. The second component H is corrected using the following correction coefficient S.

Hcb = S(L', L'max, L'min)·H ...(50)

When H = 0

When H = 0, the second component is not corrected, since adding the second component causes neither highlight detail loss nor shadow detail loss.

Here, the correction coefficients W and S are calculated by the following equations.

[Equations (51) and (52): Sigmoid-type definitions of the correction coefficients W(L', L'max, L'min) and S(L', L'max, L'min), not reproduced in this version of the text]

Here, α, β, t1, and t2 are predetermined constants. As the position of the first component moves to the high-luminance side or the low-luminance side, the second component is suppressed.

L'max and L'min are the maximum and minimum values of the luminance of the compressed D range, respectively.

When a nonlinear function is applied as the correction coefficient in this way, the second component can be suppressed more strongly as the position of the first component moves to the high-luminance side or the low-luminance side.

Note that the correction coefficients W and S need not be Sigmoid-type functions as described above. Any function may be used as long as it suppresses the second component more strongly as the position of the first component moves to the high-luminance side or the low-luminance side.

Note that equations (49) and (50) can be evaluated by obtaining W(L') and S(L') from an LUT calculated in advance for each value L'. When an LUT prepared in advance is used, the processing load of the operation can be reduced and the processing speed can be increased.
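The correction of equations (49) and (50) can be sketched as follows. Since the exact Sigmoid expressions of equations (51) and (52) are not reproduced in the source, the forms of W and S below, and the constants alpha, beta, t1, and t2, are illustrative assumptions that merely satisfy the stated property: the second component is suppressed more strongly toward the ends of the compressed D range.

```python
import math

def _normalize(l, l_max, l_min):
    # Map the first component onto [0, 1] within the compressed D range.
    return (l - l_min) / (l_max - l_min)

def w_coef(l, l_max, l_min, alpha=10.0, t1=0.8):
    # Assumed Sigmoid: close to 1 in the midtones, falling toward 0
    # as the first component approaches the high-luminance side.
    return 1.0 / (1.0 + math.exp(alpha * (_normalize(l, l_max, l_min) - t1)))

def s_coef(l, l_max, l_min, beta=10.0, t2=0.2):
    # Assumed Sigmoid: close to 1 in the midtones, falling toward 0
    # as the first component approaches the low-luminance side.
    return 1.0 / (1.0 + math.exp(-beta * (_normalize(l, l_max, l_min) - t2)))

def correct_second_component(h, l_prime, l_max=1.0, l_min=0.0):
    """Equations (49) and (50): attenuate H near the ends of the D range."""
    if h > 0:
        return w_coef(l_prime, l_max, l_min) * h   # equation (49)
    if h < 0:
        return s_coef(l_prime, l_max, l_min) * h   # equation (50)
    return h                                        # H == 0: no correction
```

With these assumed coefficients, the same positive detail component is attenuated more strongly when the first component sits near the top of the range than in the midtones, which is the behavior the text requires.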

Note that the correction coefficients W and S may also be calculated using the value of the first component L before D-range compression. When the first component L before D-range compression is used, the D-range compression and the second component correction processing can be performed in parallel, which improves calculation efficiency. Let Lmax and Lmin be the maximum and minimum values of the luminance of the D range before compression; the correction of the second component in this case is performed in the following manner.

When H > 0

Hcb = W(L, Lmax, Lmin)·H ...(53)

When H < 0

Hcb = S(L, Lmax, Lmin)·H ...(54)

When H = 0

No operation is performed.

Step S205 is performed as in the first embodiment.

(eleventh embodiment)

An eleventh embodiment of the present invention will be described. The processing procedure is the same as that of fig. 11 described in the second embodiment and will be described with reference to it. Descriptions of parts overlapping the above-described embodiments are omitted, and only the differences are described. The processing of steps S401 to S403 is the same as in the second embodiment.

In step S404, the contrast correction module 407 performs contrast correction on the high-frequency value of the image data subjected to D-range conversion in step S402, using the contrast correction intensity Hm generated in step S403 by the method described with reference to fig. 9. That is, steps S403 and S404 of this processing procedure correspond to the processing shown in fig. 9 described in the first embodiment. Further, as in the seventh embodiment, highlight detail loss/shadow detail loss correction is performed in the following manner.

Note that the following applies in the case where the high-frequency value Hc and the low-frequency value K are generated using equation (7).

when Hc is present>At 1 hour

When Hc >1, loss of highlight detail may occur on the highlight side. Therefore, correction is made so that the second component becomes close to 1 as the value of the first component L' becomes larger. Here, the second component is corrected using the following correction coefficient P.

Hcb=(1-P(L',L'max,L'min))H+P(L',L'max,L'min)·1 ...(55)

When Hc < 1

When Hc < 1, shadow detail loss may occur on the low-luminance side. Therefore, correction is performed so that the second component approaches 1 as the value L' becomes smaller. The second component is corrected using the following correction coefficient Q.

Hcb = Q(L', L'max, L'min)·Hc + (1 - Q(L', L'max, L'min))·1 ...(56)

When Hc = 1

When Hc = 1, the second component is not corrected because applying the second component causes neither highlight detail loss nor shadow detail loss.

The correction coefficients P and Q are calculated in the following manner.

[Equations (57) and (58): Sigmoid-type definitions of the correction coefficients P(L', L'max, L'min) and Q(L', L'max, L'min), not reproduced in this version of the text]

Here, α, β, t1, and t2 are predetermined constants. If the first component after D-range compression is in the halftone range, the second component is not suppressed; the second component is suppressed only when the first component after D-range compression is on the high-luminance side or the low-luminance side.
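The blending of equations (55) and (56) can be sketched as follows. The exact Sigmoid expressions of equations (57) and (58) are not reproduced in the source, so the forms of P and Q below, and their constants, are illustrative assumptions; what matters is that the multiplicative second component is pulled toward the neutral gain 1 near the ends of the compressed D range.

```python
import math

def p_coef(l, l_max, l_min, alpha=10.0, t1=0.8):
    # Assumed Sigmoid: near 0 in the midtones, rising toward 1 on the
    # high-luminance side so that Hcb is pulled toward 1 there.
    return 1.0 / (1.0 + math.exp(-alpha * ((l - l_min) / (l_max - l_min) - t1)))

def q_coef(l, l_max, l_min, beta=10.0, t2=0.2):
    # Assumed Sigmoid: near 1 in the midtones, falling toward 0 on the
    # low-luminance side so that Hcb is pulled toward 1 there.
    return 1.0 / (1.0 + math.exp(-beta * ((l - l_min) / (l_max - l_min) - t2)))

def correct_multiplicative(hc, l_prime, l_max=1.0, l_min=0.0):
    """Equations (55) and (56): blend the multiplicative second component
    toward the neutral gain 1 near the ends of the compressed D range."""
    if hc > 1:
        p = p_coef(l_prime, l_max, l_min)
        return (1.0 - p) * hc + p * 1.0   # equation (55)
    if hc < 1:
        q = q_coef(l_prime, l_max, l_min)
        return q * hc + (1.0 - q) * 1.0   # equation (56)
    return hc                              # Hc == 1: neutral, no correction
```

In the midtones both blend weights stay near neutral, so detail is preserved; only near the highlight or shadow ends is the gain driven toward 1.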

In step S405, the contrast correction module 407 synthesizes the high-frequency value Hcb obtained by the contrast correction and the second component correction in step S404, the low-frequency value L calculated in step S202 of fig. 9, and the color difference values Cb and Cr generated in step S201 of fig. 9 to restore RGB data. First, the contrast correction module 407 combines the corrected high-frequency value Hcb and the low-frequency value L by equation (59), thereby obtaining the contrast-corrected luminance I'.

I'=Hcb×L ...(59)

Note that when equation (7) is used to generate the high-frequency value Hc and the low-frequency value L, the second component is corrected in the following manner to prevent the second component corrected by the contrast correction module 407 from exceeding the input D range and causing highlight detail loss/shadow detail loss.

When Hc > 0

When Hc > 0, loss of highlight detail may occur on the highlight side. Therefore, correction is performed such that the absolute value of the second component becomes smaller as the value of the first component L becomes larger. Here, the second component is corrected using the correction coefficient W.

Hcb = W(L, Lmax, Lmin)·Hc ...(60)

When Hc < 0

When Hc < 0, shadow detail loss may occur on the low-luminance side. Therefore, correction is performed so that the absolute value of the second component becomes smaller as the value L becomes smaller. Here, the second component is corrected using the correction coefficient S.

Hcb = S(L, Lmax, Lmin)·Hc ...(61)

Here, the correction coefficients W and S are calculated by the following equations.

[Equations (62) and (63): Sigmoid-type definitions of the correction coefficients W(L, Lmax, Lmin) and S(L, Lmax, Lmin), not reproduced in this version of the text]

When Hc = 0

In this case, since adding the second component Hc causes neither highlight detail loss nor shadow detail loss, no operation is performed.

Here, α, β, t1, and t2 are predetermined constants. If the first component is in the halftone range, the second component is not suppressed; the second component is suppressed only when the value of the first component is on the high-luminance side or the low-luminance side.

In addition, Lmax and Lmin are the maximum and minimum values of the input D range, respectively. Note that the correction coefficients W and S need not be Sigmoid-type functions as described above. The function is not particularly limited as long as it makes the absolute value of the corrected second component Hcb smaller than the absolute value of the second component Hc before correction.

In addition, equations (62) and (63) can be evaluated by obtaining W(L) and S(L) from an LUT calculated in advance for each value L. When an LUT prepared in advance is used, the processing load of the operation can be reduced and the processing speed can be increased.
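The LUT approach mentioned above can be sketched as follows. The Sigmoid definition, table resolution, and nearest-entry lookup are illustrative assumptions, not the patent's specific choices.

```python
import math

def s_coef(l, l_max, l_min, beta=10.0, t2=0.2):
    # Assumed Sigmoid form; the text only requires that the corrected
    # |Hcb| be smaller than |Hc| toward the low-luminance side.
    return 1.0 / (1.0 + math.exp(-beta * ((l - l_min) / (l_max - l_min) - t2)))

def build_lut(coef, l_max, l_min, size=256):
    """Tabulate a correction coefficient over [l_min, l_max] so that
    equations such as (62) and (63) reduce to an indexed lookup."""
    step = (l_max - l_min) / (size - 1)
    return [coef(l_min + i * step, l_max, l_min) for i in range(size)]

def lookup(lut, l, l_max, l_min):
    # Nearest-entry lookup; linear interpolation would also be possible.
    size = len(lut)
    idx = round((l - l_min) / (l_max - l_min) * (size - 1))
    return lut[max(0, min(size - 1, idx))]

lut = build_lut(s_coef, 1.0, 0.0)
approx = lookup(lut, 0.37, 1.0, 0.0)
exact = s_coef(0.37, 1.0, 0.0)
```

Precomputing the table trades a transcendental evaluation per pixel for a single array access, which is the speed benefit the text describes.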

In this case, the luminance I' may be represented by the following formula.

I'=Hcb+L ...(64)

Then, the contrast correction module 407 performs plane synthesis on the luminance I 'and the color difference values (Cb, Cr) to generate color image values (I', Cb, Cr). An image subjected to contrast correction according to the present embodiment is thus obtained.
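The final synthesis can be sketched for a single scan line as follows; the pixel values are arbitrary illustrative numbers.

```python
# Minimal sketch of the final synthesis for one scan line: the corrected
# second component Hcb and the first component L are added per pixel
# (equation (64)), then each pixel is assembled as (I', Cb, Cr).
hcb_line = [0.05, -0.02, 0.0]
l_line = [0.5, 0.4, 0.6]
cb_line = [0.1, 0.1, 0.1]
cr_line = [-0.1, -0.1, -0.1]

i_prime = [h + l for h, l in zip(hcb_line, l_line)]   # equation (64)
pixels = list(zip(i_prime, cb_line, cr_line))          # plane synthesis
```

Converting each (I', Cb, Cr) triple back to RGB would then follow the inverse of the color conversion used in step S201.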

(other embodiments)

The embodiments of the present invention can also be realized by supplying software (programs) that implements the functions of the above-described embodiments to a system or apparatus via a network or various storage media, and causing a computer, central processing unit (CPU), or micro processing unit (MPU) of the system or apparatus to read out and execute the programs.

While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
