Image processing device, endoscope device, method for operating image processing device, and image processing program

Document No.: 1509358    Publication date: 2020-02-07

Reading note: This technique, "Image processing device, endoscope device, method for operating image processing device, and image processing program", was devised by 森田惠仁 on 2017-06-21. Its main content is as follows: An image processing apparatus including: an image acquisition unit that acquires a captured image including a subject image obtained by irradiating a subject with illumination light from a light source unit; and a visibility emphasizing unit 18 that performs color attenuation processing on regions other than yellow in the captured image, thereby relatively improving the visibility of the yellow region of the captured image.

1. An image processing apparatus characterized by comprising:

an image acquisition unit that acquires a captured image including a subject image obtained by irradiating a subject with illumination light from a light source unit; and

a visibility emphasizing unit that relatively improves the visibility of a yellow region of the captured image by performing color attenuation processing on regions other than yellow in the captured image.

2. The image processing apparatus according to claim 1, wherein the image processing apparatus includes a detection unit that detects a blood region, which is a region of blood in the captured image, based on color information of the captured image,

the visibility emphasizing unit suppresses or stops the attenuation processing for the blood region based on a detection result of the detecting unit.

3. The image processing apparatus according to claim 2,

the detection section includes:

a blood vessel region detection unit that detects a blood vessel region, which is a region of a blood vessel in the captured image, based on the color information and the structure information of the captured image,

the visibility emphasizing unit suppresses or stops the attenuation processing for the blood vessel region based on a detection result of the blood vessel region detecting unit.

4. The image processing apparatus according to claim 1, characterized in that the image processing apparatus comprises:

a blood vessel region detection unit that detects a blood vessel region, which is a region of a blood vessel in the captured image, based on color information and structure information of the captured image,

the visibility emphasizing unit emphasizes a structure of the blood vessel region of the captured image based on a detection result of the blood vessel region detecting unit, and performs the attenuation processing on the emphasized captured image.

5. The image processing apparatus according to claim 2,

the detection section includes:

a blood region detection unit that detects the blood region based on at least one of the color information and the luminance information of the captured image,

the visibility emphasizing unit suppresses or stops the attenuation processing for the blood region based on a detection result of the blood region detecting unit.

6. The image processing apparatus according to claim 5,

the blood region detection unit divides the captured image into a plurality of local regions, and determines whether or not each of the local regions is the blood region, based on at least one of the color information and the luminance information of the local region.

7. The image processing apparatus according to claim 1,

the visibility emphasizing unit performs the attenuation processing of colors for regions other than the yellow color of the captured image, based on the captured image.

8. The image processing apparatus according to claim 2,

the visibility emphasizing unit performs the attenuation processing by obtaining a color signal corresponding to blood for each pixel or region of the captured image, and multiplying the color signal of regions other than yellow by a coefficient whose value changes according to the signal value of the obtained color signal.

9. The image processing apparatus according to claim 1,

the visibility emphasizing unit performs color conversion processing that rotates the pixel values of pixels in the yellow region toward green in a color space.

10. The image processing apparatus according to claim 1,

the color of the yellow region is the color of carotene, bilirubin, or stercobilin.

11. The image processing apparatus according to claim 2, characterized in that the image processing apparatus comprises:

and a notification processing unit that performs notification processing based on a detection result of the blood region detected by the detection unit.

12. An endoscopic apparatus, characterized in that,

the endoscope apparatus includes the image processing apparatus according to claim 1.

13. The endoscopic device according to claim 12, wherein the endoscopic device comprises the light source unit, and

the light source unit emits, as the illumination light, light having the wavelength band of normal light.

14. The endoscopic device of claim 13,

the light source unit includes one or more light emitting diodes,

and the light source unit emits, as the illumination light, the normal light generated by light emission of the one or more light emitting diodes.

15. A method of operating an image processing apparatus, characterized by comprising:

acquiring a captured image including a subject image obtained by irradiating a subject with illumination light from a light source unit; and

by performing color attenuation processing on a region other than yellow of the captured image, visibility of the yellow region of the captured image is relatively improved.

16. An image processing program for causing a computer to execute the steps of:

acquiring a captured image including a subject image obtained by irradiating a subject with illumination light from a light source unit; and

by performing color attenuation processing on a region other than yellow of the captured image, visibility of the yellow region of the captured image is relatively improved.

Technical Field

The present invention relates to an image processing apparatus, an endoscope apparatus, a method of operating an image processing apparatus, an image processing program, and the like.

Background

Patent document 1 discloses the following method: reflected light in 1st to 3rd wavelength bands corresponding to the absorption characteristics of carotene and hemoglobin is captured separately to acquire 1st to 3rd reflected light images, and a composite image obtained by combining the 1st to 3rd reflected light images in different colors is displayed, thereby improving the visibility of a subject of a specific color (here, carotene) in a body cavity.

Further, patent document 2 discloses the following method: a plurality of spectral images are acquired, a separation target component amount is calculated using the plurality of spectral images, and emphasis processing is applied to an RGB color image based on the separation target component amount. In the emphasis processing, the luminance signal and the color difference signals are attenuated more strongly as the separation target component amount (the component amount of the subject whose visibility is to be improved) is smaller, thereby improving the visibility of the subject of the specific color.

Disclosure of Invention

Problems to be solved by the invention

As described above, methods are known in which a specific color in the body is emphasized, or in which colors are attenuated more strongly as the component amount of the specific color is smaller, thereby improving the visibility of a subject of the specific color. However, conventional methods such as those of patent documents 1 and 2 need to capture reflected light in the 1st to 3rd wavelength bands separately, or to acquire a plurality of spectral images, and therefore require a light source of complicated configuration and complicated imaging control.

According to some aspects of the present invention, it is possible to provide an image processing apparatus, an endoscope apparatus, an operation method of the image processing apparatus, an image processing program, and the like, which can relatively improve the visibility of a subject of a specific color by control with a simple configuration.

Means for solving the problems

An aspect of the present invention relates to an image processing apparatus including: an image acquisition unit that acquires a captured image including a subject image obtained by irradiating a subject with illumination light from a light source unit; and a visibility emphasizing unit that performs color attenuation processing on a region other than yellow of the captured image, thereby relatively improving visibility of the yellow region of the captured image.

This makes it possible to attenuate the color of a region other than yellow in the subject captured in the captured image, compared with the color of the yellow region. As a result, the yellow region is highlighted, and the visibility of the yellow region can be relatively improved compared to the regions other than yellow.

Another aspect of the present invention relates to an endoscope apparatus including the image processing apparatus described above.

Another aspect of the present invention relates to a method of operating an image processing apparatus, the method including: acquiring a captured image including a subject image obtained by irradiating a subject with illumination light from a light source unit; and performing color attenuation processing on regions other than yellow of the captured image, thereby relatively improving the visibility of the yellow region of the captured image.

Another aspect of the present invention relates to an image processing program for causing a computer to execute: acquiring a captured image including a subject image obtained by irradiating a subject with illumination light from a light source unit; and performing color attenuation processing on regions other than yellow of the captured image, thereby relatively improving the visibility of the yellow region of the captured image.

Drawings

Fig. 1 (A) and 1 (B) show an example of an in-vivo image captured by an endoscope (rigid endoscope) during a surgical procedure.

Fig. 2 is a configuration example of the endoscope apparatus of the present embodiment.

Fig. 3 (A) shows the absorption characteristics of hemoglobin and the absorption characteristics of carotene. Fig. 3 (B) shows transmittance characteristics of the color filter of the image sensor. Fig. 3 (C) shows an intensity spectrum of white light.

Fig. 4 shows a detailed configuration example of the image processing unit 16.

Fig. 5 is a diagram illustrating an operation of the blood region detection unit.

Fig. 6 is a diagram illustrating an operation of the visibility emphasizing portion.

Fig. 7 is a diagram illustrating an operation of the visibility emphasizing portion.

Fig. 8 is a diagram illustrating an operation of the visibility emphasizing portion.

Fig. 9 is a detailed configuration example of the 2nd image processing unit.

Fig. 10 shows a 1st modification of the endoscope apparatus according to the present embodiment.

Fig. 11 (A) shows the absorption characteristics of hemoglobin and the absorption characteristics of carotene. Fig. 11 (B) shows an intensity spectrum of light emitted from the light emitting diode.

Fig. 12 shows a 2nd modification of the endoscope apparatus according to the present embodiment.

Fig. 13 shows a detailed configuration example of the filter turret.

Fig. 14 (A) shows the absorption characteristics of hemoglobin and the absorption characteristics of carotene. Fig. 14 (B) shows transmittance characteristics of the filter bank of the filter turret.

Fig. 15 shows a 3rd modification of the endoscope apparatus according to the present embodiment.

Fig. 16 (A) shows the absorption characteristics of hemoglobin and the absorption characteristics of carotene. Fig. 16 (B) shows spectral transmittance characteristics of the dichroic prism 34.

Fig. 17 shows a detailed configuration example of the 3rd image processing unit.

Fig. 18 shows an example of the configuration of the operation support system.

Detailed Description

The present embodiment will be described below. The embodiments described below do not unduly limit the scope of the present invention set forth in the claims. Moreover, not all of the configurations described in the present embodiment are necessarily essential constituent elements of the present invention.

For example, the following description takes as an example a case where the present invention is applied to a rigid endoscope used in surgical operations and the like, but the present invention can also be applied to a flexible endoscope used for the digestive tract and the like.

1. Endoscope device and image processing unit

Fig. 1 (A) shows an example of an in-vivo image captured by an endoscope (rigid endoscope) during a surgical procedure. In such an in-vivo image, the nerve is transparent and is therefore difficult to see directly. Therefore, by observing fat located around the nerve (through which the nerve passes), the position of the nerve that cannot be directly seen is estimated. Fat in the body contains carotene, and the fat looks yellowish due to the absorption characteristics (spectral characteristics) of carotene.

Thus, in the present embodiment, as shown in fig. 6, the captured image is subjected to a process of reducing the color difference of a color other than yellow (specific color), so that the visibility of the yellow object is relatively improved (the yellow object is emphasized). This can improve the visibility of fat through which nerves are likely to pass.

As indicated by BR in fig. 1 (A), blood may be present in the subject due to bleeding during the operation (or internal bleeding may have occurred). Furthermore, blood vessels exist in the subject. The amount of absorbed light increases as the amount of blood on the subject increases, and the absorbed wavelengths depend on the absorption characteristics of hemoglobin. As shown in fig. 3 (A), the absorption characteristics of hemoglobin differ from those of carotene. Therefore, as shown by BR' in fig. 1 (B), when processing that attenuates colors other than yellow is performed, the color difference (saturation) is attenuated in regions of blood (bleeding blood, blood vessels). For example, a region where blood has pooled may become dark due to the absorption by blood, and when the saturation of such a region decreases, the region appears as a dark region with low saturation. Alternatively, in a blood vessel with low contrast, reducing its saturation may further reduce the contrast.

Thus, in the present embodiment, a region where blood exists is detected from the captured image, and the display mode of the display image (for example, the process of attenuating colors other than yellow) is controlled based on the detection result. Hereinafter, an image processing apparatus according to the present embodiment and an endoscope apparatus including the image processing apparatus will be described.

Fig. 2 is a configuration example of the endoscope apparatus of the present embodiment. The endoscope apparatus 1 (endoscope system, living body observation apparatus) of fig. 2 includes: an insertion section 2 (scope) inserted into a living body; a control device 5 (main body portion) having a light source portion 3 (light source device) connected to the insertion portion 2, a signal processing portion 4, and a control portion 17; an image display unit 6 (display, display device) that displays the image generated by the signal processing unit 4; and an external I/F section 13 (interface).

The insertion portion 2 has: an illumination optical system 7 that irradiates the subject with light input from the light source unit 3; and a photographing optical system 8 (image pickup device, image pickup section) for photographing the reflected light from the subject. The illumination optical system 7 is a light guide cable that is disposed over the entire length of the insertion portion 2 in the longitudinal direction and guides light incident from the light source portion 3 on the proximal end side to the distal end.

The photographing optical system 8 includes: an objective lens 9 that condenses reflected light from the subject among the light irradiated by the illumination optical system 7; and an image pickup device 10 that picks up an image of the light condensed by the objective lens 9. The image pickup device 10 is, for example, a single-plate color image pickup device, such as a CCD image sensor or a CMOS image sensor. As shown in fig. 3B, the image sensor 10 includes a color filter (not shown) having transmittance characteristics for each of RGB colors (red, green, and blue).

The light source unit 3 includes a xenon lamp 11 (light source) that emits white light (normal light) in a wide wavelength band. As shown in FIG. 3 (C), the xenon lamp 11 emits white light having an intensity spectrum in a wavelength band of 400 to 700nm, for example. The light source of the light source unit 3 is not limited to a xenon lamp, and may be any light source that can emit white light.

The signal processing unit 4 includes: an interpolation unit 15 that processes the image signal acquired by the imaging device 10; and an image processing unit 16 (image processing device) that processes the image signal processed by the interpolation unit 15. The interpolation unit 15 performs 3-channelization processing on the single-chip color image (a so-called Bayer array image) obtained from the pixels corresponding to the respective colors of the image pickup device 10 by known demosaicing processing (that is, it generates a color image having RGB pixel values at each pixel).

The control unit 17 synchronizes the timing of image capturing by the image sensor 10 and the timing of image processing by the image processing unit 16, based on an instruction signal from the external I/F unit 13.

Fig. 4 shows a detailed configuration example of the image processing unit 1. The image processing unit 16 includes a preprocessing unit 14, a visibility emphasizing unit 18 (yellow emphasizing unit), a detecting unit 19 (blood detecting unit), and a post-processing unit 20.

Here, a case will be described in which the object whose visibility is to be improved is fat containing carotene. As shown in fig. 3 (A), carotene contained in living tissue has high absorption in the 400 to 500 nm band. Hemoglobin (HbO2, Hb), a component of blood, has high absorption in the wavelength band of 450 nm or less and in the 500 to 600 nm band. Therefore, when white light is irradiated, carotene looks yellow and blood looks red. More specifically, when the white light shown in fig. 3 (C) is irradiated and an image is captured by the image pickup device having the spectral characteristics shown in fig. 3 (B), the yellow component of the pixel values of a subject containing carotene is large, and the red component of the pixel values of a subject containing blood is large.

In the image processing unit 16 of fig. 4, using the absorption characteristics of carotene and blood, the detection unit 19 detects blood from the captured image, and the visibility emphasizing unit 18 performs a process of enhancing the visibility of the color (in a broad sense, yellow) of carotene. The visibility emphasizing unit 18 controls a process for improving visibility using the detection result of blood. The details of each part of the image processing unit 16 will be described below.

The preprocessing unit 14 performs OB (Optical Black) clamp processing, gain correction processing, and WB (White Balance) correction processing on the 3-channel image signal input from the interpolation unit 15, using OB clamp values, gain correction values, and WB (White Balance) coefficient values held in advance in the control unit 17. Hereinafter, an image (RGB color image) processed and output by the preprocessing unit 14 is referred to as a captured image.

The detection section 19 includes: a blood image generating unit 23 that generates a blood image from the captured image from the preprocessing unit 14; and a blood area detection unit 22 (bleeding blood area detection unit) that detects a blood area (bleeding blood area in a narrow sense) from the blood image.

As described above, the preprocessed image signals include 3 kinds (3 channels) of image signals of blue, green, and red. The blood image generating unit 23 generates 1-channel image signals from 2 types (2 channels) of green and red image signals, and constructs a blood image using the image signals. In a blood image, a pixel having a larger amount of hemoglobin included in a subject has a higher pixel value (signal value). For example, a difference between a red pixel value and a green pixel value is obtained for each pixel to generate a blood image. Alternatively, a value obtained by dividing a red pixel value by a green pixel value is obtained for each pixel to generate a blood image.

In addition, although the above description has been made of the example of generating the blood image from the 2-channel signal, the present invention is not limited to this, and for example, the blood image may be generated by calculating the luminance (Y) and the color difference (Cr, Cb) from the 3-channel signal of RGB. In this case, a blood image is generated by using, as an area where blood exists, an area where the saturation of red is sufficiently high or an area where the luminance signal is low to some extent, based on the color difference signal. For example, an index value corresponding to the saturation of red is obtained from the color difference signal for each pixel, and a blood image is generated using the index value. Alternatively, an index value whose value increases as the luminance signal decreases is obtained for each pixel from the luminance signal, and a blood image is generated by the index value.
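As an illustrative sketch (not code from the patent), the difference and ratio variants of blood-image generation described above could look as follows; the function name `blood_image`, the floating-point value range, and the clipping are assumptions made for the example:

```python
import numpy as np

def blood_image(rgb):
    """Build a 1-channel blood image from an RGB captured image.

    Pixels containing more hemoglobin absorb green strongly relative to
    red, so R - G (or R / G) grows with the amount of blood.
    rgb: float array of shape (H, W, 3), values assumed in [0, 1].
    Returns both variants described in the text.
    """
    r, g = rgb[..., 0], rgb[..., 1]
    # Difference variant: clip so the blood signal stays non-negative.
    diff = np.clip(r - g, 0.0, 1.0)
    # Ratio variant: guard against division by zero in dark pixels.
    ratio = r / np.maximum(g, 1e-6)
    return diff, ratio
```

For a reddish pixel such as (R, G, B) = (0.8, 0.2, 0.2), the difference variant yields 0.6 and the ratio variant 4.0, while a gray pixel yields 0 and 1 respectively, matching the intent that blood-rich pixels get larger signal values.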

The blood region detection unit 22 sets a plurality of local regions (divided regions, blocks) in the blood image. For example, a blood image is divided into a plurality of rectangular regions, and each of the divided rectangular regions is set as a local region. The size of the rectangular region can be set appropriately, and for example, 1 local region is set to 16 × 16 pixels. For example, as shown in fig. 5, a blood image is divided into M × N local regions, and the coordinates of each local region are expressed by (M, N). M is an integer of 1 to M inclusive, and N is an integer of 1 to N inclusive. The local area of coordinates (m, n) is denoted as a (m, n). In fig. 5, the coordinates of the local area located at the upper left of the image are represented as (1, 1), the rightward direction is represented as the positive direction of m, and the downward direction is represented as the positive direction of n.

The local region does not necessarily have to be a rectangle, and the blood image may be divided into arbitrary polygons, and each divided region may be set as a local region. In addition, the local area may be set arbitrarily according to an instruction from the operator. In the present embodiment, in order to reduce the amount of subsequent calculation and remove noise, a region composed of a plurality of adjacent pixel groups is set as 1 local region, but 1 pixel may be set as 1 local region. In this case, the same applies to the subsequent processing.

The blood region detection unit 22 sets a blood region in which blood is present on the blood image. That is, a region having a large amount of hemoglobin is set as a blood region. For example, threshold processing is performed on all local regions, a local region having a sufficiently large value of a blood image signal is extracted, and each region obtained by performing integration processing on adjacent local regions is set as a blood region. In the threshold processing, for example, a value obtained by averaging pixel values in a local region is compared with a predetermined threshold value, and a local region having a value obtained by averaging larger than the predetermined threshold value is extracted. The blood region detecting unit 22 calculates the positions of all pixels included in the blood region based on the coordinates a (m, n) of the local regions included in the blood region and the information of the pixels included in each local region, and outputs the calculated information to the visibility emphasizing unit 18 as blood region information indicating the blood region.
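The division into 16 × 16 local regions, thresholding of the averaged value, and merging of adjacent flagged blocks into a pixel mask could be sketched as below; the threshold value and the assumption that the image dimensions are multiples of the block size are illustrative, not from the patent:

```python
import numpy as np

def detect_blood_regions(blood_img, block=16, thresh=0.3):
    """Flag local regions whose mean blood-image value exceeds a threshold.

    blood_img: float array (H, W); H and W assumed multiples of `block`.
    Returns a per-pixel boolean mask covering every flagged 16x16 region,
    which stands in for the blood region information passed downstream.
    """
    h, w = blood_img.shape
    mask = np.zeros((h, w), dtype=bool)
    for m0 in range(0, h, block):
        for n0 in range(0, w, block):
            region = blood_img[m0:m0 + block, n0:n0 + block]
            # Average the pixel values in the local region A(m, n)
            # and compare with the predetermined threshold.
            if region.mean() > thresh:
                mask[m0:m0 + block, n0:n0 + block] = True
    return mask
```

Adjacent flagged blocks are implicitly merged by the shared mask; a real implementation might instead label connected components of flagged blocks.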

The visibility emphasizing unit 18 performs a process of reducing the saturation of the region other than yellow in the color difference space with respect to the captured image from the preprocessing unit 14. Specifically, image signals of RGB of pixels of a captured image are converted into YCbCr signals of luminance color differences. The conversion formula is the following numerical formulas (1) to (3).

Y=0.2126×R+0.7152×G+0.0722×B……(1)

Cb=-0.114572×R-0.385428×G+0.5×B……(2)

Cr=0.5×R-0.454153×G-0.045847×B……(3)

Next, as shown in fig. 6, the visibility emphasizing unit 18 attenuates the color difference in regions other than yellow in the color difference space. For example, the yellow range in the color difference space is defined as a range of angles with respect to the Cb axis, and the color difference signal is not attenuated for pixels whose color difference signals fall within that angular range.
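A minimal sketch of this angle-based selection follows. The hue-angle window (150° to 200°) is an assumption chosen to bracket the hue of pure yellow (Cb = −0.5, Cr ≈ +0.046, about 175° measured via atan2(Cr, Cb)); the patent does not specify the bounds:

```python
import numpy as np

# Hypothetical yellow window in degrees of atan2(Cr, Cb);
# pure yellow (Cb=-0.5, Cr=+0.046) sits near 175 degrees.
YELLOW_LO, YELLOW_HI = 150.0, 200.0  # assumed bounds, not from the patent

def attenuate_non_yellow(cb, cr, k=0.5):
    """Scale Cb/Cr by k < 1 for every pixel whose hue falls
    outside the yellow window; yellow pixels pass unchanged."""
    hue = np.degrees(np.arctan2(cr, cb)) % 360.0
    yellow = (hue >= YELLOW_LO) & (hue <= YELLOW_HI)
    scale = np.where(yellow, 1.0, k)
    return cb * scale, cr * scale
```

With k = 0.5, a pure-red chroma (Cb = −0.1146, Cr = 0.5, hue ≈ 103°) is halved while the yellow chroma is untouched, which is exactly the relative emphasis the text describes.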

Specifically, as shown in the following expressions (4) to (6), the visibility emphasizing unit 18 controls the attenuation amount in the blood region detected by the blood region detecting unit 22 based on the signal value of the blood image. In regions other than the blood region (and other than the yellow region), for example, the coefficients α, β, and γ are fixed to values smaller than 1; alternatively, the attenuation amount may also be controlled by the following expressions (4) to (6) in regions other than the blood region.

Y'=α(SHb)×Y……(4)

Cb'=β(SHb)×Cb……(5)

Cr'=γ(SHb)×Cr……(6)

SHb is the signal value (pixel value) of the blood image. As shown in fig. 7, α(SHb), β(SHb), and γ(SHb) are coefficients that change according to the signal value SHb of the blood image and take values from 0 to 1 inclusive. For example, as shown by KA1 in fig. 7, the coefficients may be proportional to the signal value SHb. Alternatively, as shown by KA2, the coefficients may be 0 when the signal value SHb is SA or less, proportional to SHb when SHb is greater than SA and less than SB, and 1 when SHb is SB or greater, where 0 < SA < SB < Smax and Smax is the maximum value that the signal value SHb can take. Fig. 7 shows a case where the coefficients change linearly with respect to the signal value SHb, but the coefficients may change along a curve (for example, a curve that is convex upward or downward). In addition, α(SHb), β(SHb), and γ(SHb) may be the same coefficient, or may be different coefficients that each change with respect to the signal value.
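The KA2-style piecewise-linear coefficient can be written compactly as a clipped ramp; the breakpoint values SA = 0.2 and SB = 0.8 are illustrative placeholders, since the patent only requires 0 < SA < SB < Smax:

```python
import numpy as np

def coeff_ka2(shb, sa=0.2, sb=0.8):
    """Piecewise-linear coefficient like curve KA2 in fig. 7:
    0 for SHb <= SA, a linear ramp on (SA, SB), 1 for SHb >= SB.
    sa/sb are assumed breakpoints for illustration."""
    shb = np.asarray(shb, dtype=float)
    return np.clip((shb - sa) / (sb - sa), 0.0, 1.0)
```

Applied in expressions (4) to (6), this makes the multiplicative attenuation vanish (coefficient near 1) where the blood signal is strong and take full effect (coefficient 0) where no blood is detected.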

According to the above expressions (4) to (6), the attenuation amount is reduced because the coefficient is close to 1 in the region where blood exists. That is, in the blood image, the larger the signal value, the more difficult the color (color difference) is to be attenuated. Alternatively, in the blood region detected by the blood region detecting unit 22, the amount of attenuation is smaller than that outside the blood region, and therefore the color (color difference) is less likely to be attenuated.

Further, as shown in fig. 8, the yellow region may be rotated in the green direction in the color difference space. This can emphasize the contrast between the yellow region and the blood region. As described above, yellow is defined by the range of angles with the Cb axis as a reference. Then, the color difference signal belonging to the yellow angular range is rotated counterclockwise by a predetermined angle in the color difference space, and is rotated in the green direction.
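The counterclockwise rotation of the yellow range toward green could be sketched with a standard 2-D rotation in the CbCr plane; the rotation angle (15°) and hue window are assumptions for illustration. Rotating counterclockwise moves the yellow hue (about 175°) toward the green hue (about 230°), as the text describes:

```python
import numpy as np

def rotate_yellow_toward_green(cb, cr, theta_deg=15.0,
                               lo=150.0, hi=200.0):
    """Rotate the chroma of yellow-range pixels counterclockwise
    in the CbCr plane by theta_deg; other pixels are unchanged.
    theta_deg and the hue window (lo, hi) are illustrative values."""
    hue = np.degrees(np.arctan2(cr, cb)) % 360.0
    yellow = (hue >= lo) & (hue <= hi)
    t = np.radians(theta_deg)
    # Standard 2-D rotation (counterclockwise for positive t).
    cb_r = np.cos(t) * cb - np.sin(t) * cr
    cr_r = np.sin(t) * cb + np.cos(t) * cr
    return np.where(yellow, cb_r, cb), np.where(yellow, cr_r, cr)
```

A yellow pixel's hue angle increases after the call (moving toward green), while a red pixel, lying outside the window, keeps its chroma exactly.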

The visibility emphasizing unit 18 converts the YCbCr signal after the attenuation process into an RGB signal by the following expressions (7) to (9). The visibility emphasizing section 18 outputs the converted RGB signals (color image) to the post-processing section 20.

R=Y'+1.5748×Cr'……(7)

G=Y'-0.187324×Cb'-0.468124×Cr'……(8)

B=Y'+1.8556×Cb'……(9)
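Expressions (1) to (3) and (7) to (9) use the BT.709 luma/chroma coefficients, so the inverse transform recovers the original RGB values (to rounding error) when the attenuation coefficients are all 1. A direct transcription:

```python
def rgb_to_ycbcr(r, g, b):
    """Forward transform, expressions (1)-(3) (BT.709 coefficients)."""
    y  =  0.2126   * r + 0.7152   * g + 0.0722   * b
    cb = -0.114572 * r - 0.385428 * g + 0.5      * b
    cr =  0.5      * r - 0.454153 * g - 0.045847 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Inverse transform, expressions (7)-(9)."""
    r = y + 1.5748 * cr
    g = y - 0.187324 * cb - 0.468124 * cr
    b = y + 1.8556 * cb
    return r, g, b
```

In the actual pipeline, the attenuated Y', Cb', Cr' from expressions (4) to (6) would be fed to `ycbcr_to_rgb` in place of the unmodified signals.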

In addition, although the above description has taken as an example the case where both the color difference signals and the luminance signal of regions other than yellow are attenuated, only the color difference signals of regions other than yellow may be attenuated. In this case, expression (4) above is not executed, and Y' = Y in expressions (7) to (9) above.

In addition, the above description has taken as an example the case where the processing of attenuating colors other than yellow is suppressed in the blood region, but the method of controlling this processing is not limited to this. For example, when the blood region exceeds a certain proportion of the image (i.e., the number of pixels in the blood region divided by the total number of pixels exceeds a threshold value), the processing of attenuating colors other than yellow can be suppressed over the entire image.
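This whole-image variant reduces to a single global gain decision; the ratio threshold and the suppressed-state coefficient below are assumed values for illustration:

```python
import numpy as np

def attenuation_gain(blood_mask, ratio_thresh=0.2, k=0.5):
    """Return a global attenuation coefficient for non-yellow regions.

    If the blood region covers more than ratio_thresh of the image
    (blood pixels / total pixels), suppress attenuation everywhere
    by returning 1.0; otherwise return k < 1.
    ratio_thresh and k are illustrative, not from the patent."""
    ratio = float(np.mean(blood_mask))
    return 1.0 if ratio > ratio_thresh else k
```

The returned value would replace the fixed coefficients α, β, γ for pixels outside the yellow range.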

The post-processing unit 20 performs post-processing such as gradation conversion processing, color processing, and contour enhancement processing on the image (image in which colors other than yellow are attenuated) from the visibility enhancing unit 18 using the gradation conversion coefficient, the color conversion coefficient, and the contour enhancement coefficient stored in the control unit 17, and generates a color image to be displayed on the image display unit 6.

According to the above embodiment, the image processing apparatus (image processing unit 16) includes the image acquisition unit (for example, the preprocessing unit 14) and the visibility emphasizing unit 18. The image acquisition unit acquires a captured image including an object image obtained by irradiating the object with illumination light from the light source unit 3. As described with reference to fig. 6 and the like, the visibility emphasizing unit 18 performs color attenuation processing on the region other than yellow of the captured image to relatively improve the visibility of the yellow region of the captured image (performs yellow emphasis).

In this way, the saturation of the tissue having a color other than yellow in the subject captured in the captured image can be attenuated as compared with the saturation of the tissue having yellow (for example, fat containing carotene). As a result, the tissue having yellow is highlighted, and the visibility of the tissue having yellow can be relatively improved as compared with the tissue having a color other than yellow. Further, since the attenuation processing is performed using the captured image (for example, RGB color image) acquired by the image acquiring unit, the configuration and processing are simplified as compared with the case where a plurality of spectral images are prepared or the attenuation processing is performed using the plurality of spectral images.

Here, yellow is a color belonging to a predetermined region corresponding to yellow in a color space. For example, it is a color whose angle, measured with reference to the Cb axis about the origin in the CbCr plane of the YCbCr space, falls within a predetermined angular range. Alternatively, it is a color belonging to a predetermined angular range in the hue (H) plane of the HSV space. Further, yellow is a color between red and green in the color space; for example, in the CbCr plane, it lies counterclockwise of red and clockwise of green. The color yellow is not limited to the above definitions, and can be defined by the spectral characteristics of a yellow substance (e.g., carotene, bilirubin, stercobilin, etc.) and the region it occupies in the color space. A color other than yellow is, for example, a color that does not belong to the predetermined region corresponding to yellow in the color space (i.e., a color belonging to a region other than the predetermined region).

The color attenuation processing is processing for reducing the saturation of a color. For example, as shown in fig. 6, it is processing that attenuates the color difference signals (Cb signal, Cr signal) in the YCbCr space. Alternatively, the saturation signal (S signal) may be attenuated in the HSV space. The color space used for the attenuation processing is not limited to the YCbCr space and the HSV space.
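As an illustrative sketch, the chroma attenuation described above can be expressed as follows in Python with NumPy. The BT.601-style conversion matrix, the yellow sector (centered near 170° from the Cb axis), and the fixed attenuation factor are assumptions chosen for illustration, not values taken from the patent.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert float RGB (0..1) to YCbCr with a BT.601-style full-range matrix."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564          # Cb, centered at 0
    cr = (r - y) * 0.713          # Cr, centered at 0
    return np.stack([y, cb, cr], axis=-1)

def ycbcr_to_rgb(ycbcr):
    """Inverse of rgb_to_ycbcr, clipped to the valid range."""
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
    r = y + 1.403 * cr
    b = y + 1.773 * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)

def attenuate_non_yellow(rgb, yellow_center_deg=170.0,
                         yellow_half_width_deg=40.0, attenuation=0.4):
    """Scale Cb/Cr toward zero for pixels whose chroma angle lies outside an
    assumed yellow sector, lowering the saturation of non-yellow regions."""
    ycbcr = rgb_to_ycbcr(rgb)
    # Angle of the chroma vector measured from the Cb axis in the CbCr plane.
    angle = np.degrees(np.arctan2(ycbcr[..., 2], ycbcr[..., 1]))
    diff = np.abs(((angle - yellow_center_deg) + 180.0) % 360.0 - 180.0)
    scale = np.where(diff <= yellow_half_width_deg, 1.0, attenuation)
    ycbcr[..., 1] *= scale
    ycbcr[..., 2] *= scale
    return ycbcr_to_rgb(ycbcr)
```

In practice the sector would be tuned to the spectral characteristics of the target substance (carotene, bilirubin, etc.) rather than fixed at these placeholder values.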

In the present embodiment, the image processing apparatus (image processing unit 16) includes a detection unit 19, and the detection unit 19 detects a blood region, which is a region of blood in the captured image, based on the color information of the captured image. Then, the visibility emphasizing unit 18 suppresses or stops the attenuation processing for the blood region based on the detection result of the detecting unit 19.

As illustrated in fig. 3 (a), the absorption characteristics of hemoglobin, which is a component of blood, differ from those of yellow substances such as carotene. Therefore, as illustrated in fig. 1 (B), when the color attenuation processing for regions other than yellow is applied to the blood region, the saturation of the blood region may be reduced. In this regard, in the present embodiment, since the color attenuation processing for regions other than yellow is suppressed or stopped in the blood region, the decrease in color saturation of the blood region can be suppressed or prevented.

Here, the blood region refers to a region where blood is estimated to be present in the captured image. Specifically, it is a region having the spectral characteristics (color) of hemoglobin (HbO2, Hb). For example, as illustrated in fig. 5, the blood region is determined for each local region. This corresponds to detecting a blood region having a certain size (at least that of a local region). The blood region may be (or include) a blood vessel region as described later with reference to fig. 9, for example. That is, the blood region to be detected may be a region located at any position of the subject within the range detectable from the image, and may have any shape or area. For example, a blood vessel (blood inside a vessel), a region where many blood vessels (e.g., capillaries) exist, blood that has pooled on the surface of the subject (tissue, treatment instrument, etc.) due to extravascular bleeding, and blood that has pooled inside tissue due to extravascular bleeding (internal bleeding) can all be assumed.

The color information of the captured image is information indicating the color of a pixel or a region (for example, a local region shown in fig. 5) of the captured image. The color information may also be acquired from an image based on the captured image, for example one obtained by applying filtering processing to it. The color information is, for example, a signal obtained by an inter-channel operation (for example, subtraction or division) on a pixel value or on the signal value of a region (for example, the average of the pixel values in the region). Alternatively, a component (channel signal) of the pixel value or region signal value itself may be used, or the pixel value or region signal value may first be converted into a predetermined color space. For example, the Cb and Cr signals of the YCbCr space or the hue (H) and saturation (S) signals of the HSV space may be used.

In the present embodiment, the detection unit 19 includes a blood region detection unit 22, and the blood region detection unit 22 detects the blood region based on at least one of color information and luminance information of the captured image. The visibility emphasizing unit 18 suppresses or stops the attenuation processing for the blood region based on the detection result of the blood region detection unit 22. Here, suppressing the attenuation processing means that the attenuation amount is reduced but remains greater than zero (for example, the coefficients β and γ of expressions (5) and (6) are less than 1), and stopping the attenuation processing means that the attenuation processing is not performed or the attenuation amount is zero (for example, the coefficients β and γ of expressions (5) and (6) are 1).

Blood that stagnates on the surface of the subject becomes dark due to its light absorption (for example, the deeper the stagnant blood is, the darker the blood is imaged). Therefore, by using the luminance information of the captured image, the blood remaining on the surface of the subject can be detected, and the decrease in saturation of the remaining blood can be suppressed or prevented.

Here, the luminance information of the captured image is information indicating the luminance of a pixel or a region (for example, a local region shown in fig. 5) of the captured image. The luminance information may also be acquired from an image based on the captured image, for example one obtained by applying filtering processing to it. The luminance information may be, for example, a component (channel signal, for example the G signal of an RGB image) of the pixel value or region signal value itself. Alternatively, the pixel value or region signal value may first be converted into a predetermined color space; for example, the luminance (Y) signal of the YCbCr space or the value (V) signal of the HSV space may be used.

In the present embodiment, the blood region detection unit 22 divides the captured image into a plurality of local regions (for example, local regions in fig. 5), and determines whether or not each of the plurality of local regions is a blood region based on at least one of color information and luminance information of the local region.

This makes it possible to determine whether or not the local region in the captured image is a blood region. For example, a region obtained by combining adjacent local regions among the local regions determined to be blood regions can be set as a final blood region. Further, by determining whether or not the blood region is present in the local region, the influence of noise can be reduced, and the accuracy of determining whether or not the blood region is present can be improved.
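A minimal sketch of this per-local-region determination might look like the following. The block size, the R/G ratio test (hemoglobin absorbs green strongly, so blood appears with high R relative to G), and both thresholds are illustrative assumptions rather than the patent's actual criteria.

```python
import numpy as np

def detect_blood_regions(rgb, block=16, ratio_thresh=2.0, lum_thresh=0.05):
    """Classify each block x block local region of a float RGB image (0..1)
    as a blood region when its mean R/G ratio is high and the region is
    bright enough for the color to be reliable."""
    h, w, _ = rgb.shape
    gh, gw = h // block, w // block
    mask = np.zeros((gh, gw), dtype=bool)
    for i in range(gh):
        for j in range(gw):
            patch = rgb[i * block:(i + 1) * block, j * block:(j + 1) * block]
            r = patch[..., 0].mean()
            g = patch[..., 1].mean() + 1e-6   # avoid division by zero
            lum = patch.mean()                # crude luminance estimate
            mask[i, j] = (r / g > ratio_thresh) and (lum > lum_thresh)
    return mask
```

Adjacent True cells of the returned grid could then be merged into the final blood region, as the text describes.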

In the present embodiment, the visibility emphasizing unit 18 performs the color attenuation processing for regions other than yellow based on the captured image itself. Specifically, an attenuation amount is determined (an attenuation coefficient is calculated) from the color information of a pixel or region of the captured image, and the color attenuation processing for regions other than yellow is performed in accordance with that attenuation amount.

Thus, since the attenuation processing (the attenuation amount) is controlled based on the captured image, the configuration and processing can be simplified compared with a case where, for example, a plurality of spectral images are captured and the attenuation processing is controlled based on those spectral images.

In the present embodiment, the visibility emphasizing unit 18 obtains a color signal corresponding to blood for a pixel or region of the captured image, and performs the attenuation processing by multiplying the color signal of regions other than yellow by a coefficient whose value changes according to the signal value of the blood color signal. Specifically, when the color signal corresponding to blood has a large signal value in regions where blood is present, the color signal of regions other than yellow is multiplied by a coefficient that grows larger (closer to 1) as that signal value increases.

For example, in the above expressions (5) and (6), the color signal corresponding to blood is the signal value SHb obtained as the difference or quotient of the R signal and the G signal, the coefficients are β(SHb) and γ(SHb), and the color signals multiplied by the coefficients are the color difference signals (Cb signal, Cr signal).

In this way, the higher the possibility that blood is present (for example, the larger the signal value of the color signal corresponding to blood), the larger the value of the coefficient can be set. By multiplying the coefficient by the color signal of the region other than yellow, the amount of color attenuation can be suppressed as the probability of blood being present is higher.
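The coefficient-based suppression can be sketched as follows. Expressions (5) and (6) themselves are not reproduced in this section, so the linear ramp from a base attenuation toward 1 and the use of R − G as the blood-correlated signal SHb are assumptions about their general shape, not the patent's exact formulas.

```python
import numpy as np

def suppress_attenuation(cb, cr, r, g, base_attenuation=0.4, hb_scale=5.0):
    """Attenuate the chroma signals Cb/Cr, weakening the attenuation where a
    blood-correlated signal SHb (here R minus G, scaled and clipped to 0..1)
    is large. beta/gamma rise toward 1 (no attenuation) as SHb grows."""
    shb = np.clip((r - g) * hb_scale, 0.0, 1.0)              # blood likelihood
    beta = base_attenuation + (1.0 - base_attenuation) * shb  # coefficient for Cb
    gamma = beta                                              # same ramp assumed for Cr
    return cb * beta, cr * gamma
```

With this shape, a strongly blood-colored pixel (SHb near 1) keeps its chroma intact, while a pixel with no blood signal is attenuated by the full base factor.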

In the present embodiment, the visibility emphasizing unit 18 performs a color conversion process of performing a rotation conversion of the pixel values of the pixels in the yellow region in the green side direction in the color space.

For example, the color conversion processing is processing of performing counterclockwise rotation conversion in a CbCr plane of a YCbCr space. Or a process of performing counterclockwise rotation conversion in the hue (H) plane of the HSV space. For example, the rotation conversion is performed at an angle smaller than the angular difference between yellow and green in the CbCr plane or the hue plane.

In this way, the yellow region of the captured image is converted to near green. Since the color of blood is red and the complementary color thereof is green, the yellow region is made close to green, thereby improving the contrast between the colors of the blood region and the yellow region and further improving the visibility of the yellow region.
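The rotation conversion in the CbCr plane reduces to a standard 2-D rotation of the chroma vector, sketched below; the choice of angle (smaller than the yellow-to-green angular difference, as the text specifies) is left to the caller.

```python
import numpy as np

def rotate_chroma(cb, cr, angle_deg):
    """Rotate chroma vectors counterclockwise in the CbCr plane by angle_deg.
    Applied to yellow pixels with a small positive angle, this shifts their
    hue toward green, the complement of blood red."""
    theta = np.radians(angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    return cb * c - cr * s, cb * s + cr * c
```

Because the rotation is orthogonal, saturation (the chroma magnitude) is preserved; only the hue changes.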

In the present embodiment, the color of the yellow region is the color of carotene, bilirubin, or fecal bile pigment.

Carotenes are substances contained in, for example, fats, cancers, and the like. Bilirubin is a substance contained in bile and the like. Fecal bile pigments are substances contained in feces, urine, and the like.

Thus, a region estimated to have carotene, bilirubin, or fecal bile pigment present is detected as a yellow region, and colors other than the region can be attenuated. This makes it possible to relatively improve the visibility of the region in which fat, cancer, bile, stool, urine, and the like are present in the captured image.

The image processing apparatus according to the present embodiment may be configured as follows. That is, the image processing apparatus includes: a memory that stores information (e.g., programs, various data); and a processor (including a hardware processor) that operates according to information stored in the memory. The processor performs an image acquisition process of acquiring a captured image including an object image obtained by irradiating the object with illumination light from the light source unit 3, and a visibility enhancement process of relatively improving the visibility of a yellow region of the captured image by performing a color attenuation process on a region other than yellow of the captured image.

For the processor, for example, the functions of the respective components may be realized by separate hardware, or may be realized by integrated hardware. For example, the processor includes hardware that may include at least one of circuitry to process digital signals and circuitry to process analog signals. For example, the processor may be constituted by 1 or more circuit devices (for example, an IC or the like) and 1 or more circuit elements (for example, a resistor, a capacitor or the like) mounted on the circuit substrate. The processor may be, for example, a CPU (Central Processing Unit). However, the Processor is not limited to the CPU, and various processors such as a GPU (Graphics Processing Unit) and a DSP (digital signal Processor) may be used. Further, the processor may be a hardware circuit constituted by an ASIC. In addition, the processor may include an amplifier circuit, a filter circuit, which processes the analog signal. The memory may be a semiconductor memory such as an SRAM or a DRAM, a register, a magnetic storage device such as a hard disk device, or an optical storage device such as an optical disk device. For example, the memory stores a computer-readable command, and the processor executes the command to realize the functions of each unit of the image processing apparatus. The command may be a command constituting a command set of a program or a command instructing an operation to a hardware circuit of the processor.

For example, the operation of the present embodiment is implemented as follows. The image captured by the image pickup device 10 is processed by the preprocessing section 14 and stored in the memory as a captured image. The processor reads the captured image from the memory, performs attenuation processing on the captured image, and stores the image after the attenuation processing in the memory.

Further, each part of the image processing apparatus of the present embodiment may be implemented as a module of a program that runs on the processor. For example, the image acquisition unit is implemented as an image acquisition module that acquires a captured image including a subject image obtained by irradiating the subject with illumination light from the light source unit 3. The visibility emphasizing unit 18 is implemented as a visibility emphasizing module that performs color attenuation processing on regions other than yellow of the captured image to relatively improve the visibility of the yellow region of the captured image.

2. Detailed configuration example 2 of image processing unit

Fig. 9 shows the 2nd detailed configuration example of the image processing unit. In fig. 9, the detection unit 19 includes a blood image generation unit 23 and a blood vessel region detection unit 21. The configuration of the endoscope apparatus is the same as that of fig. 2. In the following, already-described components are given the same reference numerals, and their description is omitted as appropriate.

The blood vessel region detection unit 21 detects a blood vessel region from the structure information of blood vessels and the blood image. The manner in which the blood image generation unit 23 generates the blood image is the same as in the 1st detailed configuration example. The structure information of blood vessels is detected from the captured image supplied from the preprocessing unit 14. Specifically, directional smoothing processing (for noise suppression) and high-pass filtering processing are applied to the B channel of the pixel values (image signal), the channel in which the absorbance of hemoglobin is high. In the directional smoothing processing, the edge direction of the captured image is first determined; the edge direction is determined to be, for example, one of the horizontal, vertical, and oblique directions. Next, smoothing is performed along the detected edge direction; this is, for example, processing that averages the pixel values of pixels arranged in the edge direction. The blood vessel region detection unit 21 then extracts the structure information of blood vessels by applying high-pass filtering processing to the smoothed image. A region in which both the extracted structure information and the pixel value of the blood image are high is set as the blood vessel region. For example, a pixel in which the signal value of the structure information is greater than a 1st predetermined threshold and the pixel value of the blood image is greater than a 2nd predetermined threshold is determined to be a pixel of the blood vessel region. The blood vessel region detection unit 21 outputs information on the detected blood vessel region (the coordinates of the pixels belonging to it) to the visibility emphasizing unit 18.
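A heavily simplified sketch of this pipeline is shown below. For brevity, the directional smoothing is replaced by an isotropic 3x3 mean, and both thresholds are illustrative stand-ins for the 1st and 2nd predetermined thresholds.

```python
import numpy as np

def detect_vessel_pixels(b_channel, blood_image,
                         struct_thresh=0.05, blood_thresh=0.5):
    """Smooth the B channel, take the residual as a high-pass structure
    signal, then keep pixels where both the structure signal and the blood
    image are above their thresholds."""
    h, w = b_channel.shape
    pad = np.pad(b_channel, 1, mode="edge")
    # 3x3 mean via nine shifted slices (isotropic stand-in for the
    # directional smoothing described in the text).
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    structure = np.abs(b_channel - smooth)   # high-pass residual
    return (structure > struct_thresh) & (blood_image > blood_thresh)
```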

The visibility emphasizing unit 18 controls the attenuation amount in the blood vessel region detected by the blood vessel region detecting unit 21 based on the signal value of the blood image. The method of controlling the attenuation amount is the same as in the detailed configuration example 1.

According to the above embodiment, the detection unit 19 includes the blood vessel region detection unit 21, and the blood vessel region detection unit 21 detects a blood vessel region, which is a region of a blood vessel in the captured image, based on the color information and the structure information of the captured image. Then, the visibility emphasizing unit 18 suppresses or stops the attenuation processing for the blood vessel region based on the detection result of the blood vessel region detecting unit 21.

Since blood vessels are located inside tissue, their contrast may be low depending on their thickness, depth within the tissue, position, and the like. When the color attenuation processing for regions other than yellow is performed, the contrast of such low-contrast blood vessels may be reduced further. In this regard, according to the present embodiment, since the attenuation processing for the blood vessel region can be suppressed or stopped, a decrease in the contrast of the blood vessel region can be suppressed or prevented.

Here, the structure information of the captured image is information for extracting the structure of blood vessels. For example, the structure information is the edge amount of the image, extracted by applying high-pass filtering processing or band-pass filtering processing to the image. The blood vessel region is a region where a blood vessel is estimated to be present in the captured image. Specifically, it is a region that has the spectral characteristics (color) of hemoglobin (HbO2, Hb) and in which structure information (e.g., an edge amount) is present. As described above, the blood vessel region is one kind of blood region.

In the present embodiment, the visibility emphasizing unit 18 may emphasize the structure of the blood vessel region of the captured image based on the detection result of the blood vessel region detecting unit 21, and may perform the attenuation process on the emphasized captured image.

For example, the structure enhancement and attenuation processing of the blood vessel region may be performed without suppressing or stopping the attenuation processing for the blood region (blood vessel region). Alternatively, the attenuation process for the blood region (blood vessel region) may be suppressed or stopped, and the structure enhancement and attenuation process for the blood vessel region may be performed.

Here, for example, the processing for emphasizing the structure of the blood vessel region may be realized by processing such as adding the edge amount (edge image) extracted from the image to the captured image. In addition, the structural emphasis is not limited thereto.

Thus, the contrast of the blood vessel can be enhanced by the structural emphasis, and the color attenuation processing for the region other than yellow can be executed for the blood vessel region with the enhanced contrast. This can suppress or prevent a decrease in contrast in the blood vessel region.
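The "add the edge amount back to the image" form of structure emphasis mentioned above can be sketched as follows; the 3x3 mean used to obtain the high-pass component and the gain value are illustrative assumptions.

```python
import numpy as np

def emphasize_structure(image, vessel_mask, gain=1.5):
    """Add the high-pass residual of a single-channel image back to itself
    inside the vessel mask, raising vessel contrast before the chroma
    attenuation runs."""
    h, w = image.shape
    pad = np.pad(image, 1, mode="edge")
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    edge = image - smooth                       # edge (high-pass) component
    return np.clip(image + gain * edge * vessel_mask, 0.0, 1.0)
```

Dark vessel pixels acquire a negative edge component and become darker relative to their surroundings, so the subsequent attenuation processing starts from an image with enhanced vessel contrast.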

3. Modification example

Fig. 10 shows the 1st modification of the endoscope apparatus according to the present embodiment. In fig. 10, the light source unit 3 includes a plurality of light emitting diodes (LEDs) 31a, 31b, 31c, and 31d that emit light in mutually different wavelength bands, a reflecting mirror 32, and 3 dichroic mirrors 33.

As shown in fig. 11 (B), the light emitting diodes 31a, 31b, 31c, and 31d emit light in the wavelength bands of 400 to 450nm, 450 to 500nm, 520 to 570nm, and 600 to 650nm, respectively. For example, as shown in fig. 11 (a) and 11 (B), the wavelength band of the light emitting diode 31a is one in which the absorbance of both hemoglobin and carotene is high. The wavelength band of the light emitting diode 31b is one in which the absorbance of hemoglobin is low and that of carotene is high. The wavelength band of the light emitting diode 31c is one in which the absorbance of both hemoglobin and carotene is low. The wavelength band of the light emitting diode 31d is one in which the absorbance of both hemoglobin and carotene is close to zero. These 4 wavelength bands almost cover the wavelength band of white light (400nm to 700nm).

The light from the light emitting diodes 31a, 31b, 31c, and 31d is incident on the illumination optical system 7 (light guide cable) via the reflecting mirror 32 and the 3 dichroic mirrors 33. The light emitting diodes 31a, 31b, 31c, and 31d emit light simultaneously, so that white light is emitted to the subject. The image pickup device 10 is, for example, a single-plate color image pickup device. The 400nm to 500nm wavelength bands of the light emitting diodes 31a and 31b correspond to the blue wavelength band, the 520nm to 570nm wavelength band of the light emitting diode 31c corresponds to the green wavelength band, and the 600nm to 650nm wavelength band of the light emitting diode 31d corresponds to the red wavelength band.

In addition, the configuration of the light emitting diodes and their wavelength bands are not limited to the above. That is, the light source unit 3 may include 1 or a plurality of light emitting diodes, and white light may be generated by causing those light emitting diodes to emit light. The wavelength band of each light emitting diode is arbitrary, as long as the emitted light as a whole covers the wavelength band of white light when the 1 or plurality of light emitting diodes emit light. For example, the red, green, and blue wavelength bands may be included.

Fig. 12 shows the 2nd modification of the endoscope apparatus according to the present embodiment. In fig. 12, the light source unit 3 includes a filter turret 12, a motor 29 for rotating the filter turret 12, and a xenon lamp 11. The signal processing unit 4 includes a memory 28 and an image processing unit 16. The image pickup device 27 is a monochrome image pickup device.

As shown in fig. 13, the filter turret 12 has a filter group arranged in the circumferential direction around the rotation center a. As shown in fig. 14 (B), the filter group includes filters B2, G2, and R2 that transmit blue (B2: 400 to 490nm), green (G2: 500 to 570nm), and red (R2: 590 to 650nm) light, respectively. As shown in fig. 14 (a) and 14 (B), the wavelength band of the filter B2 is one in which the absorbance of both hemoglobin and carotene is high. The wavelength band of the filter G2 is one in which the absorbance of both hemoglobin and carotene is low. The wavelength band of the filter R2 is one in which the absorbance of both hemoglobin and carotene is almost zero.

The white light emitted from the xenon lamp 11 passes through the filters B2, G2, and R2 of the rotating filter turret 12 in order, and the illumination light of the blue color B2, the green color G2, and the red color R2 is irradiated to the subject in a time division manner.

The control unit 17 synchronizes the imaging timing of the imaging device 27, the rotation of the filter turret 12, and the timing of image processing by the image processing unit 16. The memory 28 stores the image signal acquired by the imaging element 27 for each wavelength of the illumination light to be irradiated. The image processing unit 16 synthesizes the image signals for each wavelength stored in the memory 28 to generate a color image.

Specifically, when the illumination light of blue B2 is irradiated to the subject, the image pickup device 27 captures an image, which is stored in the memory 28 as a blue (B channel) image. When the illumination light of green G2 is irradiated to the subject, the captured image is stored in the memory 28 as a green (G channel) image, and when the illumination light of red R2 is irradiated to the subject, the captured image is stored in the memory 28 as a red (R channel) image. When the images corresponding to the illumination light of the 3 colors have all been acquired, they are sent from the memory 28 to the image processing unit 16. The image processing unit 16 performs image processing in the preprocessing unit 14 and synthesizes the images corresponding to the illumination light of the 3 colors into 1 RGB color image. An image of normal light (white light image) is thereby obtained, and this normal light image is output to the visibility emphasizing unit 18 as the captured image.
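The memory-and-synthesis handshake described above can be sketched as a small state holder. The filter names and the assumption that the three frames are aligned and equally exposed are illustrative.

```python
import numpy as np

class TimeDivisionSynthesizer:
    """Minimal sketch of the memory 28 / image processing unit 16 handshake:
    monochrome frames arrive tagged with the active filter (B2, G2, R2),
    and a color frame is emitted once all three channels are present."""

    def __init__(self):
        self.frames = {}

    def push(self, filter_name, frame):
        self.frames[filter_name] = frame
        if len(self.frames) == 3:             # B2, G2 and R2 all stored
            rgb = np.stack([self.frames["R2"],
                            self.frames["G2"],
                            self.frames["B2"]], axis=-1)
            self.frames.clear()               # ready for the next cycle
            return rgb
        return None
```

Each rotation of the turret thus yields one RGB frame; in the real device the control unit 17 additionally synchronizes the imaging timing with the turret rotation.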

Fig. 15 shows the 3rd modification of the endoscope apparatus according to the present embodiment. In fig. 15, a so-called 3CCD system is employed. That is, the photographing optical system 8 includes a dichroic prism 34 that splits the reflected light from the subject into separate wavelength bands, and 3 monochrome image pickup devices 35a, 35b, and 35c that pick up the light of the respective wavelength bands. The signal processing unit 4 includes a combining unit 37 and an image processing unit 16.

The dichroic prism 34 splits the reflected light from the subject into the blue, green, and red wavelength bands according to the transmittance characteristics shown in fig. 16 (B). Fig. 16 (a) shows the absorption characteristics of hemoglobin and carotene. The light beams of the blue, green, and red wavelength bands split by the dichroic prism 34 are incident on the monochrome image sensors 35a, 35b, and 35c, respectively, and are captured as blue, green, and red images. The combining unit 37 combines the 3 images captured by the monochrome image sensors 35a, 35b, and 35c and outputs the combined image to the image processing unit 16 as an RGB color image.

4. Notification processing

Fig. 17 shows the 3rd detailed configuration example of the image processing unit. In fig. 17, the image processing unit 16 further includes a notification processing unit 25, and the notification processing unit 25 performs notification processing based on the detection result of the blood region by the detection unit 19. The blood region may be the blood region (in a narrow sense, a bleeding region) detected by the blood region detection unit 22 in fig. 4, or may be the blood vessel region detected by the blood vessel region detection unit 21 in fig. 9.

Specifically, when the blood region is detected by the detection unit 19, the notification processing unit 25 performs notification processing for notifying the user that the blood region has been detected. For example, the notification processing unit 25 superimposes an alarm display on the display image and outputs the display image to the image display unit 6. For example, the display image includes a region in which the captured image is displayed and a peripheral region around it, and the alarm display is shown in the peripheral region. The alarm display is, for example, a blinking icon or the like.

Alternatively, the notification processing unit 25 performs notification processing for notifying the user that a blood vessel region exists in the vicinity of the treatment instrument, based on positional relationship information (for example, distance) indicating the positional relationship between the treatment instrument and the blood vessel region. The notification processing is, for example, processing for displaying an alarm display as in the above.

Note that the notification processing is not limited to the alarm display, and may be processing that highlights the blood region (blood vessel region) or processing that displays a message (text or the like) calling for attention. The notification is also not limited to image display and may be performed by light, sound, or vibration; in this case, the notification processing unit 25 may be provided as a component separate from the image processing unit 16. Further, the notification processing may be directed not only to the user but also to a device (for example, the robot of the surgery support system described later). For example, an alarm signal may be output to the device.
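A proximity check of the kind used for the treatment-instrument notification might look like the following; the pixel-distance threshold and the representation of the instrument tip as a single image coordinate are assumptions standing in for the positional relationship information.

```python
import numpy as np

def check_vessel_proximity(tool_xy, vessel_mask, warn_distance=20.0):
    """Return True (raise the alarm) when any vessel pixel lies within
    warn_distance pixels of the treatment-instrument tip coordinate
    tool_xy = (x, y)."""
    ys, xs = np.nonzero(vessel_mask)
    if ys.size == 0:
        return False                      # no vessels detected, nothing to warn
    d = np.hypot(xs - tool_xy[0], ys - tool_xy[1])
    return bool(d.min() <= warn_distance)
```

The returned flag would then drive the alarm display, sound, or the alarm signal sent to the device.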

As described above, the visibility emphasizing unit 18 suppresses the processing that attenuates colors other than yellow in the blood region (blood vessel region). Even so, the color saturation of the blood region may be lower than when the processing that attenuates colors other than yellow is not performed at all. According to the present embodiment, processing for notifying the presence of blood in the captured image, processing for notifying that a treatment instrument is approaching a blood vessel, and the like can be performed based on the detection result of the blood region (blood vessel region).

5. Surgery support system

As the endoscope apparatus (endoscope system) of the present embodiment, an apparatus of the type shown in fig. 2 is assumed, in which an insertion portion (scope) is connected to a control device and the user operates the scope to image the inside of the body. However, the present invention is not limited to this and can also be applied, for example, to a surgery support system using a robot.

Fig. 18 shows a configuration example of the surgery support system. The surgery support system 100 includes a control device 110, a robot 120 (robot main body), and a scope 130 (for example, a rigid endoscope). The control device 110 is a device that controls the robot 120. That is, the user operates the robot by operating the operation unit of the control device 110 and performs a procedure on the patient via the robot. The user can also operate the scope 130 via the robot 120 by operating the operation unit of the control device 110 and thereby image the surgical field. The control device 110 includes an image processing unit 112 (image processing apparatus) that processes the image from the scope 130. The user operates the robot while viewing the image displayed on a display device (not shown) by the image processing unit 112. The present invention can be applied to the image processing unit 112 (image processing apparatus) of this surgery support system 100. The scope 130 and the control device 110 (and possibly also the robot 120) correspond to an endoscope apparatus (endoscope system) including the image processing apparatus of the present embodiment.

While the embodiments and their modifications to which the present invention is applied have been described above, the present invention is not limited to these embodiments and modifications, and the constituent elements may be modified at the implementation stage within a range not departing from the gist of the invention. Further, various inventions can be formed by appropriately combining the plurality of constituent elements disclosed in the above embodiments and modifications. For example, some constituent elements may be deleted from the full set described in an embodiment or modification, and constituent elements described in different embodiments and modifications may be appropriately combined. In this way, various modifications and applications are possible without departing from the spirit and scope of the invention. In the specification and drawings, a term that appears at least once together with a different term having a broader or identical meaning can be replaced by that different term at any position in the specification and drawings.

Description of reference numerals:

1 endoscope device, 2 insertion part, 3 light source part, 4 signal processing part, 5 control device, 6 image display unit, 7 illumination optical system, 8 photographing optical system, 9 objective lens, 10 image pickup element, 11 xenon lamp, 12 filter turret, 13 external I/F section, 14 preprocessing section, 15 interpolation unit, 16 image processing unit, 17 control unit, 18 visibility emphasizing unit, 19 detection unit, 20 post-processing unit, 21 blood vessel region detecting part, 22 blood region detecting part, 23 blood image generating section, 25 notification processing section, 27 imaging element, 28 memory, 29 motor, 31a to 31d light emitting diodes, 32 mirror, 33 dichroic mirror, 34 dichroic prism, 35a to 35c monochrome image pickup elements, 37 combining section, 100 surgery support system, 110 control device, 112 image processing unit, 120 robot, 130 scope.

Claims (amended under PCT Article 19)

1. (Amended) An image processing apparatus, characterized by comprising:

an image acquisition unit that acquires a captured image including a subject image obtained by irradiating a subject with illumination light from a light source unit;

a visibility emphasizing unit that relatively improves visibility of a yellow region of the captured image by performing color attenuation processing on a region of the captured image other than yellow; and

a detection unit that detects a blood region, which is a region of blood in the captured image, based on color information of the captured image,

wherein the visibility emphasizing unit suppresses or stops the attenuation processing for the blood region based on a detection result of the detection unit.

2. (Deleted)

3. (Amended) The image processing apparatus according to claim 1, wherein

the detection unit includes:

a blood vessel region detection unit that detects a blood vessel region, which is a region of a blood vessel in the captured image, based on the color information and the structure information of the captured image,

the visibility emphasizing unit suppresses or stops the attenuation processing for the blood vessel region based on a detection result of the blood vessel region detection unit.

4. (Deleted)

5. (Amended) The image processing apparatus according to claim 1, wherein

the detection unit includes:

a blood region detection unit that detects the blood region based on at least one of the color information and the luminance information of the captured image,

the visibility emphasizing unit suppresses or stops the attenuation processing for the blood region based on a detection result of the blood region detection unit.

6. The image processing apparatus according to claim 5,

the blood region detection unit divides the captured image into a plurality of local regions, and determines whether or not each of the local regions is the blood region, based on at least one of the color information and the luminance information of the local region.

7. (Deleted)

8. (Amended) The image processing apparatus according to claim 1, wherein

the visibility emphasizing unit performs the attenuation processing by obtaining, for each pixel or region of the captured image, a color signal corresponding to blood, and multiplying a color signal of a region other than yellow by a coefficient whose value varies according to the signal value of the color signal corresponding to blood.
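Although code forms no part of the claims, the coefficient-based attenuation described above can be illustrated with a minimal sketch. The yellow thresholds, the blood color signal (excess of red over green), and the base attenuation factor below are hypothetical choices made purely for illustration:

```python
import numpy as np

def attenuate_non_yellow(rgb, base_atten=0.5):
    """Attenuate the chroma of non-yellow pixels.

    The attenuation coefficient varies per pixel with a 'blood' color
    signal, so strongly blood-colored pixels are attenuated less
    (suppression of the attenuation for blood regions).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Crude yellow mask: high red and green, low blue (hypothetical thresholds).
    yellow = (r > 0.5) & (g > 0.5) & (b < 0.4)
    # Hypothetical blood color signal: excess of red over green.
    blood = np.clip(r - g, 0.0, 1.0)
    # Coefficient approaches 1 (no attenuation) as the blood signal rises.
    coef = base_atten + (1.0 - base_atten) * blood
    # Pull non-yellow pixels toward the achromatic (luminance) axis.
    luma = rgb.mean(axis=-1, keepdims=True)
    out = rgb.astype(float).copy()
    mask = ~yellow
    out[mask] = luma[mask] + coef[mask][:, None] * (rgb[mask] - luma[mask])
    return out
```

Yellow pixels pass through unchanged, so their visibility rises relative to the attenuated surroundings.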

9. The image processing apparatus according to claim 1,

the visibility emphasizing unit performs color conversion processing that applies, to pixel values of pixels in the yellow region, a rotation transformation toward the green side in a color space.
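The rotation toward green named in this claim can likewise be sketched per pixel in HSV space; the yellow hue band and the shift amount below are hypothetical, not taken from the specification:

```python
import colorsys

def rotate_yellow_toward_green(rgb_pixel, shift=0.08):
    """Rotate the hue of a yellow pixel toward green, leaving
    saturation and value unchanged (one reading of the claimed
    'rotation conversion in a color space')."""
    h, s, v = colorsys.rgb_to_hsv(*rgb_pixel)
    # Pure yellow has hue 1/6 and pure green 1/3; the band below is a
    # hypothetical definition of which pixels count as "yellow region".
    if 1.0 / 12.0 <= h <= 1.0 / 4.0:
        h = min(h + shift, 1.0 / 3.0)  # rotate toward green, clamped at green
    return colorsys.hsv_to_rgb(h, s, v)
```

Applied only to yellow-band pixels, this shifts them toward a hue that contrasts more strongly with the reddish tones of living tissue.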

10. The image processing apparatus according to claim 1,

the color of the yellow region is the color of carotene, bilirubin, or fecal bile pigment.

11. (Amended) The image processing apparatus according to claim 1, characterized by further comprising:

a notification processing unit that performs notification processing based on a detection result of the blood region detected by the detection unit.

12. An endoscope apparatus, characterized in that

the endoscope apparatus includes the image processing apparatus according to claim 1.

13. The endoscope apparatus according to claim 12, comprising the light source unit,

wherein the light source unit emits the illumination light having a wavelength band of normal light.

14. The endoscope apparatus according to claim 13, wherein

the light source unit includes one or more light emitting diodes, and

the light source unit emits, as the illumination light, the normal light generated by light emission of the one or more light emitting diodes.

15. (Amended) A method for operating an image processing apparatus, the method comprising:

acquiring a captured image including an object image obtained by irradiating an object with illumination light from a light source unit;

performing color attenuation processing on a region other than yellow of the captured image to relatively improve visibility of the yellow region of the captured image;

detecting a blood region, which is a region of blood in the captured image, based on color information of the captured image; and

suppressing or stopping the attenuation processing for the blood region according to a result of the detection.

16. (Amended) An image processing program that causes a computer to execute the steps of:

acquiring a captured image including an object image obtained by irradiating an object with illumination light from a light source unit;

performing color attenuation processing on a region other than yellow of the captured image to relatively improve visibility of the yellow region of the captured image;

detecting a blood region, which is a region of blood in the captured image, based on color information of the captured image; and

suppressing or stopping the attenuation processing for the blood region according to a result of the detection.
