Ophthalmologic apparatus

Document No.: 928479    Publication date: 2021-03-05

Description: This technology, "Ophthalmologic apparatus", was designed and created by 原直子, 加藤千比吕, and 冈本圭一郎 on 2020-08-19. Its main content is summarized as follows: The present invention provides a technique for associating the local structure of an eye to be examined with the coloration of the anterior part of the eye. An ophthalmic apparatus comprises: a two-dimensional image acquisition unit that captures a two-dimensional image of the anterior part of the eye to be examined by using a color camera; a three-dimensional image acquisition unit that acquires a three-dimensional image of the eye to be examined by using optical coherence tomography; and a correspondence definition data generation unit that generates correspondence definition data in which a correspondence is established between the position of a predetermined part of the eye to be examined in the three-dimensional image, when that part is captured as the three-dimensional image, and the position of the same part in the two-dimensional image, when that part is captured as the two-dimensional image.

1. An ophthalmic apparatus, comprising:

a two-dimensional image acquisition unit that captures a two-dimensional image of the anterior part of an eye to be examined by using a color camera;

a three-dimensional image acquisition unit that acquires a three-dimensional image of an eye to be examined by using optical coherence tomography; and

a correspondence definition data generation unit that generates correspondence definition data in which a correspondence is established between the position of a predetermined part of the eye to be examined in the three-dimensional image, when that part is captured as the three-dimensional image, and the position of the same part in the two-dimensional image, when that part is captured as the two-dimensional image.

2. The ophthalmic apparatus of claim 1,

the two-dimensional image acquisition unit acquires a chart image, which is an image obtained by illuminating a color chart having a plurality of color patches with a white light source and photographing the color chart with the color camera;

the two-dimensional image acquisition unit acquires calibration data for converting gradation values of the two-dimensional image into gradation values of a reference color space, based on the gradation values of the plurality of color patches in the chart image and the gradation values obtained when the colors of those color patches are expressed in the reference color space; and

the two-dimensional image acquisition unit illuminates the eye to be examined with the white light source, captures the two-dimensional image with the color camera, and corrects the colors of the two-dimensional image based on the calibration data.

3. The ophthalmologic apparatus of claim 2, wherein the calibration data comprises the coefficients of each term of an n-th order polynomial for converting the gradation values of the color patches in the chart image into gradation values in the reference color space, where n is an integer greater than 2.

4. The ophthalmologic apparatus of claim 2 or 3, wherein the two-dimensional image acquisition unit:

obtains the chart image by subtracting an image captured by photographing the color chart with the color camera while the white light source is turned off from an image captured by photographing the color chart with the color camera while the color chart is illuminated by the white light source; and

corrects the two-dimensional image by subtracting an image captured by photographing the eye with the color camera while the white light source is turned off from an image captured by photographing the eye with the color camera while the eye is illuminated by the white light source.

5. The ophthalmologic apparatus according to any one of claims 2 to 4, wherein the white light source has an average color rendering index of 80 or higher.

6. The ophthalmologic apparatus according to any one of claims 1 to 5, wherein the correspondence definition data is data in which three-dimensional coordinates of the three-dimensional image and two-dimensional coordinates of the two-dimensional image are placed in correspondence with each other.

7. The ophthalmologic apparatus according to claim 6, wherein the correspondence definition data is data for converting three-dimensional coordinates of the three-dimensional image into two-dimensional coordinates of the two-dimensional image based on the positional relationship between the eye to be examined and the color camera and on the characteristics of the optical system of the color camera.

8. The ophthalmologic apparatus of claim 6 or 7, wherein

the correspondence definition data concerning the iris of the eye to be examined is data in which the three-dimensional coordinates of the iris and its two-dimensional coordinates are placed in correspondence based on ray tracing of rays that travel from the iris and are refracted by the cornea.

9. An ophthalmic apparatus, comprising:

a two-dimensional image acquisition unit that captures a two-dimensional image of the anterior part of an eye to be examined by using a color camera;

a three-dimensional image acquisition unit that acquires a three-dimensional image of the eye to be examined by using optical coherence tomography; and

a display control unit that displays, on a display unit, the three-dimensional image colored with the colors indicated by the two-dimensional image.

Technical Field

The present invention relates to the technical field of ophthalmologic apparatuses.

Background

Conventionally, a technique is known for acquiring a three-dimensional image of an eye to be examined from tomographic images obtained by optical coherence tomography (hereinafter abbreviated as OCT). Various methods are known for acquiring a three-dimensional image of the eye to be examined by the OCT technique (for example, Patent Document 1). Further, Patent Document 2 discloses a configuration in which light is projected and received along an optical axis inclined with respect to the ocular axis of the eye to be examined to capture an entire peripheral image of the iridocorneal angle, sectional information of the iridocorneal angle is acquired by OCT, and the entire peripheral image and the sectional information are associated in terms of their positional relationship.

Reference list

Patent document

[Patent Document 1] JP 2017-…469 A

[Patent Document 2] JP 2019-…

Disclosure of Invention

Technical problem

When examination and diagnosis are performed using a three-dimensional image obtained by the OCT technique of Patent Document 1, a two-dimensional image is generally used at the same time. For example, by displaying a two-dimensional color image together with the three-dimensional image, examination and diagnosis can be performed based on the color of the eye to be examined. In the conventional configuration, however, it is difficult to associate the three-dimensional image with the two-dimensional image. For example, when one wishes to identify the color of a specific position in the three-dimensional image, it is generally necessary to compare the two-dimensional image acquired by a color camera with the three-dimensional image, which is expressed in grayscale. Even if the position of interest has been found in the three-dimensional image, it is sometimes difficult to associate that position with the corresponding position in the two-dimensional image, and in such cases it is difficult to identify the color of the position of interest. Conversely, there are cases where one wishes to identify, in the three-dimensional image, the structure at a position whose color is shown in the two-dimensional image. Even if the position of interest is found in the two-dimensional image, it is sometimes difficult to associate that position with the corresponding position in the three-dimensional image, and in such cases it is difficult to identify the three-dimensional structure corresponding to the colored position of interest.

Further, in Patent Document 2, the entire peripheral image of the iridocorneal angle is associated with its cross-sectional information in terms of positional relationship, but the document does not disclose associating the cross-sectional information with an image of an arbitrary position of the eye to be examined, such as the iris or the blood vessels on the surface of the eye. Therefore, when the eye to be examined is viewed from the front in the three-dimensional image, the color of the visually recognizable portion cannot be specified.

The present invention has been made in view of the above, and its object is to relate the local structure of the eye to be examined to the color appearance of the anterior part of the eye.

In order to achieve this object, the present invention provides an ophthalmologic apparatus including: a two-dimensional image acquisition unit that captures a two-dimensional image of the anterior part of an eye to be examined by using a color camera;

a three-dimensional image acquisition unit that acquires a three-dimensional image of an eye to be examined by using optical coherence tomography; and

a correspondence definition data generation unit that generates correspondence definition data in which a correspondence is established between the position of a predetermined part of the eye to be examined in the three-dimensional image, when that part is captured as the three-dimensional image, and the position of the same part in the two-dimensional image, when that part is captured as the two-dimensional image. Further, instead of or in addition to the correspondence definition data generation unit, a display control unit may be employed that displays, on a display unit, the three-dimensional image colored with the colors indicated by the two-dimensional image.

That is, when the position of a specific part in the three-dimensional image is associated with its position in the two-dimensional image, correspondence definition data is obtained that can be used to associate the three-dimensional structure of the eye to be examined with the colors captured in the two-dimensional image. The local structure of the eye to be examined and its color can thus be related to each other. Furthermore, if a local position in the three-dimensional image is colored and displayed with the color of the two-dimensional image, the examiner can easily establish the correspondence between the local structure of the eye to be examined and the color appearing on the anterior part of the eye.

Drawings

Fig. 1 shows a configuration diagram of an OCT apparatus 1 according to an embodiment of the present invention.

Fig. 2 shows a schematic configuration of the scanning alignment optical system.

Fig. 3 shows a configuration diagram relating to the algorithmic process.

Fig. 4 shows a display example diagram of a two-dimensional image and a three-dimensional image.

Fig. 5A is a flowchart of a calibration data generation process, fig. 5B is a flowchart of a correspondence defining data generation process, and fig. 5C shows a calibration structure.

Fig. 6 is a flowchart of the photographing process.

Fig. 7 is an exemplary diagram for explaining ray tracing.

Detailed Description

Embodiments of the present invention will be described in the following order:

(1) Configuration of the ophthalmologic apparatus;

(2) Configuration of the control unit;

(3) Calibration data generation process;

(4) Photographing process;

(4-1) Correspondence relation definition data generation process;

(5) Other embodiments.

(1) Configuration of the ophthalmologic apparatus:

An OCT apparatus 1, which is an ophthalmologic apparatus according to an example of the present invention, will be described below. Fig. 1 shows a configuration diagram of the OCT apparatus 1 according to an embodiment of the present invention. The OCT apparatus 1 roughly includes an OCT interference system 100, a k-clock generation interference optical system 400, and a control unit 240.

The OCT interference system 100 is an optical system for obtaining OCT tomographic images of the anterior segment of the eye to be examined. In the present example, swept-source OCT (SS-OCT) is employed, and the wavelength-scanning light source 10 outputs light while sweeping the wavelength over time. For example, the wavelength-scanning light source 10 is a light source having a center wavelength of 1 μm or more and a scanning bandwidth of 70 nm or more, and is capable of high-speed scanning at 50 kHz or more. The input light emitted from the wavelength-scanning light source 10 is guided by an optical fiber (such as a single-mode optical fiber) and used for tomographic imaging of the sample 20 and for k-clock generation.

An SMFC (single-mode fiber coupler) 101 for branching the emitted input light is provided between the wavelength-scanning light source 10 on the one hand and the OCT interference system 100 and the k-clock generation interference optical system 400 on the other. The input light is branched by the SMFC 101 into light directed toward the OCT interference system 100 and light directed toward the k-clock generation interference optical system 400.

The OCT interference system 100 includes single-mode fiber couplers (SMFCs) 102 and 103, a measurement-side circulator 104, a reference-side circulator 105, a balanced detector 110, a polarization controller 120, a scanning alignment optical system 200, and a reference optical system 300. One beam of the input light branched by the SMFC 101 is incident on the SMFC 102. The SMFC 102 branches the incident input light, directing one beam to the scanning alignment optical system 200 and the other to the reference optical system 300.

That is, one beam into which the incident input light is branched by the SMFC 102 is input to the scanning alignment optical system 200 via the measurement-side circulator 104 and becomes measurement light for measuring the sample 20. The other beam is input to the reference optical system 300 via the reference-side circulator 105 and becomes the reference light.

Light incident on the sample 20 is reflected by the sample and is input to the SMFC 103 as measurement light via the measurement-side circulator 104. The light incident on the reference optical system 300 is converted into reference light by the reference unit 301, output from the reference optical system 300, and input to the SMFC 103 via the reference-side circulator 105 and the polarization controller 120.

When the measurement light and the reference light are input to the SMFC 103, the two are combined by the SMFC 103 to generate measurement interference light. The measurement interference light is input into the balance detector 110, and the balance detector 110 receives the measurement interference light and outputs a measurement interference signal. The measurement interference signal is input to the control unit 240, and the control unit 240 obtains a tomographic image of the sample 20 based on the measurement interference signal.

The scanning alignment optical system 200 is an optical system that irradiates the sample 20 with light input from the measurement-side circulator 104 and guides light reflected from the sample 20 to the SMFC 103. Details of the scanning alignment optical system 200 will be described later.

The measurement-side circulator 104 is an optical element disposed between the scanning alignment optical system 200 and the SMFCs 102 and 103, respectively. The measurement light guided from the SMFC102 is guided to the scanning alignment optical system 200 by the measurement-side circulator 104, and the reflected light guided from the scanning alignment optical system 200 is guided to the SMFC 103.

The reference optical system 300 is provided with a reference unit 301 that converts the input light into reference light, and the reference-side circulator 105 that guides the input light to the reference optical system 300 and guides the reference light to the SMFC 103. In this example, the reference unit 301 is a prism that emits the incident input light as reference light. The reference unit 301 is configured to be movable before measurement of the sample 20 so as to match the optical path length of the scanning alignment optical system 200 with that of the reference optical system 300. The position of the reference unit 301 is fixed during measurement of the sample 20.

The reference-side circulator 105 is an optical element disposed between the reference unit 301 and the SMFCs 102 and 103. The input light guided from the SMFC 102 is guided to the reference unit 301 by the reference-side circulator 105, and the reference light guided from the reference unit 301 is guided to the SMFC 103 by the reference-side circulator 105. The SMFC 103 combines the reflected light guided from the scanning alignment optical system 200 and the reference light guided from the reference optical system 300 to generate measurement interference light. Further, the SMFC 103 branches the combined measurement interference light into two measurement interference lights differing in phase by 180° and guides them into the balanced detector 110.

The balanced detector 110 is a photodetector that receives the measurement interference light combined by the SMFC 103. The SMFC 103 is disposed between the scanning alignment optical system 200, the reference optical system 300, and the balanced detector 110, and the polarization controller 120 is disposed between the reference optical system 300 and the SMFC 103.

The polarization controller 120 is an element that controls the polarization of the reference light guided from the reference optical system 300 to the SMFC 103. As the polarization controller 120, various modes of controllers, such as an in-line type and a paddle type, may be used without any particular limitation. The control unit 240 obtains a tomographic image of the sample 20 based on the measurement interference signal output from the balance detector 110, and the tomographic image thus obtained is displayed on the display 230.

Fig. 2 shows the configuration of the scanning alignment optical system 200. The scanning alignment optical system 200 includes a scanning optical system, a front-stage photographing system, a fixation target optical system, and an alignment optical system. In the scanning optical system, the light output from the SMFC 102 is input to the measurement-side circulator 104, and is further input from the measurement-side circulator 104 to the galvanometer scanner 202 through the collimator lens 201.

The galvanometer scanner 202 is a device for scanning the input light and is driven by a galvanometer driver (not shown). Input light output from the galvanometer scanner 202 is reflected at an angle of 90° by the hot mirror 203 and is incident on the eye to be examined E through the objective lens 204. The input light incident on the eye E is reflected at the tissue of the anterior segment Ec (cornea, anterior chamber, iris, lens, etc.) and becomes measurement light. The measurement light travels the reverse path, passing through the objective lens 204, the hot mirror 203, the galvanometer scanner 202, and the collimator lens 201 in this order, and is input to the SMFC 103 via the measurement-side circulator 104.

Then, in the SMFC 103, the reflected light from the anterior segment Ec and the reference light are combined to generate measurement interference light, which is input to the balanced detector 110. In the balanced detector 110, the interference at each wavelength is measured, and the measurement interference signal is input to the control unit 240. The control unit 240 processes (e.g., inverse Fourier transforms) the measurement interference signal to acquire a tomographic image of the anterior segment Ec along the scanning line.
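As a rough illustration of this reconstruction step, the following sketch (Python/NumPy; all names are illustrative, and the exact processing of the control unit 240 is not disclosed at this level of detail) shows how a single depth profile (A-scan) could be obtained from an interference signal that has already been sampled on the k-clock grid:

```python
import numpy as np

def reconstruct_a_scan(interference_signal: np.ndarray) -> np.ndarray:
    """Reconstruct one depth profile (A-scan) from a balanced-detector
    interference signal sampled at equally spaced wavenumbers (k-clock).

    Generic SS-OCT sketch, not the exact algorithm of the control unit 240.
    """
    # Remove the DC component so the zero-delay peak does not dominate.
    signal = interference_signal - np.mean(interference_signal)
    # Window to suppress side lobes of the axial point-spread function.
    signal = signal * np.hanning(signal.size)
    # Inverse Fourier transform: wavenumber domain -> depth domain.
    depth_profile = np.fft.ifft(signal)
    # Keep the positive-depth half and convert to intensity in dB.
    half = depth_profile[: signal.size // 2]
    return 20.0 * np.log10(np.abs(half) + 1e-12)

# A tomographic image along one scan line (B-scan) is a stack of A-scans:
# b_scan = np.stack([reconstruct_a_scan(s) for s in signals_along_scan_line])
```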

The front-stage photographing system includes white light sources 205, 205, the objective lens 204, the hot mirror 203, a beam splitter 206, an imaging lens 207, and an area sensor 208. The white light sources 205, 205 irradiate the front of the eye E with illumination light in the visible region, and the reflected light from the eye E passes through the objective lens 204, the hot mirror 203, the beam splitter 206, and the imaging lens 207, and is input to the area sensor 208. A front image of the eye E is thereby captured, and the captured two-dimensional image is processed by the control unit 240.

In the present embodiment, the white light source 205 is a light source that outputs white light. The white light only has to have a spectral distribution so that the eye to be examined illuminated with white light can be visually recognized in full color. In order to accurately reproduce the color of the eye E to be examined, the white light source 205 is preferably a light source having high color rendering properties, for example, a light source having an average color rendering index (Ra) of 80 or more is preferable. In the present embodiment, the average color rendering index (Ra) of the white light source 205 is 95.

Further, in the present embodiment, the explanation is given on the premise that the white light source 205 has a color temperature of 5500 K, but the color temperature is not limited to this. In the present embodiment, the image data output from the area sensor 208 is data in which a gradation value indicating the detected intensity of each of the RGB (R: red, G: green, B: blue) color lights is specified for each pixel arranged two-dimensionally. In the present embodiment, the front-stage photographing system corresponds to a color camera.

The fixation target optical system is used to cause the subject to gaze at a fixation lamp so as to prevent the subject from moving his/her eyeball (eye to be examined E) as much as possible. In the present embodiment, the fixation target optical system is composed of a fixation target light source 210, a movable zoom lens 211, a cold mirror 212, a hot mirror 213, a relay lens 214, a beam splitter 215, the beam splitter 206, the hot mirror 203, and the objective lens 204. The light output from the fixation target light source 210 is thus output to the eye E through the movable zoom lens 211, the cold mirror 212, the hot mirror 213, the relay lens 214, the beam splitter 215, the beam splitter 206, the hot mirror 203, and the objective lens 204 in this order.

Here, the movable zoom lens 211 is configured to be movable so that the focus of the fixation target can be freely changed. Specifically, the movable zoom lens 211 can be moved to an arbitrary position, for example, a position at which the focal point of the fixation target matches the refractive power of the eye E to be examined. By doing so, measurement can be performed in a state in which the subject sees the fixation target naturally (a state in which no load is applied to the crystalline lens). Further, for example, when the apparatus is used to study the focus adjustment function of the lens, the movable zoom lens 211 may be moved so that the fixation target appears closer than in natural vision in order to compare the shapes of the lens, or it may be moved gradually to capture a moving image of the change in lens shape; in this way, both the state of natural vision and the state in which an accommodation load is applied to the lens can be photographed.

The alignment optical system is composed of an XY direction position detection system for detecting the position of the eye E (corneal vertex) to be examined in XY directions (vertical and horizontal displacements with respect to the main body) and a Z direction position detection system for detecting the position of the eye E (corneal vertex) to be examined in the longitudinal direction (Z direction). The XY-direction position detection system includes an XY-position detection light source 216, a hot mirror 213, a relay lens 214, a beam splitter 215, a beam splitter 206, a hot mirror 203, an objective lens 204, an imaging lens 217, and a two-dimensional position sensor 218. Alignment light for position detection is output from the XY position detection light source 216 and emitted toward the anterior segment Ec (cornea) of the eye E to be examined via the hot mirror 213, the relay lens 214, the beam splitter 215, the beam splitter 206, the hot mirror 203, and the objective lens 204.

At this time, since the corneal surface of the eye E has a spherical shape, the alignment light is reflected on the corneal surface so as to form a bright-point image at the corneal vertex of the eye E, and the reflected light enters the objective lens 204. The reflected light (bright point) from the corneal vertex is input to the two-dimensional position sensor 218 via the objective lens 204, the hot mirror 203, the beam splitter 206, the beam splitter 215, and the imaging lens 217. The position of the bright point is detected by the two-dimensional position sensor 218, and thus the position of the corneal vertex (its position in each of the X and Y directions) is detected.

The signal detected by the two-dimensional position sensor 218 is input to the control unit 240. In the present embodiment, with the two-dimensional position sensor 218 and the front-stage photographing system aligned, a predetermined (normal) image acquisition position of the corneal vertex (the position that should be maintained during tomographic image acquisition) is preset. For example, the normal image acquisition position of the corneal vertex is the center position of the captured image of the imaging element, or the like. Based on the detection by the two-dimensional position sensor 218, the control unit 240 obtains the positional displacement between the normal position and the detected position of the corneal vertex (bright point) in each of the X and Y directions.

The Z-direction position detection system includes a Z-position detection light source 219, an imaging lens 220, and a line sensor 221. The Z position detection light source 219 is configured to irradiate the eye E from an oblique direction with detection light (slit light or spot light) so that oblique reflected light from the cornea is incident on the line sensor 221 through the imaging lens 220. At this time, the incident position of the reflected light incident on the line sensor 221 differs depending on the position of the eye E in the longitudinal direction (Z direction), and thus the position of the eye E in the Z direction is detected.

Here, although not shown, the apparatus main body of the OCT apparatus 1 is supported so as to be movable in the X direction (horizontal direction) and the Y direction (vertical direction) and the Z direction (longitudinal direction) with respect to the support table. The control unit 240 freely moves the apparatus main body in the X direction, the Y direction, and the Z direction with respect to the support base. In addition, a chin rest on which the subject places his/her chin and a forehead pad on which the subject places his/her forehead are fixedly provided on the front side (subject side) of the apparatus body, and the eyes of the subject (examined eyes) are arranged in front of the examination window (provided on the front surface of the apparatus body). The control unit 240 moves the apparatus main body relative to the support table so that the amount of positional displacement of the corneal vertex (bright point) in the X and Y directions detected by the XY-direction position detecting system and the amount of positional displacement of the eye E detected by the Z-direction position detecting system are both zero.

The k-clock generation interference optical system 400 shown in Fig. 1 optically generates a sampling clock (k-clock) from the branched input light (from the SMFC 101) so that the measurement interference signal is sampled at equally spaced frequencies. The generated k-clock signal is output to the control unit 240. Distortion of the measurement interference signal is thereby suppressed, and deterioration of resolution is prevented.

(2) Configuration of the control unit:

In the present embodiment, the control unit 240 includes a central processing unit (CPU), a random access memory (RAM), a read-only memory (ROM), and the like (not shown). The control unit 240 can execute a program stored in the storage medium 245 and perform various arithmetic processes using data stored in the storage medium 245 included in the OCT apparatus 1. The control unit 240 can control the components included in the OCT interference system 100, the scanning alignment optical system 200, the reference optical system 300, and the k-clock generation interference optical system 400, such as the motors for alignment and the area sensor 208.

Further, the control unit 240 may perform an arithmetic process based on information output from the OCT interference system 100, the scanning alignment optical system 200, and the like. In the present embodiment, the control unit 240 performs processing based on the three-dimensional image output from the OCT interference system 100 and the two-dimensional image output from the scanning alignment optical system 200 so that positions in the two images can establish a correspondence relationship with each other.

Fig. 3 shows a configuration diagram relating to the arithmetic processing for associating positions. When a program (not shown) is executed, the control unit 240 functions as a two-dimensional image acquisition unit 240a, a three-dimensional image acquisition unit 240b, a correspondence relation definition data generation unit 240c, and a display control unit 240d. The two-dimensional image acquisition unit 240a is a program module that causes the control unit 240 to execute a function of acquiring a two-dimensional image of the eye E using the color camera.

That is, using the function of the two-dimensional image acquisition unit 240a, the control unit 240 controls the front-stage photographing system (the area sensor 208, etc.), which functions as a color camera, to obtain a two-dimensional image of a subject present in the field of view of the color camera. Therefore, when the subject places his/her chin on the chin rest of the OCT apparatus 1 and his/her forehead on the forehead pad so that the subject's eye (eye to be examined E) is positioned at the examination window on the front surface of the apparatus body, a two-dimensional image of the eye to be examined E is acquired in this state. When objects are placed at the position of the eye to be examined, such as a color chart, a white-balance adjustment target, or a calibration structure (all described later), two-dimensional images of these objects can likewise be acquired.

In any case, when a two-dimensional image is acquired, two-dimensional image data 245a indicating the two-dimensional image is recorded in the storage medium 245. In the present embodiment, the two-dimensional image data is a gradation value set for each pixel arranged two-dimensionally, each gradation value indicating the detected intensity of light of RGB (R: red, G: green, and B: blue) for each channel. In the present embodiment, coordinates indicating the position of each pixel in the two-dimensional image are represented as (u, v), u being a variable indicating the position in the horizontal direction (the direction parallel to the X direction), and v being a variable indicating the position in the vertical direction (the direction parallel to the Y direction).

It is to be noted that in the present embodiment, the gradation value indicated by the two-dimensional image data 245a may take a value of 0 to 255 for each RGB. In the present embodiment, when a specific color is captured and the two-dimensional image data 245a is obtained, the color output by the two-dimensional image data 245a onto a display or a printer does not generally match the captured specific color. In this sense, the gradation value of the two-dimensional image data 245a (gradation value before correction with the calibration data 245c, which will be described later) indicates the color expressed in the device-dependent color space.

In the present embodiment, the control unit 240 displays the captured two-dimensional image on the display 230 using the function of the display control unit 240 d. That is, when the two-dimensional image data 245a is acquired, the control unit 240 controls the display 230 so that the two-dimensional image indicated by the two-dimensional image data 245a is displayed at a predetermined position. Fig. 4 shows a display example diagram of the two-dimensional image Ie. Although not shown in fig. 4, the two-dimensional image Ie is a color image.

Further, in the present embodiment, the two-dimensional image Ie displayed on the display 230 is an image whose colors have been corrected using calibration data 245c (described later). In the present embodiment, the calibration data 245c is used to convert the gradation values of the two-dimensional image into gradation values in the reference color space; it indicates a conversion formula for converting the colors of the subject, indicated by the gradation values of the two-dimensional image, into the gradation values obtained when those colors are expressed in the reference color space, which is a device-independent color space. The control unit 240 corrects the two-dimensional image data 245a based on the calibration data 245c to provide data expressing colors in the reference color space, and updates the two-dimensional image data 245a with the corrected data. On the display 230, color matching has been performed so that colors are expressed using the reference color space. Therefore, the color displayed on the display 230 is equivalent to the color of the eye E as visually recognized by an observer.

The three-dimensional image acquisition unit 240b is a program module having a function of causing the control unit 240 to acquire a three-dimensional image of the eye E to be examined by OCT. That is, the control unit 240 controls the OCT interference system 100, the scanning alignment optical system 200 (e.g., the galvanometer scanner 202), and the k-clock generation interference optical system 400 using the function of the three-dimensional image acquisition unit 240b to acquire the measurement interference signal. The control unit 240 performs, for example, inverse fourier transform processing on the measurement interference signal to acquire a tomographic image of the preceding section Ec along the scanning line.

The control unit 240 changes the scanning direction of the galvanometer scanner 202 and acquires tomographic images of a plurality of cross sections. In the present embodiment, a three-dimensional image of the eye E is acquired by acquiring tomographic images of a plurality of cross sections covering the entire anterior segment of the eye E. Various scanning patterns of the galvanometer scanner 202 may be used. For example, the entire anterior segment of the eye E can be covered by arranging a plurality of cross sections parallel to the X and Z directions at regular intervals in the Y direction. Alternatively, taking a cross section passing through the corneal vertex and parallel to the Z direction and rotating it by a constant angle about an axis passing through the corneal vertex and parallel to the Z direction, the entire anterior segment of the eye E can likewise be covered.
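The two scan layouts mentioned above (a raster of parallel cross sections and a radial pattern rotated about the corneal vertex) can be summarized by the following sketch, which only generates illustrative scan-line geometry in the XY plane; the spacing, extent, and function names are assumptions rather than values disclosed by the embodiment:

```python
import numpy as np

def raster_scan_lines(width_mm: float, n_lines: int):
    """Cross sections parallel to the X and Z directions, evenly spaced in Y."""
    ys = np.linspace(-width_mm / 2, width_mm / 2, n_lines)
    # Each scan line runs across the full width in X at a fixed Y.
    return [((-width_mm / 2, y), (width_mm / 2, y)) for y in ys]

def radial_scan_lines(radius_mm: float, n_lines: int):
    """Cross sections through the corneal vertex, rotated about the Z axis."""
    angles = np.linspace(0.0, np.pi, n_lines, endpoint=False)
    return [((-radius_mm * np.cos(a), -radius_mm * np.sin(a)),
             (radius_mm * np.cos(a), radius_mm * np.sin(a)))
            for a in angles]
```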

Each of the plurality of tomographic images shows brightness and darkness according to the structure of the eye E at each position of its cross section. The control unit 240 therefore assembles the positions of the plurality of tomographic images into three-dimensional image data 245b, which has single-channel information indicating brightness and darkness for each coordinate in the OCT coordinate system, and records the data in the storage medium 245. It should be noted that the coordinates indicated by the three-dimensional image data 245b only have to be expressed in a predefined coordinate system; in the present embodiment, it is assumed that the three-dimensional image data 245b is defined in the XYZ coordinate system used for alignment. That is, the three-dimensional image data 245b is data in which gradation values are specified for a plurality of coordinates in the XYZ coordinate system in accordance with the structure of the eye E. In the present embodiment, coordinates in the XYZ coordinate system are represented as (X, Y, Z).

In the present embodiment, the control unit 240 displays the captured three-dimensional image on the display 230 using the function of the display control unit 240 d. That is, when the three-dimensional image data 245b is acquired, the control unit 240 controls the display 230 so that a three-dimensional image of an arbitrary portion indicated by the three-dimensional image data 245b is displayed at a predetermined position. Fig. 4 shows an example of a vertical tomographic image (a section parallel to the Y and Z directions) Iov displayed on the right side of the two-dimensional image Ie. Further, fig. 4 shows an example of a horizontal tomographic image (a section parallel to the X and Z directions) Ioh displayed below the two-dimensional image Ie, and a tomographic image Ior of a section rotated by an arbitrary angle with respect to an axis passing through the corneal vertex and parallel to the Z direction, displayed on the lower right side of the two-dimensional image Ie. These tomographic images Iov, Ioh, and Ior are sectional views and thus two-dimensional, but since a part of the three-dimensional image data 245b is displayed, it can also be said that a three-dimensional image display is provided.

As described above, the three-dimensional image data 245b is data having a single-channel gradation value of each of a plurality of coordinates in the XYZ coordinate system. Therefore, when the tomographic images Iov, Ioh, and Ior are displayed based on the three-dimensional image data 245b, they are generally grayscale images. However, when the examination and diagnosis are performed using the three-dimensional image, or when the artificial eye is created using the three-dimensional image, it is preferable that the color of each part of the eye E to be examined is a color visually recognizable by a human.

Therefore, the present embodiment has a configuration in which each part of the three-dimensional structure obtained by OCT is colored based on the two-dimensional image. For this coloring, correspondence relation definition data, in which the three-dimensional coordinates of the three-dimensional image and the two-dimensional coordinates of the two-dimensional image are placed in correspondence, is generated in the present embodiment. Specifically, the correspondence relation definition data generation unit 240c is a program module that causes the control unit 240 to execute a function of generating correspondence relation definition data in which a correspondence is established between the position of a predetermined part of the eye E in the three-dimensional image, when that part is captured as a three-dimensional image, and the position of the same part in the two-dimensional image, when that part is captured as a two-dimensional image. That is, in the present embodiment, the control unit 240 generates correspondence relation definition data 245d for establishing a correspondence between the three-dimensional coordinates of the three-dimensional image and the two-dimensional coordinates of the two-dimensional image, and colors the three-dimensional image by using the correspondence relation definition data 245d.

In this embodiment, the correspondence relation definition data 245d is data obtained by establishing a correspondence relation between three-dimensional coordinates of a three-dimensional image and two-dimensional coordinates of a two-dimensional image. That is, in the two-dimensional image, for each pixel forming the two-dimensional image, a gradation value is defined for each color channel of RGB, and the position of each pixel is specified by two-dimensional coordinates (u, v). Therefore, when the two-dimensional coordinates (u, v) indicating each pixel of the two-dimensional image are specified, the color is specified. Therefore, when a part of the three-dimensional coordinates is photographed as a two-dimensional image, by establishing a correspondence relationship between arbitrary three-dimensional coordinates (X, Y, Z) of the three-dimensional image and two-dimensional coordinates (u, v), it can be considered that the arbitrary three-dimensional coordinates (X, Y, Z) have been colored.

As described above, the correspondence relation definition data 245d is data in which the three-dimensional coordinates of the three-dimensional image and the two-dimensional coordinates of the two-dimensional image establish a correspondence relation, and in the present embodiment, the correspondence relation definition data 245d is generated based on the two-dimensional image data 245a and the three-dimensional image data 245b obtained by photographing the eye E to be inspected. Note that the problem of distortion or the like in the optical system of the color camera does not change for each eye E to be examined, but the cornea of the eye E to be examined changes for each eye E (for each subject).

Therefore, in the present embodiment, for the iris photographed through the cornea, the correspondence relationship is defined for each eye E to be examined by ray tracing. On the other hand, for positions other than the iris (positions other than the iris when viewed from the front of the eye E), the correspondence relationship is defined based on the relationship between the eye E and the color camera and the characteristics of the optical system of the color camera. The details of the definition of such correspondence relationship will be described later. In any case, using the function of the correspondence relation definition data generation unit 240c, the control unit 240 generates correspondence relation definition data 245d indicating the correspondence relation between the three-dimensional coordinates of the three-dimensional image and the two-dimensional coordinates of the two-dimensional image, and records the correspondence relation definition data 245d in the storage medium 245 in association with the two-dimensional image data 245a and the three-dimensional image data 245 b.
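For the portions other than the iris, the correspondence can be thought of as a camera projection from the OCT coordinate system (X, Y, Z) into the pixel coordinates (u, v) of the color camera. The sketch below assumes a simple pinhole camera model with illustrative parameters; the actual apparatus uses the characteristics of the color camera's optical system and, for the iris, ray tracing through the cornea, neither of which is modeled here:

```python
import numpy as np

def make_correspondence(points_xyz: np.ndarray, focal_px: float,
                        u0: float, v0: float, camera_z_mm: float) -> np.ndarray:
    """Map (X, Y, Z) coordinates of the three-dimensional image to (u, v)
    pixel coordinates of the two-dimensional image.

    Assumptions (illustrative only): the color camera is a pinhole camera on
    the Z axis at z = camera_z_mm looking toward the eye, with focal length
    focal_px (in pixels) and principal point (u0, v0); lens distortion and
    refraction by the cornea are not taken into account.
    """
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    depth = camera_z_mm - z            # distance from the camera to each point
    u = u0 + focal_px * x / depth
    v = v0 + focal_px * y / depth
    # Correspondence relation definition data: one (u, v) per (X, Y, Z).
    return np.stack([u, v], axis=1)
```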

When the correspondence relation definition data 245d is generated, the correspondence relation definition data 245d can be used for various purposes. For example, if the two-dimensional image data 245a and the three-dimensional image data 245b and the correspondence relation definition data 245d are used in a set in other output devices (e.g., a printer), a three-dimensional image of the eye E to be inspected, which is colored with a color very close to the actual color of the eye E to be inspected, can be output.

In the present embodiment, the correspondence relation definition data 245d is used to color each part of the eye E to be examined when the three-dimensional image is displayed on the display 230. That is, using the function of the display control unit 240d, the control unit 240 extracts the three-dimensional coordinates lying on an arbitrary cross section from the three-dimensional image data 245b. The control unit 240 then refers to the correspondence relation definition data 245d to specify the two-dimensional coordinates corresponding to each of those three-dimensional coordinates. Further, the control unit 240 refers to the two-dimensional image data 245a and takes the gradation value corrected by the calibration data 245c as the gradation value at the three-dimensional coordinates. The control unit 240 then treats this gradation value as the color of the corresponding position indicated by the three-dimensional image data 245b and controls the display 230 so that the three-dimensional images (tomographic images Iov, Ioh, and Ior) are displayed in color.
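A minimal sketch of this coloring step, assuming the correspondence relation definition data is held as a lookup from an (X, Y, Z) coordinate to a pixel coordinate (u, v) and that the two-dimensional image has already been corrected with the calibration data 245c (all names are illustrative):

```python
import numpy as np

def color_cross_section(section_xyz: np.ndarray, correspondence_uv: dict,
                        corrected_2d_image: np.ndarray,
                        oct_gray_values: np.ndarray) -> np.ndarray:
    """Return one RGB color per voxel of a chosen cross section.

    section_xyz:        (N, 3) OCT coordinates lying on the cross section
    correspondence_uv:  maps an (X, Y, Z) tuple to a pixel (u, v) where defined
    corrected_2d_image: (H, W, 3) color-corrected front image
    oct_gray_values:    (N,) single-channel gradation values used as fallback
    """
    colors = np.empty((len(section_xyz), 3), dtype=np.uint8)
    for i, (x, y, z) in enumerate(section_xyz):
        uv = correspondence_uv.get((int(x), int(y), int(z)))
        if uv is None:
            # No correspondence defined (outside the camera view, transparent,
            # or occluded): fall back to the grayscale value of the OCT data.
            colors[i] = oct_gray_values[i]
        else:
            u, v = uv
            colors[i] = corrected_2d_image[int(round(v)), int(round(u))]
    return colors
```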

In this way, although not shown in Fig. 4, the tomographic images Iov, Ioh, and Ior are displayed in color. In the correspondence relation definition data 245d, a correspondence with two-dimensional coordinates need not be defined for all three-dimensional coordinates used to express the three-dimensional image. For example, the correspondence relation definition data 245d may be left undefined for positions outside the field of view of the color camera, positions that are transparent and thus cannot be colored, positions located on the back of the eye E that cannot be photographed by the color camera, and so on. In this case, the control unit 240 may adopt a configuration in which a position for which the correspondence relation definition data 245d does not exist is displayed with the color represented by the grayscale value indicated by the three-dimensional image data 245b. The colored parts (predetermined parts) are, for example, the iris, the anterior segment surface other than the iris and the cornea, the eyelid, and the like.

The display mode of the three-dimensional image may take various forms. The cross sections are not limited to the examples shown in Fig. 4; of course, any cross section specified by the examiner may be selected, and a cross section parallel to the Z direction may also be selected. Further, a projection view of a cross section seen from a certain direction (for example, a view in which information within a predetermined range in the Z direction is projected onto a section parallel to the Z direction) may be displayed. Further, a three-dimensional model may be generated based on the three-dimensional image, and a view of the three-dimensional model seen from an arbitrary direction may be colored and displayed; for example, a three-dimensional model displayed by volume rendering or surface rendering may be colored and displayed.

According to the above configuration, by associating the local structure of the eye E, specified based on the three-dimensional image obtained by OCT, with the color appearance of the anterior part of the eye E, the local structure can be colored. Therefore, the present embodiment can be used for various purposes in which the anterior segment is observed using color, for example, evaluating the color and shape of an affected region, the distribution of blood vessels, and so on, for differential diagnosis, grasping the condition of a disease, and follow-up observation. Many diseases are relevant, including allergic conjunctival diseases accompanied by conjunctival congestion, infectious keratitis accompanied by corneal clouding, corneal dystrophies associated with genetic abnormality, corneal epithelial disorders caused by dry eye and the like, pterygium in which part of the conjunctival tissue extends onto the cornea together with blood vessels, tumors formed in the iris, conjunctiva, and eyelid, retinoblastoma observable through the pupil, and neovascular glaucoma in which new blood vessels appear in the iris.

A color image of the anterior segment is also useful for observation after corneal transplantation or amniotic membrane transplantation, for functional evaluation of the filtering bleb after glaucoma filtration surgery, for follow-up observation after shunt implantation surgery, and the like. Further, if coloring is performed as in the present embodiment, the probability increases that the pathological condition is correctly grasped and that slight changes are not overlooked. Further, for each disease, clinical evaluation criteria such as the degree of congestion, the color of the affected area, and the spread of lesions may be provided as indicators of classification, severity, and degree of malignancy. In the present embodiment, the reproducibility of color is ensured by the calibration data. Thus, the classification and the determination of severity/malignancy can be made correctly, and the likelihood of misdiagnosis or oversight can be reduced.

Further, as a method for such classification and determination of severity/malignancy, it is common to visually compare a color image of the anterior segment of the eye E with color images of representative case samples prepared for each severity level, annotated patterns, and the like, and to judge the classification and the grade of severity/malignancy to which the case belongs. Even in this case, since color reproducibility is ensured in the present embodiment, quantitative evaluation based on various color components is possible, and thus more objective and stable evaluation is possible.

Further, in recent years, when producing artificial eyes using a 3D printer, attempts have been made to use slit-lamp microscope color images of the actual eye. In order to match the color and texture of the iris and conjunctival blood vessels printed on the artificial eye with those of the actual eye, it is preferable to ensure the color reproducibility of the color image. Furthermore, a color image can capture only information on the surface of the eye. However, by combining it with a three-dimensional image that includes the depth direction of a lesion, or with an optical coherence tomography angiography image obtained by the non-invasive three-dimensional angiography technique based on OCT, it becomes possible to capture the degree of infiltration into internal tissue, the three-dimensional form, the characteristics of the affected region such as vascular invasion into tissue, and the like, thereby enabling more accurate diagnosis.

(3) Calibration data generation process:

Next, the process of generating calibration data for ensuring the color reproducibility of a two-dimensional image captured with the color camera will be described in detail. Fig. 5A is a flowchart of the calibration data generation process. In the present embodiment, the calibration data generation process is performed before shipment of the OCT apparatus 1. When the calibration data generation process starts, the white light sources 205, 205 are turned on under a first condition (step S100). That is, the operator performing the calibration data generation sets the light emission condition to the first condition via an input unit (not shown) and gives an instruction to turn on the white light sources 205, 205. The control unit 240 thus turns on the white light sources 205, 205 under the set first condition. The first condition is the same as the condition under which the eye E is illuminated when the two-dimensional image data 245a is captured with the color camera; in the present embodiment, the first condition is that the white light sources 205, 205 output white light of a color temperature of 5500 K at a predetermined brightness (luminous intensity, luminance, illuminance, etc.).

Next, the white-balance adjustment target is photographed (step S105). Specifically, the operator places the white-balance adjustment target at the position where the eye to be examined E would be arranged at the time of photographing. The white-balance adjustment target is an achromatic color sample prepared in advance; in the present embodiment, it is a light gray color sample close to white. With the white-balance adjustment target in place, the operator gives a photographing instruction through an input unit (not shown). The control unit 240 then controls the area sensor 208 of the scanning alignment optical system 200, using the function of the two-dimensional image acquisition unit 240a, to acquire an image of the white-balance adjustment target.

Next, the control unit 240 adjusts the white balance of the color camera (step S110). That is, the control unit 240 adjusts the gradation values representing the color of the white-balance adjustment target captured in step S105 to predetermined gradation values. Various techniques can be used for white balance adjustment. For example, the gain of analog-to-digital conversion may be adjusted after the light intensity of each RGB channel is detected, or the raw data of the output values of the photoelectric conversion elements detecting each RGB color may be adjusted. In any case, the RGB gradation values of the color of the white-balance adjustment target are predetermined, and the control unit 240 performs the white balance adjustment by adjusting the gain, adjusting the conversion formula from raw data to RGB gradation values, and so on. Once the white balance adjustment has been performed in this manner, the white balance setting is maintained until the adjustment is performed again.
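A minimal sketch of one of the options mentioned above, per-channel gain adjustment so that the photographed achromatic target is mapped to predetermined gradation values (the function names and target values are illustrative assumptions):

```python
import numpy as np

def white_balance_gains(target_patch_rgb: np.ndarray,
                        desired_rgb=(200.0, 200.0, 200.0)) -> np.ndarray:
    """Compute per-channel gains from an image crop of the achromatic target.

    target_patch_rgb: (H, W, 3) crop showing only the white-balance target
    desired_rgb:      predetermined gradation values the target should map to
    """
    measured = target_patch_rgb.reshape(-1, 3).mean(axis=0)
    return np.asarray(desired_rgb) / measured

def apply_white_balance(raw_rgb_image: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Apply the gains to a raw RGB image and clip to the 8-bit range."""
    balanced = raw_rgb_image.astype(np.float64) * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)
```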

Next, the color chart is photographed (step S115). That is, the operator places a color chart at the position where the eye to be examined E is arranged at the time of photographing. The color chart is a color sample having a plurality of color patches, including predetermined chromatic patches and achromatic patches, so as to cover colors over the entire color space. With the color chart in place, the operator gives a photographing instruction through an input unit (not shown).

The control unit 240 then controls the area sensor 208 of the scanning alignment optical system 200, using the function of the two-dimensional image acquisition unit 240a, to acquire a chart image. Since the color chart includes a plurality of color patches (for example, 24), a chart image capturing each color patch is acquired.

In the present embodiment, it is assumed that the color chart and the eye to be examined E are photographed under indoor lighting conditions. In this case, the influence of the room lighting forms the background of the image. Indoor lighting generally varies depending on the photographing environment; therefore, if the processing were performed in a state including this background, the reproducibility of colors could deteriorate. In the present embodiment, a process of removing the background is therefore performed.

Specifically, the control unit 240 turns off the white light sources 205, 205 using the function of the two-dimensional image acquisition unit 240a (step S120). At this point, the room lighting remains on. Next, the background is photographed (step S125). Specifically, the control unit 240 controls the area sensor 208 of the scanning alignment optical system 200, using the function of the two-dimensional image acquisition unit 240a, to photograph the color chart set in step S115, and acquires the resulting image as a background image. Note that when the background image is captured, no exposure adjustment is performed; capturing is performed under the same exposure conditions as in step S115.

Next, the control unit 240 subtracts the background image from the chart image to acquire a corrected chart image (step S130). That is, the control unit 240 subtracts each gradation value of the background image acquired in step S125 from the corresponding gradation value of the chart image acquired in step S115, and takes the resulting image as the corrected chart image. With this configuration, the influence of the indoor lighting can be removed from the chart image.
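This background removal (and the analogous correction of the eye image described in the claims) amounts to a per-pixel subtraction of the image taken with the white light sources turned off from the image taken with them turned on; a minimal sketch with illustrative names:

```python
import numpy as np

def remove_background(lit_image: np.ndarray, background_image: np.ndarray) -> np.ndarray:
    """Subtract the background image from the illuminated image.

    Both images must be captured under the same exposure conditions; negative
    differences are clipped to zero.
    """
    diff = lit_image.astype(np.int16) - background_image.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```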

Next, the control unit 240 acquires the coefficients of the polynomial for converting the color space (step S135). That is, in the present embodiment, the gradation values of an image captured with the color camera indicate colors expressed in a device-dependent color space. Therefore, if output on the display 230 were performed using the gradation values of the chart image as they are, it could not be ensured that the colors of the patches match the actual colors. In the present embodiment, the gradation values of the patches in the chart image are therefore converted into gradation values in the reference color space (sRGB) by using a cubic polynomial.

In order to be able to perform conversion by using such a cubic polynomial, the control unit 240 acquires coefficients of the cubic polynomial based on the RGB gradation value of each patch obtained from the table image and the sRGB gradation value known as the color of each patch. Specifically, the cubic polynomial is represented by the following formulas (1) to (3).

[Mathematical formula 1]

(Formulas (1) to (3): cubic polynomials consisting of terms of 0th to 3rd order in the gradation values (R_n, G_n, B_n) of the n-th patch, each term multiplied by one of the coefficients α0 to α9, giving the R, G, and B gradation values of the sRGB color space, respectively; formula image not reproduced.)

Formula (1) is a conversion formula for calculating the R gradation value in the sRGB color space from the RGB gradation values of the two-dimensional image data 245a, formula (2) is a conversion formula for calculating the G gradation value in the sRGB color space from the RGB gradation values of the two-dimensional image data 245a, and formula (3) is a conversion formula for calculating the B gradation value in the sRGB color space from the RGB gradation values of the two-dimensional image data 245a. The symbol n in each formula is the index of the patch. Thus, if there are 24 color patches, n is an integer from 1 to 24.

Accordingly, (R_n, G_n, B_n) in each formula represents the gradation values obtained by photographing the n-th patch with the color camera. Each formula contains terms of 0th to 3rd order in these gradation values, and each term is multiplied by one of the coefficients α0 to α9. In the present embodiment, these coefficients differ for R, G, and B; the subscripts r, g, and b are therefore appended to the coefficients in formulas (1) to (3), respectively, in order to distinguish them.

When there are N color patches, each of formulas (1) to (3) yields N equations. For example, if the R component of the sRGB gradation value of the first color patch is Rs_1 and the gradation values captured with the color camera are (R_1, G_1, B_1), then Rs_1 is substituted into the left side and (R_1, G_1, B_1) into the right side of formula (1), giving an equation in which the coefficients αr0 to αr9 are unknowns. Making such substitutions for the N color patches yields N equations. Therefore, the control unit 240 determines the coefficients αr0 to αr9 from the N equations by the least squares method or the like. If the coefficients are determined in the same way for formulas (2) and (3), formulas (1) to (3) are obtained for converting arbitrary gradation values captured with the color camera into sRGB gradation values. When the polynomial coefficients are obtained in the above manner, the control unit 240 saves the calibration data (step S140). Specifically, the control unit 240 records the coefficient values acquired in step S135 as the calibration data 245c in the storage medium 245.
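
Because the formula image for formulas (1) to (3) is not reproduced, the exact set of ten monomials is not shown here; the sketch below assumes one plausible choice, a constant term plus the first, second, and third powers of each channel, and fits the coefficients αr0 to αr9 (and likewise for G and B) by least squares as described in step S135. It is an illustration under that assumption, not a definitive implementation.

```python
import numpy as np

def design_matrix(rgb):
    """Ten assumed monomials of 0th to 3rd order per patch: 1, R, G, B, R^2, ..., B^3."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    ones = np.ones_like(r)
    return np.column_stack([ones, r, g, b, r**2, g**2, b**2, r**3, g**3, b**3])

def fit_calibration(camera_rgb, srgb_ref):
    """Least-squares fit of the coefficients, one set of ten per output channel.

    camera_rgb: (N, 3) gradation values of the N patches in the corrected chart image.
    srgb_ref:   (N, 3) known sRGB gradation values of the same patches.
    Returns a (10, 3) coefficient matrix (columns correspond to R, G, and B outputs).
    """
    A = design_matrix(np.asarray(camera_rgb, dtype=float))
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(srgb_ref, dtype=float), rcond=None)
    return coeffs

# Illustrative call with a hypothetical 24-patch chart.
camera_rgb = np.random.rand(24, 3) * 255
srgb_ref = np.random.rand(24, 3) * 255
calibration = fit_calibration(camera_rgb, srgb_ref)  # stored as calibration data 245c
```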

(4) The shooting process:

next, a process of capturing a two-dimensional image and a three-dimensional image will be described. In a state where the eye to be inspected E is arranged in front of the inspection window and the eye to be inspected E exists in the field of view of the color camera and at a position where it can be photographed by the OCT interference system 100, a photographing process is performed according to an instruction from the examiner. When the shooting process starts, the white light sources 205,205 are turned on under a second condition (step S200). That is, the inspector sets the light emission condition to the second condition via the input unit (not shown), and gives an instruction to turn on the white light sources 205, 205. Thus, the control unit 240 causes the white light sources 205,205 to be switched on under the set second condition.

The second condition is a light emission condition at the time of alignment by the scanning alignment optical system 200 and real-time display of a two-dimensional image captured with a color camera, and is preset. In the present embodiment, the color temperature is not limited, but is preferably close to the color temperature in the first condition, which is the lighting condition of the two-dimensional image, in order to make the color of the image displayed in real time coincide with the color of the two-dimensional image captured after the real-time display. However, it is preferable that the brightness (luminous intensity, brightness, illuminance, etc.) of the white light sources 205,205 is smaller than that under the first condition.

That is, in step S200 and subsequent steps, illumination is maintained for some time while relatively time-consuming processes such as alignment and OCT are performed. Therefore, it is preferable that the luminance (luminous intensity, brightness, illuminance, etc.) be smaller than under the first condition yet not excessively dark, in order to suppress an increase in blinking, unstable movement of the eye E to be examined, and excessive contraction of the pupil. In addition, the first condition, under which the two-dimensional image is captured, is always kept fixed, because the reproducibility of color must be ensured and the illumination must be performed under the same conditions as when the calibration data 245c was prepared. However, strict color reproducibility is not required for the real-time display and the like. Therefore, under the second condition, the brightness (luminous intensity, brightness, illuminance, etc.) of the white light may be smaller than under the first condition. For example, JIS Z9110 specifies an illuminance of about 200 to 2000 lx as suitable for a conference room or an office in order to promote a comfortable working environment. In addition, according to the description in "the change in the height of eyelid fissure and pupil size due to different illuminance" (Japanese journal of clinical ophthalmology 50 (4): 769-722, 1999), when the illuminance is 200 lx to 2000 lx, the pupil diameter decreases steadily as the illuminance increases, but when the illuminance exceeds 2000 lx, the pupil diameter changes unstably as the illuminance increases. Therefore, it is preferable that the white light have an illuminance of 200 lx to 2000 lx under the second condition, and that the illuminance under the first condition be larger than the illuminance selected from this range for the second condition. Of course, the first and second conditions may be the same if the effect of increased blinking need not be considered.

Next, the control unit 240 starts real-time display of the OCT and color image (step S205). That is, the control unit 240 controls the front-stage photographing system (the area sensor 208 and the like) serving as a color camera to acquire a two-dimensional image of the eye E using the function of the two-dimensional image acquisition unit 240 a. At this time, the control unit 240 may allow two-dimensional image data 245a indicating the captured two-dimensional image to be recorded in the storage medium 245 or stored in a random access memory (not shown). When acquiring a two-dimensional image, the control unit 240 controls the display 230 to display the two-dimensional image using the function of the display control unit 240 d. The mode of real-time display is not limited, but a display mode such as the two-dimensional image Ie shown in fig. 4 may be employed.

Further, the control unit 240 controls the OCT interference system 100, the scanning alignment optical system 200 (e.g., the galvanometer scanner 202), and the k-clock generation interference optical system 400 using the function of the three-dimensional image acquisition unit 240b to acquire a measurement interference signal. The control unit 240 performs processing such as an inverse Fourier transform on the measurement interference signal, and acquires a tomographic image of the anterior segment Ec along the scanning line. The control unit 240 limits the scanning direction of the galvanometer scanner 202 to the cross section of the tomographic image to be displayed, and acquires the tomographic image of that cross section.

At this time, the control unit 240 may allow the captured tomographic image data to be recorded in the storage medium 245 or stored in a random access memory (not shown). When acquiring a three-dimensional image, the control unit 240 controls the display 230 to display the three-dimensional image using the function of the display control unit 240 d. The mode of real-time display is not limited, but for example, display modes such as tomographic images Iov, Ioh, and Ior in fig. 4 may be adopted. At a stage before the alignment is completed, the real-time display in step S205 starts to be performed. Therefore, in the initial stage, the central portion of the cornea of the eye E to be examined may not be at the center of the real-time display image.

Next, the control unit 240 performs alignment (step S210). That is, the control unit 240 controls the XY position detection light source 216 to output alignment light for detecting the position in the X-Y direction. The alignment light is reflected on the corneal surface so as to form a bright spot image in the corneal vertex of the eye to be inspected E, and the reflected light is detected by the two-dimensional position sensor 218. The control unit 240 specifies the position of the bright spot, that is, the position of the corneal vertex (positions in the X and Y directions), based on the output signal of the two-dimensional position sensor 218. Further, the control unit 240 obtains the amount of positional deviation in the X and Y directions, which is necessary to match the bright point indicating the corneal vertex with the predetermined position of the imaging element. Then, the control unit 240 moves the apparatus body in the direction opposite to the positional deviation so that the amount of positional deviation becomes zero.

Further, the control unit 240 controls the Z position detection light source 219 so that the eye E is irradiated with detection light (slit light or spot light) irradiated from an oblique direction. Therefore, the reflected light reflected on the cornea is incident on the line sensor 221 via the imaging lens 220. The control unit 240 detects the Z-direction position of the eye E based on the output signal of the line sensor 221. Further, the control unit 240 obtains the amount of positional deviation in the Z direction, which is necessary to match the reflected light from the cornea with the predetermined position of the line sensor 221. Then, the control unit 240 moves the apparatus body in the direction opposite to the positional deviation so that the positional deviation amount becomes zero.

Since the movement in the X, Y, and Z directions is performed in predetermined movement-distance increments as described above, the alignment generally has to be repeated before it is completed. Therefore, the control unit 240 determines whether the alignment is completed (step S215). That is, when the deviation amount becomes 0 (or the absolute value of the deviation amount is equal to or smaller than a threshold), the control unit 240 determines that the alignment has been completed. When it is not determined in step S215 that the alignment has been completed, the control unit 240 repeats the processes in step S210 and subsequent steps.
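
A minimal sketch of the iterative XY alignment in steps S210 and S215 is shown below. The callables detect_bright_spot and move_body stand in for the two-dimensional position sensor 218 and the drive mechanism of the apparatus body; they are assumptions of this sketch, not components defined in the specification.

```python
def align_xy(detect_bright_spot, move_body, target=(0.0, 0.0),
             threshold=0.05, max_iterations=100):
    """Repeat measurement and movement until the bright-spot deviation is small enough.

    detect_bright_spot() -> (x, y) position of the corneal-vertex bright spot.
    move_body(dx, dy)    -> moves the apparatus body by the given amounts.
    """
    for _ in range(max_iterations):
        x, y = detect_bright_spot()
        dx, dy = x - target[0], y - target[1]
        if abs(dx) <= threshold and abs(dy) <= threshold:
            return True              # alignment completed (step S215: yes)
        move_body(-dx, -dy)          # move opposite to the positional deviation
    return False                     # alignment not achieved within the iteration limit

# Hypothetical usage: align_xy(detect_bright_spot=sensor.read, move_body=stage.move)
```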

When it is determined in step S215 that the alignment has been completed, the control unit 240 performs eye tracking (step S220). Eye tracking is an operation of changing the capturing position of a three-dimensional image according to the positional displacement of the eye to be examined. Specifically, a characteristic point (e.g., a bright spot or a site having a predetermined characteristic) of the eye to be inspected is detected by the two-dimensional position sensor 218 or the area sensor 208 to specify a change in the position of the eye to be inspected after alignment. Then, the control unit 240 performs feedback control for correcting the scanning position of the galvanometer scanner 202 so that the position change is eliminated. Therefore, a three-dimensional image equivalent to that when the position of the eye to be inspected is not changed after the alignment is completed can be obtained.

Next, the control unit 240 acquires a three-dimensional image using the function of the three-dimensional image acquisition unit 240b (step S225). That is, the control unit 240 controls the OCT interference system 100, the scanning alignment optical system 200 (e.g., the galvanometer scanner 202), and the k-clock generation interference optical system 400 using the function of the three-dimensional image acquisition unit 240b to acquire the measurement interference signal. The control unit 240 performs processing such as an inverse Fourier transform on the measurement interference signal, and acquires a tomographic image of the anterior segment Ec along the scanning line.

The control unit 240 changes the scanning direction of the galvanometer scanner 202 to acquire tomographic images of a plurality of cross sections covering the entire anterior segment of the eye E. Then, the control unit 240 generates a three-dimensional image based on the obtained plurality of tomographic images, and records three-dimensional image data 245b indicating the obtained three-dimensional image in the storage medium 245. When the three-dimensional image is acquired, the control unit 240 ends the eye tracking process (step S230).

Next, the white light sources 205,205 are turned on under a first condition (step S235). That is, the inspector sets the light emission condition to the first condition via an input unit (not shown), and gives an instruction to turn on the white light sources 205, 205. Therefore, the control unit 240 turns on the white light sources 205,205 under the set first condition (color temperature of 5500K and predetermined luminance (luminous intensity, brightness, illuminance, etc.)).

Next, the control unit 240 captures an image of the eye to be inspected E using the function of the two-dimensional image acquisition unit 240a (step S240). That is, the control unit 240 controls the front-stage photographing system (the area sensor 208 and the like) serving as a color camera to acquire an image of the eye E using the function of the two-dimensional image acquisition unit 240 a.

Next, the control unit 240 performs a process for removing the background. Specifically, the control unit 240 turns off the white light sources 205,205 using the function of the two-dimensional image acquisition unit 240a (step S245). At this time, the indoor lighting is kept in the on state. Next, the background is photographed (step S250). Specifically, the control unit 240 controls the area sensor 208 of the scanning alignment optical system 200 to capture the eye E using the function of the two-dimensional image acquisition unit 240a, and acquires the resultant image as a background image. Note that, when the background image is captured, exposure adjustment is not performed, and capturing is performed under the same exposure conditions as step S240.

Next, the control unit 240 removes the background image from the image of the eye E to acquire a two-dimensional image using the function of the two-dimensional image acquisition unit 240a (step S255). That is, the control unit 240 subtracts each gradation value of the background image acquired in step S250 from the corresponding gradation value of the image of the eye E acquired in step S240, and acquires the resultant image as the corrected two-dimensional image. Then, the control unit 240 records two-dimensional image data 245a indicating the two-dimensional image in the storage medium 245. According to the above configuration, the influence of the indoor illumination can be removed from the two-dimensional image of the eye E to be inspected.

Next, the control unit 240 performs color calibration using the function of the two-dimensional image acquisition unit 240a (step S260). That is, the control unit 240 refers to the calibration data 245c recorded in the storage medium 245, and corrects the gradation value at each two-dimensional coordinate indicated by the two-dimensional image data 245a. For example, when the gradation value at a specific coordinate (u, v) is (R_i, G_i, B_i), the red component R_o of the corrected gradation value (R_o, G_o, B_o) is the value obtained by substituting (R_i, G_i, B_i) into formula (1). Similarly, the corrected green component G_o is the value obtained by substituting (R_i, G_i, B_i) into formula (2), and the corrected blue component B_o is the value obtained by substituting (R_i, G_i, B_i) into formula (3). When all the pixels of the two-dimensional image have been corrected, the control unit 240 updates the storage medium 245 with two-dimensional image data 245a indicating the corrected two-dimensional image. According to the above-described procedure, it is possible to generate two-dimensional image data 245a in which colors are described without depending on an output device and color reproducibility is ensured.
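
The per-pixel correction of step S260 can be sketched as follows, reusing the assumed monomial terms from the fitting sketch above: each pixel's (R_i, G_i, B_i) is pushed through the fitted polynomials to obtain (R_o, G_o, B_o). The monomial order is an assumption of this sketch and must match whatever was used when the calibration data were fitted.

```python
import numpy as np

def calibrate_image(image, coeffs):
    """Apply the fitted cubic polynomials to every pixel of an HxWx3 image.

    coeffs: (10, 3) matrix as produced by fit_calibration(); the monomial order
    must match the design matrix used during fitting (an assumption of this sketch).
    """
    h, w, _ = image.shape
    rgb = image.reshape(-1, 3).astype(float)
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    terms = np.column_stack([np.ones_like(r), r, g, b,
                             r**2, g**2, b**2, r**3, g**3, b**3])
    corrected = np.clip(terms @ coeffs, 0, 255)   # (H*W, 3) sRGB gradation values
    return corrected.reshape(h, w, 3).astype(np.uint8)
```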

(4-1) Correspondence definition data generation process:

next, the control unit 240 performs a correspondence relation definition data generation process using the function of the correspondence relation definition data generation unit 240c (step S265). Fig. 5B shows a flowchart of the correspondence relation definition data generation process. In the correspondence defining data generating process, the control unit 240 turns on the white light source under the first condition (step S300). That is, the control unit 240 turns on the white light sources 205,205 under the same conditions as when a two-dimensional image is taken with a color camera.

Next, the calibration structure is photographed (step S305). The correspondence relation definition data is data defining a correspondence relation between three-dimensional coordinates of the three-dimensional image and two-dimensional coordinates of the two-dimensional image. Therefore, in the present embodiment, a configuration is adopted in which the correspondence relationship is defined by actually photographing the calibration structure with a color camera.

The calibration structure is a three-dimensional structure for clarifying a correspondence between three-dimensional coordinates of a three-dimensional image and two-dimensional coordinates of a two-dimensional image, for example, a structure as shown in fig. 5C. The structure shown in fig. 5C has a shape obtained by obliquely cutting a rectangular parallelepiped having deformed cavities, and also has a shape in which the internal cavities are separated by a plurality of wall surfaces. The structure is arranged in front of an inspection window provided at a front surface of the device body such that the inner cavity is included in a field of view of the color camera. Thus, the Z direction shown in fig. 2 is the direction from top to bottom in fig. 5C.

The calibration structure has a shape such that specific positions of the structure are readily apparent in an image captured from this direction. For example, in the structure shown in fig. 5C, when the structure is photographed as a three-dimensional image or a two-dimensional image, the positions of the intersection points of the wall surfaces (P1, P2, P3, etc.) can be easily specified in the image. These points are referred to herein as reference points.

In the present embodiment, a two-dimensional image and a three-dimensional image are captured using a calibration structure provided in the apparatus main body. That is, the control unit 240 controls the front-stage photographing system (the area sensor 208 and the like) serving as a color camera to acquire a two-dimensional image of the calibration structure using the function of the two-dimensional image acquisition unit 240 a. Further, the control unit 240 controls the OCT interference system 100, the scanning alignment optical system 200 (e.g., the galvanometer scanner 202), and the k-clock generation interference optical system 400 using the function of the three-dimensional image acquisition unit 240b to acquire a three-dimensional image of the calibration structure.

Next, the control unit 240 specifies a reference point (step S310). That is, the control unit 240 specifies a reference point based on the two-dimensional image captured in step S305, and specifies two-dimensional coordinates of the reference point. Further, the control unit 240 specifies a reference point based on the three-dimensional image captured in step S305, and specifies the three-dimensional coordinates of the reference point. It should be noted that the reference point may be specified by a variety of techniques. Each reference point may be specified by an input of an examiner, or based on feature quantities indicating features of the reference point acquired from the two-dimensional image and the three-dimensional image, respectively. In any case, for each of the plurality of reference points, the two-dimensional image and the three-dimensional image of the same reference point are associated with each other.

Therefore, at the position where the plurality of reference points are photographed, the three-dimensional coordinates of the three-dimensional image and the two-dimensional coordinates of the two-dimensional image establish a correspondence relationship. Therefore, in the present embodiment, the correspondence relationship of a large number of coordinates is defined from these correspondence relationships. For this, the control unit 240 acquires the internal parameters and the external parameters (step S315). Specifically, when a color camera is used to photograph an object in a field of view to obtain an image, the relationship between the coordinates of the object and the coordinates of the image may be described based on the known relational expression (4).

[Mathematical formula 2]

s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}   (4)

Here, the three-dimensional coordinates (X, Y, Z) are coordinates describing a three-dimensional position in the three-dimensional image data 245b, and the two-dimensional coordinates (u, v) are coordinates describing a two-dimensional position in the two-dimensional image data 245a. In formula (4), a column vector (X, Y, Z, 1) containing the three-dimensional coordinates (X, Y, Z) is multiplied by two types of matrices. The matrix of 3 rows and 3 columns represents the internal parameters of the color camera: (c_x, c_y) represents the principal point (in the present embodiment, the center of the two-dimensional image), and f_x and f_y are the focal lengths expressed in units of pixels. On the other hand, the matrix of 3 rows and 4 columns in formula (4) represents the external parameters, namely the rotation (r_11 to r_33) and translation (t_1 to t_3) for converting the three-dimensional coordinates observed from the camera into the three-dimensional coordinates describing the OCT result.

The internal parameters are parameters indicating the characteristics of the optical system of the color camera, i.e., the front-stage photographing system, and are determined based on that optical system. In the present embodiment, they are recorded in advance in the storage medium 245 as the internal parameters 245e. The external parameters are parameters for associating the world coordinate system outside the color camera with the three-dimensional coordinate system viewed from the color camera, and are determined based on the relationship between the eye E to be examined and the color camera. In the present embodiment, the external parameters are specified in advance based on the relationship between the optical system of the color camera and the space in which the eye E to be examined as the object is arranged, so that the world coordinate system matches the three-dimensional coordinate system used in the three-dimensional image obtained by OCT. In the present embodiment, the external parameters are also determined in advance and recorded in the storage medium 245 as the external parameters 245f.

With the internal parameters 245e and the external parameters 245f described above, the three-dimensional coordinates indicating the position of each part of the eye E photographed by OCT can be associated with the two-dimensional coordinates indicating the position of the eye E on the two-dimensional image captured with the color camera. However, since the lenses of the color camera (the objective lens 204 and the imaging lens 207) introduce various distortions, formula (4) is modified in the present embodiment so that the result obtained with the external parameters 245f includes the influence of the distortion.

Specifically, the formula for converting the three-dimensional coordinates (X, Y, Z) by the external parameters 245f is expressed as the following formula (5).

[Mathematical formula 3]

\begin{pmatrix} x \\ y \\ z \end{pmatrix} = R \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + t   (5)

In the formula, R represents the rotation matrix (r_11 to r_33) of the external parameters 245f, and t represents the translation vector (t_1 to t_3) of the external parameters 245f.

Then, the formulas containing the distortion influence are expressed as the following formulas (6) to (12).

[Mathematical formula 4]

x' = x / z   (6)
y' = y / z   (7)
x'' = x'(1 + k_1 r^2 + k_2 r^4) + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2)   (8)
y'' = y'(1 + k_1 r^2 + k_2 r^4) + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y'   (9)
r^2 = x'^2 + y'^2   (10)
u = f_x x'' + c_x   (11)
v = f_y y'' + c_y   (12)

That is, when the coordinates (x, y, z) obtained by formula (5) are substituted into formulas (6) and (7), the coordinates (x', y') are obtained. When the obtained coordinates (x', y') are substituted into formulas (8) and (9), using formula (10), the coordinates (x'', y'') are obtained. Then, the obtained coordinates (x'', y'') are substituted into formulas (11) and (12), and the calculation based on the internal parameters 245e yields the two-dimensional coordinates (u, v).

The three-dimensional coordinates (X, Y, Z) can be converted into the two-dimensional coordinates (u, v) in the manner described above. Therefore, once the distortion coefficients k1, k2, p1, and p2, which are the unknown coefficients in formulas (6) to (12), are specified, the correspondence between the three-dimensional coordinates of the three-dimensional image and the two-dimensional coordinates of the two-dimensional image is defined. Accordingly, the control unit 240 acquires the distortion coefficients k1, k2, p1, and p2 based on the correspondence between the two-dimensional and three-dimensional coordinates of the plurality of reference points specified in step S310 (step S320). For example, the control unit 240 sets up, from formulas (6) to (12) and the correspondences at the reference points, equations whose unknowns are the distortion coefficients k1, k2, p1, and p2, and solves them. Of course, by using the correspondences at a large number of reference points, solutions close to the true values of the distortion coefficients k1, k2, p1, and p2 can be calculated.
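
The chain from formula (5) through formulas (6) to (12) matches a pinhole projection with radial and tangential distortion; since the formula images are not reproduced in this text, the sketch below is one reading of that chain rather than a definitive implementation. A three-dimensional OCT coordinate is rotated and translated by the external parameters, normalized, distorted with k1, k2, p1, and p2, and mapped to pixel coordinates with the internal parameters.

```python
import numpy as np

def project_point(P, R, t, fx, fy, cx, cy, k1, k2, p1, p2):
    """Map a 3-D OCT coordinate (X, Y, Z) to 2-D image coordinates (u, v).

    R (3x3) and t (3,) correspond to the external parameters 245f; fx, fy, cx, cy
    to the internal parameters 245e; k1, k2, p1, p2 are the distortion coefficients.
    """
    x, y, z = R @ np.asarray(P, dtype=float) + np.asarray(t, dtype=float)  # formula (5)
    xp, yp = x / z, y / z                                 # formulas (6) and (7)
    r2 = xp**2 + yp**2                                    # formula (10)
    radial = 1 + k1 * r2 + k2 * r2**2
    xpp = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp**2)   # formula (8)
    ypp = yp * radial + p1 * (r2 + 2 * yp**2) + 2 * p2 * xp * yp   # formula (9)
    return fx * xpp + cx, fy * ypp + cy                   # formulas (11) and (12)
```

Given the reference-point correspondences from step S310, the distortion coefficients could then be estimated, for example, by minimizing the reprojection error of project_point over k1, k2, p1, and p2, which is one possible realization of step S320.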

In any case, when the distortion coefficients are acquired, the control unit 240 records the information of formulas (6) to (12), including the distortion coefficient values, as the correspondence definition data 245d in the storage medium 245. In addition, in the present embodiment, the portion to be colored (the predetermined portion) is determined in advance. Accordingly, the control unit 240 generates the correspondence definition data 245d for predetermined portions such as the iris, the anterior segment surface other than the iris and the cornea, the eyelid, and the like.

As described above, the distortion coefficients and formulas (6) to (12) can be used to define the correspondence between the three-dimensional coordinates of the three-dimensional image of the eye E and the two-dimensional coordinates of its two-dimensional image. However, this correspondence does not take into account the cornea of the eye E, which acts as a lens. Therefore, this correspondence associates the three-dimensional coordinates of the three-dimensional image of the eye E with the two-dimensional coordinates of the two-dimensional image only as long as the refraction of the cornea is not considered.

In the present embodiment, the correspondence is defined to be more faithful to the actual appearance of the eye E to be examined, taking into account the refraction of the cornea. Specifically, for the iris existing on the back of the cornea, ray tracing is taken into account to define the correspondence. For this, the control unit 240 acquires a target region for ray tracing (step S325). In this embodiment, the opaque part affected by corneal refraction is the iris, which is the target area for ray tracing. Accordingly, the control unit 240 extracts a part of the captured iris image from the two-dimensional image data 245a and the three-dimensional image data 245b and sets it as a target region for ray tracing. It is to be noted that various techniques may be used as a technique of extracting an iris image, and for example, the control unit 240 may accept range designation of the examiner or designate a target region by extracting feature amounts indicating iris features from a three-dimensional image and a two-dimensional image, respectively.

When the target area is acquired, the control unit 240 acquires the correspondence between the three-dimensional coordinates and the two-dimensional coordinates of the target area based on ray tracing (step S330). That is, the control unit 240 performs ray tracing on any portion of the iris that is included in the target region and is opaque.

Fig. 7 is an exemplary diagram of ray tracing. In fig. 7, the cornea C of the eye E to be examined and the structure of its periphery are shown with a plane passing through the center of the cornea and parallel to the Z and Y directions as a cross section. Note that, in fig. 7, an optical axis AL passing through the corneal vertex and parallel to the Z direction is indicated by a chain line, and a virtual ray is indicated by a two-dot chain line.

The control unit 240 can specify the structure of the eye E on the cross section as shown in fig. 7 by the three-dimensional image data 245 b. The control unit 240 determines the focal point position Pf and the principal point position Pp on the cross section based on the corneal shape specified by the three-dimensional image data 245 b.

The focal position Pf can be defined, for example, by performing ray tracing and specifying the intersection between the optical axis AL and a virtual ray that starts from the three-dimensional coordinates (X_1, Y_1, Z_1), extends parallel to the optical axis (Z direction), is refracted by the refractive power of the cornea when it reaches the cornea, and then crosses the optical axis AL. The refraction at the cornea can be regarded as occurring once, with a predetermined refractive index. Alternatively, the refraction at the cornea may be regarded as occurring once on each of the anterior and posterior surfaces of the cornea, with a different predetermined refractive index at each surface.

The principal point position Pp may be defined, for example, as the intersection between the optical axis AL and the anterior surface of the cornea (the corneal vertex). That is, the principal point position of the cornea can be regarded as almost coincident with the anterior surface of the cornea (see, for example, the lecture "Basis of optics of the eye" (Japanese Society for Vision), journal "Vision", Vol. 1, No. 1 (January 1989)). Thus, the principal point position Pp may be defined, for example, as the intersection point between the anterior surface of the cornea C indicated by the three-dimensional image data 245b and the optical axis AL.

Of course, the above methods of acquiring the focal position Pf and the principal point position Pp are merely examples, and these points may be acquired by other methods. For example, instead of being specified as described above, the principal point position Pp may be calculated according to a general formula. Specifically, the principal point position Pp can be acquired by using the following parameters: radius of curvature of the anterior corneal surface = r_1, radius of curvature of the posterior corneal surface = r_2, refractive index of air = n_1, refractive index of the corneal stroma = n_2, refractive index of the aqueous humor of the anterior chamber = n_3, central corneal thickness = d, refractive power of the anterior corneal surface = D_1, refractive power of the posterior corneal surface = D_2, total refractive power of the cornea = D_t.

Incidentally, the radius of curvature r_1 of the anterior corneal surface and the radius of curvature r_2 of the posterior corneal surface may be obtained by the control unit 240 specifying the anterior surface and the posterior surface from the three-dimensional image data 245b based on feature quantities and the like, calculating the center of curvature from at least three points on each surface, and calculating the distance from that center to the surface. The central corneal thickness d can be acquired, for example, by the control unit 240 specifying the distance between the anterior and posterior corneal surfaces on the optical axis AL. For the refractive index n_1 of air, the refractive index n_2 of the corneal stroma, and the refractive index n_3 of the aqueous humor of the anterior chamber, known values (e.g., 1.0, 1.376, and 1.336, respectively) may be used.

The refractive power D_1 of the anterior corneal surface is D_1 = (n_2 - n_1)/r_1.

The refractive power D_2 of the posterior corneal surface is D_2 = (n_3 - n_2)/r_2.

The total refractive power D_t of the cornea is D_t = D_1 + D_2 - (d/n_2)·D_1·D_2.

The distance from the anterior corneal surface to the image-side principal point position is e' + d, given as follows.

e' + d = -d·n_3·D_1/(n_2·D_t) + d

Based on the image-side principal point position calculated in this way, the control unit 240 may regard the position located the distance e' + d toward the image side from the intersection between the anterior corneal surface and the optical axis AL as the principal point position Pp.
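
The corneal power and image-side principal point calculation described above can be summarized in a short sketch. The numeric defaults are the known refractive indices quoted in the text; the example radii and thickness in the call are merely typical values, not measurements from the specification.

```python
def corneal_principal_point(r1, r2, d, n1=1.0, n2=1.376, n3=1.336):
    """Return (D1, D2, Dt, e_prime_plus_d) for the cornea.

    r1, r2: radii of curvature of the anterior/posterior corneal surfaces (m).
    d:      central corneal thickness (m).
    n1, n2, n3: refractive indices of air, corneal stroma, and aqueous humor.
    e_prime_plus_d: distance from the anterior corneal surface to the image-side
    principal point, measured toward the image side.
    """
    D1 = (n2 - n1) / r1                      # refractive power of the anterior surface
    D2 = (n3 - n2) / r2                      # refractive power of the posterior surface
    Dt = D1 + D2 - (d / n2) * D1 * D2        # total refractive power of the cornea
    e_prime_plus_d = -d * n3 * D1 / (n2 * Dt) + d
    return D1, D2, Dt, e_prime_plus_d

# Typical values: r1 = 7.7 mm, r2 = 6.8 mm, d = 0.5 mm (expressed in meters).
print(corneal_principal_point(0.0077, 0.0068, 0.0005))
```

With these typical values the result places the image-side principal point within a fraction of a millimeter of the anterior corneal surface, which is consistent with the approximation described above.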

When the focal position Pf and the principal point position Pp are determined, the coordinates (X_v, Y_v, Z_v) of the intersection between the line obtained by extending the ray Lf to the rear side of the eye E and the line Lp can be defined as the position of the virtual image. Here, the ray Lf is the light that travels parallel to the Z direction from the three-dimensional coordinates (X_1, Y_1, Z_1), is refracted by the cornea, and then travels toward the focal point Pf. The line Lp is the line connecting the principal point position Pp and the three-dimensional coordinates (X_1, Y_1, Z_1). In fig. 7, the virtual images of the iris and the crystalline lens are schematically shown by broken lines.

When the control unit 240 obtains the coordinates (X_v, Y_v, Z_v) representing the position of the virtual image through the above-described procedure, the two-dimensional coordinates (u_1, v_1) at which the part located at the three-dimensional coordinates (X_1, Y_1, Z_1) is captured in the two-dimensional image can be specified. Specifically, what is captured at the two-dimensional coordinates (u_1, v_1) is the virtual image located at the coordinates (X_v, Y_v, Z_v). Therefore, based on the correspondence definition data 245d, that is, formulas (6) to (12), the control unit 240 acquires the two-dimensional coordinates (u_1, v_1) corresponding to the coordinates (X_v, Y_v, Z_v). Then, the control unit 240 establishes a correspondence between the three-dimensional coordinates (X_1, Y_1, Z_1) and the two-dimensional coordinates (u_1, v_1).

The control unit 240 performs the above-described process for each portion of the iris, thereby associating the three-dimensional coordinates (X, Y, Z) of those portions with the two-dimensional coordinates (u, v). Then, the control unit 240 records this information as the correspondence definition data 245d in the storage medium 245 together with the information indicating the target region. The correspondence definition data 245d is recorded in association with the two-dimensional image data 245a and the three-dimensional image data 245b, which are the data of the images of the eye to be inspected E. According to the above procedure, the correspondence definition data 245d can be defined while taking into account the influence of refraction at the cornea.
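
As a rough two-dimensional sketch of the virtual-image construction of fig. 7, restricted to a single meridional (Y-Z) plane, the following code intersects the extension of the refracted ray Lf with the line Lp. It simplifies the ray tracing by refracting the parallel ray at the principal plane rather than at the corneal surface, and the coordinate conventions are illustrative only.

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersection of the two lines p1 + s*d1 and p2 + t*d2 in a 2-D plane."""
    A = np.column_stack([d1, -d2])
    s, _ = np.linalg.solve(A, p2 - p1)
    return p1 + s * d1

def virtual_image_position(y1, z1, z_pp, z_pf):
    """Locate the virtual image of an iris point in the meridional (Y-Z) plane.

    (y1, z1): object point on the iris; z_pp: principal point position Pp on the
    optical axis; z_pf: focal position Pf, i.e., where the ray refracted by the
    cornea crosses the optical axis.  Refraction is modeled at the principal
    plane, which is a simplification of the ray tracing described in the text.
    """
    # Ray Lf after refraction: passes through the refraction height (y1, z_pp)
    # and the focal position Pf on the axis; its extension is used as-is.
    lf_point = np.array([y1, z_pp], dtype=float)
    lf_dir = np.array([0.0, z_pf]) - lf_point
    # Line Lp: passes through the principal point Pp and the object point.
    lp_point = np.array([0.0, z_pp])
    lp_dir = np.array([y1, z1], dtype=float) - lp_point
    return line_intersection(lf_point, lf_dir, lp_point, lp_dir)  # (Y_v, Z_v)
```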

When the correspondence definition data 245d has been recorded in the storage medium 245 in the manner described above, the control unit 240 returns to the shooting process shown in fig. 6 and continues the process. That is, using the function of the display control unit 240d, the control unit 240 colors the three-dimensional image based on the two-dimensional image and displays them together (step S275). Specifically, the control unit 240 generates a tomographic image of a cross section to be displayed (for example, a cross section designated by the examiner) based on the three-dimensional image data 245b recorded in the storage medium 245.

Then, the control unit 240 determines whether the tomographic image contains three-dimensional coordinates defined by the correspondence definition data 245d. When three-dimensional coordinates defined by the correspondence definition data 245d are contained, the control unit 240 specifies the corresponding two-dimensional coordinates based on the correspondence definition data 245d, and regards the gradation value at those coordinates in the two-dimensional image as the color corresponding to the three-dimensional coordinates. Then, the control unit 240 controls the display 230 to display the tomographic image while coloring the specified three-dimensional coordinates with the color based on the gradation values of the corresponding two-dimensional coordinates.

Further, the control unit 240 acquires the two-dimensional image data 245a, and controls the display 230 to display a two-dimensional image. For the above reason, for example, the two-dimensional image Ie and the tomographic images Iov, Ioh, and Ior shown in fig. 4 are displayed. Not only the two-dimensional image Ie but also predetermined positions of the respective tomographic images Iov, Ioh, and Ior are displayed in color.

(5) Other embodiments

The above-described embodiment is one example for implementing the present invention, and the present invention can also be implemented in various other embodiments. Therefore, at least a part of the configuration of the above-described embodiment may be omitted or replaced, at least a part of the processes may be omitted or replaced, or their order may be changed. For example, in the generation of the correspondence definition data, steps S300 to S320 may be performed in advance, for example before shipment. In this case, after the eye to be examined E is photographed, steps S300, S325, and S330 are performed, and the generation process of the correspondence definition data 245d is thereby completed. Further, the calibration data generation process may be performed at any time after shipment of the OCT apparatus 1. Further, the calibration structure is not limited to the structure shown in fig. 5C. That is, the calibration structure may be any structure having a plurality of reference points, such as intersection points. Thus, for example, it may have a shape obtained by cutting a cylinder having an inner wall inclined with respect to the axis, or it may have another three-dimensional structure.

Further, the light source that is turned on at the time of real-time display need not be a white light source. For example, a light source that outputs infrared light or a light source that outputs green light may be used. In particular, if the room illumination is bright enough for real-time display, or if the brightness of the fixation target is bright enough for fixation of the eye to be examined, infrared light may be used for illumination instead of the white light source to acquire a real-time image. The eye to be inspected becomes more stable as the infrared light reduces glare. Furthermore, in a two-dimensional image captured under infrared light illumination, the texture of the iris is rendered sharper than in a two-dimensional image captured under white light illumination. Therefore, it can be more easily used as a feature point for eye tracking. Further, in the two-dimensional image captured under the illumination of infrared light, the contrast between the pupil and the iris becomes clearer than in the two-dimensional image captured under the illumination of white light. Therefore, the position of the center of gravity of the pupil or the like can also be used as a feature point for tracking. Furthermore, in a two-dimensional image captured under infrared light illumination, the iris and the contrast between the pupil and the iris may be more clearly expressed than in a two-dimensional image captured under white light illumination.

Furthermore, blood vessels are more visible in the two-dimensional image captured under green illumination than in the two-dimensional image captured under white illumination. Therefore, the feature points of the blood vessel structure can be easily used as the feature points for eye tracking. Further, in a configuration providing a white light source and a light source of another color, each light source may be turned on alternately. Real-time displayed two-dimensional images captured under illumination with light of any color may also be used for purposes other than real-time display. In this case, the live displayed two-dimensional image is also stored in the storage medium 245.

The two-dimensional image acquisition unit only needs to be able to capture a two-dimensional image by photographing the front of the eye E with a color camera. That is, the color camera only needs to be able to capture the front of the same eye E that is targeted by OCT and output its image as a color image. The front of the eye E is the face of the eye E seen from the front, and one configuration example for photographing the front of the eye E is a configuration in which the eye axis along the visual line direction of the eye E and the optical axis of the color camera coincide. Of course, since the eye E to be examined moves slightly due to involuntary eye movement during fixation or the like, it is not strictly required that the eye axis match the optical axis. It is sufficient that the eye axis direction of the eye E and the optical axis of the color camera substantially match each other in the fixation state, so that the anterior segment of the eye E can be photographed. Further, since the color camera only needs to acquire a two-dimensional image, it only needs to be equipped with a sensor (an area sensor, a scannable line sensor, or the like) capable of two-dimensionally capturing the visible light reflected from the eye E to be examined. Of course, the number, form, arrangement, and the like of the optical elements forming the color camera in the optical system are not limited to the configuration shown in fig. 2. If an object other than the eye to be inspected E (a color chart or the like) is arranged in the field of view of the color camera, a two-dimensional image of that object can be acquired under the same conditions (illumination conditions and the like) as for the eye to be inspected E.

The three-dimensional image acquisition unit only needs to be able to acquire a three-dimensional image of the eye E to be examined by OCT. Optical coherence tomography (OCT) is a method in which light from a light source is split, the reflected light from a measurement target is combined with light from a reference light generating unit (a mirror or the like), and the interference is measured to obtain information on the measurement target in its depth direction (the traveling direction of the measurement light); various methods can be employed. Therefore, the OCT method is not limited to the swept source OCT (SS-OCT) described above, and may be any other method such as time domain OCT (TD-OCT) or Fourier domain OCT, for example spectral domain OCT (SD-OCT).

Further, the optical system for performing OCT is not limited to the configuration of the above-described embodiment, and the number, form, arrangement, and the like of the optical elements are not limited to the configuration shown in fig. 1. Of course, the scanning direction of the input light in the galvanometer scanner 202 may also be a plurality of modes when acquiring a tomographic image.

The correspondence definition data generation unit need only generate correspondence definition data in which, when a predetermined part of the eye E to be inspected is captured as a three-dimensional image, the position of the predetermined part in the three-dimensional image is brought into correspondence with the position at which the same part is captured in the two-dimensional image. That is, the correspondence definition data only needs to define, for a portion of the eye E photographed in the three-dimensional image, the position at which the same portion is photographed in the two-dimensional image. The predetermined part may be a portion of the eye to be examined or the entire eye, as long as the three-dimensional coordinates of the three-dimensional image are associated with the two-dimensional coordinates of the two-dimensional image. Since the two-dimensional image is defined by the gradation value of each pixel and the color is defined by the gradation values, the correspondence definition data can define the color of the three-dimensional image based on the color of the two-dimensional image. The correspondence definition data need only define correspondences between positions in the respective images, and these can be defined in various modes. The present invention is not limited to the configuration in which the three-dimensional coordinates of the three-dimensional image are associated with the two-dimensional coordinates of the two-dimensional image as in the above-described embodiment.

For example, a three-dimensional image obtained by OCT shows the structure of the eye to be examined E in a three-dimensional space by superimposing a plurality of tomographic images (two-dimensional images), but data in another format may be generated from this information and used as the correspondence definition data. More specifically, a three-dimensional image obtained by OCT is expressed by single-channel gradation values at a plurality of three-dimensional coordinates. Therefore, it is also possible to generate polygons representing a structural surface of the eye E to be inspected (the surface of the eye E, the iris, etc.) based on the three-dimensional coordinates, to define the position (two-dimensional coordinates, etc.) in the two-dimensional image corresponding to each polygon, and to use this data as the correspondence definition data. In this case, the color and texture of each polygon are specified based on the two-dimensional image. Such data may be defined in a variety of modes, but is preferably in a common format (e.g., STL (stereolithography) data or the like).

Further, the technique of associating the three-dimensional coordinates of the three-dimensional image with the two-dimensional coordinates of the two-dimensional image in the correspondence defining data is not limited to the mode in the above-described embodiment. For example, in the above-described embodiment, the correspondence between arbitrary three-dimensional coordinates and two-dimensional coordinates included in a predetermined portion is defined. However, the correspondence relationship between the three-dimensional coordinates and the two-dimensional coordinates at the representative point included in the predetermined portion may be defined. In this case, the correspondence between the three-dimensional coordinates and the two-dimensional coordinates at arbitrary coordinates other than the representative points may be calculated by interpolation or the like from the correspondence at the plurality of representative points.
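
Where the correspondence is defined only at representative points, the correspondence at other coordinates can be filled in by interpolation, for example as sketched below with scipy's griddata. This is an illustration of one possible interpolation scheme, not the method prescribed by the specification.

```python
import numpy as np
from scipy.interpolate import griddata

def interpolate_correspondence(rep_xyz, rep_uv, query_xyz):
    """Interpolate 2-D image coordinates for arbitrary 3-D coordinates.

    rep_xyz:   (N, 3) three-dimensional coordinates of the representative points.
    rep_uv:    (N, 2) corresponding two-dimensional coordinates.
    query_xyz: (M, 3) coordinates whose correspondence is to be estimated.
    Requires enough non-coplanar representative points for a 3-D triangulation.
    """
    u = griddata(rep_xyz, rep_uv[:, 0], query_xyz, method='linear')
    v = griddata(rep_xyz, rep_uv[:, 1], query_xyz, method='linear')
    return np.column_stack([u, v])
```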

Representative points may be determined by a variety of techniques. For example, coordinates of a part of the eye E that has a feature distinguishing it from other parts in the two-dimensional image may be used as a representative point. That is, a representative point may be a site where a blood vessel having a characteristic shape, distinguishable from other blood vessels, appears on the surface of the eye E to be examined, or a site having a characteristic iris pattern that can be distinguished from other sites.

When the representative point is a position having a specific color, processing may be performed to highlight the specific color. The processing for highlighting may be any processing that emphasizes the intensity of the specific color relative to other colors, such as a correction that increases the intensity of the specific color component of the two-dimensional image, a correction that decreases color components other than the specific color component, or an adjustment of the illumination color. For example, to highlight red blood vessels, the intensity of the R component may be increased, the intensity of at least one of the G and B components may be decreased, or the photographing may be performed under green illumination, which is absorbed by the blood vessels. Further, in order to extract representative points of the three-dimensional image, an OCT angiography image may be constructed (OCT angiography being a noninvasive three-dimensional angiography technique based on OCT), and feature points such as the vascular structure may be extracted and used as representative points.

In addition, a representative point may be specified by receiving an instruction from the examiner, or automatically specified by a feature amount extraction algorithm or the like that extracts, for example, a characteristic shape or pattern from the two-dimensional image and the three-dimensional image. When a representative point is designated, image processing may be performed on the two-dimensional image or the three-dimensional image. For example, information in the depth direction may be integrated (or averaged) based on the three-dimensional image, and the representative point may be specified based on an Enface image obtained by projecting this information onto a plane perpendicular to the depth direction.

Various methods can be adopted as a method for designating a representative point based on the Enface image and establishing a corresponding relation between the three-dimensional coordinate and the two-dimensional coordinate. For example, if a homographic transformation is used, the Enface image and the two-dimensional image may be associated with each other, and the three-dimensional coordinates and the two-dimensional coordinates may be associated with each other from the correspondence. Specifically, the homographic transformation is represented by the following formula (13).

[Mathematical formula 5]

s \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = \begin{pmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}   (13)

In the formula, h_11, h_12, h_21, and h_22 act as a rotation, including enlargement or reduction at a fixed magnification that does not depend on the coordinate position; h_13 and h_23 act as a translation. h_31 and h_32 produce a trapezoidal transformation effect in which the scaling factor varies according to the coordinate position, and s is a constant coefficient. According to the above formula (13), the coordinates (X, Y) on one plane can be projected onto the coordinates (x, y) on the other plane. The parameters included in formula (13), such as h_11, may be determined by specifying the correspondence of a plurality of representative points. Therefore, each parameter can be specified by extracting the number of representative points necessary for the parameter calculation, either through designation by the examiner or by a feature amount extraction algorithm or the like, and substituting those representative points into formula (13). In this way, the three-dimensional coordinates and the two-dimensional coordinates may be associated with each other. Either of the coordinates (X, Y) and the coordinates (x, y) may be the coordinates of the Enface image, the other being the coordinates of the two-dimensional image.
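
A small sketch of associating Enface-image coordinates with two-dimensional-image coordinates through formula (13) is shown below. It assumes h33 is fixed to 1 and estimates the remaining eight parameters by linear least squares from point pairs; the specification only requires that the parameters be determined from a sufficient number of representative points, so this is one possible realization.

```python
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Estimate the 3x3 matrix of formula (13), with h33 fixed to 1, from >= 4 point pairs."""
    A, b = [], []
    for (X, Y), (x, y) in zip(src_pts, dst_pts):
        A.append([X, Y, 1, 0, 0, 0, -x * X, -x * Y]); b.append(x)
        A.append([0, 0, 0, X, Y, 1, -y * X, -y * Y]); b.append(y)
    h, *_ = np.linalg.lstsq(np.asarray(A, dtype=float), np.asarray(b, dtype=float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pt):
    """Project a point (X, Y) on one plane to (x, y) on the other plane."""
    X, Y = pt
    x, y, w = H @ np.array([X, Y, 1.0])
    return x / w, y / w
```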

In addition, representative points may be generated by a method such as machine learning. In this case, machine learning is performed on a machine learning model, such as a neural network, that takes a two-dimensional image or a three-dimensional image as input and outputs the coordinates of representative points. When the machine learning is completed, the two-dimensional image or the three-dimensional image can be input to the machine learning model to specify the representative points.

Further, machine learning can be used as a technique of defining a correspondence between three-dimensional coordinates and two-dimensional coordinates. Such a configuration can be realized by defining a machine learning model that inputs, for example, three-dimensional coordinates and two-dimensional coordinates, and outputs information indicating the correspondence between the three-dimensional coordinates and the two-dimensional coordinates (for example, coefficients of a conversion formula in the above-described embodiment). That is, when such machine learning is completed, the three-dimensional coordinates and the two-dimensional coordinates are input to obtain information indicating the correspondence between the two coordinates, and the information can be regarded as correspondence defining data.

The correspondence definition data may take various modes. For example, in the above-described embodiment, different methods are employed for associating the three-dimensional coordinates and the two-dimensional coordinates for the iris portion of the eye E to be examined and for the portions other than the iris. That is, in order to associate the coordinates, ray tracing is used for the iris portion, while the internal and external parameters of the color camera are used for the portions other than the iris. However, the invention is not limited to this mode; for example, the coordinates may be associated for all parts of the eye E to be examined either by ray tracing or by using the internal and external parameters of the color camera.

The reference color space may be any color space as long as it can suppress device-dependent color changes when the color of the eye to be examined E is output. Therefore, in addition to a configuration in which a color space that can express colors without depending on the apparatus (sRGB, CIEXYZ, CIELAB, or the like) is used as the reference space, the reference color space may be selected according to the purpose for which the correspondence definition data is used. For example, if the correspondence definition data is used only to ensure that the color expressed on a particular output device is appropriate, a color space in which colors are expressed on that output device may be used as the reference color space. Output devices can include a variety of devices such as displays, printers (including three-dimensional printers), and projectors.

The color chart is not limited to the standard color chart as long as it has a plurality of color patches for color calibration. For example, if the number of patches close to the target color (including the eye E) is larger than that in the standard color table, the possibility of performing the color calibration of the eye E more accurately can be increased.

The white light source only needs to irradiate the eye E in such a way that the reflected light has the same color as the color perceived by the examiner, so that the eye E can be photographed in that color by the color camera. Therefore, the color temperature is not limited to 5500K as in the above-described embodiment, and various color temperatures may be used. For example, a color temperature may be selected that closely reproduces the environment in which the correspondence definition data of the eye to be inspected E is used for display or printing.

The calibration data only needs to be able to convert the gradation values in the two-dimensional image into gradation values that express the colors in the reference color space. Therefore, the conversion method is not limited to conversion by a polynomial. For example, a color conversion table indicating the correspondence of gradation values at representative points may be defined, and interpolation calculation may be performed based on the color conversion table, thereby converting the gradation values.

When there is no background light or the influence of background light is small, the background-removal process, in which the image captured with the white light source turned off is subtracted, may be omitted. For example, if the two-dimensional image is captured with any light source other than the white light source of the ophthalmic apparatus (e.g., the lighting fixtures in the room) turned off, the background removal process may be omitted.
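For reference, a minimal sketch of the background-removal step itself (array shapes and values are fabricated for illustration) is:

    import numpy as np

    def remove_background(image_light_on, image_light_off):
        # Subtract the image captured with the white light source turned off
        # from the image captured with it turned on, cancelling ambient light.
        diff = image_light_on.astype(np.int32) - image_light_off.astype(np.int32)
        return np.clip(diff, 0, 255).astype(np.uint8)   # clip negatives caused by noise

    on  = np.full((4, 4, 3), 180, dtype=np.uint8)
    off = np.full((4, 4, 3),  20, dtype=np.uint8)
    print(remove_background(on, off)[0, 0])              # -> [160 160 160]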

Further, when ray tracing is used, the present invention is not limited to a configuration in which the result of the ray tracing is combined with the internal parameters 245e and the external parameters 245f to establish the correspondence between three-dimensional coordinates and two-dimensional coordinates. For example, light rays may be traced from the eye E to be examined to the area sensor 208 via the lenses of the color camera (the objective lens 204 and the imaging lens 207).
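As an illustrative sketch only, paraxial ray tracing through two thin lenses to the sensor can be expressed with ray transfer (ABCD) matrices; all focal lengths and distances below are assumptions and do not correspond to the actual objective lens 204 or imaging lens 207.

    import numpy as np

    def thin_lens(f):
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    def free_space(d):
        return np.array([[1.0, d], [0.0, 1.0]])

    # ray = (height above the optical axis [mm], angle [rad]) leaving the eye
    ray = np.array([1.5, 0.02])
    system = (free_space(30.0) @ thin_lens(40.0) @       # imaging lens -> sensor (assumed)
              free_space(60.0) @ thin_lens(50.0) @       # objective lens (assumed)
              free_space(25.0))                          # eye -> objective lens (assumed)
    print(system @ ray)                                  # ray height and angle at the sensor plane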

Further, the three-dimensional image that is colored with the colors indicated by the two-dimensional image may be obtained through various processes. For example, a three-dimensional image obtained by combining a three-dimensional image of the anterior segment acquired by OCT with a three-dimensional image acquired by OCT angiography may be colored with the colors indicated by the two-dimensional image.
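A minimal sketch of the coloring step, assuming a mapping function such as the projection sketched above and a fabricated color image, is:

    import numpy as np

    def colorize(surface_points, color_image, project):
        # Attach, to each three-dimensional surface point, the color of the
        # two-dimensional pixel that the correspondence definition data maps it to.
        colors = []
        h, w = color_image.shape[:2]
        for p in surface_points:
            u, v = np.round(project(p)).astype(int)
            u, v = np.clip(u, 0, w - 1), np.clip(v, 0, h - 1)
            colors.append(color_image[v, u])             # row index = v, column index = u
        return np.array(colors)

    demo_image = np.zeros((480, 640, 3), dtype=np.uint8)
    demo_image[:, :, 0] = 200                            # fabricated, uniformly reddish image
    demo_points = np.array([[100.0, 50.0, 0.0]])
    print(colorize(demo_points, demo_image, lambda p: p[:2]))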

Further, the display mode of the color three-dimensional image is not limited to the modes in the above-described embodiments. For example, a tomographic image in an arbitrary direction obtained from the three-dimensional image may be colored and displayed, or an en face image (the sum in the depth direction) may be colored. Further, a virtual three-dimensional model (a representation in which at least a part of the eye E to be examined is shown in a perspective view) may be colored. Note that in the color display of a three-dimensional image, color correction based on the calibration data 245c need not be performed, and the display may instead use the gray values of the two-dimensional image captured by the color camera as they are. Of course, the three-dimensional image (tomographic image) may also be displayed without coloring, for example at the examiner's selection.

Further, the use of the correspondence definition data for display is not limited to color display. For example, when an arbitrary position in the three-dimensional image is indicated, the corresponding position in the two-dimensional image may be specified based on the correspondence definition data and displayed on the two-dimensional image. Likewise, when an arbitrary position in the two-dimensional image is indicated, the corresponding position in the three-dimensional image may be specified based on the correspondence definition data and displayed on the three-dimensional image.

REFERENCE SIGNS LIST

1 optical coherence tomography apparatus

10 wavelength scanning light source

20 sample

100 Optical Coherence Tomography (OCT) interferometry system

104 measurement side circulator

105 reference side circulator

110 balance detector

120 polarization controller

200 calibration optical system

201 collimating lens

202 galvanometer scanner

203 hot mirror

204 objective lens

205 white light source

206 beam splitter

207 imaging lens

208 area sensor

210 fixation target light source

211 movable zoom lens

212 cold mirror

213 hot mirror

214 relay lens

215 beam splitter

216 XY position detection light source

217 imaging lens

218 two-dimensional position sensor

219 Z position detection light source

220 imaging lens

221 line sensor

230 display

240 control unit

240a two-dimensional image acquisition unit

240b three-dimensional image acquisition unit

240c correspondence relation definition data generating unit

240d display control unit

245 storage medium

245a two-dimensional image data

245b three-dimensional image data

245c calibration data

245d correspondence relation definition data

245e internal parameters

245f external parameters

300 reference optical system

301 reference cell

400 k-clock interference generating optical system
