Image acquisition device and method for operating image acquisition device

Document No.: 1413497  Publication date: 2020-03-10

Reading note: This technology, "Image acquisition device and method for operating image acquisition device", was designed and created by Yohei Sakamoto (坂本阳平) on 2019-08-26. Its main content is as follows: Provided are an image acquisition device and an operating method of the image acquisition device that can shorten the time taken to acquire images used for restoring the three-dimensional shape of a subject. When the image acquisition mode is set in the image acquisition device, the control unit recognizes the direction received by the operation unit. The image acquisition condition in the image acquisition mode is specified by first information and second information. The first information indicates a speed or a distance at which the imaging field of view is changed. The second information indicates a timing at which images used for restoring the three-dimensional shape are acquired. The control unit causes a field-of-view changing unit to change the imaging field of view along the recognized direction at the speed, or to change the imaging field of view along the recognized direction by the distance. The control unit acquires at least 2 of the images from the imaging unit at the timing.

1. An image acquisition apparatus having:

an imaging unit that generates an image based on an optical image of a subject in an imaging field of view;

a field-of-view changing unit that changes the imaging field of view;

an operation unit that receives, from a user, a direction in which the imaging field of view is changed; and

a control unit,

the image acquisition apparatus being characterized in that:

the control unit recognizes the direction received by the operation unit when an image acquisition mode for acquiring the images used for restoring the three-dimensional shape of the subject is set in the image acquisition apparatus,

the control unit reads first information and second information from a storage medium, the first information indicating a speed at which the imaging field of view is changed or a distance by which the imaging field of view is changed, the second information indicating a timing at which the images used for restoring the three-dimensional shape are acquired, the first information and the second information specifying the image acquisition condition in the image acquisition mode,

the control unit causes the field-of-view changing unit to change the imaging field of view along the recognized direction at the speed indicated by the first information, or to change the imaging field of view along the recognized direction by the distance indicated by the first information,

the control unit acquires at least 2 of the images from the imaging unit at the timing indicated by the second information.

2. The image acquisition apparatus according to claim 1,

the control unit restores the three-dimensional shape using the images acquired from the imaging unit.

3. The image acquisition apparatus according to claim 2,

the images acquired from the imaging unit include one first image and at least one second image,

the control unit detects a region that overlaps between the first image and the second image,

the control unit processes the first image so that the region becomes visually recognizable in the first image,

the control unit displays the processed first image on a display unit.

4. The image acquisition apparatus according to claim 3,

the operation unit receives, from the user, an execution instruction to execute restoration of the three-dimensional shape after the first image is displayed on the display unit,

when the operation unit receives the execution instruction, the control unit restores the three-dimensional shape.

5. The image acquisition apparatus according to claim 2,

the images acquired from the imaging unit include one first image and at least one second image,

the control unit determines whether or not a designated point designated by the user in the first image is included in the second image,

when the control unit determines that the designated point is included in the second image, the control unit restores the three-dimensional shape.

6. The image acquisition apparatus according to claim 2,

after the acquisition of the images based on the image acquisition condition is completed, the control unit compares a first number indicating the number of the images acquired from the imaging unit with a second number indicating the number of the images required to restore the three-dimensional shape, the second number being at least 2.

7. The image acquisition apparatus according to claim 6,

in a case where the first number is larger than the second number, the control unit selects at least the second number of the images from among the images acquired from the imaging unit,

the control unit restores the three-dimensional shape using the selected images.

8. The image acquisition apparatus according to claim 7,

the control unit selects at least the second number of the images based on a degree of overlap of the images acquired from the imaging unit.

9. The image acquisition apparatus according to claim 7,

the control unit selects the second number of the images, including the image acquired first and the image acquired last among the images acquired from the imaging unit.

10. The image acquisition apparatus according to claim 6,

in a case where the operation unit receives an image acquisition end instruction from the user and the first number is smaller than the second number, the operation unit receives, from the user, a second direction in which the imaging field of view is changed,

the control unit recognizes the second direction received by the operation unit,

the control unit causes the field-of-view changing unit to change the imaging field of view again along the recognized second direction at the speed indicated by the first information, or to change the imaging field of view again along the recognized second direction by the distance indicated by the first information,

after the imaging field of view is changed along the second direction, the control unit acquires at least 1 of the images from the imaging unit at the timing indicated by the second information.

11. The image acquisition apparatus according to claim 6,

in a case where the operation unit receives an image acquisition end instruction from the user and the first number is smaller than the second number, the control unit determines, based on the recognized direction, a second direction in which the imaging field of view is changed,

the control unit causes the field-of-view changing unit to change the imaging field of view again along the determined second direction at the speed indicated by the first information, or to change the imaging field of view again along the determined second direction by the distance indicated by the first information,

after the imaging field of view is changed along the second direction, the control unit acquires at least 1 of the images from the imaging unit at the timing indicated by the second information.

12. The image acquisition apparatus according to claim 6,

in a case where the first number is smaller than the second number, the control unit notifies the user that the first number has not reached the second number.

13. The image acquisition apparatus according to claim 2,

the operation unit further receives, from the user, a position within the imaging field of view,

the control unit recognizes the position received by the operation unit,

the first information indicates the speed at which the imaging field of view is changed,

the control unit causes the field-of-view changing unit to change the imaging field of view along the recognized direction at the speed indicated by the first information until the center of the imaging field of view coincides with the position.

14. The image acquisition apparatus according to claim 2,

the control unit displays at least 1 of the images acquired from the imaging unit on a display unit,

the control unit counts a first number indicating the number of the images acquired from the imaging unit, and displays, on the display unit, information indicating a ratio of the first number to a second number indicating the number of the images required to restore the three-dimensional shape, the second number being at least 2.

15. The image acquisition apparatus according to claim 2,

the control unit generates a thumbnail image by reducing the number of pixels of an image acquired from the imaging unit,

the control unit displays the thumbnail image on a display unit.

16. An operation method of an image acquisition apparatus, the image acquisition apparatus having:

an imaging unit that generates an image based on an optical image of a subject in an imaging field of view;

a field-of-view changing unit that changes the imaging field of view;

an operation unit that receives, from a user, a direction in which the imaging field of view is changed; and

a control unit,

the operation method of the image acquisition apparatus being characterized by comprising the following steps:

the control unit recognizes the direction received by the operation unit when an image acquisition mode for acquiring the images used for restoring the three-dimensional shape of the subject is set in the image acquisition apparatus;

the control unit reads first information and second information from a storage medium, the first information indicating a speed at which the imaging field of view is changed or a distance by which the imaging field of view is changed, the second information indicating a timing at which the images used for restoring the three-dimensional shape are acquired, the first information and the second information specifying the image acquisition condition in the image acquisition mode;

the control unit causes the field-of-view changing unit to change the imaging field of view along the recognized direction at the speed indicated by the first information, or to change the imaging field of view along the recognized direction by the distance indicated by the first information; and

the control unit acquires at least 2 of the images from the imaging unit at the timing indicated by the second information.

17. The operation method according to claim 16, further comprising the following step:

the control unit restores the three-dimensional shape using the images acquired from the imaging unit.

Technical Field

The present invention relates to an image acquisition apparatus and an operation method of the image acquisition apparatus.

Background

Industrial endoscope apparatuses are used to observe and inspect damage, corrosion, and the like inside boilers, turbines, engines, pipes, and the like. For such an endoscope apparatus, a variety of optical adapters are prepared for observing and inspecting a variety of objects. The optical adapter is attached to the distal end of the endoscope and is replaceable. In an examination using such an endoscope apparatus, it is desirable to quantitatively measure the size of a defect or damage on the subject. To meet this demand, some endoscope apparatuses are equipped with a three-dimensional measurement function.

Next, the procedure by which a user performs measurement in an examination using the endoscope apparatus is briefly described. First, the user checks whether there is a defect or damage inside the subject using a monocular optical adapter with high observation performance. When a defect or damage is found during the inspection and is determined to be a measurement target, the user replaces the monocular optical adapter with an optical adapter for measurement, on which a stereo optical system for three-dimensional measurement is mounted. To replace the optical adapter, the user pulls the endoscope distal end, which had been inserted into the subject, back out. After replacing the monocular optical adapter with the optical adapter for measurement, the user reinserts the endoscope distal end into the subject. After the endoscope distal end reaches the location of the defect or damage found during observation with the monocular optical adapter, the user performs the measurement.

Such a procedure is required for the measurement, so the inspection time from finding a defect or damage to measuring it is long; that is, inspection efficiency is poor. To solve this problem, it is desirable to provide a three-dimensional measurement function with the monocular optical adapter used in a general endoscopic examination. As a technique for performing three-dimensional measurement with a monocular optical adapter, there is, for example, the technique disclosed in patent document 1, which provides a method of measuring by combining structure from motion (hereinafter abbreviated as SfM) with a ranging unit. The apparatus can restore the three-dimensional shape of the subject using the result of SfM. Hereinafter, an image acquired for performing SfM is referred to as a measurement image.
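
SfM recovers camera poses and scene structure from overlapping images, and the ranging unit then fixes the absolute scale. As a minimal illustration of the geometry involved (not the method of patent document 1 itself), the following Python sketch triangulates a 3-D point from two views taken by an idealized pinhole camera translated along one axis; the function names and the normalized-coordinate camera model are assumptions made purely for illustration.

```python
def project(point, cam_x):
    """Project a 3-D point with a pinhole camera located at (cam_x, 0, 0),
    looking along +Z, with focal length 1 (normalized image coordinates)."""
    x, y, z = point
    return ((x - cam_x) / z, y / z)

def triangulate(obs_a, obs_b, baseline):
    """Recover a 3-D point from two normalized observations taken by
    cameras separated by `baseline` along the X axis (pure translation).
    Depth follows from the disparity: z = baseline / (u_a - u_b)."""
    (ua, va), (ub, vb) = obs_a, obs_b
    z = baseline / (ua - ub)
    x = ua * z             # camera A sits at the origin
    y = (va + vb) / 2 * z  # the two y estimates are ideally equal
    return (x, y, z)
```

With noisy real images, SfM estimates the camera motion itself from feature matches; here the baseline is given, which is where a ranging unit would supply the missing scale.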

In order to acquire measurement images using the technique disclosed in patent document 1, it is necessary to acquire images captured from a plurality of camera viewpoints. One specific method of changing the camera viewpoint is to use the bending function of the endoscope distal end. For example, patent documents 2 and 3 disclose methods that use the bending function to change the camera viewpoint and acquire images.

Disclosure of Invention

Problems to be solved by the invention

However, when measurement images are acquired by the bending control methods disclosed in patent documents 2 and 3, there is a problem that the efficiency of acquiring measurement images is poor. The reason is explained below.

To perform SfM, the apparatus needs only images of the area recognized by the user as the measurement target; images of regions that are not measurement targets are unnecessary. However, the bending control methods disclosed in patent documents 2 and 3 acquire images over a wide range in order to avoid missing any images. When these bending control methods are applied to image acquisition for SfM, images of areas other than the area recognized by the user as the measurement target are also acquired, so the time required to acquire measurement images becomes long.

An object of the present invention is to provide an image acquisition apparatus and an operation method of the image acquisition apparatus, which can shorten the time for acquiring an image for restoring a three-dimensional shape of a subject.

Means for solving the problems

The present invention is an image acquisition apparatus having: an imaging unit that generates an image based on an optical image of a subject in an imaging field of view; a field-of-view changing unit that changes the imaging field of view; an operation unit that receives, from a user, a direction in which the imaging field of view is changed; and a control unit. The control unit recognizes the direction received by the operation unit when an image acquisition mode for acquiring the images used for restoring the three-dimensional shape of the subject is set in the image acquisition apparatus. The control unit reads first information and second information from a storage medium, the first information indicating a speed at which the imaging field of view is changed or a distance by which the imaging field of view is changed, the second information indicating a timing at which the images used for restoring the three-dimensional shape are acquired, the first information and the second information specifying the image acquisition condition in the image acquisition mode. The control unit causes the field-of-view changing unit to change the imaging field of view along the recognized direction at the speed indicated by the first information, or to change the imaging field of view along the recognized direction by the distance indicated by the first information. The control unit acquires at least 2 images from the imaging unit at the timing indicated by the second information, and restores the three-dimensional shape using the images acquired from the imaging unit.
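
The interaction between the first information (speed) and the second information (timing) can be pictured with a small sketch. The following Python function, a hypothetical helper rather than any part of the invention's implementation, computes where the imaging field of view is located each time an image is captured, assuming the view moves at a constant speed along a unit direction and images are taken at a fixed interval.

```python
def acquisition_positions(direction, speed, capture_interval, n_images):
    """Positions (in field-of-view coordinates) at which images are
    captured while the view moves at `speed` along `direction`.
    `direction` is a unit 2-D vector; capture k happens at time
    k * capture_interval (the 'timing' given by the second information)."""
    dx, dy = direction
    return [(speed * k * capture_interval * dx,
             speed * k * capture_interval * dy)
            for k in range(n_images)]
```

For example, moving rightward at speed 2.0 with captures every 0.5 time units yields view positions spaced 1.0 apart, which determines how much consecutive measurement images overlap.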

In the image acquisition apparatus of the present invention, the images acquired from the imaging unit include one first image and at least one second image, the control unit detects a region that overlaps between the first image and the second image, the control unit processes the first image so that the region becomes visually recognizable in the first image, and the control unit displays the processed first image on a display unit.
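
How the overlapping region between the first and second images is detected is not specified above. One simple possibility, sketched below under the assumption that the two grayscale images differ only by a horizontal shift, is an exhaustive search for the shift minimizing the normalized sum of squared differences; the function name and the shift-only model are illustrative assumptions.

```python
def find_overlap(img_a, img_b, max_shift):
    """Estimate the horizontal shift between two equally sized grayscale
    images (lists of rows) by exhaustive search, and return the column
    range of img_a that overlaps img_b. A shift of s means img_b shows
    content lying s columns to the right in img_a."""
    h, w = len(img_a), len(img_a[0])
    best_shift, best_err = 0, float("inf")
    for s in range(0, max_shift + 1):
        err = sum((img_a[r][c + s] - img_b[r][c]) ** 2
                  for r in range(h) for c in range(w - s))
        err /= (w - s)  # normalize by the width of the overlap
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift, (best_shift, w)  # overlapping columns of img_a
```

Real endoscope motion also rotates and scales the view, so a practical detector would use feature matching rather than a 1-D shift search.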

In the image acquisition apparatus of the present invention, the operation unit receives, from the user, an execution instruction to execute restoration of the three-dimensional shape after the first image is displayed on the display unit, and the control unit restores the three-dimensional shape when the operation unit receives the execution instruction.

In the image acquisition apparatus of the present invention, the images acquired from the imaging unit include one first image and at least one second image, the control unit determines whether or not a designated point designated by the user in the first image is included in the second image, and the control unit restores the three-dimensional shape when it determines that the designated point is included in the second image.

In the image acquisition apparatus of the present invention, after the acquisition of the images based on the image acquisition condition is completed, the control unit compares a first number indicating the number of the images acquired from the imaging unit with a second number indicating the number of the images required to restore the three-dimensional shape, the second number being at least 2.

In the image acquisition apparatus of the present invention, when the first number is larger than the second number, the control unit selects at least the second number of the images from among the images acquired from the imaging unit, and restores the three-dimensional shape using the selected images.

In the image acquisition apparatus of the present invention, the control unit selects at least the second number of the images based on a degree of overlap of the images acquired from the imaging unit.
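
A plausible reading of overlap-based selection is a greedy pass that keeps an image only when its overlap with the previously kept image has dropped to a threshold. The sketch below models each image by a 1-D field-of-view position; the positions, the overlap formula, and the top-up rule are all assumptions made for illustration, not the patent's selection rule.

```python
def select_by_overlap(positions, fov, min_overlap, required):
    """Greedy selection: keep an image only when its overlap with the
    previously kept image falls to `min_overlap` (as a fraction of the
    field-of-view width `fov`). Always keeps the first image; tops up
    from the remaining images if fewer than `required` survive."""
    kept = [0]
    for i in range(1, len(positions)):
        overlap = max(0.0, fov - abs(positions[i] - positions[kept[-1]])) / fov
        if overlap <= min_overlap:
            kept.append(i)
    for i in range(len(positions)):   # top up if the selection is short
        if len(kept) >= required:
            break
        if i not in kept:
            kept.append(i)
    return sorted(kept)
```

Dropping near-duplicate views this way reduces the number of images SfM must process without losing coverage of the measurement target.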

In the image acquisition apparatus of the present invention, the control unit selects the second number of the images, including the image acquired first and the image acquired last among the images acquired from the imaging unit.

In the image acquisition apparatus of the present invention, when the operation unit receives an image acquisition end instruction from the user and the first number is smaller than the second number, the operation unit receives, from the user, a second direction in which the imaging field of view is changed, the control unit recognizes the second direction received by the operation unit, the control unit causes the field-of-view changing unit to change the imaging field of view again along the recognized second direction at the speed indicated by the first information or by the distance indicated by the first information, and the control unit acquires at least 1 image from the imaging unit at the timing indicated by the second information after the imaging field of view is changed along the second direction.

In the image acquisition apparatus of the present invention, when the operation unit receives an image acquisition end instruction from the user and the first number is smaller than the second number, the control unit determines, based on the recognized direction, a second direction in which the imaging field of view is changed, the control unit causes the field-of-view changing unit to change the imaging field of view again along the determined second direction at the speed indicated by the first information or by the distance indicated by the first information, and the control unit acquires at least 1 image from the imaging unit at the timing indicated by the second information after the imaging field of view is changed along the second direction.

In the image acquisition apparatus of the present invention, when the first number is smaller than the second number, the control unit notifies the user that the first number has not reached the second number.

In the image acquisition apparatus of the present invention, the operation unit further receives, from the user, a position within the imaging field of view, the control unit recognizes the position received by the operation unit, the first information indicates the speed at which the imaging field of view is changed, and the control unit causes the field-of-view changing unit to change the imaging field of view along the recognized direction at the speed indicated by the first information until the center of the imaging field of view coincides with the position.
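
The "change the field of view until its center coincides with the position" behavior can be pictured as a clamped stepwise motion. The following 1-D sketch is hypothetical, not the patent's control law: it lists the successive center positions at a fixed control interval, clamping the final step so the center lands exactly on the target.

```python
def centers_until_target(center, target, speed, dt):
    """Successive 1-D positions of the field-of-view center as it moves
    toward `target` at `speed`, sampled every `dt` seconds, with the
    final step clamped so the center coincides with the target."""
    step = speed * dt
    out = [center]
    while abs(target - out[-1]) > 1e-9:
        move = min(step, abs(target - out[-1]))
        out.append(out[-1] + move * (1 if target > out[-1] else -1))
    return out
```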

In the image acquisition apparatus of the present invention, the control unit displays at least 1 of the images acquired from the imaging unit on a display unit, counts a first number indicating the number of the images acquired from the imaging unit, and displays, on the display unit, information indicating a ratio of the first number to a second number indicating the number of the images required to restore the three-dimensional shape, the second number being at least 2.

In the image acquisition apparatus of the present invention, the control unit generates a thumbnail image by reducing the number of pixels of an image acquired from the imaging unit, and displays the thumbnail image on a display unit.
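
Reducing the number of pixels to form a thumbnail can be as simple as nearest-neighbor decimation, sketched below. This is only an illustrative stand-in: a real implementation would more likely average pixel blocks to avoid aliasing.

```python
def make_thumbnail(image, factor):
    """Reduce the pixel count by keeping every `factor`-th pixel in both
    directions (nearest-neighbor decimation of a row-major image)."""
    return [row[::factor] for row in image[::factor]]
```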

The present invention is an operation method of an image acquisition apparatus, the image acquisition apparatus having: an imaging unit that generates an image based on an optical image of a subject in an imaging field of view; a field-of-view changing unit that changes the imaging field of view; an operation unit that receives, from a user, a direction in which the imaging field of view is changed; and a control unit. The operation method of the image acquisition apparatus includes: a first step of recognizing the direction received by the operation unit when an image acquisition mode for acquiring the images used for restoring the three-dimensional shape of the subject is set in the image acquisition apparatus; a second step of reading, from a storage medium, first information and second information that specify the image acquisition condition in the image acquisition mode, the first information indicating a speed at which the imaging field of view is changed or a distance by which the imaging field of view is changed, the second information indicating a timing at which the images used for restoring the three-dimensional shape are acquired; a third step of causing the field-of-view changing unit to change the imaging field of view along the recognized direction at the speed indicated by the first information, or to change the imaging field of view along the recognized direction by the distance indicated by the first information; a fourth step in which the control unit acquires at least 2 images from the imaging unit at the timing indicated by the second information; and a fifth step in which the control unit restores the three-dimensional shape using the images acquired from the imaging unit.

Advantageous Effects of Invention

According to the present invention, the image acquisition apparatus and the operation method of the image acquisition apparatus can shorten the time taken to acquire images for restoring the three-dimensional shape of a subject.

Drawings

Fig. 1 is a perspective view showing the overall configuration of an endoscope apparatus according to a first embodiment of the present invention.

Fig. 2 is a block diagram showing an internal configuration of an endoscope apparatus according to a first embodiment of the present invention.

Fig. 3 is a block diagram showing a functional configuration of a CPU according to the first embodiment of the present invention.

Fig. 4 is a schematic diagram showing a situation of image acquisition in the first embodiment of the present invention.

Fig. 5 is a flowchart showing a procedure of processing for three-dimensional shape restoration and measurement in the first embodiment of the present invention.

Fig. 6 is a flowchart showing a procedure of measurement processing in the first embodiment of the present invention.

Fig. 7 is a diagram showing an image displayed on the display unit according to the first embodiment of the present invention.

Fig. 8 is a diagram showing a flow of image acquisition in the first embodiment of the present invention.

Fig. 9 is a diagram showing a flow of image acquisition in the first embodiment of the present invention.

Fig. 10 is a block diagram showing a functional configuration of a CPU according to a second embodiment of the present invention.

Fig. 11 is a flowchart showing a procedure of measurement processing in the second embodiment of the present invention.

Fig. 12 is a flowchart showing a procedure of measurement processing in the second embodiment of the present invention.

Fig. 13 is a diagram showing a measurement image in the second embodiment of the present invention.

Fig. 14 is a diagram showing a measurement image in the second embodiment of the present invention.

Fig. 15 is a view showing the movement of the endoscope distal end and the measurement image obtained in the second embodiment of the present invention.

Fig. 16 is a view showing the movement of the endoscope distal end and the measurement image obtained in the second embodiment of the present invention.

Fig. 17 is a flowchart showing a procedure of measurement processing in the first modification of the second embodiment of the present invention.

Fig. 18 is a diagram showing a measurement image in the first modification of the second embodiment of the present invention.

Fig. 19 is a block diagram showing a functional configuration of a CPU according to the third embodiment of the present invention.

Fig. 20 is a flowchart showing a procedure of measurement processing in the third embodiment of the present invention.

Fig. 21 is a flowchart showing a procedure of determination processing executed in measurement processing in the third embodiment of the present invention.

Fig. 22 is a diagram showing a measurement image in the third embodiment of the present invention.

Fig. 23 is a diagram showing a measurement image in the third embodiment of the present invention.

Fig. 24 is a diagram showing an image displayed on the display unit according to the third embodiment of the present invention.

Fig. 25 is a diagram showing a measurement image in the third embodiment of the present invention.

Fig. 26 is a diagram showing an image displayed on the display unit according to the third embodiment of the present invention.

Fig. 27 is a flowchart showing a procedure of determination processing executed in measurement processing in the fourth embodiment of the present invention.

Fig. 28 is a diagram showing a measurement image according to the fourth embodiment of the present invention.

Description of the reference numerals

1: an endoscope apparatus; 2: an insertion section; 3: a main body section; 4: an operation unit; 5: a display unit; 8: an endoscope unit; 9: a CCU; 10: a control device; 12: a video signal processing circuit; 13: a ROM; 14: a RAM; 15: a card interface; 16: an external device interface; 17: a control interface; 18a, 18b, 18c: a CPU; 20: a distal end; 28: an image pickup element; 180: a main control unit; 181: an image acquisition unit; 182: a display control unit; 183: a designated point setting unit; 184: a reference size setting unit; 185: a three-dimensional shape restoration unit; 186: a measurement unit; 187: a bending control unit; 188: a mode setting unit; 189: a reading unit; 190: an image selection unit; 191: a region detection unit.

Detailed Description

Embodiments of the present invention will be described below with reference to the drawings. An example in which the image acquisition apparatus is an endoscope apparatus is described next; however, the image acquisition apparatus is not limited to an endoscope apparatus as long as it has an image acquisition function.

(first embodiment)

Fig. 1 shows the external appearance of an endoscope apparatus 1 according to the first embodiment of the present invention. Fig. 2 shows the internal configuration of the endoscope apparatus 1. The endoscope apparatus 1 captures an image of a subject and measures geometric features of the subject using the image. The examiner can observe and measure various subjects by replacing the optical adapter attached to the distal end of the insertion section 2, by selecting a built-in measurement processing program, or by adding a measurement processing program. Next, as an example of measurement, a case will be described in which three-dimensional shape restoration using SfM is combined with a reference distance input by the user to perform measurement.

The endoscope apparatus 1 shown in Fig. 1 includes an insertion section 2, a main body section 3, an operation unit 4, and a display unit 5.

The insertion section 2 is inserted into the subject. The insertion section 2 has an elongated tubular shape that can be bent from the distal end 20 to the proximal end. The insertion section 2 captures an image of the measurement target and outputs an image pickup signal to the main body section 3. An optical adapter, for example a monocular optical adapter, is attached to the distal end 20 of the insertion section 2. The main body section 3 is a control device having a housing portion for housing the insertion section 2. The operation unit 4 receives the user's operations of the endoscope apparatus 1. The display unit 5 has a display screen and displays an operation menu, the image of the subject acquired by the insertion section 2, and the like on the display screen.

The operation unit 4 is a user interface. For example, the operation unit 4 is at least one of a button, a switch, a key, a mouse, a joystick, a trackball, and a touch panel. The display unit 5 is a monitor (display) such as an LCD (Liquid Crystal Display). The display unit 5 may be a touch panel, in which case the operation unit 4 and the display unit 5 are integrated.

The main body section 3 shown in Fig. 2 includes an endoscope unit 8, a CCU (Camera Control Unit) 9, a control device 10, and a bending mechanism 11. The endoscope unit 8 includes a light source device and a bending device, neither of which is shown. The light source device provides the illumination light required for observation. The bending device bends the bending mechanism 11 incorporated in the insertion section 2. An image pickup element 28 is incorporated in the distal end 20 of the insertion section 2. The image pickup element 28 is an image sensor that photoelectrically converts the optical image of the subject formed by the optical adapter and generates an image pickup signal. The CCU 9 drives the image pickup element 28, and the image pickup signal output from the image pickup element 28 is input to the CCU 9. The CCU 9 performs preprocessing, including amplification and noise removal, on the image pickup signal acquired by the image pickup element 28, and converts the preprocessed image pickup signal into a video signal such as an NTSC signal.

The control device 10 includes a video signal processing circuit 12, a ROM (Read Only Memory) 13, a RAM (Random Access Memory) 14, a card interface 15, an external device interface 16, a control interface 17, and a CPU (Central Processing Unit) 18a.

The video signal processing circuit 12 performs predetermined video processing on the video signal output from the CCU 9. For example, the video signal processing circuit 12 performs video processing related to improvement of visibility. For example, the video processing includes color reproduction, gradation correction, noise suppression, contour enhancement, and the like. When measurement is performed, the video signal processing circuit 12 also performs processing for improving measurement performance. For example, the video signal processing circuit 12 synthesizes the video signal output from the CCU 9 with a graphics image signal generated by the CPU18a. The graphics image signal contains an image of the operation screen, measurement information, and the like. The measurement information includes an image of the cursor, an image of a specified point, a measurement result, and the like. The video signal processing circuit 12 outputs the synthesized video signal to the display unit 5.

The ROM 13 is a nonvolatile recording medium on which a program for controlling the operation of the endoscope apparatus 1 by the CPU18a is recorded. The RAM 14 is a volatile recording medium that temporarily stores information used by the CPU18a to control the endoscope apparatus 1. The CPU18a controls the operation of the endoscope apparatus 1 based on the program recorded in the ROM 13.

A memory card 42 as a detachable recording medium is connected to the card interface 15. The card interface 15 acquires control processing information, image information, and the like stored in the memory card 42 to the control device 10. The card interface 15 records control processing information, image information, and the like generated by the endoscope apparatus 1 in the memory card 42.

An external device such as a USB device is connected to the external device interface 16. For example, the personal computer 41 is connected to the external device interface 16. The external device interface 16 transmits information to the personal computer 41 and receives information from the personal computer 41. Thereby, the monitor of the personal computer 41 can display information. Further, the user can perform an operation related to the control of the endoscope apparatus 1 by inputting an instruction to the personal computer 41.

The control interface 17 performs communication for operation control with the operation unit 4, the endoscope unit 8, and the CCU 9. The control interface 17 notifies the CPU18a of an instruction input to the operation unit 4 by the user. The control interface 17 outputs control signals for controlling the light source device and the bending device to the endoscope unit 8. The control interface 17 outputs a control signal for controlling the image pickup element 28 to the CCU 9.

The program executed by the CPU18a may be recorded in a computer-readable recording medium. The program recorded in the recording medium may be read and executed by a computer other than the endoscope apparatus 1. For example, the program may be read and executed by the personal computer 41. The personal computer 41 may control the endoscope apparatus 1 by transmitting control information for controlling the endoscope apparatus 1 to the endoscope apparatus 1 in accordance with a program. Alternatively, the personal computer 41 may acquire a video signal from the endoscope apparatus 1 and perform measurement using the acquired video signal.

The program may be transmitted from a computer holding the program to the endoscope apparatus 1 via a transmission medium or by using a transmission wave in the transmission medium. The "transmission medium" for transmitting the program is a medium having a function of transmitting information. Media having a function of transmitting information include networks (communication networks) such as the internet and communication lines (communication lines) such as telephone lines. The program described above may also implement a part of the functions described above. The program may be a difference file (difference program). The foregoing functions may also be realized by a combination of a program already recorded in the computer and a difference program.

The endoscope apparatus 1 includes an image pickup device 28 (image pickup section), a bending mechanism 11 (field changing section), an operation section 4, and a CPU18a (control section). The image pickup device 28 picks up an image of an object and generates an image pickup signal. Thereby, the image pickup device 28 generates an image (image data) based on an optical image of the object within the image pickup field of view. The image generated by the image pickup device 28 is input to the CPU18a via the video signal processing circuit 12. The bending mechanism 11 changes the imaging field of view of the imaging element 28 by bending the insertion portion 2. The field-of-view changing unit may be any mechanism capable of changing the imaging field of view by moving the imaging element 28 or by moving a member including the imaging element 28. The operation unit 4 receives the direction in which the imaging field is changed from the user.

Fig. 3 shows the functional configuration of the CPU18a. The main control unit 180, the image acquisition unit 181, the display control unit 182, the designated point setting unit 183, the reference size setting unit 184, the three-dimensional shape restoration unit 185, the measurement unit 186, the bending control unit 187, the mode setting unit 188, and the reading unit 189 constitute the functions of the CPU18a. At least one of the blocks shown in fig. 3 may be constituted by a circuit other than the CPU18a.

Each section shown in fig. 3 may be constituted by at least one of a processor and a logic circuit. For example, the processor is at least one of a CPU, a DSP (Digital Signal Processor), and a GPU (Graphics Processing Unit). For example, the logic circuit is at least one of an ASIC (Application-Specific Integrated Circuit) and an FPGA (Field-Programmable Gate Array). Each section shown in fig. 3 can include one or more processors. Each section shown in fig. 3 can include one or more logic circuits.

The main control unit 180 controls the processes executed by the respective units. The image acquisition unit 181 acquires the image generated by the image pickup device 28 from the video signal processing circuit 12. The acquired image is held in the RAM 14.

The display control unit 182 displays the image generated by the imaging device 28 on the display unit 5. For example, the display control unit 182 controls the processing executed by the video signal processing circuit 12. The display control unit 182 causes the video signal processing circuit 12 to output the processed image to the display unit 5. The display unit 5 displays the image output from the video signal processing circuit 12.

The display control unit 182 displays various information on the display unit 5. That is, the display control unit 182 displays various information on the image. The various information includes a cursor, an icon, and the like. The cursor is a pointer for the user to specify a particular position on the image. The icon is a mark indicating the position of a specified point designated by the user on the image. For example, the display control unit 182 generates a graphics image signal of the various information. The display control unit 182 outputs the generated graphics image signal to the video signal processing circuit 12. The video signal processing circuit 12 synthesizes the video signal output from the CCU 9 and the graphics image signal output from the CPU18a. Thereby, the various kinds of information are superimposed on the image. The video signal processing circuit 12 outputs the synthesized video signal to the display unit 5. The display unit 5 displays the image on which the various kinds of information are superimposed based on the video signal.

The user inputs position information of the cursor to the operation unit 4 by operating the operation unit 4. The operation unit 4 receives positional information input by the user to the operation unit 4 and outputs the positional information. The position information input to the operation section 4 is input to the control interface 17 as an input section. The position information input to the control interface 17 is input to the CPU18 a. The display control unit 182 detects a position indicated by the position information input to the operation unit 4. The display control unit 182 displays a cursor at a position indicated by the position information input to the operation unit 4. When the display unit 5 is a touch panel, the user inputs position information of a cursor to the operation unit 4 by touching the screen of the display unit 5.

The designated point setting unit 183 sets one or more designated points on the image. The specified point includes at least one of a measurement point representing a measurement position and a reference point representing a position of a reference size. For example, a specified point is input by the user. The designated point setting unit 183 sets one or more measurement points and one or more reference points. However, the designated point setting unit 183 may set only the measurement point or the reference point on the image.

The user inputs position information of the measurement point and the reference point to the operation unit 4 by operating the operation unit 4. The operation unit 4 receives position information input by a user and outputs the position information. The positional information input to the operation unit 4 is input to the CPU18a via the control interface 17. The designated point setting unit 183 sets the measurement point and the reference point at the position indicated by the position information in the image acquired by the imaging element 28 and displayed on the display unit 5. The positional information of the measurement point and the reference point set by the designated point setting unit 183 is held in the RAM 14. The measurement point and the reference point are set by associating the measurement point and the reference point with a specific image.

The designated point is coordinate information of a position of interest in the image, determined based on an instruction from the user. As described above, the specified points include the measurement points and the reference points. The specified point is a point for designating the measurement position and the position of the reference size. The means for deciding the designated point is not limited to input by the user. For example, the designated point setting unit 183 may automatically determine the designated point based on information registered in advance in the endoscope apparatus 1. For example, a reference image in which a designated point is set in advance is acquired from the personal computer 41 or the memory card 42 into the endoscope apparatus 1. The designated point setting unit 183 may detect, by pattern matching, a point in the image similar to the designated point set on the reference image, and set the detected point as the designated point in the image.
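As a rough illustration of the pattern-matching approach described above, the sketch below locates a patch registered around a designated point in a reference image inside a new image by normalized cross-correlation. The function name `match_template` and all array contents are illustrative assumptions, not part of the endoscope apparatus 1.

```python
import numpy as np

def match_template(image, template):
    """Exhaustive normalized cross-correlation; returns ((x, y), score)."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (x, y)
    return best_pos, best

rng = np.random.default_rng(0)
scene = rng.random((40, 40))            # stand-in for the new image
tmpl = scene[10:18, 22:30].copy()       # patch around the registered designated point
pos, score = match_template(scene, tmpl)
```

Since the template here is an exact cutout of the scene, the best score is 1 at the original location; real endoscope images would give lower, noisier scores.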

The designation of the measurement point or the reference point means that the user indicates the measurement point or the reference point to the endoscope apparatus 1. The user specifies a measurement point or a reference point by specifying a position on the image with a cursor. Alternatively, the user designates the measurement point or the reference point by touching the screen of the display unit 5. The setting of the measurement point means that the designated point setting unit 183 associates the measurement point with the image. The setting of the reference point means that the designated point setting unit 183 associates the reference point with the image.

The shape and size of the cursor and icon are not critical as long as the specified point is known to the user. In this specification, the term "point" is used for convenience, but the designated point is not necessarily one point corresponding to one pixel on the screen. The designated point may also comprise an area of any size. The designated point may include a region that can be designated in units of sub-pixels.

The user inputs the reference size to the operation unit 4 by operating the operation unit 4. The operation unit 4 receives a reference size input by the user to the operation unit 4 and outputs the reference size. The reference size input to the operation unit 4 is input to the CPU18a via the control interface 17. When the reference size is input to the operation unit 4, the reference size setting unit 184 sets the reference size in the image acquired by the imaging device 28 and displayed on the display unit 5. The reference size set by the reference size setting unit 184 is held in the RAM 14. The reference size is set by associating the reference size with a specific image. The designation of the reference size means that the user instructs the endoscope apparatus 1 of the reference size. The setting of the reference size means that the reference size setting unit 184 associates the reference size with the image.

In the following example, the reference dimension is a reference distance between two points. As described above, the reference distance is provided by the user. For example, the user specifies 2 reference points and specifies the distance between them as the reference distance. The reference distance specified by the user is known; for example, the user specifies a reference distance of a known structure on the subject.

The reference distance may be input to the endoscope apparatus 1 from a distance acquisition unit not shown. For example, the distance acquisition unit has an active projection system and a three-dimensional measurement unit. The active projection system projects light having a shape of dots, lines, stripes, or the like onto the subject. The three-dimensional measurement unit calculates the reference distance based on an image of the subject onto which the light is projected. The three-dimensional measurement unit may acquire the reference point based on the position at which the reference distance is calculated. A reference point indicating the position of the reference dimension may also be input from the apparatus that measures the reference dimension. For example, the reference point may be input to the endoscope apparatus 1 from the three-dimensional measurement unit or the distance acquisition unit. The distance acquisition unit may calculate the reference distance by a time-of-flight method (Time of Flight). The distance acquisition unit may be a sensing unit using a sensor such as a three-dimensional acceleration sensor, a gyro sensor, or a radio wave sensor.

For example, the external device interface 16 may acquire the reference point and the reference distance from the distance acquisition unit. As described above, in one example, the distance acquisition section has an active projection system and a three-dimensional measurement section. The reference point and the reference distance output from the distance acquisition section are input to the external device interface 16. The reference point and the reference distance input to the external device interface 16 are input to the CPU18 a. The designated point setting unit 183 sets the reference point output from the distance acquisition unit in the image. The reference size setting unit 184 sets the reference distance output from the distance acquiring unit in the image. In this case, since the reference point and the reference distance are automatically determined, the user does not need to spend much time.

The endoscope apparatus 1 may have a memory for storing a reference dimension calculated in advance. The reference size setting unit 184 may read the reference size at the reference point set by the designated point setting unit 183 from the memory, and set the read reference size in the image.

The three-dimensional shape restoration unit 185 restores the three-dimensional shape of the subject, that is, the three-dimensional shape of the object to be measured, using the plurality of images acquired by the image acquisition unit 181. When each of the plurality of images is generated, at least one of the imaging position and the imaging posture is different from each other. Therefore, when each of the plurality of images is generated, the imaging fields of view of the imaging device 28 are different from each other. The method of restoring the three-dimensional shape will be described later.

The measurement unit 186 measures an object to be measured based on the three-dimensional shape, the plurality of designated points, and the reference size. The three-dimensional shape is restored by the three-dimensional shape restoring unit 185. The plurality of designated points are measurement points and reference points. The plurality of designated points are set by the designated point setting unit 183. The reference size is set by the reference size setting unit 184. The measurement unit 186 calculates three-dimensional coordinates corresponding to the measurement points using the reference distance and the two-dimensional coordinates of the measurement points and the reference points. The measurement section 186 measures the size of the object in three dimensions based on the three-dimensional coordinates corresponding to the measurement points.

The bending control section 187 controls the bending mechanism 11 for bending the distal end 20 of the insertion section 2. For example, the bending control section 187 generates a command for bending the distal end 20 of the insertion section 2 in one direction based on an instruction from the main control section 180. The command generated by the bending control section 187 is output to the endoscope unit 8 via the control interface 17. The endoscope unit 8 drives the bending mechanism 11 based on the command, thereby bending the distal end 20 of the insertion portion 2.

The user inputs the direction in which the bending angle of the distal end 20 of the insertion portion 2 is changed to the operation portion 4 by operating the operation portion 4. That is, the user inputs the direction in which the imaging field of view is changed to the operation unit 4. Hereinafter, the direction in which the bending angle of the distal end 20 of the insertion portion 2 is changed is referred to as an angle changing direction. The operation unit 4 receives an angle change direction input to the operation unit 4 by a user and outputs the angle change direction. The angle change direction input to the operation unit 4 is input to the CPU18a via the control interface 17. The bending control section 187 generates a command for driving the bending mechanism 11 based on the angle change direction input to the operation section 4.

The mode setting unit 188 sets a predetermined operation mode for the endoscope apparatus 1. For example, the mode setting unit 188 sets any one of an image acquisition mode and an examination mode (image display mode) to the endoscope apparatus 1. The image acquisition mode is a mode for acquiring an image used for restoring the three-dimensional shape of the subject. The inspection mode is a mode for displaying an image generated by the imaging device 28 at an interval based on the imaging frame rate on the display unit 5.

The user inputs an instruction of an operation mode to the operation unit 4 by operating the operation unit 4. The operation unit 4 receives an instruction input by a user to the operation unit 4 and outputs the instruction. The instruction input to the operation unit 4 is input to the CPU18a via the control interface 17. The mode setting unit 188 determines the operation mode instructed by the user based on the instruction input to the operation unit 4. The mode setting unit 188 sets an operation mode instructed by the user to the endoscope apparatus 1. The mode setting unit 188 switches the operation mode set in the endoscope apparatus 1 between the image acquisition mode and the examination mode based on an instruction input to the operation unit 4.

When the image acquisition mode is set in the endoscope apparatus 1, the reading unit 189 reads image acquisition condition information for specifying image acquisition conditions from a storage medium. When the examination mode is set in the endoscope apparatus 1, the reading unit 189 reads image display condition information for specifying image display conditions from the storage medium. The image acquisition condition information and the image display condition information include imaging field change information and timing information. The imaging view field change information indicates a speed at which the imaging view field is changed or a distance at which the imaging view field is changed. The timing information indicates the timing of acquiring an image used in the image acquisition mode or the inspection mode from the image pickup element 28. For example, the timing information in the image acquisition mode indicates a timing based on a speed at which the imaging field of view is changed or a distance at which the imaging field of view is changed. The timing information in the inspection mode indicates the same timing as the imaging timing of the imaging element 28.

The reading section 189 reads out the image acquisition condition information and the image display condition information from the RAM 14. For example, the memory card 42 stores image acquisition condition information and image display condition information. The image acquisition condition information and the image display condition information are transferred from the memory card 42 to the RAM 14 via the card interface 15. The image acquisition condition information and the image display condition information may also be stored by the personal computer 41. The image acquisition condition information and the image display condition information may also be transferred from the personal computer 41 to the RAM 14 via the external device interface 16. The image acquisition condition information and the image display condition information may also be stored by a server (cloud server or the like) on the network. The image acquisition condition information and the image display condition information may also be transferred from the server to the RAM 14 via the external device interface 16.

The outline of the operation of the endoscope apparatus 1 in the image acquisition mode will be described. The operation unit 4 receives from the user the direction in which the imaging field of view is changed. When the image acquisition mode is set in the endoscope apparatus 1, the bending control section 187 recognizes the direction received by the operation section 4. The reading unit 189 reads from the RAM 14 first information and second information that specify the image acquisition conditions in the image acquisition mode. The first information indicates the speed at which the imaging field of view is changed or the distance by which the imaging field of view is changed. The second information indicates the timing of acquiring the images used for restoring the three-dimensional shape. The bending control section 187 causes the bending mechanism 11 to change the imaging field of view in the recognized direction at the speed indicated by the first information, or to change the imaging field of view in the recognized direction by the distance indicated by the first information. The image acquisition unit 181 acquires at least 2 images from the image pickup device 28 at the timing indicated by the second information. The three-dimensional shape restoration unit 185 restores the three-dimensional shape of the subject using the at least 2 images acquired from the image pickup device 28.
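The interplay between the first information (speed of the field change) and the second information (timing of acquisition) can be sketched as a simple capture schedule: if the field is bent at a fixed angular speed and an image is wanted at every fixed angular step, the capture times follow directly. All constants and names below are hypothetical illustrations; the text does not prescribe concrete values.

```python
# Hypothetical values standing in for the first and second information.
BEND_SPEED_DEG_PER_S = 5.0   # first information: speed of the field change
CAPTURE_SPACING_DEG = 2.0    # second information: capture every 2 degrees of bend
TOTAL_SWEEP_DEG = 10.0       # total field change commanded by the user direction

def capture_timings(speed, spacing, sweep):
    """Return the times (s) at which images would be grabbed during the sweep."""
    times = []
    angle = 0.0
    while angle <= sweep:
        times.append(angle / speed)  # time at which this bend angle is reached
        angle += spacing
    return times

timings = capture_timings(BEND_SPEED_DEG_PER_S, CAPTURE_SPACING_DEG, TOTAL_SWEEP_DEG)
```

With these assumed values the sweep yields several capture instants, satisfying the requirement that at least 2 images with different imaging fields are obtained for the restoration.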

The outline of the operation of the endoscope apparatus 1 in the examination mode will be described. The operation unit 4 receives from the user the direction in which the imaging field of view is changed. When the examination mode is set in the endoscope apparatus 1, the bending control section 187 recognizes the direction received by the operation section 4. The reading unit 189 reads from the RAM 14 third information and fourth information that specify the display conditions in the examination mode. The third information indicates the speed at which the imaging field of view is changed or the distance by which the imaging field of view is changed. The fourth information indicates the timing of acquiring the images used for displaying the image. The bending control section 187 causes the bending mechanism 11 to change the imaging field of view in the recognized direction at the speed indicated by the third information, or to change the imaging field of view in the recognized direction by the distance indicated by the third information. The display control unit 182 outputs, to the display unit 5, the images output from the imaging device 28 at the timing indicated by the fourth information.

A specific procedure of the processing performed by the three-dimensional shape restoration unit 185 and the measurement unit 186 will be described. The three-dimensional shape restoration unit 185 receives the plurality of images output from the video signal processing circuit 12 and the coordinate information of the specified points held in the RAM 14. Next, an example in which the three-dimensional shape restoration unit 185 receives 2 images from the video signal processing circuit 12 will be described. Even when 3 or more images are used, the basic principle is no different from the case of using 2 images, and the method described below can also be applied to the case where 3 or more images are used.

Fig. 4 schematically shows the situation of image acquisition in a case where 2 images of a subject to be measured are acquired. In the following description, the term "camera" is used in a broad sense; specifically, it refers to the observation optical system at the endoscope distal end (the distal end 20 of the insertion section 2).

As shown in Fig. 4, an image I1 is first acquired in the imaging state c1 of the camera. Then, an image I2 is acquired in the imaging state c2 of the camera. At least one of the imaging position and the imaging posture differs between the imaging state c1 and the imaging state c2. In Fig. 4, both the imaging position and the imaging posture differ between the imaging state c1 and the imaging state c2.

In the embodiments of the present invention, it is assumed that the image I1 and the image I2 are images captured by the same endoscope. In each embodiment of the present invention, it is also assumed that the parameters of the objective optical system of the endoscope do not change. The parameters of the objective optical system are the focal length, the distortion aberration, the pixel size of the image sensor, and the like. Hereinafter, the parameters of the objective optical system are simply referred to as internal parameters for convenience. Under these assumptions, the internal parameters describing the characteristics of the optical system of the endoscope can be used in common regardless of the position and orientation of the camera at the distal end of the endoscope. In the embodiments of the present invention, it is assumed that the internal parameters are obtained at the time of factory shipment and are known at the time of measurement.

When the image I1 and the image I2 are acquired using different endoscope apparatuses, common internal parameters cannot be used. Even when the same endoscope apparatus is used to acquire the image I1 and the image I2, common internal parameters cannot be used if the internal parameters differ between the images. However, the internal parameters can be calculated as unknowns, so the subsequent processing does not change greatly depending on whether the internal parameters are known. When the internal parameters are known, the individual internal parameters may be held in advance in each endoscope apparatus.

A procedure for calculating three-dimensional coordinates of a subject based on an acquired subject image is explained with reference to fig. 5. Fig. 5 shows a procedure of a process for three-dimensional shape restoration and measurement.

First, the three-dimensional shape restoration unit 185 executes a feature point detection process (step SA). In the feature point detection process, the three-dimensional shape restoration unit 185 detects feature points in each of the 2 acquired images. A feature point is a corner, an edge, or the like at which the gradient of image brightness is large in the subject information captured in the image. As methods of detecting feature points, SIFT (Scale-Invariant Feature Transform) and FAST (Features from Accelerated Segment Test) are used. The feature points within each image can be detected by using such methods.
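A minimal, self-contained stand-in for the detectors named above: a Harris-style corner response built from image brightness gradients, which scores exactly the corners and edges with a large brightness gradient that the text describes. This is an illustrative sketch under simplified assumptions, not the SIFT or FAST implementation used by the apparatus.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Corner response from the structure tensor of image gradients."""
    gy, gx = np.gradient(img.astype(float))   # brightness gradients
    ixx, iyy, ixy = gx * gx, gy * gy, gx * gy

    def box(a):
        # Crude 3x3 box filter over the gradient products (borders left zero).
        out = np.zeros_like(a)
        out[1:-1, 1:-1] = sum(
            a[1 + dy:a.shape[0] - 1 + dy, 1 + dx:a.shape[1] - 1 + dx]
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        )
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace   # high only where both gradients are strong

img = np.zeros((20, 20))
img[8:, 8:] = 1.0                   # a bright square whose corner is at (8, 8)
resp = harris_response(img)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

Edges excite only one gradient direction and score low, while the square's corner excites both and produces the response peak.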

Fig. 4 shows an example in which a feature point m1 is detected from the image I1 and a feature point m2 is detected from the image I2. Although only one feature point per image is shown in Fig. 4, a plurality of feature points are actually detected in each image, and the number of feature points detected in each image may differ. Each feature point detected from each image is converted into data called a feature amount. The feature amount is data describing the feature of a feature point.

After step SA, the three-dimensional shape restoration unit 185 executes a feature point correspondence process (step SB). In the feature point correspondence process, the three-dimensional shape restoration unit 185 compares, between the images, the feature amounts of the feature points detected in the feature point detection process of step SA. When feature points having similar feature amounts are found in the respective images as a result of this comparison, the three-dimensional shape restoration unit 185 holds that information in the RAM 14. On the other hand, when no feature point having a similar feature amount is found, the three-dimensional shape restoration unit 185 discards the information of that feature point.
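The feature point correspondence of step SB can be sketched as nearest-neighbour matching of feature amounts with a ratio test: pairs whose best match is clearly closer than the second-best are kept, and ambiguous feature points are discarded. The descriptors below are synthetic, and the ratio threshold is an assumed value rather than one taken from the text.

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Keep (i, j) pairs whose nearest match passes the ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # compare feature amounts
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))          # held as a feature point pair
        # otherwise the feature point's information is discarded, as in step SB
    return matches

rng = np.random.default_rng(1)
desc2 = rng.random((6, 8))                          # feature amounts from image I2
desc1 = desc2[[2, 0, 5]] + 0.01 * rng.random((3, 8))  # noisy copies from image I1
pairs = match_descriptors(desc1, desc2)
```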

After step SB, the three-dimensional shape restoration unit 185 reads out from the RAM 14 the coordinates of the mutually corresponding feature points (feature point pairs) of the 2 images. The three-dimensional shape restoration unit 185 performs a position-and-orientation calculation process based on the read coordinates (step SC). In the position-and-orientation calculation process, the three-dimensional shape restoration unit 185 calculates the relative position and posture between the camera that acquired the image I1 and the camera that acquired the image I2. More specifically, the three-dimensional shape restoration unit 185 calculates the matrix E by solving the following equation (1), which uses the epipolar constraint.

[ numerical formula 1]

p2^T E p1 = 0 … (1)

The matrix E is called a fundamental matrix. The fundamental matrix E is a matrix that holds the relative position and posture between the camera that acquired the image I1 and the camera that acquired the image I2. In equation (1), p1 is a matrix composed of the coordinates of the feature points detected from the image I1, and p2 is a matrix composed of the coordinates of the feature points detected from the image I2. The fundamental matrix E includes information on the relative position and orientation of the camera, and thus corresponds to the external parameters of the camera. The fundamental matrix E can be solved for using well-known algorithms.
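Equation (1) can be checked numerically. For normalized image coordinates the matrix in the epipolar constraint is commonly constructed as E = [t]x R from the position change t and the posture change R; the sketch below builds such an E for a synthetic camera motion and confirms that corresponding points satisfy p2^T E p1 = 0. The motion and scene points are illustrative assumptions.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x so that skew(t) @ v == np.cross(t, v)."""
    tx, ty, tz = t
    return np.array([[0, -tz, ty], [tz, 0, -tx], [-ty, tx, 0]])

def rot_y(b):
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

R = rot_y(0.1)                       # synthetic posture change
t = np.array([0.2, 0.0, 0.05])       # synthetic position change
E = skew(t) @ R                      # matrix of equation (1)

rng = np.random.default_rng(2)
X1 = rng.random((10, 3)) + [0, 0, 2]   # scene points in the first camera frame
X2 = X1 @ R.T + t                      # same points in the second camera frame
p1 = X1 / X1[:, 2:3]                   # normalized homogeneous image coordinates
p2 = X2 / X2[:, 2:3]
residuals = np.abs(np.einsum('ij,jk,ik->i', p2, E, p1))  # p2^T E p1 per pair
```

The residuals vanish to machine precision for noise-free correspondences; in practice E is estimated from noisy feature point pairs, e.g. by the eight-point algorithm.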

As shown in fig. 4, when the amount of change in the position of the camera is t and the amount of change in the orientation of the camera is R, equations (2) and (3) are satisfied.

[ numerical formula 2]

t=(tx,ty,tz)…(2)

R=Rx(α)Ry(β)Rz(γ)…(3)

In equation (2), tx is the amount of movement in the x-axis direction, ty is the amount of movement in the y-axis direction, and tz is the amount of movement in the z-axis direction. In equation (3), Rx(α) is a rotation by the amount α about the x-axis, Ry(β) is a rotation by the amount β about the y-axis, and Rz(γ) is a rotation by the amount γ about the z-axis. After the fundamental matrix E is calculated, an optimization process called bundle adjustment may be performed to improve the restoration accuracy of the three-dimensional coordinates. In general, the process called SfM includes the processes of step SA, step SB, and step SC executed after the images are acquired.
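Equations (2) and (3) can be written out directly. The sketch below composes R = Rx(α)Ry(β)Rz(γ) and t = (tx, ty, tz) with NumPy; the function names and the sign conventions of the individual rotation matrices are illustrative assumptions.

```python
import numpy as np

def rot_x(a):
    """Rx(α): rotation by the amount a about the x-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    """Ry(β): rotation by the amount b about the y-axis."""
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    """Rz(γ): rotation by the amount g about the z-axis."""
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def camera_pose(alpha, beta, gamma, tx, ty, tz):
    """Compose the orientation change R = Rx(α)·Ry(β)·Rz(γ) of equation (3)
    and the position change t = (tx, ty, tz) of equation (2)."""
    R = rot_x(alpha) @ rot_y(beta) @ rot_z(gamma)
    t = np.array([tx, ty, tz])
    return R, t
```

Any R composed this way is a proper rotation (orthonormal, determinant 1), which is what bundle adjustment refines together with t.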

After step SC, the three-dimensional shape restoration unit 185 executes restoration processing of the three-dimensional shape of the subject based on the relative position and orientation of the camera (the position change amount t and the orientation change amount R) calculated in step SC (step SD). Examples of methods for restoring the three-dimensional shape of the subject include PMVS (Patch-based Multi-view Stereo) and matching processing based on parallel stereo. However, the means is not particularly limited.
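Whatever dense method is used, each restored 3D point ultimately comes from intersecting the viewing rays defined by the camera positions and orientations of step SC. As a minimal stand-in for the restoration processing (the embodiment names PMVS and parallel-stereo matching), the following sketch shows linear (DLT) triangulation of one point from two views; the projection-matrix convention is an assumption.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.
    P1, P2: 3x4 projection matrices of the two cameras;
    x1, x2: (x, y) image coordinates of the corresponding feature point.
    Returns the Euclidean 3D point."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean
```

Note that, as stated below for step SE, the result is only defined up to scale until the reference distance fixes the length dimension.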

After step SD, the measurement unit 186 executes three-dimensional coordinate conversion processing based on the three-dimensional shape data of the object calculated by the three-dimensional shape restoration processing in step SD and the information of the reference distance read out from the RAM 14. The measurement unit 186 converts the three-dimensional shape data of the object into three-dimensional coordinate data having a dimension of length in a three-dimensional coordinate conversion process (step SE).

After step SE, the measurement unit 186 executes dimension measurement processing based on the three-dimensional coordinate data of the subject (step SF). The dimension measurement processing does not differ from the measurement processing mounted on conventional industrial endoscopes, and therefore a detailed description thereof is omitted. For example, the measurement unit 186 performs dimensional measurement, such as distance measurement between two points or surface reference measurement, in accordance with the measurement mode selected by the user.
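Of the dimensional measurements named above, distance measurement between two points reduces to a Euclidean norm on the three-dimensional coordinate data; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def two_point_distance(coords, p, q):
    """Distance-between-two-points measurement (one of the dimensional
    measurements of step SF) on three-dimensional coordinate data that
    already has the dimension of length (step SE)."""
    return float(np.linalg.norm(np.asarray(coords[q]) - np.asarray(coords[p])))
```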

The entire measurement process in the first embodiment will be described with reference to fig. 6. Fig. 6 shows a procedure of the measurement processing.

In an examination using an endoscope, the user checks the state of the subject on a live image to confirm whether there is a defect, damage, or the like. The mode of the endoscope apparatus 1 at this time is referred to as an examination mode. In a case where a defect, damage, or the like to be measured is found on the subject during the examination, the user requests execution of measurement. At this time, the user operates the operation unit 4 to shift the operation mode of the endoscope apparatus 1 to the measurement mode (step S101). For example, when the user clicks an icon, displayed on the display unit 5, for transitioning to the measurement mode, the operation mode of the endoscope apparatus 1 is changed from the examination mode to the measurement mode. Alternatively, the user may press a measurement mode changeover button using an input device such as a remote controller. The operation for changing the operation mode of the endoscope apparatus 1 from the examination mode to the measurement mode is not limited to the above-described examples. The measurement mode is a mode in which the measurement function and the function specified by the above-described image acquisition mode are executed in combination.

After step S101, a first measurement image for input of designated points is acquired. That is, the image pickup element 28 generates the first measurement image by imaging the subject once. The image acquisition unit 181 acquires the first measurement image generated by the image pickup element 28 (step S102). The second information included in the image acquisition condition information indicates that the measurement image is acquired at the timing when the operation mode of the endoscope apparatus 1 is changed from the examination mode to the measurement mode.

After step S102, the reading unit 189 reads the angle change amount from the RAM 14 (step S103). The angle change amount indicates an amount by which the bending angle of the distal end 20 of the insertion portion 2 is changed. The angle change amount corresponds to first information included in the image acquisition condition information. The angle change amount corresponds to the distance at which the imaging field of view is changed. The angle change amount is a change amount of the bend angle between two imaging timings.

After step S103, the reading unit 189 reads the angle change speed from the RAM 14 (step S104). The angle change speed indicates the speed at which the bending angle of the distal end 20 of the insertion portion 2 is changed. The angle change speed corresponds to the first information included in the image acquisition condition information. The angle change speed corresponds to the speed at which the imaging field of view is changed. The angle change speed is the speed of change of the bending angle between two imaging timings.

The angle change amount is set to an amount such that the imaging fields of view of the two imagings overlap each other. That is, the angle change amount is set to an amount such that the regions of the 2 measurement images acquired by the two imagings overlap each other. This improves the reliability of the processing result of SfM. The angle change speed is set to a speed at which motion blur does not affect the measurement image. This improves the acquisition efficiency of the measurement images.
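Under a simple planar approximation, the largest angle change that still keeps the required overlap can be bounded from the field-of-view angle. The sketch below is an illustrative assumption, not a formula from the embodiment:

```python
def max_angle_step(fov_deg, min_overlap):
    """Largest bending-angle change (degrees) between two imagings that
    still leaves at least `min_overlap` (a fraction, 0..1) of the imaging
    field of view shared by the 2 measurement images.
    Planar approximation; illustrative assumption, not from the embodiment."""
    return fov_deg * (1.0 - min_overlap)
```

For example, with a 60-degree field of view and a required overlap of one half, the angle change amount would be limited to 30 degrees.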

In the embodiment of the present invention, both the angle change amount and the angle change speed are read from the RAM 14. The aspects of the present invention are not limited to this method. For example, the amount of movement of the endoscope distal end may be estimated from a plurality of images acquired in sequence, and the amount of angle change of the endoscope distal end may be adaptively controlled according to the estimated amount. Alternatively, the angle change speed may be specified by the user operating the operation unit 4.

The user inputs the angle change direction to the operation unit 4 to acquire a measurement image of the region recognized as the measurement target. After step S104, the operation unit 4 receives the angle change direction from the user. The angle change direction input to the operation unit 4 is input to the CPU18a via the control interface 17. The bending control section 187 recognizes the angle change direction input to the operation section 4 (step S105). The user may input the angle change direction by operating an input device such as a touch panel.

After step S105, the bending control unit 187 generates a command for bending the endoscope distal end in the angle changing direction received from the user. The command specifies the following actions: the bending angle is changed by an amount corresponding to the angle change amount at a speed corresponding to the angle change speed. The bending control section 187 outputs the generated command to the endoscope unit 8 to drive the bending mechanism 11 and bend the endoscope distal end. Thereby, the viewpoint of the distal end of the endoscope is changed (step S106).

In the embodiment of the present invention, it is assumed that the endoscope distal end is stopped after the endoscope distal end is moved by the predetermined angle change amount. Thereby, an image without motion blur can be acquired. The time for which the endoscope distal end is stopped may be any time required to acquire at least 1 image. For example, this time is substantially equal to the reciprocal of the frame rate. In a case where the movement of the endoscope distal end during the exposure time of the camera can be assumed to be sufficiently small, the endoscope distal end does not need to be stopped. That is, a movement slow enough that the endoscope distal end can be regarded as stopped is permissible.

After step S106, the variable n is incremented by 1 (step S107). After step S107, the image acquisition unit 181 acquires the nth measurement image generated by the image pickup element 28 (step S108). The variable n represents the number of measurement images acquired for restoration and measurement of the three-dimensional shape. When the processing in step S107 is executed for the first time, the variable n is 2. That is, the image acquisition unit 181 acquires the second measurement image when the processing in step S108 is executed for the first time.

In step S108, the image acquisition unit 181 acquires the nth measurement image at the timing indicated by the second information included in the image acquisition condition information. For example, the second information indicates: the measurement image is acquired at a timing when the bending angle is changed at the angle changing speed for a predetermined time. Alternatively, the second information indicates: a measurement image is acquired at a timing when the bending angle is changed by the angle change amount.

While the viewpoint of the endoscope distal end is changed in step S106, the imaging device 28 performs imaging at intervals based on the imaging frame rate. The measurement image acquired in step S108 is an image generated by the imaging device 28 when the change of the viewpoint of the endoscope distal end is completed. While the viewpoint of the distal end of the endoscope is changed in step S106, the imaging device 28 may stop imaging. After a predetermined time has elapsed from the timing when the change of the viewpoint of the distal end of the endoscope is completed, the imaging device 28 may perform imaging and acquire the nth measurement image generated by the imaging device 28.

The 1-time image acquisition process includes the respective processes of step S105, step S106, step S107, and step S108. Each time the image acquisition process is performed once, 1 measurement image is acquired.

After step S108, the display controller 182 displays the acquired nth measurement image on the display 5. Further, the display control unit 182 displays information indicating the progress status regarding the acquisition of the measurement image on the display unit 5 (step S109).

The processing in step S109 will be described with reference to fig. 7. Fig. 7 shows an image displayed on the display unit 5. The minimum number of measurement images required for SfM is set in advance in the endoscope apparatus 1. For example, this number is obtained empirically based on the results of previously performed SfM. The number is 2 or more. An example in which the number is 5 will be described below, but the number is not limited to 5. Next, the processing executed in step S109 when 3 measurement images have been acquired is explained.

First, the display control unit 182 displays the nth measurement image I101 on the display unit 5. After the nth measurement image I101 is displayed, the display control unit 182 displays information indicating the progress status on the nth measurement image I101. For example, the display control unit 182 generates a graphic image signal representing the information on the progress status. After that, the same processing as that for displaying a cursor is executed. The display unit 5 displays the nth measurement image I101 on which the information indicating the progress status is superimposed.

In the example shown in fig. 7, information IF111 and information IF112 are displayed. The information IF111 indicates the minimum number of measurement images required for SfM and the number of measurement images acquired up to the present time. The information IF111 includes 2 numbers. In the information IF111, the number on the right indicates the minimum number of measurement images required for SfM. In the information IF111, the number on the left indicates the number of measurement images acquired up to the present time. The information IF112 represents the progress status as a progress bar. In the example shown in fig. 7, 3 thumbnail images TH121 are displayed. Each thumbnail image TH121 is a reduced image generated by reducing the number of pixels of a measurement image.

It suffices that the endoscope apparatus 1 can notify the user of how many more images need to be acquired, or of how long it will take until the processing for acquiring the measurement images is finished. When the number of acquired measurement images reaches the minimum number of measurement images required for SfM, the endoscope apparatus 1 can end the processing for acquiring the measurement images. Any display method may be used as long as it can notify the user of the progress status of the acquisition of the measurement images.

In step S109, the display control unit 182 displays at least 1 measurement image acquired from the image pickup element 28 on the display unit 5. When 2 or more measurement images have been acquired, 2 or more measurement images may be displayed in step S109. In step S109, the display control unit 182 counts the first number and displays information indicating the ratio of the first number to the second number on the display unit 5. The first number indicates the number of measurement images acquired from the image pickup element 28. That is, in the above example, the first number indicates the number of measurement images acquired up to the present time. The second number indicates the number of measurement images required to restore the three-dimensional shape, and the second number is at least 2. That is, in the above example, the second number represents the minimum number of measurement images required for SfM. In the above example, the information indicating the ratio is the information IF111 and the information IF112.

In step S109, the display control unit 182 generates a thumbnail image by reducing the number of pixels of the image acquired from the image pickup element 28. In step S109, the display control unit 182 displays the thumbnail image on the display unit 5. When 2 or more measurement images have been acquired, 2 or more thumbnail images may be generated and displayed in step S109. In the above example, the 3 thumbnail images TH121 are displayed on the display unit 5.
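The progress display of step S109 combines a count indicator and reduced images. A minimal sketch of both follows; pixel decimation is one way to reduce the number of pixels (a real device may use a filtered resize), and the names are illustrative:

```python
import numpy as np

def make_thumbnail(image, factor=4):
    """Generate a reduced image by decimating the pixels of a measurement
    image (step S109).  `image` is an (H, W, C) array; every `factor`-th
    pixel is kept in each direction."""
    return image[::factor, ::factor]

def progress_text(acquired, required):
    """An IF111-style indicator: acquired count on the left, minimum
    required count on the right (format is an illustrative assumption)."""
    return f"{acquired}/{required}"
```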

After step S109, the main control unit 180 compares the variable n with a predetermined number. The variable n represents the number of acquired measurement images. The predetermined number indicates the minimum number of measurement images required for SfM. The predetermined number is at least 2. Based on the comparison result, the main control unit 180 determines whether the number of acquired measurement images has reached the predetermined number (step S110).

After the acquisition of the measurement images based on the image acquisition conditions is finished, the main control part 180 compares the first number with the second number in step S110. The first number indicates the number of measurement images acquired from the image pickup element 28. The second number indicates the number of measured images required to restore the three-dimensional shape, and the second number is at least 2.

When the main control portion 180 determines in step S110 that the number of acquired measurement images has reached the predetermined number, the three-dimensional shape restoration portion 185 performs 3D reconstruction using the predetermined number of measurement images acquired by the image acquisition portion 181 (step S111). The execution of the 3D reconstruction includes both SfM and restoration processing of the dense three-dimensional shape of the object. In the 3D reconstruction, a predetermined number of measurement images are used. When the main control portion 180 determines in step S110 that the number of acquired measurement images has not reached the predetermined number, the process in step S105 is executed. After that, the image acquisition process is executed again to acquire a measurement image. The image acquisition process is repeated until a predetermined number of measurement images are acquired.
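Steps S105 to S110 form a loop that repeats the image acquisition process until the predetermined number of measurement images is reached. A minimal sketch, with `get_direction`, `bend`, and `capture` standing in for the operation unit 4, the bending mechanism 11, and the image pickup element 28 (all names illustrative):

```python
def acquire_measurement_images(first_image, get_direction, bend, capture,
                               required=5):
    """Repeat the image acquisition process (steps S105-S108) until the
    number of measurement images reaches `required` (the comparison of
    step S110).  `first_image` is the image acquired in step S102."""
    images = [first_image]
    while len(images) < required:     # step S110: predetermined number reached?
        direction = get_direction()   # step S105: angle change direction
        bend(direction)               # step S106: change the viewpoint
        images.append(capture())      # steps S107-S108: acquire the nth image
    return images
```

With `required=5` this reproduces the flow of figs. 8 and 9: the first image plus four image acquisition processes.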

The flow of image acquisition until the number of acquired measurement images reaches a predetermined number will be described with reference to fig. 8 and 9. An example in which the minimum number of measurement images required for SfM is 5 will be described.

In step S102, a first measurement image I201 is acquired. After the first measurement image I201 is acquired, the operation unit 4 accepts the angle change direction D211 from the user in step S105. In step S106 of the 1 st image acquisition process, the bending angle is changed along the angle changing direction D211. After the change of the bending angle is finished, the second measurement image I202 is acquired in step S108 of the 1 st image acquisition process. After the second measurement image I202 is acquired, the operation unit 4 receives the angle change direction D212 from the user in step S105. In step S106 of the 2 nd image acquisition process, the bending angle is changed along the angle changing direction D212. After the change of the bending angle is finished, the third measurement image I203 is acquired in step S108 of the 2 nd time image acquisition process.

After the third measurement image I203 is acquired, the operation unit 4 receives the angle change direction D213 from the user in step S105. In step S106 of the 3 rd image acquisition process, the bending angle is changed along the angle changing direction D213. After the change of the bending angle is finished, a fourth measurement image I204 is acquired in step S108 of the 3 rd time image acquisition process. After the fourth measurement image I204 is acquired, the operation unit 4 receives the angle change direction D214 from the user in step S105. In step S106 of the 4 th image acquisition process, the bending angle is changed along the angle changing direction D214. After the end of the change of the bending angle, a fifth measurement image I205 is acquired in step S108 of the 4 th image acquisition process. Since 5 measurement images are acquired, it is determined in step S110 that the number of measurement images has reached the predetermined number.

In the examples shown in fig. 8 and 9, the angle change direction is fixed. However, the angle change direction in each image acquisition process is not necessarily fixed.

After step S111, the display control unit 182 displays the first measurement image acquired in step S102 on the display unit 5 (step S112).

After step S112, the user designates a designated point on the displayed first measurement image by operating the operation section 4. Thereby, the user specifies the measurement point and the reference point. The operation unit 4 receives a measurement point and a reference point specified by a user. The designated point setting unit 183 sets the measurement point and the reference point designated by the user on the displayed first measurement image (step S113).

After step S113, the user specifies the reference distance by operating the operation unit 4. The user designates, as a numerical value, the length of a reference distance that the user already knows. The operation unit 4 receives the reference distance specified by the user. The reference size setting unit 184 sets the reference distance on the displayed first measurement image (step S114).

For example, the reference distance is the distance between two reference points set on the surface of the subject, and is defined by two points. However, the reference distance is not limited to a distance defined by two points. For example, a single reference point may be set on the subject, and the distance from the reference point to the endoscope distal end (the object distance) may be set as the reference distance. In this case, the reference distance is defined by only one point.

After step S114, the measurement unit 186 converts the three-dimensional shape data of the object into three-dimensional coordinate data having a dimension of length (step S115). At this time, the measurement unit 186 uses the two reference points set in step S113 and the reference distance set in step S114.
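The conversion of step S115 amounts to scaling the dimensionless three-dimensional shape data so that the reconstructed distance between the two reference points equals the user-specified reference distance. A minimal sketch (the function name and millimeter unit are illustrative assumptions):

```python
import numpy as np

def to_metric(points, ref_a, ref_b, ref_distance_mm):
    """Convert three-dimensional shape data into three-dimensional
    coordinate data having the dimension of length (step S115).
    `points` is an (N, 3) array; `ref_a` and `ref_b` are the indices of
    the two reference points set in step S113; `ref_distance_mm` is the
    reference distance set in step S114."""
    reconstructed = np.linalg.norm(points[ref_b] - points[ref_a])
    scale = ref_distance_mm / reconstructed   # one global scale factor
    return points * scale
```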

After step S115, the measurement unit 186 measures a dimension defined by the measurement point specified by the user by a known measurement method based on the three-dimensional coordinate data obtained in step S115 (step S116).

After step S116, the display control unit 182 displays the measurement result on the display unit 5. For example, the measurement result is superimposed on the first measurement image displayed on the display unit 5 (step S117). The measurement result may be recorded on an external medium such as the memory card 42. When the processing in step S117 is executed, the measurement processing ends.

The method for operating the image acquisition apparatus according to the aspects of the present invention includes first to fifth steps. When the image acquisition mode is set in the endoscope apparatus 1, the bending control unit 187 recognizes the direction received by the operation unit 4 in the first step (step S105). In the second step (step S103 and step S104), the reading unit 189 reads, from the RAM 14, the first information and the second information that define the image acquisition conditions in the image acquisition mode. The first information indicates the speed at which the imaging field of view is changed or the distance by which the imaging field of view is changed. The second information indicates the timing of acquiring the images used for restoring the three-dimensional shape. In the third step (step S106), the bending control unit 187 causes the bending mechanism 11 to change the imaging field of view in the recognized direction at the speed indicated by the first information, or to change the imaging field of view in the recognized direction by the distance indicated by the first information. In the fourth step (step S108), the image acquisition unit 181 acquires at least 2 images from the image pickup element 28 at the timing indicated by the second information. In the fifth step (step S111), the three-dimensional shape restoration unit 185 restores the three-dimensional shape of the subject using the at least 2 images acquired from the image pickup element 28.

In each mode of the present invention, the processing in each step of step S112 to step S117 is not essential.

In the image acquisition mode of the first embodiment, the control of the angle change direction by the user's operation is combined with the control of the angle change amount or the angle change speed performed by the apparatus. Therefore, the endoscope apparatus 1 can shorten the time to acquire an image for restoring the three-dimensional shape of the object. As a result, the inspection efficiency is improved.

(modification of the first embodiment)

Next, a modified example of the first embodiment of the present invention will be described.

In the above-described example, bending is used to change the viewpoint of the endoscope distal end. The change of the imaging field of view in the embodiments of the present invention is not limited to the method using bending. For example, a control jig capable of controlling the advance and retreat of the insertion portion 2 may be used. For example, an optical adapter capable of observing a direction perpendicular to the optical axis of the insertion portion 2 is attached to the distal end 20 of the insertion portion 2. The imaging field of view is changed by moving the insertion portion 2 forward or backward. A control jig capable of controlling the twisting of the insertion portion 2 may also be used. The method for changing the imaging field of view is not limited, as long as the insertion portion 2 can be moved in a direction different from the imaging direction.

In the above example, the operation unit 4 receives the measurement point and the reference point from the user in step S113. At this time, the first measurement image acquired in step S102 is being displayed. The measurement image displayed at this time need not be the first measurement image. For example, the second measurement image may also be displayed. Alternatively, the last acquired measurement image may be displayed. The acquired plurality of measurement images may be displayed, and the user may be able to select a measurement image for inputting a specified point from the plurality of measurement images. The measurement image for setting the reference point and the measurement image for setting the measurement point may be different.

In the above example, the processing of step S105 is executed in each of the plurality of times of image acquisition processing before the minimum number of measurement images required for SfM are acquired. It suffices that the processing of step S105 is executed in at least 1 image acquisition processing. For example, the angle change direction received from the user by the operation unit 4 in step S105 is held in the RAM 14. When the user does not specify a new angle change direction, the angle change direction in the image acquisition process executed last time may be read from the RAM 14. In step S106, the viewpoint of the distal end of the endoscope may be changed based on the angle change direction read from the RAM 14. The user may input a new angle change direction to the operation unit 4 at a timing when the user wants to change the angle change direction. Therefore, the workload of the user can be reduced.

In the above example, the endoscope apparatus 1 receives the angle changing direction from the user and changes the bending angle to the angle changing direction. Thereafter, the endoscope apparatus 1 receives the angle change direction from the user again, and changes the bending angle in the angle change direction. That is, the endoscope apparatus 1 repeats reception of the angle change direction and change of the bending angle. The image acquisition processing in the embodiments of the present invention is not limited to this.

For example, while the endoscope apparatus 1 is changing the bending angle in the image acquisition process, the operation unit 4 may receive a new angle change direction from the user. The endoscope apparatus 1 may update the angle change direction during the change of the bending angle and change the bending angle based on the updated angle change direction. In this case, the direction in which the endoscope distal end moves changes during the change of the bending angle. The amount of the bending angle that has already been changed before the angle change direction is updated is not reset. That is, after the angle change direction is updated, the bending angle is changed by the difference amount. The difference amount is the difference between the angle change amount in the 1-time image acquisition process and the amount already changed before the angle change direction was updated.

For example, the following example will be explained: the user designates the right direction as the angle change direction, and the angle change amount read in step S103 is 100. After the bending angle has been changed by 50, the user designates the upward direction as the angle change direction. In this case, the endoscope apparatus 1 changes the bending angle by 50 in the upward direction.
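The worked example above can be sketched as follows: the amount already moved is not reset when the direction is updated, and only the remaining difference continues in the new direction. The names and the unit-less angle values are illustrative:

```python
def bend_with_updates(total_amount, moves):
    """Apply one image acquisition process's angle change of
    `total_amount`, allowing the direction to be updated mid-motion.
    `moves` is a list of (direction, requested_amount) steps in the order
    the user requested them.  Returns the (direction, amount) steps that
    were actually applied; the amounts always sum to `total_amount`."""
    applied = []
    remaining = total_amount
    for direction, amount in moves:
        step = min(amount, remaining)   # only the remaining difference moves
        applied.append((direction, step))
        remaining -= step
        if remaining <= 0:
            break
    return applied
```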

In the above example, the operation unit 4 receives the angle change direction from the user during the execution of the 1-time image acquisition process, and the bending angle is changed based on that angle change direction. However, the angle change direction may be set in the endoscope apparatus 1 in advance. For example, the endoscope apparatus 1 executes the measurement processing shown in fig. 6 based on the operation of a skilled user. At this time, the RAM 14 holds a history of the angle change directions received from the skilled user. Thereafter, an unskilled general user requests execution of the measurement. In step S105 of the measurement processing executed in this case, the angle change direction held as the history is read out from the RAM 14. While the series of image acquisition processes is being executed, the endoscope apparatus 1 changes the bending angle in accordance with the angle change directions received from the skilled user, and acquires the measurement images.

The endoscope apparatus 1 can apply the angle change direction received from a skilled user to the measurement processing executed based on the instruction of a general user. Therefore, it is easy to stabilize the reliability of the processing result of SfM regardless of the proficiency of the user.

(second embodiment)

In the second embodiment of the present invention, the user can specify the timing at which the processing for acquiring the measurement images is completed. The configuration of the first embodiment is effective in a case where the position at which the user wants to designate a designated point is relatively close to the region on the subject within the imaging field of view. When the position at which the user wants to designate a designated point is relatively distant from the region on the subject within the imaging field of view, the endoscope distal end needs to be moved until the position that the user wants to designate enters the imaging field of view. However, there is a possibility that the acquisition of the minimum number of measurement images required for SfM is completed while the endoscope distal end is still moving. As a result, the acquisition of measurement images may end before the endoscope distal end reaches its destination. The endoscope apparatus 1 of the second embodiment has a function for reliably acquiring a measurement image that includes the designated point the user wants to designate.

In the second embodiment, the CPU18a shown in fig. 3 is replaced with the CPU18b shown in fig. 10. Fig. 10 shows a functional configuration of the CPU18b. Descriptions of the same structure as that shown in fig. 3 are omitted.

The CPU18b has an image selecting section 190 in addition to the configuration shown in fig. 3. After the acquisition of the measurement images based on the image acquisition conditions is finished, the main control unit 180 compares the first number with the second number. The first number indicates the number of measurement images acquired from the image pickup element 28. In the following example, the first number indicates the number of measurement images acquired up to the present time. The second number indicates the number of measurement images required to restore the three-dimensional shape, and the second number is at least 2. In the following example, the second number represents the minimum number of measurement images required for SfM. When the first number is the same as the second number, the three-dimensional shape restoration unit 185 restores the three-dimensional shape of the subject.

When the first number is larger than the second number, the image selecting section 190 selects at least the second number of measurement images from among the measurement images acquired from the image pickup element 28. That is, the image selecting unit 190 selects the same number of measurement images as the second number or a number greater than the second number. In the following example, the image selecting unit 190 selects the number of measurement images required to restore the three-dimensional shape. The three-dimensional shape restoration unit 185 restores the three-dimensional shape of the object using the selected measurement image.

For example, the image selecting unit 190 selects at least the second number of measurement images based on the degree of overlap among the measurement images acquired from the image pickup element 28. Alternatively, the image selecting unit 190 selects the second number of measurement images so that they include the first and the last of the measurement images acquired from the image pickup element 28.

When the operation unit 4 receives an image acquisition end instruction (an instruction to execute restoration of the three-dimensional shape) from the user and the first number is smaller than the second number, the operation unit 4 receives from the user a second direction (angle change direction) in which the imaging field of view is changed. The second direction is the same as the angle change direction last received from the user, or is different from it. The bending control unit 187 recognizes the second direction received by the operation unit 4. The bending control unit 187 causes the bending mechanism 11 to change the imaging field of view again along the recognized second direction at the speed indicated by the first information, or to change the imaging field of view again along the recognized second direction by the distance indicated by the first information. The first information indicates the speed at which the imaging field of view is changed (angle change speed) or the distance by which the imaging field of view is changed (angle change amount). After the imaging field of view is changed in the second direction, the image acquisition unit 181 acquires at least 1 measurement image from the image pickup element 28 at the timing indicated by the second information. The second information indicates the timing of acquiring an image used for restoring the three-dimensional shape. The first information and the second information specify the image acquisition conditions in the image acquisition mode (measurement mode).

The endoscope apparatus 1 repeatedly acquires measurement images based on the image acquisition conditions until the sum of a third number and a fourth number reaches the second number. The third number indicates the number of measurement images acquired from the image pickup element 28 before the operation unit 4 receives the image acquisition end instruction from the user. The fourth number indicates the number of measurement images acquired from the image pickup element 28 after the operation unit 4 receives the image acquisition end instruction from the user.
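A minimal sketch of this repetition, assuming a hypothetical `acquire_one` callback that changes the imaging field of view per the first information and returns one measurement image at the timing given by the second information:

```python
def acquire_until_enough(third_number, second_number, acquire_one):
    """Repeat acquisition after the end instruction until the sum of the
    third number and the fourth number reaches the second number.

    third_number: images acquired before the end instruction.
    acquire_one:  hypothetical callback returning one new image.
    """
    extra = []                                  # fourth number = len(extra)
    while third_number + len(extra) < second_number:
        extra.append(acquire_one())
    return extra
```

In the example of figs. 15 and 16, the third number is 3 and the second number is 5, so the loop runs twice and yields the images I504 and I505.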

The measurement process in the second embodiment will be described with reference to fig. 11 and 12. Fig. 11 and 12 show the procedure of the measurement processing. The explanation of the same processing as that shown in fig. 6 is omitted.

By checking the nth measurement image displayed in step S109, the user determines whether or not a measurement image of the region that the user has identified as the measurement target has been successfully acquired. When the user determines that a measurement image of the region has been successfully acquired, the user inputs an image acquisition end instruction to the operation unit 4. The image acquisition end instruction indicates an instruction to end the acquisition of measurement images and to execute SfM and restoration of the three-dimensional shape. After step S109, the operation unit 4 receives the image acquisition end instruction from the user. The image acquisition end instruction input to the operation unit 4 is input to the CPU 18b via the control interface 17. The main control unit 180 determines whether or not the image acquisition end instruction has been received from the user (step S121).

When the main control unit 180 determines in step S121 that the image acquisition end instruction has not been received from the user, the process in step S105 is executed. When the main control unit 180 determines in step S121 that the image acquisition end instruction has been received from the user, the main control unit 180 compares the variable n with a predetermined number of images. The variable n represents the number of acquired measurement images. The predetermined number indicates the minimum number of measurement images required for SfM and is at least 2. The main control unit 180 determines whether or not an appropriate number of measurement images have been acquired based on the comparison result (step S122).

When the variable n is the same as the predetermined number, the main control unit 180 determines that an appropriate number of measurement images have been acquired. In this case, the process in step S111 is executed. When the variable n is different from the predetermined number, the main control unit 180 determines that an appropriate number of measurement images have not been acquired. In this case, the main control unit 180 determines whether the variable n is smaller than the predetermined number (step S123).

When the position at which the user wants to designate a specified point is distant from the region on the subject within the imaging field of view, it takes time for the endoscope distal end to approach that position. Therefore, more measurement images than the appropriate number may be acquired. When the main control unit 180 determines in step S123 that the variable n is larger than the predetermined number, the image selecting unit 190 determines whether or not to thin out the measurement images acquired from the image pickup element 28 (step S124). Thinning out the measurement images means classifying the measurement images acquired from the image pickup element 28 into images scheduled for use and unused images. The images scheduled for use are used for 3D reconstruction, which includes SfM and restoration of the three-dimensional shape. The unused images are not used for 3D reconstruction.

When the image selecting unit 190 determines in step S124 that the measurement images are not to be thinned out, the process in step S111 is executed. In this case, all the measurement images acquired from the image pickup element 28 are used for the 3D reconstruction in step S111. When the image selecting unit 190 determines in step S124 that the measurement images are to be thinned out, the image selecting unit 190 thins out the measurement images acquired from the image pickup element 28. That is, the image selecting unit 190 selects only the images necessary for SfM as images scheduled for use (step S125).

The unused images are the measurement images other than the images scheduled for use among the measurement images acquired from the image pickup element 28. The image selecting unit 190 may delete the unused images. Examples of the processing in steps S124 and S125 will be described later.

After step S125, the process in step S111 is executed. In this case, the measurement images selected as images scheduled for use by the image selecting unit 190 are used for 3D reconstruction.

When the main control unit 180 determines in step S123 that the variable n is smaller than the predetermined number, the processes in steps S126 to S130 are executed. Steps S126 to S130 are the same as steps S105 to S109, respectively.

In step S126, the operation unit 4 receives from the user the angle change direction (second direction) in which the imaging field of view is changed, and the bending control unit 187 recognizes the angle change direction received by the operation unit 4. In step S127, the bending control unit 187 causes the bending mechanism 11 to change the bending angle by the angle change amount at the angle change speed along the recognized angle change direction. In step S129, the image acquisition unit 181 acquires the nth measurement image from the image pickup element 28. In step S130, the display control unit 182 displays the nth measurement image on the display unit 5 and displays information indicating the progress of the acquisition of measurement images on the display unit 5.

After step S130, the main control unit 180 compares the variable n with the predetermined number. The variable n represents the number of acquired measurement images. The predetermined number is the same as the predetermined number used in the processing of step S122. The main control unit 180 determines whether the number of acquired measurement images has reached the predetermined number based on the comparison result (step S131).

When the main control unit 180 determines in step S131 that the number of acquired measurement images has reached the predetermined number, the process in step S111 is executed. When the main control unit 180 determines in step S131 that the number has not reached the predetermined number, the process in step S126 is executed.

When the first number is smaller than the second number, the display control unit 182 may notify the user that the first number has not reached the second number. For example, before the processing in step S126 is executed, the display control unit 182 displays on the display unit 5 information indicating that the number of acquired measurement images is smaller than the minimum number of measurement images required for SfM. The information includes text, an icon, a symbol, or the like. The display control unit 182 may also display on the display unit 5 information indicating the number of measurement images that need to be additionally acquired. The method of notifying the user is not limited to displaying information on the display unit 5. For example, a sound indicating that the first number has not reached the second number may be output.

An example of the processing for thinning out images will be described. For example, the following method can be applied: measurement images are selected based on the degree of overlap between 2 or more measurement images acquired from the image pickup element 28. The image selecting unit 190 calculates the proportion of the region common to one measurement image and the measurement image acquired two images after it. For example, measurement image A, measurement image B, and measurement image C are acquired in this order. The image selecting unit 190 calculates the proportion of the region common to measurement image A and measurement image C. When the proportion is larger than a predetermined threshold, the image selecting unit 190 determines that the measurement image acquired between the 2 measurement images used for calculating the proportion can be thinned out. For example, when the proportion of the region common to measurement image A and measurement image C is larger than the predetermined threshold, the image selecting unit 190 determines that measurement image B can be thinned out. A specific example will be described with reference to fig. 13.

The measurement images acquired from the image pickup element 28 include a measurement image I301, a measurement image I302, and a measurement image I303 shown in fig. 13. The measurement image I302 is acquired from the image pickup element 28 after the measurement image I301, and the measurement image I303 is acquired from the image pickup element 28 after the measurement image I302.

The image selecting unit 190 calculates the area (the number of pixels) of the region R311 common to the measurement image I301 and the measurement image I303. Next, the image selecting unit 190 calculates the ratio of the area of the region R311 to the total number of pixels of the image. The total number of pixels is the number of horizontal pixels multiplied by the number of vertical pixels. The image selecting unit 190 compares the calculated ratio with a predetermined threshold. When the calculated ratio is larger than the predetermined threshold, the image selecting unit 190 determines that the measurement image I302 can be thinned out. When the calculated ratio is smaller than the predetermined threshold, the image selecting unit 190 determines that the measurement image I302 is not to be thinned out. By performing the above-described processing, the endoscope apparatus 1 can reduce the number of measurement images used for SfM, thereby shortening the processing time.
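The ratio test can be sketched as follows. The sketch represents each image's coverage of the subject as a set of pixel coordinates, so the common region is simply the intersection; in the apparatus the common region would instead be found by image registration, which is outside this sketch:

```python
def can_thin_middle(footprint_a, footprint_c, total_pixels, threshold):
    """Return True when the middle image (e.g. I302) may be thinned out.

    footprint_a / footprint_c: sets of pixel coordinates covered by the
    images before and after the middle image (a simplification).
    total_pixels: horizontal pixels multiplied by vertical pixels.
    """
    common = len(footprint_a & footprint_c)     # area of region R311
    return common / total_pixels > threshold
```

For example, if 60% of the pixels of I301 are also covered by I303 and the threshold is 0.5, I302 becomes a thinning candidate.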

When the ratio of the area of the region R311 is larger than the predetermined threshold, the image selecting unit 190 may additionally calculate the following 2 ratios: the proportion of the region common to the measurement image I301 and the measurement image I302, and the proportion of the region common to the measurement image I302 and the measurement image I303. Only when all of these ratios are larger than the predetermined threshold may the image selecting unit 190 determine that the measurement image I302 can be thinned out.

There are cases where the region common to the measurement image I301 and the measurement image I303 is large while the region common to the measurement image I302 and each of the other measurement images is small. In such cases, making the determination based only on the proportion of the region R311 may cause the measurement image I302 to be thinned out inappropriately.

The effect obtained by executing the above-described additional processing will be described with reference to fig. 14. The measurement images acquired from the image pickup element 28 include a measurement image I401, a measurement image I402, and a measurement image I403 shown in fig. 14. The measurement image I402 is acquired from the image pickup element 28 after the measurement image I401, and the measurement image I403 is acquired from the image pickup element 28 after the measurement image I402.

The region R411 is the region common to the measurement image I401 and the measurement image I403. The region R413 is the region common to the measurement image I401 and the measurement image I402, and the region R412 is the region common to the measurement image I403 and the measurement image I402. The image selecting unit 190 calculates the proportion of the region R411. When the above-described additional processing is not executed, the image selecting unit 190 determines that the measurement image I402 can be thinned out merely by confirming that the proportion of the region R411 is larger than the predetermined threshold. After the measurement image I402 is thinned out, however, the regions R412 and R413 are not included in the region common to the measurement image I401 and the measurement image I403 used for SfM. Therefore, the three-dimensional shape cannot be restored by SfM in the regions R412 and R413.

When the above-described additional processing is executed, the image selecting unit 190 recognizes that the proportion of at least one of the regions R412 and R413 is smaller than the predetermined threshold, and therefore determines that the measurement image I402 is not to be thinned out. As a result, the three-dimensional shape can be restored by SfM in the regions R412 and R413.
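Under the same set-based simplification as above, the refined check requires all three pairwise overlaps to clear the threshold before the middle image is thinned out (illustrative only; actual overlap computation uses image registration):

```python
def can_thin_middle_strict(fa, fb, fc, total_pixels, threshold):
    """Thin out the middle image B only when the A-C, A-B, and B-C
    overlaps all exceed the threshold, so no region reachable only
    through B (such as R412 or R413 in fig. 14) is lost for SfM."""
    pairs = [(fa, fc), (fa, fb), (fb, fc)]
    return all(len(x & y) / total_pixels > threshold for x, y in pairs)
```

If any pair falls below the threshold, B is kept and the regions it shares with its neighbors remain available for restoration.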

The image selecting unit 190 may select at least the measurement image acquired first from the image pickup element 28 and the measurement image acquired last from the image pickup element 28. For example, the first to seventh measurement images are acquired in this order. After the seventh measurement image is acquired, an image acquisition end instruction is received from the user in step S121. The minimum number of measurement images required for SfM is 5. In this case, the image selecting unit 190 may select the first and seventh measurement images as images scheduled for use and select 3 of the second to sixth measurement images as images scheduled for use.
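One way to implement this selection is sketched below, assuming the middle images are chosen at roughly even spacing (the embodiment only requires that the first and last images are kept; the spacing rule is an assumption of this sketch):

```python
def select_with_endpoints(images, k):
    """Select k images, always including the first and last acquired
    images, with roughly evenly spaced images in between."""
    if len(images) <= k:
        return list(images)
    step = (len(images) - 1) / (k - 1)
    indices = sorted({round(i * step) for i in range(k)})
    return [images[i] for i in indices]
```

Applied to the example above, selecting 5 of 7 images always retains the first and seventh measurement images.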

It is likely that the user changes the operation mode of the endoscope apparatus 1 from the examination mode to the measurement mode when the region of interest to the user is included in the image. Therefore, there is a high possibility that a measurement point or a reference point is specified in the measurement image first acquired from the image pickup element 28. Likewise, it is likely that the user inputs the image acquisition end instruction to the operation unit 4 when the region of interest to the user is included in the image. Therefore, there is a high possibility that a measurement point or a reference point is specified in the measurement image last acquired from the image pickup element 28. The endoscope apparatus 1 can thus acquire measurement images including the region in which the user wants to set the reference distance or the region that the user has identified as the measurement target. Therefore, the inspection efficiency is improved, and the reliability of the SfM processing result is improved.

An example of the processing shown in figs. 11 and 12 will be described. Fig. 15 shows the movement of the endoscope distal end and the measurement images acquired until the image acquisition end instruction is received from the user in step S121. For example, acquisition of measurement images starts when the region R511 is captured in the image. The region R511 is a region in which the user wants to set the reference distance. In step S102, a measurement image I501 including the region R511 is acquired. After the measurement image I501 is acquired, the viewpoint of the endoscope distal end is changed in step S106 based on the angle change direction received from the user in step S105. Thereafter, a measurement image I502 is acquired in step S108. After the measurement image I502 is acquired, the viewpoint of the endoscope distal end is similarly changed in step S106, and a measurement image I503 is acquired in step S108. At this point, 3 measurement images have been acquired.

The measurement image I503 includes a region R512 that the user has identified as the measurement target. The user confirms that the region R512 is included in the measurement image I503 and inputs the image acquisition end instruction to the operation unit 4. In step S122, the main control unit 180 compares the number of acquired measurement images with the minimum number of measurement images required for SfM. The number of acquired measurement images is 3, and the minimum number required for SfM is 5. Since 2 more images are needed to execute SfM, the main control unit 180 determines in step S123 that the number of measurement images is insufficient.

Fig. 16 shows the movement of the endoscope distal end and the measurement images acquired until the minimum number of measurement images required for SfM has been acquired. The processing for acquiring measurement images after the image acquisition end instruction is received from the user will be described. The viewpoint of the endoscope distal end is changed in step S127 based on the angle change direction received from the user in step S126. For example, the user instructs the endoscope apparatus 1 with the same angle change direction as the one instructed for acquiring the measurement image I503. Thereafter, a measurement image I504 is acquired in step S129. After the measurement image I504 is acquired, the viewpoint of the endoscope distal end is similarly changed in step S127, and a measurement image I505 is acquired in step S129. When the measurement image I505 is acquired, the acquisition of the minimum number of measurement images required for SfM is complete.

Thereafter, in step S111, the three-dimensional shape restoration unit 185 performs 3D reconstruction, thereby performing SfM and restoring the three-dimensional shape of the object. In step S113, the user specifies a measurement point and a reference point. In the example shown in fig. 16, a reference point P521 and a reference point P522 are set in the measurement image I501. In the example shown in fig. 16, a measurement point P523 and a measurement point P524 are set in the measurement image I503.

After the image acquisition end instruction is received from the user, the 2 measurement images still necessary for SfM are additionally acquired. Therefore, the reliability of the SfM processing result is improved.

The endoscope apparatus 1 can acquire the minimum number of measurement images required for SfM regardless of the positional relationship between the reference point and the measurement point specified by the user. As a result, the inspection efficiency is improved, and the reliability of the processing result of SfM is improved.

(first modification of the second embodiment)

In step S126 shown in fig. 12, the angle change direction is received from the user. However, the user does not necessarily need to input the angle change direction to the operation unit 4. When the image acquisition end instruction (an instruction to execute restoration of the three-dimensional shape) is received from the user in step S121, it can be assumed that a measurement image including the region in which the user intends to specify the reference points and measurement points has already been acquired. Therefore, the endoscope apparatus 1 only needs to acquire the measurement images still missing for executing SfM. For this reason, the endoscope apparatus 1 of the first modification of the second embodiment performs control for changing the bending angle in place of the user.

Before the operation unit 4 receives the image acquisition end instruction from the user, the operation unit 4 receives from the user the angle change direction in which the imaging field of view is changed. The bending control unit 187 recognizes the angle change direction received by the operation unit 4. When the operation unit 4 receives the image acquisition end instruction from the user and the first number is smaller than the second number, the bending control unit 187 determines the second direction (angle change direction) in which the imaging field of view is changed, based on the recognized angle change direction. The first number indicates the number of measurement images acquired from the image pickup element 28. In the following example, the first number indicates the number of measurement images acquired up to the present. The second number indicates the number of measurement images required to restore the three-dimensional shape, and is at least 2. In the following example, the second number is the minimum number of measurement images required for SfM. For example, the second direction is the same as the angle change direction last received from the user. The bending control unit 187 causes the bending mechanism 11 to change the imaging field of view again along the determined second direction at the speed indicated by the first information, or to change the imaging field of view again along the determined second direction by the distance indicated by the first information. The first information indicates the speed at which the imaging field of view is changed (angle change speed) or the distance by which the imaging field of view is changed (angle change amount). After the imaging field of view is changed in the second direction, the image acquisition unit 181 acquires at least 1 measurement image from the image pickup element 28 at the timing indicated by the second information.
The second information indicates the timing of acquiring an image used for restoring the three-dimensional shape. The first information and the second information specify the image acquisition conditions in the image acquisition mode (measurement mode).

The endoscope apparatus 1 repeatedly acquires measurement images based on the image acquisition conditions until the sum of a fifth number and a sixth number reaches the second number. The fifth number indicates the number of measurement images acquired from the image pickup element 28 before the operation unit 4 receives the image acquisition end instruction from the user. The sixth number indicates the number of measurement images acquired from the image pickup element 28 after the operation unit 4 receives the image acquisition end instruction from the user. In each image acquisition after the operation unit 4 receives the image acquisition end instruction from the user, the bending control unit 187 determines the angle change direction based on the angle change direction previously received from the user or the angle change direction previously determined by the bending control unit 187.

Fig. 17 shows a procedure of processing executed instead of the processing shown in fig. 12. The description of the same processing as that shown in fig. 12 is omitted.

Step S126 shown in fig. 12 is changed to step S141 shown in fig. 17. When the main control unit 180 determines in step S123 that the variable n is smaller than the predetermined number, the bending control unit 187 determines a new angle change direction based on the angle change direction received from the user in step S105. For example, the bending control unit 187 determines the angle change direction to be used for acquiring the (n+1)th measurement image based on the angle change direction received from the user for acquiring the nth measurement image (step S141). After step S141, the process in step S127 is executed.

When the processing in step S105 is executed a plurality of times, the plurality of angle change directions received from the user may be held in the RAM 14. The bending control unit 187 may determine the new angle change direction based on the plurality of angle change directions received from the user.

When the main control unit 180 determines in step S131 that the number of acquired measurement images has not reached the predetermined number, the process in step S141 is executed. The bending control unit 187 determines a new angle change direction based on the angle change direction received from the user in step S105 or the angle change direction previously determined in step S141.

A specific example of controlling the angle change direction will be described with reference to fig. 18. Fig. 18 shows measurement images acquired from the image pickup element 28. After a measurement image I601 is acquired, the image acquisition end instruction is received from the user. The bending control unit 187 determines the angle change direction to be used for acquiring a measurement image I602 based on the angle change direction received from the user for acquiring the measurement image I601. For example, the determined angle change direction is the same as the angle change direction received from the user.

After the measurement image I602 is acquired, the bending control unit 187 determines the angle change direction to be used for acquiring a measurement image I603 based on the angle change direction used for acquiring the measurement image I602. For example, the determined angle change direction is different from the angle change direction used for acquiring the measurement image I602.

After the measurement image I603 is acquired, the bending control unit 187 determines the angle change direction to be used for acquiring a measurement image I604 based on the angle change direction used for acquiring the measurement image I603. For example, the determined angle change direction is different from the angle change direction used for acquiring the measurement image I603.

After the measurement image I604 is acquired, the bending control unit 187 determines the angle change direction to be used for acquiring a measurement image I605 based on the angle change direction used for acquiring the measurement image I604. For example, the determined angle change direction is different from the angle change direction used for acquiring the measurement image I604.

In the example shown in fig. 18, the angle change direction is rotated counterclockwise so as to acquire images of the peripheral region of the region included in the measurement image I601. Each of the 4 additionally acquired measurement images includes the central region of the measurement image I601. The method for controlling the angle change direction is not limited to the method shown in fig. 18.
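The counterclockwise control of fig. 18 can be sketched by rotating a 2D direction vector 90 degrees per acquisition (an illustration assuming y points upward; the actual bending control operates in the bending mechanism's own coordinate system):

```python
def rotate_ccw(direction):
    """Rotate an angle-change direction 90 degrees counterclockwise:
    (1, 0) -> (0, 1) -> (-1, 0) -> (0, -1) -> (1, 0)."""
    dx, dy = direction
    return (-dy, dx)

def plan_directions(first_direction, count):
    """Directions for `count` additional images, starting from the
    angle change direction last received from the user."""
    dirs, d = [], first_direction
    for _ in range(count):
        dirs.append(d)
        d = rotate_ccw(d)
    return dirs
```

With 4 additional images, the planned directions circle once around the region of the first image, matching the fig. 18 example.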

The endoscope apparatus 1 controls the angle change direction so as to acquire images of the peripheral region of the region included in a measurement image acquired before the image acquisition end instruction is received from the user. As a result, the number of measurement images including the specified points (measurement points and reference points) of interest to the user can be increased. The more images contain the same specified point, the higher the measurement accuracy. Therefore, the accuracy of the dimension measurement performed after SfM is improved.

(second modification of the second embodiment)

In a second modification of the second embodiment of the present invention, the operation unit 4 and the display unit 5 are integrated and configured as a touch panel. The user inputs the angle change direction and the destination to the operation unit 4 by operating the touch panel. The bending control unit 187 causes the bending mechanism 11 to bend the endoscope distal end until the center of the imaging field coincides with the destination.

The destination may be a temporary destination for specifying the angle change direction. For example, the bending control unit 187 causes the bending mechanism 11 to bend the endoscope distal end until the center of the imaging field of view coincides with the temporary destination. Thereafter, the bending control unit 187 causes the bending mechanism 11 to bend the endoscope distal end further while maintaining the angle change direction.

The operation unit 4 receives from the user a position within the imaging field of view in addition to the angle change direction. For example, the user touches a position on the measurement image displayed on the display unit 5. At this time, the operation unit 4 receives the position. The bending control unit 187 recognizes the position received by the operation unit 4. The first information for specifying the image acquisition conditions indicates the speed at which the imaging field of view is changed. The bending control unit 187 causes the bending mechanism 11 to change the imaging field of view along the recognized angle change direction at the speed indicated by the first information until the center of the imaging field of view coincides with the position.

The operation unit 4 may receive the angle change direction and the destination from the user at the same time. For example, the operation unit 4 receives a position on the measurement image displayed on the display unit 5, as described above. The center of the measurement image displayed on the display unit 5 coincides with the center of the current imaging field of view. Therefore, the bending control unit 187 can calculate the angle change direction from the center of the measurement image and the position on the measurement image received from the user.
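This direction calculation can be sketched as the normalized vector from the image center to the touched position (illustrative only; screen coordinates with y increasing downward are assumed):

```python
import math

def direction_to_touch(image_size, touch):
    """image_size: (width, height); touch: (x, y) on the displayed image.
    Returns a unit angle-change direction, or None when the center itself
    was touched and no direction can be derived."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    dx, dy = touch[0] - cx, touch[1] - cy
    norm = math.hypot(dx, dy)
    if norm == 0:
        return None
    return (dx / norm, dy / norm)
```

For a 100 x 100 image, touching the right edge at the vertical center yields the direction (1.0, 0.0).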

The user can thus instruct the endoscope apparatus 1 about the destination of the movement caused by bending. Therefore, the user can easily instruct the endoscope apparatus 1 about the position of the endoscope distal end at the time of acquiring a measurement image.

(third embodiment)

In the third embodiment of the present invention, the user confirms whether or not the designated points specified by the user are included in a plurality of measurement images. This confirmation is performed before SfM is executed. Before SfM, which has a large calculation load, is performed, the user can determine whether size measurement is possible at the measurement points the user wants to specify. The endoscope apparatus 1 can thus prompt the user to re-acquire measurement images at an early stage, before the dimension measurement fails.

In the third embodiment, the CPU18a shown in fig. 3 is replaced with the CPU18c shown in fig. 19. Fig. 19 shows the functional configuration of the CPU18c. The description of the structure common to fig. 3 is omitted.

The CPU18c has a region detection unit 191 in addition to the configuration shown in fig. 3. The measurement images acquired from the image pickup element 28 include 1 first image and at least 1 second image. The region detection unit 191 detects a region where the first image and the second image overlap each other. The display control unit 182 processes the first image so that the region is visually recognizable in the first image. The display control unit 182 displays the processed first image on the display unit 5.

After the first image is displayed on the display unit 5, the operation unit 4 receives, from the user, an execution instruction for restoring the three-dimensional shape. When the operation unit 4 receives the execution instruction, the three-dimensional shape restoration unit 185 restores the three-dimensional shape of the subject.

The measurement process in the third embodiment will be described with reference to fig. 20. Fig. 20 shows a procedure of the measurement processing. The explanation of the same processing as that shown in fig. 6 is omitted.

When the main control portion 180 determines in step S110 that the number of acquired measurement images has reached the predetermined number, the main control portion 180 determines whether or not size measurement using the acquired measurement images is possible (step S151). The user judges whether or not size measurement can be performed at the positions that the user wants to designate as specified points. Specifically, the user determines whether or not each position that the user wants to designate is included in at least 2 measurement images. The user inputs the determination result to the operation unit 4. The operation unit 4 receives the determination result from the user. The determination result input to the operation unit 4 is input to the CPU18c via the control interface 17. Based on the determination result input by the user, the main control section 180 determines in step S151 whether or not size measurement using the acquired measurement images is possible.

When the main control section 180 determines in step S151 that the size measurement using the acquired measurement image is possible, the process in step S111 is executed. If the main control section 180 determines in step S151 that the size measurement using the acquired measurement image cannot be performed, the measurement process ends. In this case, the operation mode of the endoscope apparatus 1 is changed from the measurement mode to the examination mode. The user sets the composition and the imaging conditions again, and performs the operation for shifting the operation mode of the endoscope apparatus 1 from the examination mode to the measurement mode again. After that, the endoscope apparatus 1 again performs the process for acquiring the measurement image.

If the main control unit 180 determines in step S151 that the size measurement using the acquired measurement image cannot be performed, the endoscope apparatus 1 may execute the same processing as that shown in fig. 12. This enables the endoscope apparatus 1 to additionally acquire a measurement image.

Details of step S151 will be described with reference to fig. 21. Fig. 21 shows the determination process executed in step S151.

Next, an example of a case where 5 measurement images shown in fig. 22 are acquired will be described. A measurement image I701 is initially acquired. The measurement image I701 includes a region where the user wants to specify the measurement point P721 and the measurement point P722. In addition, the measurement image I701 includes a region where the user wants to specify the reference point P723 and the reference point P724. The respective positions of the 4 designated points that the user wants to designate are shown in fig. 22. The 4 designated points include 2 reference points and 2 measurement points. When the processing in step S151 is executed, the 4 designated points shown in fig. 22 have not been designated.

After changing the imaging field of view along the direction D711, the measurement image I702 is acquired. After changing the imaging field of view along the direction D712, the measurement image I703 is acquired. After changing the imaging field of view along the direction D713, a measurement image I704 is acquired. After the imaging field of view is further changed in the direction D713, a measurement image I705 is acquired.

The region detection unit 191 generates image pairs using the acquired 2 or more measurement images (step S1511). Each image pair includes 2 measurement images different from each other, arbitrarily selected from the acquired 2 or more measurement images. Every acquired measurement image is included in at least one image pair. For example, 1 of the acquired 2 or more measurement images is defined as a first image, and each measurement image other than the first image is defined as a second image. An image pair includes 1 first image and 1 second image, and every second image is included in some image pair together with the first image. When the first image of one image pair is the same as the second image of another image pair, and the second image of the former is the same as the first image of the latter, the two image pairs are merged into one. For example, when 5 measurement images are acquired, 10 image pairs are generated.
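The pair generation above amounts to enumerating all unordered pairs of distinct images, which gives C(5, 2) = 10 pairs for 5 images. A minimal sketch (the image identifiers are hypothetical labels, not data structures of the apparatus):

```python
from itertools import combinations

def make_image_pairs(images):
    """All unordered pairs of distinct measurement images. Because
    combinations() never emits both (A, B) and (B, A), mirrored
    pairs are already merged, as described above."""
    return list(combinations(images, 2))

# Labels standing in for the 5 acquired measurement images.
pairs = make_image_pairs(["I701", "I702", "I703", "I704", "I705"])
```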

After step S1511, the region detection unit 191 selects one image pair and detects the region overlapping between the 2 measurement images included in the selected pair. That is, the region detection unit 191 detects the region overlapping between the first image and the second image (step S1512). Hereinafter, a region overlapping between 2 measurement images is defined as an overlap region.

After step S1512, the region detection unit 191 determines whether all image pairs have been processed. That is, the region detection unit 191 determines whether or not the processing in step S1512 is performed for all the image pairs (step S1513). In the case where there is a pair of images for which the processing in step S1512 is not performed, the processing in step S1512 is performed using the pair of images.

For example, in step S1512, the region detection unit 191 detects the overlap region R12 between the measurement image I701 and the measurement image I702 shown in fig. 23. Similarly, the region detection unit 191 detects the overlap region R13 between the measurement image I701 and the measurement image I703, the overlap region R14 between the measurement image I701 and the measurement image I704, and the overlap region R15 between the measurement image I701 and the measurement image I705. The region detection unit 191 repeats the same process until all the image pairs have been processed.
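Overlap detection between an image pair can be sketched under a strong simplification: if the field-of-view change is approximated as a pure translation, each image footprint is a shifted rectangle and the overlap region is a rectangle intersection. The function name, the coordinates, and the translation-only assumption are hypothetical; the actual apparatus may instead use image registration.

```python
def overlap_region(rect_a, rect_b):
    """Intersection of two axis-aligned rectangles (x0, y0, x1, y1).
    Returns the overlap rectangle, or None when the two images
    share no area."""
    x0 = max(rect_a[0], rect_b[0])
    y0 = max(rect_a[1], rect_b[1])
    x1 = min(rect_a[2], rect_b[2])
    y1 = min(rect_a[3], rect_b[3])
    if x0 >= x1 or y0 >= y1:
        return None
    return (x0, y0, x1, y1)

# I701 at the origin; I702 shifted 60 pixels along direction D711.
r12 = overlap_region((0, 0, 100, 100), (60, 0, 160, 100))
```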

When the region detection unit 191 determines in step S1513 that all the image pairs have been processed, the display control unit 182 calculates, for each acquired measurement image, the logical OR of the regions in which that image overlaps the other measurement images. The display control unit 182 superimposes the region corresponding to the calculated logical OR on each measurement image and displays the measurement images on the display unit 5 (step S1514).

For example, in the above example, the display control unit 182 calculates the logical OR for the image pairs that include the measurement image I701. Specifically, the display control unit 182 calculates the logical OR of the overlap region R12, the overlap region R13, the overlap region R14, and the overlap region R15. Similarly, the display control unit 182 calculates the logical OR for the image pairs that include the measurement image I702, the measurement image I703, the measurement image I704, and the measurement image I705, respectively.
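The per-image logical OR can be sketched with boolean masks: each overlap region contributes a mask, and the union of the masks marks every pixel of the image that is shared with at least one other measurement image. The image size and the rectangle values below are hypothetical, not taken from the figures.

```python
import numpy as np

H, W = 100, 100  # hypothetical measurement-image size

def rect_mask(rect):
    """Boolean mask of an overlap rectangle inside the image frame."""
    m = np.zeros((H, W), dtype=bool)
    if rect is not None:
        x0, y0, x1, y1 = rect
        m[y0:y1, x0:x1] = True
    return m

# Overlap regions of one image with each other image; None means the
# corresponding image pair shares no area.
overlaps = [(60, 0, 100, 100), (0, 40, 100, 100), None, (0, 0, 50, 50)]
union = np.zeros((H, W), dtype=bool)
for rect in overlaps:
    union |= rect_mask(rect)   # pixelwise logical OR
# `union` is the region superimposed on the image in step S1514.
```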

Fig. 24 shows each measurement image displayed on the display unit 5 in step S1514. Each measurement image is a thumbnail image. The region R731 of the measurement image I701, the region R732 of the measurement image I702, the region R733 of the measurement image I703, the region R734 of the measurement image I704, and the region R735 of the measurement image I705 correspond to the logical OR calculated in step S1514. For example, for the measurement image I701, the first image is the measurement image I701, and the second images are the measurement image I702, the measurement image I703, the measurement image I704, and the measurement image I705. The region R731 is the region where the measurement image I701 overlaps the other measurement images. The 5 measurement images are displayed simultaneously on the display unit 5, but they may instead be displayed sequentially.

The display control unit 182 performs image processing for visually distinguishing the region corresponding to the logical OR from the other regions in each measurement image. For example, the display control unit 182 assigns a specific color to the region corresponding to the logical OR. The specific color may be a color different from that of the subject. The method of processing the measurement image is not limited to this; any method may be used as long as the region corresponding to the logical OR can be visually distinguished from the other regions. The display control unit 182 processes the measurement image I701 so that the region R731 is visually recognizable, and performs the same processing on the measurement image I702, the measurement image I703, the measurement image I704, and the measurement image I705. The display control unit 182 outputs each processed measurement image to the video signal processing circuit 12. The video signal processing circuit 12 outputs each measurement image to the display unit 5. The display unit 5 displays each measurement image.

Fig. 24 shows the respective positions of the measurement point P721, the measurement point P722, the reference point P723, and the reference point P724 that the user wants to specify. These designated points have not yet been set when each measurement image is displayed on the display unit 5. Therefore, the icons of these designated points are not displayed. When each measurement image is displayed on the display unit 5, the user envisions, on each measurement image, the designated points shown in fig. 24.

After step S1514, the user observes each measurement image displayed on the display unit 5 and confirms whether or not all the designated points that the user wants to specify are included in the overlap regions. The user determines whether or not size measurement is possible and inputs the determination result to the operation unit 4. The operation unit 4 receives the determination result from the user. The determination result input to the operation unit 4 is input to the CPU18c via the control interface 17. The main control section 180 determines whether or not size measurement is possible based on the determination result input by the user (step S1515).

When all the positions at which the user plans to specify designated points are included in the overlap regions of the measurement images, the user determines that size measurement is possible. That is, when every designated point that the user wants to specify is included in at least 2 measurement images, the user determines that size measurement is possible. When at least one of those positions is not included in any overlap region, the user determines that size measurement cannot be performed. That is, when at least one designated point that the user wants to specify is included in only 1 measurement image, the user determines that size measurement cannot be performed. When the user determines that size measurement is possible, the determination result input to the operation unit 4 serves as an execution instruction for performing SfM and restoring the three-dimensional shape.

In the example shown in fig. 24, the measurement point P721 is included in the measurement image I701, the measurement image I702, and the measurement image I704. The measurement point P722 is included in the measurement image I701, the measurement image I702, the measurement image I703, and the measurement image I704. The reference point P723 is included in the measurement image I701, the measurement image I703, and the measurement image I704. The reference point P724 is included in the measurement image I701, the measurement image I703, and the measurement image I704, similarly to the reference point P723.

Each designated point that the user wants to designate is included in at least 2 of the acquired 5 measurement images. Therefore, the user can determine that the size measurement at the designated point which the user wants to designate can be performed. When at least one designated point is included in only 1 measurement image, the user can determine that the size measurement cannot be performed at the designated point that the user wants to designate.
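The decision rule above reduces to counting, for each intended designated point, the measurement images that contain it. A minimal sketch, using hypothetical point-to-image membership sets that mirror the situations of fig. 24 and fig. 26:

```python
def size_measurement_possible(point_membership):
    """point_membership maps each intended designated point to the
    set of measurement images containing it. Size measurement is
    possible only when every point appears in at least 2 images."""
    return all(len(images) >= 2 for images in point_membership.values())

# Situation of fig. 24: every designated point is in >= 2 images.
ok = size_measurement_possible({
    "P721": {"I701", "I702", "I704"},
    "P722": {"I701", "I702", "I703", "I704"},
    "P723": {"I701", "I703", "I704"},
    "P724": {"I701", "I703", "I704"},
})
# Situation of fig. 26: P823 appears only in I801.
ng = size_measurement_possible({
    "P821": {"I801", "I802", "I803"},
    "P823": {"I801"},
})
```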

In the case where the designated point is included in at least 2 measurement images, the designated point is included in the overlap area. That is, the designated point is included in at least one of the region R731, the region R732, the region R733, and the region R734. In the example shown in fig. 24, all the designated points are included in at least one of the region R731, the region R732, the region R733, and the region R734. That is, all the designated points are included in the overlap region. Therefore, the endoscope apparatus 1 can perform the size measurement at the designated point which the user wants to designate based on the 5 measurement images shown in fig. 24.

When the main control unit 180 determines in step S1515 that the size measurement is possible, the process in step S111 is executed. In this case, the three-dimensional shape restoration unit 185 performs 3D reconstruction based on an execution instruction input by the user. If the main control unit 180 determines in step S1515 that the size measurement cannot be performed, the measurement process ends.

Next, an example of a case where 5 measurement images shown in fig. 25 are acquired will be described. A measurement image I801 is initially acquired. The measurement image I801 includes areas where the user wants to specify the measurement point P821 and the measurement point P822. In addition, the measurement image I801 includes an area in which the user wants to specify the reference point P823 and the reference point P824. When the processing in step S151 is executed, 2 reference points and 2 measurement points shown in fig. 25 have not been set yet.

After the imaging field of view is changed along the direction D811, a measurement image I802 is acquired. After the imaging field of view is changed along the direction D812, a measurement image I803 is acquired. After the imaging field of view is changed along the direction D813, a measurement image I804 is acquired. After the imaging field of view is further changed along the direction D813, a measurement image I805 is acquired.

Fig. 26 shows each measurement image displayed on the display unit 5 in step S1514. Each measurement image is a thumbnail image. The region R831 of the measurement image I801, the region R832 of the measurement image I802, the region R833 of the measurement image I803, the region R834 of the measurement image I804, and the region R835 of the measurement image I805 correspond to the logical OR calculated in step S1514.

In the example shown in fig. 26, the measurement point P821 is included in the measurement image I801, the measurement image I802, and the measurement image I803. The measurement point P822 is included in the measurement image I801, the measurement image I802, the measurement image I803, and the measurement image I804. The reference point P823 is included only in the measurement image I801. The reference point P824 is included in the measurement image I801 and the measurement image I803.

The reference point P823 is included in only one measurement image I801. Therefore, the reference point P823 is not included in the region R831 of the measurement image I801. The user can determine that the size measurement is not possible.

In the examples shown in fig. 24 and 26, thumbnail images are displayed. However, the images displayed on the display unit 5 do not have to be thumbnail images. Any display method may be used as long as the overlap of the regions can be conveyed to the user.

The endoscope apparatus 1 detects, for each image pair, the region overlapping between the 2 measurement images, and displays each measurement image with the region superimposed on the display unit 5. Therefore, the user can easily determine whether or not size measurement using the measurement points and reference points that the user wants to specify is possible. In the measurement processing shown in fig. 20 and 21, the user can determine whether or not size measurement is possible before SfM, which has a large calculation load, is executed. When at least one of the 2 or more designated points that the user wants to specify is included in only 1 measurement image, the endoscope apparatus 1 can prompt the user to re-acquire measurement images at an earlier stage.

The method of displaying the region overlapping between 2 measurement images is not limited to coloring the region; any method may be used as long as the user can recognize the region. For example, a line surrounding the region may be displayed. The color of the region superimposed on the measurement image is not limited to one color. For example, measurement accuracy improves in a region shared by 3 or more measurement images. Therefore, the endoscope apparatus 1 may display such a region in a color different from that of a region shared by only 2 measurement images.

The endoscope apparatus 1 may instead have the CPU18a shown in fig. 3. In this case, the display control unit 182 displays all of the 2 or more measurement images acquired from the image pickup element 28 on the display unit 5. The display control unit 182 does not need to superimpose on each measurement image the region overlapping between 2 measurement images. The endoscope apparatus 1 displays the measurement images to assist the user in determining whether or not size measurement is possible, but does not need to provide any other assistance. The user observes the measurement images displayed on the display unit 5 and confirms whether or not all the designated points that the user wants to specify are included in 2 or more measurement images. The user determines whether or not size measurement is possible and inputs the determination result to the operation unit 4. The subsequent processing is the same as described above.

(fourth embodiment)

In the third embodiment, the endoscope apparatus 1 makes visible the region overlapping between 2 measurement images, that is, the region in which size measurement can be performed, and the user determines whether or not the designated points the user wants to specify are included in the visible region. In the present invention, however, the entity that determines whether or not size measurement is possible is not limited to the user. In the fourth embodiment of the present invention, the endoscope apparatus 1 determines whether or not size measurement is possible based on the acquired 2 or more measurement images.

It is difficult for the apparatus to determine, using only the measurement images and without information input by the user, whether or not size measurement can be performed at the points the user wants to specify. Therefore, such a case is excluded from the fourth embodiment.

The endoscope apparatus 1 of the fourth embodiment has a CPU18a shown in fig. 3. The measurement images acquired from the image pickup element 28 include 1 first image and at least 1 second image. The main control section 180 determines whether or not a designated point designated by the user in the first image is included in the second image. When the main control section 180 determines that the designated point is included in the second image, the three-dimensional shape restoration section 185 restores the three-dimensional shape of the subject.

The measurement process of the fourth embodiment includes the process shown in fig. 20. The processing shown in fig. 21 is changed to the processing shown in fig. 27. Details of step S151 will be described with reference to fig. 27. Fig. 27 shows the determination process executed in step S151.

The display control unit 182 displays at least 1 representative image on the display unit 5 (step S1516). The representative image may be any image as long as it is an image acquired from the image pickup element 28 as a measurement image. The display control unit 182 may display a plurality of representative images on the display unit 5.

After step S1516, the user operates the operation unit 4 to input position information of a designated point in the representative image (step S1517). The designated point is a measurement point or a reference point. The operation unit 4 receives the position information from the user. The position information input to the operation unit 4 is input to the CPU18a via the control interface 17. The main control section 180 recognizes the designated point received from the user based on the position information. The main control section 180 determines whether or not the designated point received from the user is included in a measurement image other than the representative image. That is, the main control section 180 determines whether or not the designated point specified by the user in the representative image (first image) is included in a measurement image (second image) other than the representative image. The main control section 180 thereby determines whether or not size measurement is possible (step S1518).

In step S1518, the main control section 180 executes processing for detecting a point similar to the designated point in each measurement image other than the representative image. When a point similar to the designated point is successfully detected, the main control section 180 determines that the designated point is included in a measurement image other than the representative image. When no similar point is detected, the main control section 180 determines that the designated point is not included in any measurement image other than the representative image.

In step S1518, when the main control portion 180 determines that the designated point is not included in the measurement images other than the representative image, at least one designated point is included in only 1 measurement image. Therefore, the main control section 180 determines that the dimension measurement cannot be performed. In this case, the measurement process is ended, and the operation mode of the endoscope apparatus 1 is changed from the measurement mode to the examination mode. The user sets the composition and the imaging conditions again, and performs the operation for shifting the operation mode of the endoscope apparatus 1 from the examination mode to the measurement mode again. After that, the endoscope apparatus 1 again performs the process for acquiring the measurement image. If the main control section 180 determines in step S1518 that the designated point is not included in the measurement image other than the representative image, the endoscope apparatus 1 may execute the same processing as that shown in fig. 12.

A specific process for detecting a point similar to the designated point will be described with reference to fig. 28. Fig. 28 shows an example of 5 measurement images acquired from the image pickup device 28. A measurement image I901, a measurement image I902, a measurement image I903, a measurement image I904, and a measurement image I905 are acquired from the image pickup element 28. The measurement image I901 is a representative image and is displayed on the display unit 5.

For example, in step S1517, a designated point P911 on the measurement image I901 is received from the user. Upon receiving the designated point P911, the main control unit 180 searches each of the measurement images I902, I903, I904, and I905 for a point similar to the designated point P911. For example, in one applicable search method, the feature quantity of the designated point P911 is described as a multidimensional vector. In this search method, the coordinates of the point whose feature vector most closely matches the multidimensional vector of the designated point P911 are determined as the similar point. In each measurement image, when the degree of coincidence between the point most similar to the designated point P911 and the designated point P911 is equal to or less than a predetermined threshold value, it is determined that no point similar to the designated point P911 exists in that measurement image.
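The search method above can be sketched as a nearest-neighbor match on feature vectors with a rejection threshold. The descriptor values, the use of cosine similarity, and the threshold value are all hypothetical illustrations; the patent does not fix a particular feature quantity or metric.

```python
import numpy as np

def find_similar_point(query_vec, candidate_vecs, candidate_coords,
                       threshold=0.8):
    """Return the coordinates of the candidate whose feature vector
    is most similar to the query (cosine similarity), or None when
    even the best match is at or below the threshold."""
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1,
                                        keepdims=True)
    scores = c @ q                 # cosine similarity to each candidate
    best = int(np.argmax(scores))
    if scores[best] <= threshold:  # no sufficiently similar point
        return None
    return candidate_coords[best]

query = np.array([1.0, 0.0, 1.0])       # descriptor of the designated point
cands = np.array([[0.9, 0.1, 1.1],      # close to the query
                  [0.0, 1.0, 0.0]])     # dissimilar
coords = [(120, 80), (30, 200)]         # candidate positions in the image
match = find_similar_point(query, cands, coords)
```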

The acquisition viewpoints of the plurality of acquired measurement images differ from one another. The macroscopic movement of the image between 2 measurement images therefore follows prescribed rules. In other words, there is a constraint that the positional relationship of the subject on a macroscopic scale does not change between 2 measurement images. For example, the whole subject translates between the 2 measurement images, or the magnification of the image changes. The endoscope apparatus 1 can also exploit this constraint to search for a point similar to the designated point. The search method described above is one specific example; the search method is not limited to it, and any search method may be used as long as a point similar to the designated point can be found.

In the example shown in fig. 28, a point P911a similar to the specified point P911 is detected in the measurement image I902, and a point P911b similar to the specified point P911 is detected in the measurement image I904. After the user inputs the designated point P911, the designated point P912, the designated point P913, and the designated point P914 are sequentially input.

The main control portion 180 searches for points similar to the respective designated points of the designated point P912, the designated point P913, and the designated point P914 in the respective measurement images of the measurement image I902, the measurement image I903, the measurement image I904, and the measurement image I905. In the example shown in fig. 28, a point P912a similar to the specified point P912 is detected in the measurement image I902, and a point P912b similar to the specified point P912 is detected in the measurement image I904. A point P913a similar to the specified point P913 is detected in the measurement image I903, and a point P913b similar to the specified point P913 is detected in the measurement image I904. A point P914a similar to the specified point P914 is detected in the measurement image I903, and a point P914b similar to the specified point P914 is detected in the measurement image I904. In the example shown in fig. 28, all of 2 or more designated points designated by the user in the representative image are included in the measurement image other than the representative image. That is, 2 or more designated points are all included in at least 2 measurement images.

When the user has finished inputting designated points, the user inputs an input end instruction to the operation unit 4. The operation unit 4 receives the input end instruction from the user. The input end instruction input to the operation unit 4 is input to the CPU18a via the control interface 17. When the main control unit 180 determines in step S1518 that the designated point is included in a measurement image other than the representative image, the main control unit 180 determines whether or not an input end instruction has been accepted from the user. The main control unit 180 thereby determines whether the input of all the designated points is completed (step S1519).

In step S1519, when the main control portion 180 determines that the input end instruction has not been accepted from the user, the input of the designated point is not completed. In this case, the process in step S1517 is executed.

In step S1519, when the main control portion 180 determines that the input end instruction has been accepted from the user, the input of all the designated points is completed. In this case, all the designated points are included in at least 2 measurement images. Therefore, the main control section 180 determines that the size measurement is possible. In this case, the process in step S111 is executed.

The process shown in fig. 27 includes a step (S1517) in which the user inputs a specified point. Therefore, step S114 shown in fig. 20 may be omitted.

A measurement image used in a previous measurement process may also be used as the representative image. Such a representative image includes designated points set based on position information previously received from the user. The main control section 180 determines whether or not each designated point set in the representative image is included in a measurement image other than the representative image. When every designated point is included in at least 1 measurement image other than the representative image, the main control section 180 determines that size measurement is possible. When at least one designated point is included only in the representative image, the main control section 180 determines that size measurement cannot be performed.

The endoscope apparatus 1 determines whether or not a designated point specified by the user is included in a measurement image other than the representative image. The endoscope apparatus 1 can thereby determine whether or not size measurement using the measurement points and reference points that the user wants to specify is possible. In the measurement processing shown in fig. 20 and 27, whether or not size measurement is possible can be determined before SfM, which has a large calculation load, is executed. When at least one of the 2 or more designated points that the user wants to specify is included in only 1 measurement image, the endoscope apparatus 1 can prompt the user to re-acquire measurement images at an earlier stage.

The endoscope apparatus 1 can execute the processing in step S1517 instead of executing the processing in step S114 shown in fig. 20. In this case, since only the order of processing is changed, an increase in the load of the entire processing is suppressed.

Although the preferred embodiments of the present invention have been described above, the present invention is not limited to these embodiments and modifications thereof. Additions, omissions, substitutions, and other modifications can be made to the structure without departing from the spirit of the invention. The present invention is not limited by the foregoing description, but is only limited by the scope of the appended claims.
