Image processing apparatus

Document No.: 1268627  Publication date: 2020-08-25

Note: This technology, "Image processing apparatus" (图像处理设备), was designed and created by 松田一 and filed on 2020-02-12. The present invention relates to an image processing apparatus, and aims to miniaturize the lighting device while satisfying the requirements of the angle characteristics of the liquid crystal panel. The light source (31b) is arranged above the outer end side of the liquid crystal panel (31d) located on the radially outer side of the illumination housing. The relative position between the liquid crystal panel (31d) and the light source (31b) is set so that the light emitted from the light source (31b) is incident within an effective angle range of the liquid crystal panel (31d).

1. An image processing apparatus for measuring a height of a measurement object, the image processing apparatus comprising:

a lighting device, comprising: an illumination housing having an opening formed in the center thereof; a light source that is provided at an upper portion within the illumination housing and emits diffused light; a liquid crystal panel which is provided in the illumination housing in a state of being separated downward from the light source, and on which light emitted from the light source is incident; and a light projection control section for controlling the light source and the liquid crystal panel so that different pattern light is projected from the liquid crystal panel onto the measurement object a plurality of times;

an imaging device configured to receive, of the plurality of pattern lights projected from the liquid crystal panel via the opening of the illumination housing, light reflected from the measurement object, and to generate a pattern image set;

an inspection target image generation unit configured to generate, based on the pattern image set generated by the imaging device, an inspection target image including information on the height of the measurement object in a central axis direction of the illumination device; and

an inspection unit configured to execute an inspection process based on the inspection target image generated by the inspection target image generation unit,

wherein the light source is arranged above an outer end portion side of the liquid crystal panel on a radially outer side of the illumination housing, and a relative position between the liquid crystal panel and the light source is set so that light emitted from the light source is incident on the liquid crystal panel within an effective angle range of the liquid crystal panel.

2. The image processing apparatus according to claim 1,

the effective angle range of the liquid crystal panel is an angle range capable of ensuring that the contrast of the pattern light is equal to or greater than a predetermined value.

3. The image processing apparatus according to claim 1,

the effective angle range of the liquid crystal panel is the following angle range: when light emitted from the light source passes through the liquid crystal panel in a state where the liquid crystal molecules are aligned so that light passes most easily, the attenuation rate of the light is 10% or less.

4. The image processing apparatus according to claim 1,

the driving method of the liquid crystal panel is a twisted nematic method.

5. The image processing apparatus according to claim 4,

the relative position between the liquid crystal panel and the light source is set such that: an angle formed by a normal line of a display surface of the liquid crystal panel drawn from the light source toward the display surface and an outer broken line drawn from the light source toward a boundary on the outer end side in an effective angle range of the liquid crystal panel is 10 ° or less.

6. The image processing apparatus according to claim 4,

the relative position between the liquid crystal panel and the light source is set such that: an angle formed by a normal line of a display surface of the liquid crystal panel drawn from the light source toward the display surface and an inner dotted line drawn from the light source toward a boundary of an inner end portion side located radially inside the illumination housing in an effective angle range of the liquid crystal panel is 50 ° or less.

7. The image processing apparatus according to claim 1,

the distance between the light source and the surface of the liquid crystal panel is substantially equal to the width of the effective angular range of the liquid crystal panel.

8. The image processing apparatus according to claim 1,

the light source includes a first light source and a second light source that are provided apart from each other in a circumferential direction of the opening portion in the illumination housing,

the liquid crystal panel includes a first liquid crystal panel arranged to correspond to the first light source such that light emitted from the first light source is incident within an effective angle range, and a second liquid crystal panel arranged to correspond to the second light source such that light emitted from the second light source is incident within an effective angle range, and

the light projection control section is configured to control the first light source, the second light source, the first liquid crystal panel, and the second liquid crystal panel so that different pattern light is projected from the liquid crystal panel onto the measurement object a plurality of times.

9. The image processing apparatus according to claim 8,

the display surface of the first liquid crystal panel and the display surface of the second liquid crystal panel are provided so as to be positioned on the same plane perpendicular to the center axis of the opening of the illumination housing.

Technical Field

The present invention relates to an image processing apparatus including an illumination device for illuminating a measurement object and an imaging device for receiving light reflected from the measurement object, and particularly relates to a configuration capable of acquiring a three-dimensional shape of the measurement object.

Background

Conventionally, such image processing apparatuses are known to use a so-called pattern projection method, which projects pattern light whose light intensity distribution differs according to position onto a measurement object, receives light reflected from the measurement object, and acquires the three-dimensional shape of the measurement object using height information obtained based on the amount of received light. One pattern projection method is the phase shift method, in which pattern light whose illuminance distribution changes sinusoidally is projected multiple times at different phases and an image is captured each time.

Japanese Patent No. 4,011,561 discloses an illumination device for generating and projecting pattern light, wherein the illumination device comprises: a light source; a condensing lens for condensing light emitted from the light source; and a liquid crystal panel on which the light condensed by the condensing lens is incident, and the illumination device is configured to project a pattern formed on the liquid crystal panel onto a measurement object.

In the case of using a liquid crystal panel, when light is incident parallel to the normal line of the display surface of the liquid crystal panel, the light transmittance becomes the highest. The larger the angle formed between the normal line of the display surface and the incident direction of light, the lower the light transmittance.

In short, the liquid crystal panel has an angular characteristic in which the light transmittance changes according to the incident direction of light. Therefore, when pattern light is generated using the liquid crystal panel, there is a risk that luminance unevenness corresponding to the position of the pattern light will occur.

Therefore, in general, as disclosed in Japanese Patent No. 4,011,561, a light source and a liquid crystal panel are positioned such that the incident direction of light emitted from the light source is parallel to the normal line of the display surface of the liquid crystal panel.

In addition, at a site where a plurality of measurement objects are conveyed, it is desirable to shorten the time for acquiring the three-dimensional shape of each measurement object as much as possible. However, in the pattern projection method, a plurality of types of pattern light must be sequentially projected onto one measurement object, so the time required to acquire the three-dimensional shape tends to be long. It is therefore conceivable to use a TN (twisted nematic) liquid crystal panel, whose response speed is high, to switch the plurality of types of pattern light at high speed. However, in the TN system, the variation in light transmittance with the incident direction of light is large. As a result, the above-described luminance unevenness becomes conspicuous, and the positional relationship between the incident direction of light and the liquid crystal panel must be defined more strictly.

Therefore, in the case of using a liquid crystal panel, it is necessary to arrange the light source and the liquid crystal panel so that the incident direction of light is parallel to the normal line of the display surface of the liquid crystal panel, and to mount the illumination device in a state where the liquid crystal panel is tilted with respect to the measurement object, as in Japanese Patent No. 4,011,561. In addition, since a condensing lens needs to be disposed between the light source and the liquid crystal panel to condense light, it is difficult to miniaturize the lighting device. Further, in the case where pattern light is to be projected onto the measurement object from a plurality of directions to reduce the unmeasured area, the size of the illumination device increases further.

Disclosure of Invention

The present invention has been made in view of the above, and an object of the present invention is to miniaturize an illumination device while satisfying the requirement of the angular characteristic of a liquid crystal panel.

To achieve the object, in a first invention, an image processing apparatus for measuring a height of a measurement object comprises: a lighting device, comprising: an illumination housing having an opening formed in the center thereof; a light source that is provided at an upper portion within the illumination housing and emits diffused light; a liquid crystal panel which is provided in the illumination housing in a state of being separated downward from the light source, and on which light emitted from the light source is incident; and a light projection control section for controlling the light source and the liquid crystal panel so that different pattern light is projected from the liquid crystal panel onto the measurement object a plurality of times; an imaging device configured to receive, of the plurality of pattern lights projected from the liquid crystal panel via the opening of the illumination housing, light reflected from the measurement object, and to generate a pattern image set; an inspection target image generation unit configured to generate, based on the pattern image set generated by the imaging device, an inspection target image including information on the height of the measurement object in a central axis direction of the illumination device; and an inspection section for performing an inspection process based on the inspection target image generated by the inspection target image generation section, wherein the light source is arranged above an outer end portion side of the liquid crystal panel located on a radially outer side of the illumination housing, and a relative position between the liquid crystal panel and the light source is set so that light emitted from the light source is incident on the liquid crystal panel within an effective angle range of the liquid crystal panel.

According to this configuration, when diffused light emitted from the light source is incident on the liquid crystal panel on which a pattern is formed, pattern light is generated and projected onto the measurement object. The light emitted from the light source is diffused light, and the diffused light is incident on the liquid crystal panel within the effective angle range of the liquid crystal panel. Therefore, luminance unevenness corresponding to the position of the pattern light due to the angular characteristic of the liquid crystal panel is less likely to occur. Moreover, since this configuration requires no light collecting structure using a condensing lens, the illumination device can be kept small even when pattern light is projected onto the measurement object from a plurality of directions.

The image pickup device receives reflected light from a measurement object among pattern light projected from the liquid crystal panel, and generates a plurality of pattern images. The inspection target image generation unit generates an inspection target image including height information of the measurement target based on the pattern image generated by the imaging device, and the inspection unit executes an inspection process based on the inspection target image.

In the second invention, the effective angular range of the liquid crystal panel is an angular range capable of ensuring that the contrast of the pattern light is equal to or greater than a predetermined value.

That is, due to the angular characteristics of the liquid crystal panel, the greater the angle formed between the normal line of the display surface of the liquid crystal panel and the incident direction of light, the lower the light transmittance. Therefore, it is considered that the contrast of the pattern light is lowered according to the incident angle of the light. In the present structure, the relative position between the liquid crystal panel and the light source may be set so that the contrast of the pattern light is equal to or greater than a predetermined value, that is, equal to or greater than the contrast at which the height information of the measurement object can be obtained based on the pattern image captured by the image capturing device.

In the third invention, the effective angle range of the liquid crystal panel is the following angle range: when light emitted from the light source passes through the liquid crystal panel in a state where the liquid crystal molecules are aligned so that light passes most easily, the attenuation rate of the light is 10% or less.

According to this configuration, by setting the relative position between the liquid crystal panel and the light source so that the attenuation rate of light is 10% or less, it is possible to ensure a contrast that enables height information of the measurement object to be obtained based on the pattern image captured by the imaging device. The liquid crystal molecule alignment state through which light most easily passes refers to a state in which the alignment of the liquid crystal molecules is in a direction that hardly blocks the optical path, and this can be created by a voltage applied to the liquid crystal panel. The effective angle range of the liquid crystal panel may be set so that the attenuation rate is 5% or less.

In the fourth invention, the method of driving the liquid crystal panel is a Twisted Nematic (TN) method.

That is, by using the TN method as the driving method of the liquid crystal panel, a plurality of types of pattern light can be switched at high speed when projected sequentially. However, the light transmittance of a TN panel changes greatly depending on the incident direction of light, so luminance unevenness corresponding to the position of the pattern light could occur due to the angular characteristics of the liquid crystal panel. In this structure, since light is incident within the effective angle range, luminance unevenness corresponding to the position of the pattern light is less likely to occur even though the variation in light transmittance with the incident direction of light on the liquid crystal panel is large.

In the fifth invention, the relative position between the liquid crystal panel and the light source is set such that: an angle formed by a normal line of a display surface of the liquid crystal panel drawn from the light source toward the display surface and an outer broken line drawn from the light source toward a boundary on the outer end side in an effective angle range of the liquid crystal panel is 10 ° or less.

According to this structure, the range of angles at which light emitted from the light source is incident on the liquid crystal panel is the optimum range of the TN liquid crystal panel, and as a result, it is possible to ensure that the contrast of the pattern light is equal to or greater than a predetermined value. The angle formed by the normal line and the outer broken line may be 5 ° or less, and may also be 0 °.

In the sixth invention, the relative position between the liquid crystal panel and the light source is set such that: an angle formed by a normal line of a display surface of the liquid crystal panel drawn from the light source toward the display surface and an inner dotted line drawn from the light source toward a boundary of an inner end portion side located radially inside the illumination housing in an effective angle range of the liquid crystal panel is 50 ° or less.

According to this structure, the range of angles at which light emitted from the light source is incident on the liquid crystal panel is the optimum range of the TN liquid crystal panel, and as a result, it is possible to ensure that the contrast of the pattern light is equal to or greater than a predetermined value. The angle formed by the normal line and the inner dotted line may be 45° or less, and may also be 40° or less.
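To make the geometry of the fifth and sixth inventions concrete, the following is a minimal sketch (not part of the patent text; all names and the sample dimensions are assumptions) that checks whether a given light source position satisfies both angle bounds, with the light source assumed to sit directly above the outer end of the panel:

```python
import math

def incidence_angles(led_height, led_x, outer_x, inner_x):
    """Angles (degrees from the display-surface normal) at which light
    from a point source at (led_x, led_height) strikes the outer end
    (outer_x) and inner end (inner_x) of the liquid crystal panel."""
    outer = math.degrees(math.atan2(abs(outer_x - led_x), led_height))
    inner = math.degrees(math.atan2(abs(inner_x - led_x), led_height))
    return outer, inner

# Hypothetical layout: LED 20 mm above the outer end of a 20 mm wide panel.
outer, inner = incidence_angles(led_height=20.0, led_x=0.0, outer_x=0.0, inner_x=20.0)
print(outer, inner)                     # 0.0 and 45.0 degrees
print(outer <= 10.0 and inner <= 50.0)  # True: both bounds are met
```

With the source directly above the outer end, the outer angle is 0°, which trivially satisfies the 10° bound; the 50° inner bound then limits how wide the panel may be relative to the source height.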

In the seventh invention, a distance between the light source and the surface of the liquid crystal panel is substantially equal to a width of an effective angular range of the liquid crystal panel.

In the eighth invention, the light source includes a first light source and a second light source which are provided apart from each other in a circumferential direction of the opening portion in the illumination housing, the liquid crystal panel includes a first liquid crystal panel and a second liquid crystal panel, the first liquid crystal panel being arranged to correspond to the first light source such that light emitted from the first light source is incident within an effective angle range, and the second liquid crystal panel is arranged to correspond to the second light source such that light emitted from the second light source is incident within an effective angle range, and the light projection control section is configured to control the first light source, the second light source, the first liquid crystal panel, and the second liquid crystal panel so that different pattern light is projected from the liquid crystal panel onto the measurement object a plurality of times.

According to this structure, pattern light can be projected onto the measurement object from different directions through the first liquid crystal panel and the second liquid crystal panel. Therefore, for example, the pattern light can be projected by the second liquid crystal panel onto a portion that becomes a shadow in the projection direction of the pattern light from the first liquid crystal panel, and the unmeasured area is reduced.

In the ninth aspect of the invention, the display surface of the first liquid crystal panel and the display surface of the second liquid crystal panel are provided so as to be positioned on the same plane perpendicular to the center axis of the opening of the illumination housing.

According to this configuration, the display surface of the first liquid crystal panel and the display surface of the second liquid crystal panel are located on the same plane perpendicular to the central axis of the opening of the illumination housing, and therefore, a structure capable of projecting pattern light onto the measurement object from a plurality of directions can be formed compactly.

According to the present invention, the relative position between the liquid crystal panel and the light source is set so that diffused light emitted from the light source is incident on the liquid crystal panel within an effective angle range. Accordingly, it is possible to miniaturize the lighting device and improve the degree of freedom of mounting the lighting device while reducing the occurrence of luminance unevenness corresponding to the position of the pattern light due to the angular characteristic of the liquid crystal panel.

Drawings

FIG. 1 is a diagram showing an example of a system configuration of an image processing apparatus according to an embodiment of the present invention;

FIG. 2 is a block diagram of a controller section;

FIG. 3 is a block diagram of an image pickup apparatus;

FIG. 4 is a plan view of the lighting device;

FIG. 5 is a bottom view of a lighting device according to a second embodiment;

FIG. 6 is a sectional view taken along line VI-VI in FIG. 5;

FIG. 7 is a diagram showing a positional relationship between the light emitting diode and the liquid crystal panel;

FIG. 8 is a diagram showing a positional relationship between a light emitting diode and a liquid crystal panel according to a modification;

FIG. 9 is a diagram illustrating the principle of pattern light generation;

FIG. 10 is a diagram showing an example of the arrangement of first light emitting diodes;

FIG. 11 is a graph showing the relationship between pixel waviness and the ratio of the size of the light emitting diode to a half cycle of the wave;

FIG. 12 is a diagram showing another arrangement example of the first light emitting diodes;

FIG. 13 is a block diagram of the lighting device;

FIG. 14 is a diagram showing a process of generating an intermediate image and a reliability image from a pattern image set;

FIG. 15 is a diagram illustrating, using example images, a process of generating an intermediate image and a reliability image from a pattern image set;

FIG. 16 is a diagram showing the relationship between relative phase and absolute phase, and the formation process of the Gray code pattern and the phase shift pattern;

FIG. 17 is a diagram illustrating a height measuring method using a height measuring section;

FIG. 18 is a diagram illustrating the flow of the correction processing;

FIGS. 19A to 19C are diagrams showing the positions of the light emitting diodes before and after correction;

FIG. 20 is a diagram showing the presence or absence of tilt;

FIG. 21 is a schematic view showing a case where a height deviation occurs in a pair of light emitting diodes;

FIG. 22 is a diagram illustrating a flow of estimating the deviation and adjusting each unit;

FIG. 23 is a diagram illustrating a method of estimating the deviation;

FIG. 24 is a conceptual diagram of camera calibration;

FIGS. 25A and 25B are schematic diagrams showing a state in which camera parameters are set by changing the height of the measurement object;

FIG. 26 is a diagram showing mathematical formulas of a camera parameter matrix and a distortion model;

FIG. 27 is a flowchart showing an estimation process;

FIG. 28 is a side view showing a configuration example of the first light projecting part having an adjustment mechanism;

FIG. 29 is a plan view showing a configuration example of the first light projecting part having the adjustment mechanism;

FIG. 30 is a perspective view showing a configuration example of a first light projecting part having another adjustment mechanism;

FIGS. 31A and 31B are diagrams illustrating use of a section before and after a change;

FIG. 32 is a diagram showing a phase shift pattern image set obtained by projecting pattern light with the first light projecting part in the case where the measurement object is a rectangular parallelepiped box;

FIG. 33 is a diagram showing relative phase images obtained based on the phase shift pattern image set shown in FIG. 32 and the corresponding intermediate images;

FIG. 34 is a diagram showing a phase shift pattern image set obtained by projecting pattern light with the second light projecting part in the case where the measurement object is a rectangular parallelepiped box;

FIG. 35 is a diagram showing relative phase images obtained based on the phase shift pattern image set shown in FIG. 34 and the corresponding intermediate images; and

FIG. 36 is a diagram showing a user interface displaying a height image and a cross-sectional profile in the case where the measurement object is a rectangular parallelepiped box.

Detailed Description

Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following description of the preferred embodiment(s) is merely exemplary in nature and is in no way intended to limit the invention, its application, or uses.

Fig. 1 is a diagram showing an example of the system configuration of an image processing apparatus 1 according to an embodiment of the present invention. The image processing apparatus 1 includes an imaging device 2, an illumination device 3, a controller section 4, a display section 5, a console section 6, and a mouse 7, and is configured to be able to obtain a height image of a measurement object W, to measure the height of the measurement object W based on the height image, and to perform various inspections on the measurement object W.

The measurement object W is placed on the placement surface 100 of a conveying device such as a belt conveyor, and height measurement, various inspections, and the like are performed on the measurement object W placed there. During the height measurement, the measurement object W preferably remains stationary.

The image processing apparatus 1 may be connected to a Programmable Logic Controller (PLC) 101 in a wired manner through a signal line 101a. However, the connection is not limited thereto; a conventionally known communication module may be incorporated in the image processing apparatus 1 and the PLC 101 so that the two are connected wirelessly. The PLC 101 is a control device for performing sequence control of the conveying device and the image processing apparatus 1, and a general-purpose PLC may be used. The image processing apparatus 1 can also be used without being connected to the PLC 101.

The display section 5 is a display device including, for example, a liquid crystal display panel, and constitutes a display unit. The display section 5 may display, for example, an operation user interface for operating the image processing apparatus 1, a setting user interface for setting the image processing apparatus 1, a height measurement result display user interface for displaying a height measurement result of the measurement target, an inspection result display user interface for displaying various inspection results of the measurement target, and the like. By visually recognizing the display section 5, the user of the image processing apparatus 1 can perform operations and settings of the image processing apparatus 1, can also grasp measurement results, inspection results, and the like of the measurement object W, and can also grasp the operation state of the image processing apparatus 1.

As shown in fig. 2, the display section 5 is connected to a display control section 46 included in the controller section 4, and is configured to be controllable by the display control section 46 to display the above-described user interface, height image, and the like.

The console section 6 is an input unit through which a user operates the image processing apparatus 1 and inputs various information, and the console section 6 is connected to the controller section 4. The mouse 7 is also an input unit by which the user operates the image processing apparatus 1 and inputs various information, and the mouse 7 is also connected to the controller section 4. The console section 6 and the mouse 7 are merely examples of the input unit, and the input unit may be, for example, a touch panel screen or the like provided in the display section 5, or a voice input device, or a configuration in which a plurality of such input units are combined. In the case of a touch panel screen, the display unit and the input unit may be implemented by one device.

The controller section 4 may be connected to a general-purpose personal computer PC for generating and storing a control program of the controller section 4. Further, an image processing program for performing various settings related to image processing may be installed in the personal computer PC, and various settings of image processing performed by the controller section 4 may be performed. Alternatively, a processing sequence program defining the processing sequence of image processing may be generated by software running on the personal computer PC. The image processing is sequentially executed at the controller section 4 according to the processing order. The personal computer PC and the controller section 4 are connected via a communication network, and a processing sequence program generated on the personal computer PC is transmitted to the controller section 4 together with, for example, layout information defining a display mode of the display section 5. Conversely, it is also possible to read a processing sequence program, layout information, and the like from the controller section 4 and edit them on the personal computer PC. The program may be generated not only in the personal computer PC but also in the controller section 4.

The controller unit 4 may be constructed by dedicated hardware. However, the present invention is not limited to this structure. For example, it may be a general-purpose personal computer PC or workstation that is installed with a dedicated image processing program, inspection processing program, height measurement program, and the like and functions as a controller section. In this case, it is only necessary to connect the image pickup device 2, the illumination device 3, the display section 5, the console section 6, and the mouse 7 to a personal computer PC or a workstation.

Although the functions of the image processing apparatus 1 will be described later, all the functions of the image processing apparatus 1 may be realized by the controller section 4 or by a general-purpose personal computer PC. It is also possible to realize a part of the functions of the image processing apparatus 1 by the controller section 4 and the remaining functions by a general-purpose personal computer PC. The functions of the image processing apparatus 1 may be implemented by software or by a combination of software and hardware.

The interface for connecting the image pickup apparatus 2, the illumination apparatus 3, the display section 5, the console section 6, and the mouse 7 to the controller section 4 may be a dedicated interface, and for example, existing communication standards such as Ethernet (product name), USB, and RS-232C may also be used.

The height image representing the height of the measurement object W is an image representing the height of the measurement object W in the direction of the central axis a (shown in fig. 1) of the opening portion 30a of the illumination device 3 shown in fig. 4, and may be referred to as a distance image. The height image may be displayed as a height with reference to a placement surface (also referred to as a reference surface) 100 of the measurement object W or as a relative distance in the direction of the central axis a with respect to the illumination device 3, and is an image in which the gradation value of each pixel changes according to the height. In other words, the height image may be referred to as an image in which a gradation value is determined based on the height with respect to the placement surface 100 of the measurement object W as a reference, and may also be referred to as an image in which a gradation value is determined based on a relative distance in the direction of the central axis a with respect to the lighting device 3. Further, the height image may be referred to as a multi-valued image having a gradation value corresponding to a height with respect to the placement surface 100 of the measurement object W as a reference, and may also be referred to as a multi-valued image having a gradation value corresponding to a relative distance in the direction of the central axis a with respect to the illumination device 3. Further, the height image may be referred to as a multivalued image in which the height with the placement surface 100 of the measurement object W as a reference has been converted into a gradation value for each pixel of the luminance image, and may also be referred to as a multivalued image in which the relative distance in the direction of the central axis a with respect to the illumination device 3 has been converted into a gradation value for each pixel of the luminance image.

Further, the height image is an image including height information of the measurement object W. For example, a three-dimensional composite image in which an optical luminance image has been synthesized as texture information and attached to a distance image may be used as the height image. The height image is not limited to an image displayed in three-dimensional form, and also includes an image displayed in two-dimensional form.
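As a concrete illustration of the gradation mapping described above, here is a minimal sketch (illustrative only; the 8-bit range, clipping behavior, and function name are assumptions, not part of the patent):

```python
import numpy as np

def height_to_grayscale(height_map, z_min, z_max):
    """Convert a height map (e.g., mm above the placement surface 100)
    into an 8-bit gradation image; heights are clipped to [z_min, z_max]."""
    clipped = np.clip(height_map, z_min, z_max)
    # Linear mapping: z_min -> 0, z_max -> 255.
    return ((clipped - z_min) / (z_max - z_min) * 255.0).astype(np.uint8)

# Example: a 2x2 height map with heights of 0, 5, 10 and 20 mm.
h = np.array([[0.0, 5.0], [10.0, 20.0]])
print(height_to_grayscale(h, z_min=0.0, z_max=20.0))
# [[  0  63]
#  [127 255]]
```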

As a method for obtaining the height image as described above, there are generally two types. One type of method is a passive method (passive measurement method) of generating a distance image using an image captured under an illumination condition for obtaining a normal image, and another type of method is an active method (active measurement method) of generating a distance image by actively irradiating a measurement object W with light for measurement in the height direction. The present embodiment uses an active method to obtain a height image, and specifically uses a pattern projection method.

The pattern projection method is a method of obtaining a three-dimensional shape of a measurement object W by shifting the shape and phase of a pattern of measurement pattern light (also simply referred to as pattern light) projected onto the measurement object W to acquire a plurality of images and analyzing the acquired plurality of images. There are several types of pattern projection methods. For example, there is a phase shift method of shifting the phase of a sine wave stripe pattern to acquire a plurality of (at least three) images, obtaining the phase of a sine wave for each pixel from the plurality of images, and obtaining the three-dimensional coordinates of the surface of the measurement object W using the obtained phases. There is also a spatial coding method of causing a pattern projected onto a measurement object W to differ for each imaging, sequentially projecting, for example, stripe-like patterns in which a stripe width with a black-and-white duty ratio of 50% becomes smaller from 1/2, 1/4, 1/8, 1/16, … of the entire width, acquiring a pattern projection image for each pattern, and obtaining an absolute value of the height of the measurement object W. The sine pattern light and the stripe pattern light are pattern lights having a periodic illuminance distribution that changes in one-dimensional direction. Note that "projecting" the measurement pattern light onto the measurement object W is synonymous with "irradiating" the measurement object W with the measurement pattern light.

In the image processing apparatus 1 according to the present embodiment, the height image is generated by combining the phase shift method and the spatial encoding method described above. However, this is not limited thereto, and the height image may be generated only by the phase shift method or only by the spatial encoding method. In addition, other conventionally known active methods may be used to generate the height image of the measurement object W.

A method for measuring the height of the measurement object W with the image processing apparatus 1 is summarized as follows. First, the measurement object W is irradiated from different directions with the first measurement pattern light and the second measurement pattern light generated by the first light projecting part 31 and the second light projecting part 32 of the illumination device 3, respectively. The imaging device 2 receives the first measurement pattern light reflected from the measurement object W and generates a first pattern image set including a plurality of first pattern images, and the imaging device 2 also receives the second measurement pattern light reflected from the measurement object W and generates a second pattern image set including a plurality of second pattern images. Then, a first angle image in which each pixel has irradiation angle information of the first measurement pattern light to the measurement object W is generated based on the plurality of first pattern images, and a second angle image in which each pixel has irradiation angle information of the second measurement pattern light to the measurement object W is generated based on the plurality of second pattern images. Next, a height image representing the height of the measurement object W is generated from the irradiation angle information of each pixel of the first angle image, the irradiation angle information of each pixel of the second angle image, and the relative position information of the first light projecting part 31 and the second light projecting part 32, and the height of the measurement object W is obtained from the height image.
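The final step, combining the irradiation angle information from the two angle images with the relative positions of the light projecting parts, is a plain triangulation. Here is a minimal sketch (illustrative only; it assumes both light projecting parts sit at a common height L above the placement surface and that angles are measured from the vertical in the X-Z plane, which is a simplification of the actual geometry):

```python
import math

def height_from_angles(theta1, theta2, x1, x2, L):
    """Triangulate the height z of a surface point from the irradiation
    angles theta1 and theta2 (radians from the vertical) of the first and
    second light projecting parts, located at horizontal positions x1 < x2
    and a common height L above the placement surface."""
    t1, t2 = math.tan(theta1), math.tan(theta2)
    # The two illumination rays intersect where
    # x1 + (L - z) * t1 == x2 - (L - z) * t2.
    return L - (x2 - x1) / (t1 + t2)

# Hypothetical numbers: projectors 200 mm apart, 300 mm above the stage,
# both rays inclined 30 degrees toward each other.
print(height_from_angles(math.radians(30), math.radians(30), 0.0, 200.0, 300.0))
# ~126.8 mm
```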

Although not necessary, as shown in fig. 4, in the image processing apparatus 1, the illumination device 3 includes a third light projecting part 33 and a fourth light projecting part 34 in addition to the first light projecting part 31 and the second light projecting part 32. As a result, the measurement object W can also be irradiated with the third measurement pattern light and the fourth measurement pattern light generated by the third light projecting part 33 and the fourth light projecting part 34 of the illumination device 3, respectively, from different directions. In this case, the image pickup device 2 receives the third measurement pattern light reflected from the measurement object W and generates a third pattern image set including a plurality of third pattern images, and the image pickup device 2 also receives the fourth measurement pattern light reflected from the measurement object W and generates a fourth pattern image set including a plurality of fourth pattern images. Then, a third angle image in which each pixel has irradiation angle information of a third measurement pattern light to the measurement object W is generated based on the plurality of third pattern images, and a fourth angle image in which each pixel has irradiation angle information of a fourth measurement pattern light to the measurement object W is generated based on the plurality of fourth pattern images. Next, a height image representing the height of the measurement object W is generated from the irradiation angle information of each pixel of the third angle image, the irradiation angle information of each pixel of the fourth angle image, and the relative position information of the third light projecting part 33 and the fourth light projecting part 34, and the height of the measurement object W is obtained from the height image.

(phase shift method)

Here, the phase shift method will be explained. In the phase shift method, pattern light having an illuminance distribution that changes sinusoidally is sequentially projected onto the measurement object W using three or more patterns with different sine wave phases. For each pattern, an image is captured from an angle different from the projection direction of the pattern light, the lightness values at a height measurement point are obtained from these images, and the phase value of the pattern light at that point is calculated from the lightness values. The phase of the pattern light projected onto the measurement point changes according to the height of the measurement point, so light is observed with a phase different from that of the pattern light reflected at the reference position. Therefore, the phase of the pattern light at the measurement point is calculated, and the height of the measurement point is measured by substituting the phase into a geometric relational expression based on the principle of triangulation. Thus, the three-dimensional shape of the measurement object W can be obtained. According to the phase shift method, the height of the measurement object W can be measured with high resolution by reducing the period of the pattern light. However, the range of heights that can be measured without ambiguity is limited to height differences corresponding to a phase shift of 2π. Therefore, a spatial coding method is used in combination.
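As an illustration of the per-pixel phase calculation, here is a minimal sketch using the common 4-step variant with phase offsets of 0, π/2, π and 3π/2 (this specific formula is a standard choice, not taken from the patent):

```python
import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase (-pi, pi] of the projected sinusoid at each pixel,
    from four images i0..i3 captured with pattern phase offsets of
    0, pi/2, pi and 3*pi/2. Each argument is a 2-D intensity array."""
    return np.arctan2(i3 - i1, i0 - i2)

def modulation(i0, i1, i2, i3):
    """Sinusoid amplitude per pixel; low values indicate unreliable
    pixels (shadow, saturation) that should be excluded."""
    return 0.5 * np.sqrt((i3 - i1) ** 2 + (i0 - i2) ** 2)
```

With intensities modeled as I_k = A + B·cos(φ + kπ/2), the differences i3 − i1 and i0 − i2 equal 2B·sin φ and 2B·cos φ, so the arctangent recovers the wrapped phase φ and the amplitude B serves as a reliability measure.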

(spatial coding method)

According to the spatial coding method, a space irradiated with light may be divided into a plurality of small spaces having substantially fan-shaped sections, and a series of spatial code numbers may be assigned to the small spaces. For this reason, even if the height of the measurement object W is large, that is, even if the height difference is large, the height can be calculated from the space code number as long as the height is within the space irradiated with light. Therefore, the shape of the measurement object W can be measured for the entire measurement object W having a large height. According to the spatial encoding method, the range of the allowable height (dynamic range) becomes wide.
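The space code number for a pixel can be recovered by thresholding each captured stripe image to one bit and decoding the resulting bit sequence. A minimal sketch follows (the use of Gray codes is the usual choice for spatial coding and is assumed here, not stated in this passage):

```python
def decode_gray(bits):
    """Decode a sequence of Gray-code bits (coarsest stripe first),
    observed as the on/off state of one pixel across the projected
    stripe patterns, into its space code number."""
    value = bits[0]
    code = value
    for b in bits[1:]:
        value ^= b              # Gray -> binary: cumulative XOR
        code = (code << 1) | value
    return code

# Example: observed bits (1, 1, 0, 1) -> binary 1001 -> space code 9.
print(decode_gray((1, 1, 0, 1)))  # 9
```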

(detailed construction of the Lighting device 3)

As shown in fig. 4, the illumination device 3 according to the first embodiment includes an illumination housing 30, a first light projecting part 31, a second light projecting part 32, a third light projecting part 33, a fourth light projecting part 34, and a light projection control part 39. As shown in fig. 1, the lighting device 3 and the controller 4 are connected by a connection line 3 a. However, the lighting device 3 and the controller portion 4 may be connected wirelessly.

The illumination device 3 may be a device dedicated to pattern light projection for projecting only pattern light, or may be a device that doubles as an observation illumination for observing the measurement object W. In the case where the illumination device 3 is a device dedicated to pattern light projection, the illumination device for observation may be provided separately from the device dedicated to pattern light projection or integrally with the device dedicated to pattern light projection. As the observation lighting device, a light emitting diode, a semiconductor laser, a halogen lamp, an HID, or the like can be used.

The illumination housing 30 has an opening portion 30a at its center portion in plan view, and a first side portion 30A, a second side portion 30B, a third side portion 30C, and a fourth side portion 30D are formed continuously in a shape that is nearly rectangular in plan view. Since the first side portion 30A, the second side portion 30B, the third side portion 30C, and the fourth side portion 30D extend linearly, the opening portion 30a also has a nearly rectangular shape in plan view.

The outer shape of the illumination housing 30 and the shape of the opening portion 30a are not limited to those shown in the drawings, and may be, for example, circular or the like. The central axis a of the opening 30a shown in fig. 1 is an axis passing through the center of the opening 30a and extending in a direction perpendicular to the lower surface of the illumination housing 30. When the illumination device 3 is attached such that the lower surface of the illumination housing 30 is horizontal, the central axis a of the opening 30a extends vertically. However, the illumination device 3 may be mounted such that the lower face of the illumination housing 30 is inclined, and in this case, the central axis a of the opening portion 30a is inclined.

The central axis a of the opening 30a need not pass strictly through the center of the opening 30a. Although this depends on the size of the measurement object W and the like, an axis passing through a portion up to approximately several millimeters away from the center of the opening 30a may serve as the central axis a. That is, an axis passing through the center of the opening portion 30a or the vicinity of the center of the opening portion 30a may be defined as the central axis a. The extension line of the central axis a intersects the placement surface 100.

In the following description, for convenience, as shown in fig. 4, the first side portion 30A side is referred to as a left side of the lighting device 3, the second side portion 30B side is referred to as a right side of the lighting device 3, the third side portion 30C side is referred to as an upper side of the lighting device 3, and the fourth side portion 30D side is referred to as a lower side of the lighting device 3. However, this does not specify the direction of the lighting device 3 when in use, and the lighting device 3 may be in any direction when in use.

The first, second, third and fourth sides 30A, 30B, 30C, 30D of the illumination case 30 are hollow. The first light projecting part 31 is accommodated inside the first side part 30A. The second, third, and fourth light projecting parts 32, 33, and 34 are accommodated inside the second, third, and fourth side parts 30B, 30C, and 30D, respectively. The first light projecting part 31 and the second light projecting part 32 are a pair, and the third light projecting part 33 and the fourth light projecting part 34 are a pair. Further, the light projection control unit 39 is also accommodated in the illumination housing 30.

The first side portion 30A and the second side portion 30B are arranged to face each other in a state of sandwiching the central axis a, and thus the first light projecting portion 31 and the second light projecting portion 32 are arranged to be bilaterally symmetrical (point-symmetrical) with the central axis a as a center of symmetry, and the first light projecting portion 31 and the second light projecting portion 32 are separated from each other in a circumferential direction of the central axis a.

Further, the third and fourth sides 30C and 30D are also arranged to face each other in a state of sandwiching the center axis a, and thus the third and fourth light projecting parts 33 and 34 are arranged to be vertically symmetrical (point-symmetrical) with the center axis a as a center of symmetry, and the third and fourth light projecting parts 33 and 34 are separated from each other in the circumferential direction of the center axis a. In a plan view, the four light projecting parts are arranged in a clockwise direction around the central axis a in the order of the first light projecting part 31, the third light projecting part 33, the second light projecting part 32, and the fourth light projecting part 34.

Fig. 5 is a bottom view of a lighting device 3 according to a second embodiment of the invention. In the second embodiment, eight light projecting parts 31 to 38 are provided inside the illumination housing 30. A fifth light projecting part 35 is provided between the first light projecting part 31 and the third light projecting part 33, a sixth light projecting part 36 is provided between the second light projecting part 32 and the fourth light projecting part 34, a seventh light projecting part 37 is provided between the second light projecting part 32 and the third light projecting part 33, and an eighth light projecting part 38 is provided between the first light projecting part 31 and the fourth light projecting part 34. The fifth light projecting part 35 and the sixth light projecting part 36 are a pair, and are arranged to be symmetrical with the central axis a as a center of symmetry. The seventh light projecting part 37 and the eighth light projecting part 38 are a pair, and are arranged to be symmetrical with the central axis a as a symmetry center. In a plan view, the first light projecting part 31, the fifth light projecting part 35, the third light projecting part 33, the seventh light projecting part 37, the second light projecting part 32, the sixth light projecting part 36, the fourth light projecting part 34, and the eighth light projecting part 38 are arranged in a clockwise direction around the central axis a in this order.

In the illumination device 3 of the second embodiment, an observation illumination 50 for illuminating and observing the measurement object W is provided separately from the light projecting parts 31 to 38. The observation illumination 50 is provided on the outer periphery of the bottom of the illumination case 30, and is formed in a ring shape surrounding the first to eighth light projecting portions 31 to 38. As shown in fig. 6, the observation illumination 50 includes a substrate 51a, a plurality of observation light emitting diodes 51b mounted on the substrate 51a and emitting observation illumination light, and a cover member 52. The substrate 51a is disposed to surround the first to eighth light projecting parts 31 to 38. The plurality of observation light emitting diodes 51b are provided at intervals in the circumferential direction so as to surround the first to eighth light projecting portions 31 to 38. The cover member 52 is provided to cover the observation light emitting diodes 51b from the light emitting surface side, and is made of a light-transmitting material that diffuses light.

(Structure of first light projecting part 31)

The first to fourth light projecting parts 31 to 34 of the illumination device 3 of the first embodiment are the same as the first to fourth light projecting parts 31 to 34 of the illumination device 3 of the second embodiment.

As shown in fig. 6, the first light projecting part 31 includes: a housing 31a; a first LED (light emitting diode) 31b serving as a first light source that emits diffused light; and a first LCD (first pattern light generating section) 31d that receives the diffused light emitted from the first LED 31b, sequentially generates a plurality of first measurement pattern lights having different patterns, and irradiates the measurement object W with the first measurement pattern lights. The LCD is a liquid crystal display (i.e., a liquid crystal panel), and thus the first LCD 31d is a first liquid crystal panel. The light source is not limited to a light emitting diode, and may be any light emitter that emits diffused light.

As shown in fig. 7, the first LCD 31d is arranged to correspond to the first LED 31b, and the light emitting surface of the first LED 31b faces the first LCD 31d. This ensures that the light emitted from the first LED 31b is incident on the first LCD 31d. As the light emitting diode, for example, a white light emitting diode can be used. The first LCD 31d is disposed such that its display surface (emission surface) lies on a plane perpendicular to the central axis a of the opening portion 30a of the illumination housing 30 (this plane is indicated by a broken line 200 in fig. 6).

The second light projecting part 32 is configured in the same manner as the first light projecting part 31. Specifically, as shown in fig. 6, the second light projecting part 32 includes a case 32a, second LEDs 32b mounted on a substrate 32c, and second LCDs (second pattern light generating parts) 32d arranged corresponding to the second LEDs 32 b. The first LED31b and the second LED32b are paired. The first LED31b and the second LED32b are attached to the illumination housing 30 so that the relative positions of the two can be corrected. Details of the correction of the relative position will be described later.

The structure of the first light projecting part 31 will be described in detail below. As shown in figs. 6 and 7, the first light projecting part 31 includes a plurality of first LEDs 31b, which are provided at an upper portion within the illumination housing 30. The arrangement direction of the first LEDs 31b intersects the light emission direction.

That is, a substrate 31c is disposed in an upper portion of the interior of the case 31a. A plurality of first LEDs 31b are mounted on the downward-facing surface of the substrate 31c. The plurality of first LEDs 31b may be arranged in a line, or adjacent first LEDs 31b may be displaced in the vertical direction. In the case of generating pattern light having a periodic illuminance distribution that changes in one dimension, the first LEDs 31b are arranged along the direction in which the illuminance of the pattern light does not change. By arranging the plurality of first LEDs 31b along the longitudinal direction of the first side portion 30A of the illumination housing 30 shown in fig. 4, the light emitted from the first LEDs 31b becomes substantially continuous in the longitudinal direction of the first side portion 30A.

The light emitting directions of the plurality of first LEDs 31b may be the same. In the present embodiment, as indicated by the diagonal lines at the lower left in fig. 1, light is emitted so as to reach from directly below the first LEDs 31b to at least the second side portion 30B side (the right side of the illumination device 3) beyond the central axis a of the opening portion 30a of the illumination housing 30. The light irradiation range of the plurality of first LEDs 31b is set to be wider than the imaging field of view of the imaging device 2.

The light emission range of the plurality of first LEDs 31b will be specifically described. As shown in fig. 17, the separation direction between the first light projecting part 31 and the second light projecting part 32 is the X direction, and the vertical direction is the Z direction. The illumination device 3 is arranged so that its lower face is horizontal (parallel to the placement surface 100), at a normalized distance of 1 above the placement surface 100 of the measurement object W. The point where X = 0 is directly below the first LED 31b, and a straight line D extending from the first LED 31b to a point C at (X, Z) = (0, 0) is drawn. Further, a straight line F extending from the first LED 31b to a point E at (X, Z) = (1, 0) is drawn. The direction of the first LED 31b is set, and the light source lens of the first LED 31b is designed, so that the light emission range of the plurality of first LEDs 31b is the region sandwiched between the straight line D and the straight line F.
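In this normalized geometry, the emission range between the lines D and F corresponds to a half-angle measured from the vertical. A minimal sketch of that computation (illustrative only; the function name is an assumption):

```python
import math

def emission_half_angle(led_height, x_reach):
    """Angle, in degrees from the vertical line D, of the line F joining
    the LED at height led_height to the point E = (x_reach, 0) on the
    placement surface."""
    return math.degrees(math.atan2(x_reach, led_height))

# Normalized geometry from the text: LED at height 1, point E at (1, 0).
print(emission_half_angle(1.0, 1.0))  # 45.0 degrees
```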

As shown in fig. 6, the first LCD 31d is provided in the illumination housing 30 in a state of being separated downward from the first LED 31b. The driving method of the first LCD 31d is the TN (twisted nematic) method. Therefore, when the voltage applied to the first LCD 31d is 0, the liquid crystal composition (liquid crystal molecules) is aligned parallel to the display surface and passes the light of the first LED 31b. As the voltage rises from this state, the liquid crystal molecules stand up perpendicular to the display surface, and when the voltage reaches the maximum voltage, the light of the first LED 31b is blocked.

(Relative position between LED and LCD)

The relative positional relationship between the first LED 31b and the first LCD 31d will be explained. As shown in fig. 9, the display surface of the first LCD 31d is located on the same plane as the plane 200 perpendicular to the central axis a of the opening 30a of the illumination housing 30; the end of the first LCD 31d on the radially outer side of the illumination housing 30 is the outer end side, while the end on the radially inner side is the inner end side. The first LED 31b is arranged above the outer end side of the first LCD 31d. Reference numeral SEG in fig. 9 denotes a segment included in the first LCD 31d. The black segments SEG are segments that do not transmit light, and the white segments SEG are segments that transmit light. By alternately forming black segments SEG and white segments SEG on the display surface of the first LCD 31d, the light transmitted through the first LCD 31d forms sinusoidal pattern light whose intensity varies periodically. This is the principle of pattern light generation. The number, interval, and formation position of the black and white segments SEG may be changed arbitrarily, so a plurality of types of pattern light having different wave periods and phases can be generated.
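A minimal sketch of how the on/off segment states could be generated for stripe patterns of different periods and phases (illustrative only; the 50% duty cycle follows the description here, while the segment count and function name are assumptions). The binary segment pattern is blurred into an approximately sinusoidal illuminance distribution by the diffuse source:

```python
import numpy as np

def segment_mask(n_segments, period, phase):
    """On (1, transmits) / off (0, blocks) state of each LCD segment for
    one stripe pattern: within each period, the first half of the
    segments transmit light; 'phase' shifts the stripes in segments."""
    idx = np.arange(n_segments)
    return (((idx + phase) % period) < period // 2).astype(np.uint8)

# 16 segments, period 8: three phase shifts of the same stripe pattern.
for p in (0, 2, 4):
    print(segment_mask(16, 8, p))
# [1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0]
# [1 1 0 0 0 0 1 1 1 1 0 0 0 0 1 1]
# [0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1]
```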

The number of black segments SEG may be set to be the same as the number of white segments SEG. As a result, when pattern light is generated, regions that transmit light and regions that do not transmit light, each having the same width, are formed alternately on the first LCD31d. This control is performed by a light projection control section 39, which will be described later.

Here, general properties of the liquid crystal panel will be explained. When light is incident parallel to the normal line of the display surface of the liquid crystal panel, the light transmittance becomes highest. The larger the angle formed between the normal line of the display surface and the incident direction of light, the lower the light transmittance. Since the liquid crystal panel has an angular characteristic in which the light transmittance changes according to the incident direction of light, there is a risk that luminance unevenness corresponding to the position of pattern light will occur when the pattern light is generated using the liquid crystal panel.

In this regard, in the present embodiment, the relative position between the first LCD31d and the first LED31b is set such that the light emitted from the first LED31b is incident on the first LCD31d within the effective angular range of the first LCD31 d. Since the diffused light emitted from the first LED31b is incident on the first LCD31d within the effective angular range of the first LCD31d, luminance unevenness corresponding to the position of the pattern light due to the angular characteristic of the first LCD31d is less likely to occur.

The effective angular range of the first LCD31d may be an angular range capable of ensuring that the contrast of the pattern light is equal to or greater than a predetermined value. The contrast of the pattern light being equal to or greater than the predetermined value means a contrast level at which, when the imaging device 2 receives the pattern light reflected from the measurement object W, the imaging device 2 can obtain a pattern image from which an inspection object image can be generated. That is, due to the angular characteristic of the first LCD31d, the larger the angle formed between the normal line of the display surface of the first LCD31d and the incident direction of light, the lower the light transmittance, and the contrast of the pattern light is therefore considered to decrease according to the incident angle of the light. In the present embodiment, however, the relative position between the first LCD31d and the first LED31b is set so that a contrast is obtained from which the height information of the measurement object W can be derived based on the pattern image captured by the imaging device 2.

Further, the effective angular range of the first LCD31d may be defined as follows. For example, when light emitted from the first LED31b passes through the first LCD31d in a state where liquid crystal molecules through which light most easily passes are aligned, an angle range in which the attenuation rate of the attenuated light is 10% or less is an effective angle range of the first LCD31 d. By setting the relative position between the first LCD31d and the first LED31b so that the attenuation rate of light is 10% or less, it is possible to ensure the contrast with which the height information of the measurement object W can be obtained based on the pattern image captured by the imaging device 2.

The alignment state of the liquid crystal molecules through which light most easily passes means a state in which the liquid crystal molecules are aligned in a direction in which the optical path is hardly blocked. The effective angle range of the first LCD31d may also be set such that the attenuation rate is 5% or less.
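To make the criterion concrete, the following is a minimal sketch of how such an effective angle range could be derived numerically. The cosine-power fall-off used here as the panel's angular characteristic is a hypothetical stand-in; a real design would substitute the measured transmittance curve of the actual panel.

```python
import math

def effective_angle_range(transmittance, max_attenuation=0.10, step_deg=0.1):
    """Scan incidence angles and return the largest angle (in degrees) at
    which the attenuation 1 - T(theta)/T(0) stays at or below the given
    threshold. `transmittance` is any model of the angular characteristic."""
    t0 = transmittance(0.0)
    limit = 0.0
    theta = 0.0
    while theta <= 90.0:
        if 1.0 - transmittance(theta) / t0 <= max_attenuation:
            limit = theta
        else:
            break
        theta += step_deg
    return limit

# Hypothetical angular characteristic: transmittance falls off as the
# fourth power of the cosine of the incidence angle (assumed model).
model = lambda deg: math.cos(math.radians(deg)) ** 4
print(effective_angle_range(model, 0.10))  # 10% attenuation criterion
print(effective_angle_range(model, 0.05))  # stricter 5% criterion
```

Under this assumed model the 10% criterion yields a boundary near 13 degrees and the 5% criterion a correspondingly narrower range; the numbers themselves depend entirely on the panel model.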

Further, as shown in fig. 9, the relative position between the first LED31b and the first LCD31d may be set such that an angle α formed by a normal line 201 drawn from the center of the first LED31b (the center of the light emitting surface) toward the display surface of the first LCD31d and an inner broken line 202 drawn from the center of the first LED31b toward the boundary of the inner end portion side on the radially inner side of the illumination housing 30 in the effective angle range of the first LCD31d is 50 ° or less. In this way, the angle range when the light emitted from the first LED31b is incident to the first LCD31d becomes the optimum range of the TN liquid crystal panel, and as a result, it is possible to ensure that the contrast of the pattern light is equal to or greater than a predetermined value. The angle α formed by the normal line 201 and the inner broken line 202 may be set to 45 ° or less, and may also be set to 40 ° or less.

Further, an angle β formed by a normal line 201 drawn from the center of the first LED31b toward the display surface of the first LCD31d and an outer broken line 203 drawn from the center of the first LED31b toward the boundary of the outer end portion side in the effective angle range of the first LCD31d is set to 10 ° or less. The angle formed by the normal line 201 and the outer broken line 203 may be set to 5 ° or less, and may also be set to 0 °. In this way, the angle range when the light emitted from the first LED31b is incident to the first LCD31d becomes the optimum range of the TN liquid crystal panel, and as a result, it is possible to ensure that the contrast of the pattern light is equal to or greater than a predetermined value.
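The two angle conditions can be verified with simple trigonometry. The sketch below computes the angles α and β of fig. 9 from an assumed layout in arbitrary units; the positions are illustrative and are not taken from the embodiment.

```python
import math

def incidence_angles(led_xz, lcd_inner_x, lcd_outer_x, lcd_z=0.0):
    """Return (alpha, beta) in degrees: the angles between the normal
    dropped from the LED onto the display surface and the lines drawn to
    the inner and outer boundaries of the panel's effective range."""
    lx, lz = led_xz
    h = lz - lcd_z                 # height of the LED above the panel
    alpha = math.degrees(math.atan2(abs(lx - lcd_inner_x), h))
    beta = math.degrees(math.atan2(abs(lx - lcd_outer_x), h))
    return alpha, beta

# Hypothetical layout: LED centered directly above the outer end of the
# panel, effective range spanning x = 0 (inner) to x = 10 (outer).
alpha, beta = incidence_angles(led_xz=(10.0, 10.0),
                               lcd_inner_x=0.0, lcd_outer_x=10.0)
print(f"alpha = {alpha:.1f} deg (50 deg or less required)")
print(f"beta  = {beta:.1f} deg (10 deg or less required)")
```

With the LED directly above the outer boundary, β comes out to 0° and α to 45°, satisfying both limits; this layout also matches the note below that the distance between the LED and the display surface may be comparable to the width of the effective angular range.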

The distance between the first LED31b and the surface (display surface) of the first LCD31d may be substantially equal to the width of the effective angular range of the first LCD31d (the distance between the outer end portion and the inner end portion of the first LCD31 d).

(size of LED)

As shown in fig. 9, a dimension K in a direction in which the wave continues on the light emitting surface of the first LED31b is set to be equal to or smaller than a dimension of a half period of the wave formed in the first LCD31 d. In the present embodiment, one wave is formed of eight segments SEG arranged in the direction in which the wave in the first LCD31d is continuous, and thus four segments SEG correspond to half cycles of the wave. That is, the dimension J of the four segments SEG arranged in the direction in which the waves continue is equal to the dimension K of the light emitting surface of the first LED31b, or is larger than the dimension K.

As a result of experiments, it was found that when the size K of the light emitting surface of the first LED31b is equal to or smaller than the size of the half period of the wave formed on the display surface of the first LCD31d (the size of one light-transmitting portion), i.e., the dimension J, the pattern light may in some cases fail to have an appropriate sinusoidal illuminance distribution. When the sinusoidal pattern light is not generated appropriately and a height image is obtained by image processing, high-frequency moire may occur in the height image; that is, a surface that is actually flat may appear wavy, which is an obstacle to inspection. The same applies to the case of generating wavy pattern light other than sinusoidal pattern light.

In this respect, it is conceivable to select a light emitting diode whose light emitting face dimension K is larger than the dimension J of the half period of the wave. However, the sizes of generally available light emitting diodes are limited, and the dimension K of the light emitting surface cannot be set freely. As a result, as in the present embodiment, it may be necessary to use a light emitting diode whose light emitting face dimension K is equal to or less than the dimension J of the half period of the wave.

Fig. 10 shows an arrangement form of the first LEDs 31 b. The left-right direction in fig. 10 is a direction in which the illuminance of the pattern light does not change, and 64 first LEDs 31b are arranged in order from the left side to the right side in fig. 10. The number of the first LEDs 31b is an example, and may be arbitrarily set.

The first LED31b located at the left end of fig. 10 is referred to as the first LED31b-1, and the other first LEDs 31b are referred to as the first LED31b-2, the first LED31b-3, … in this order. The first LED31b located at the right end of fig. 10 is referred to as the first LED31b-64. The first LEDs 31b-1 to 31b-64 are lined up in the direction in which the illuminance of the pattern light does not change, and are offset from each other in the direction in which the illuminance of the pattern light changes (the up-down direction in fig. 10). The optical axes of the first LED31b-1 to the first LED31b-64 are located on a straight line 205. The straight line 205 extends in the direction in which the illuminance of the pattern light does not change and passes through the portion where the amount of light of the LED array is maximum.

The first LEDs 31b-1 to 31b-64 are arranged such that the centers of their light emitting surfaces are located within a predetermined range in the direction in which the illuminance of the pattern light changes (the up-down direction of fig. 10). The centers of the light emitting surfaces of the first LED31b-1 and the first LED31b-64 are located on a straight line 206 parallel to the straight line 205 and are arranged at one end of the predetermined range. The centers of the light emitting surfaces of the first LED31b-32 and the first LED31b-33 are located on a straight line 207 parallel to the straight line 205 and are arranged at the other end of the predetermined range.

A separation dimension 208 between the straight line 205 on which the optical axes are located and the straight line 206 passing through one end of the predetermined range is set equal to a separation dimension 209 between the straight line 205 and the straight line 207 passing through the other end of the predetermined range. The dimension 210, which is the sum of the dimension 208 and the dimension 209, represents the predetermined range. The first LED31b-1 and the first LED31b-64 are one-end light emitting diodes whose light emitting surface centers are located at one end of the predetermined range, and the first LED31b-32 and the first LED31b-33 are other-end light emitting diodes whose light emitting surface centers are located at the other end of the predetermined range. The first LEDs 31b-2 to 31b-31 and the first LEDs 31b-34 to 31b-63 are intermediate light emitting diodes whose light emitting surface centers are located in the middle of the predetermined range. The first LEDs 31b-2 to 31b-63 are arranged offset from each other in the direction in which the illuminance of the pattern light changes. Specifically, the first LEDs 31b-2 to 31b-31 and the first LEDs 31b-34 to 31b-63 are offset in a plurality of stages from the vicinity of one end of the predetermined range to the vicinity of the other end.

Since the first LEDs 31b-1 to 31b-64 are arranged such that the centers of their light emitting surfaces are offset within the predetermined range, when viewed from the direction in which the first LEDs 31b-1 to 31b-64 are arranged, as shown in fig. 9, the first LEDs 31b-32 and 31b-33 are displaced upward, the first LEDs 31b-1 and 31b-64 are displaced downward, and, for example, the first LED31b-16 is located near the center in the up-down direction. As described above, the size of the light emitting surface of each first LED31b is K. However, by arranging the first LEDs 31b-1 to 31b-64 offset within the predetermined range, the apparent size of the light emitting surfaces of the first LEDs 31b-1 to 31b-64 becomes K', which is longer than K. The dimension K' is longer than the dimension J of the half period of the wave formed on the first LCD31d. As a result, high-frequency moire is less likely to occur in the inspection object image.

The ratio between the dimension J of the half period of the wave formed on the first LCD31d and the dimension K' of the apparent light emitting surface of the first LED31b-1 to the first LED31b-64 in the direction in which the wave continues is in the range of 1:1.2 to 1:1.4. Fig. 11 is a diagram showing the relationship between pixel moire and the ratio of the dimension J to the dimension K'. In fig. 11, the horizontal axis represents the ratio of the dimension J to the dimension K', the vertical axis represents the degree of pixel moire, and the higher on the vertical axis, the stronger the pixel moire. As shown in the figure, when the ratio of the dimension J to the dimension K' is 1 or less, the pixel moire becomes strong. On the other hand, at a ratio greater than 1.2, the pixel moire becomes small and remains at a low level until the ratio reaches 1.4. Therefore, the ratio is preferably set in the range of 1:1.2 to 1:1.4.
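As a toy check of this relationship, the sketch below computes the apparent emitting-face size K' produced by staggering the LED centers and compares the ratio of J to K' against the stated target range. All dimensions are hypothetical and chosen only so the numbers land in the preferred range.

```python
# Each LED has an emitting face of size K in the stagger direction;
# offsetting the face centers within the predetermined range stretches
# the apparent face to K'. Offsets and sizes below are assumed values.
K = 1.0                                 # physical emitting-face size
offsets = [-0.15, -0.05, 0.05, 0.15]    # assumed center offsets (range 210)

lo = min(o - K / 2 for o in offsets)    # lowest edge over all LEDs
hi = max(o + K / 2 for o in offsets)    # highest edge over all LEDs
K_prime = hi - lo                       # apparent emitting-face size: 1.3

J = 1.0                                 # half period of the wave (assumed)
print(f"J : K' = 1 : {K_prime / J:.2f}  (target range 1:1.2 to 1:1.4)")
```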

Fig. 12 is a diagram showing another arrangement example of the first LEDs 31b-1 to 31b-64. In this example, the centers of the light emitting surfaces of the first LEDs 31b-32 and 31b-33 are located at the center of the predetermined range, the centers of the light emitting surfaces of the first LEDs 31b-1 and 31b-64 are located at one end of the predetermined range, and the centers of the light emitting surfaces of the first LEDs 31b-2 and 31b-63 are located at the other end of the predetermined range. Each first LED31b is arranged such that the closer the LED is to the center in the arrangement direction of the first LEDs 31b, the closer the center of its light emitting surface is to the center of the predetermined range. In this example as well, as shown in fig. 9, the apparent size of the light emitting surfaces of the first LEDs 31b-1 to 31b-64 is K', which is longer than K.

As shown in fig. 8, a diffusion unit 31g may be provided between the light emitting surface of the first LED31b and the first LCD31d, the diffusion unit 31g diffusing the light emitted from the light emitting surface and making it incident on the first LCD31d. In this way, even if the size K of the light emitting surface of the first LED31b is equal to or smaller than the size of the half period of the wave formed on the first LCD31d, light enters the first LCD31d as if the light emitting surface were longer than the half period of the wave. Therefore, high-frequency moire is less likely to occur in the inspection object image. The diffusion unit 31g may be, for example, a diffusion plate or a diffusion lens. When the diffusion unit 31g is provided, pixel moire is suppressed by the diffusion effect, so the first LEDs 31b-1 to 31b-64 need not be arranged offset in the up-down direction. However, they may still be arranged offset in the up-down direction.

(LED drive circuit and LCD drive circuit)

As shown in fig. 13, a first LED driving circuit (light source driving circuit) 31e for driving the first LED31b and a first LCD driving circuit (liquid crystal panel driving circuit) 31f for driving the first LCD31d are provided in the first light projecting section 31. The first LED driving circuit 31e is a circuit for changing the value of the current supplied to the first LED31b, and is controlled by the light projection control section 39. Therefore, the first LED31b is controlled by the light projection control section 39 via the first LED drive circuit 31 e. The current value control by the first LED drive circuit 31e is DAC control.

The first LCD drive circuit 31f is a circuit for changing the arrangement of the liquid crystal composition of each segment SEG included in the first LCD31d (shown in fig. 9) by changing the voltage applied to each segment SEG. In the present embodiment, as shown in fig. 16 as an example, 64 segments SEG are included in the first LCD31d, and the voltage applied to each of the 64 segments SEG may be changed. Each segment SEG can be switched between a state in which light emitted from the first LED31b is transmitted and a state in which light emitted from the first LED31b is not transmitted. The first LCD drive circuit 31f is controlled by a light projection control section 39 common to the first LED drive circuit 31 e. Therefore, the first LCD31d is controlled by the light projection control section 39 via the first LCD drive circuit 31 f. In addition, since the first LED driving circuit 31e and the first LCD driving circuit 31f are controlled by the common light projection control section 39, the first LED driving circuit 31e and the first LCD driving circuit 31f can be accurately synchronized.

The first LCD drive circuit 31f controlled by the light projection control section 39 drives the first LCD31d, so that the first LCD31d can receive the diffused light emitted from the first LED31b, sequentially generate a plurality of first measurement pattern lights having different patterns, and irradiate the measurement object W with these first measurement pattern lights. The plurality of first measurement pattern lights include pattern lights for a spatial code (gray code) used in a spatial encoding method and pattern lights having a periodic illuminance distribution used in a phase shift method.

The upper side of fig. 16 shows the case of generating pattern light for a spatial code using the first LCD31d, and the lower side of fig. 16 shows the case of generating pattern light with a periodic illuminance distribution used in the phase shift method. In fig. 16, the black-coated portion is a segment SEG that does not transmit diffused light emitted from the first LED31b, and the white-coated portion is a segment SEG that transmits diffused light emitted from the first LED31 b. Further, fig. 16 illustrates a case where 64 segments SEG included in the first LCD31d are arranged in the horizontal direction of fig. 16.

The case shown at the upper side of fig. 16 generates striped patterns with a black-and-white duty ratio of 50% whose stripe width is successively reduced to 1/2, 1/4, … of the entire width. By controlling the first LCD31d in this way, the pattern lights for the space code can be generated sequentially.
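A sketch of how such successively halved stripe patterns could be computed for the 64 segments is shown below. Plain halved stripes are generated; the gray code variant actually used as the spatial code differs from these only in how the stripe boundaries are offset.

```python
def spatial_code_patterns(n_segments=64, n_patterns=4):
    """Stripe patterns for the spatial coding method: pattern k has a
    stripe width of n_segments / 2**(k+1) at a 50% black-and-white duty.
    1 = segment transmits light, 0 = segment blocks light."""
    patterns = []
    for k in range(n_patterns):
        stripe = n_segments // (2 ** (k + 1))   # stripe width in segments
        patterns.append([(i // stripe) % 2 for i in range(n_segments)])
    return patterns

for row in spatial_code_patterns():
    print("".join("#" if v else "." for v in row))
```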

In the case of generating pattern light for the phase shift method, shown in the lower side of fig. 16, a plurality of pattern lights are generated by shifting the phase of a sine wave stripe pattern. In this example, binary control of the LCD display is used to generate a rectangular wave pattern. However, as shown in fig. 9, the rectangular wave pattern generated by the first LCD31d is blurred on the light irradiation surface, so that a sinusoidal pattern can be obtained. More specifically, a pattern close to a sine wave is obtained by combining the rectangular wave pattern formed on the liquid crystal panel with the light emitting pattern of a light emitting diode having a finite emitting area. If an ideal point light source or line light source were used instead of the LED, a binary pattern would be obtained instead of the sine wave pattern. For this reason, the balance between the LED light source size and the LCD aperture size is important for obtaining a sine wave pattern. In this example, eight pattern lights are generated. By controlling the first LCD31d in this manner, the pattern lights for the phase shift method can be generated sequentially.
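The following sketch illustrates this mechanism: eight binary segment patterns shifted by one segment each (one wave spanning eight segments, as described above), and a crude box blur standing in for the optical blur that rounds the rectangular wave toward a sine wave. The blur radius is an arbitrary illustrative value.

```python
def phase_shift_patterns(n_segments=64, period=8, n_shifts=8):
    """Binary (rectangular-wave) segment patterns for the phase shift
    method: the stripe of `period` segments is shifted by one segment
    per pattern, giving n_shifts distinct phases."""
    return [
        [1 if ((i - s) % period) < period // 2 else 0 for i in range(n_segments)]
        for s in range(n_shifts)
    ]

def blur(row, radius=2):
    """Box blur standing in for the optical blur on the irradiated surface."""
    n = len(row)
    return [
        sum(row[(i + d) % n] for d in range(-radius, radius + 1)) / (2 * radius + 1)
        for i in range(n)
    ]

patterns = phase_shift_patterns()
print(patterns[0])        # rectangular wave as formed on the LCD
print(blur(patterns[0]))  # smoothed profile approaching a sine wave
```

With an ideal point source the effective blur radius collapses to zero and the binary pattern survives unchanged, which is exactly the balance between LED size and LCD aperture size noted above.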

That is, the light projection control section 39 controls the first LED31b and the first LCD31d so that a plurality of pattern lights according to the phase shift method and/or the spatial coding method are sequentially generated. When the projection of one pattern light of the plurality of pattern lights is completed, the next pattern light is projected, and all the pattern lights are projected by repeating the process. The pattern forming process using the first LCD31d will be described later.

Note that the number of pattern lights used for the spatial code and the number of pattern lights used for the phase shift method are not limited to the numbers shown in the drawings.

(Structure of second light projecting part 32)

As shown in fig. 1, the light emission range of the second LED32b of the second light projecting part 32 is set so as to reach from directly below the second LED32b at least to the first side portion 30A side (the left side of the illumination device 3) beyond the central axis A of the opening portion 30a of the illumination housing 30. That is, the light emission range of the second LED32b of the second light projecting part 32 is set to be bilaterally symmetrical to the light emission range of the first LED31b of the first light projecting part 31, with the central axis A of the opening portion 30a of the illumination housing 30 as the center of symmetry. The light emission range of the second LED32b is indicated by the lower-right hatching in fig. 1.

As shown in fig. 13, a second LED driving circuit (light source driving circuit) 32e for driving the second LED32b and a second LCD driving circuit (liquid crystal panel driving circuit) 32f for driving the second LCD32d are provided in the second light projecting section 32, and the second LED driving circuit 32e and the second LCD driving circuit 32f are controlled by a light projection control section 39. Since the second LCD32d is driven in the same manner as the first LCD31d, it is possible to receive diffused light emitted from the second LED32b to sequentially generate a plurality of second measurement pattern lights having different patterns and irradiate the measurement object W with the second measurement pattern lights. The plurality of second measurement pattern lights include a pattern light for space code and a pattern light for phase shift method.

The first light projecting part 31 and the second light projecting part 32 are integrally supported by the illumination housing 30 in a state of being separated from each other in the circumferential direction of the central axis A so that the pattern light emitted from the first light projecting part 31 and the pattern light emitted from the second light projecting part 32 have substantially the same spread angle, whereby these pattern lights intersect on the central axis A of the opening portion 30a of the illumination housing 30. "Integrally supported" means that the first light projecting part 31 and the second light projecting part 32 are fixed to the illumination housing 30 so that the relative positional relationship between them does not change during installation or use; the relative position between the first light projecting part 31 and the second light projecting part 32 within the illumination housing 30 therefore does not change during operation. For example, as shown in fig. 17, when the separation distance between the center portion of the first LED31b and the center portion of the second LED32b is preset to I, this separation distance remains fixed at I during operation. The separation distance between the center portion of the first LED31b and the center portion of the second LED32b is relative position information of the first light projecting part 31 and the second light projecting part 32 within the illumination housing 30, and may be stored in advance in the controller section 4 or the image pickup device 2. When not in operation, this separation distance may be changed.

Further, the relative position information of the first light projecting part 31 and the second light projecting part 32 in the illumination housing 30 may be the straight-line distance between the center portion of the first LED31b and the center portion of the second LED32b, or may be a distance set in consideration of the optical path length when the light emitted from each LED is folded back by a mirror or the like before being irradiated onto the measurement object W.

Since the first LCD31d is disposed on the left side of the illumination device 3, the first LCD31d projects pattern light from the left side onto the measurement object W placed on the placement surface 100. Further, since the second LCD32d is disposed on the right side of the illumination device 3, the second LCD32d projects pattern light from the right side onto the measurement object W placed on the placement surface 100. The first LCD31d and the second LCD32d are liquid crystal panels that project pattern light onto the measurement object W from different directions.

(Structure of third light projecting part 33 and fourth light projecting part 34)

The third light projecting part 33 and the fourth light projecting part 34 are configured in the same manner as the first light projecting part 31. As shown in fig. 13, the third light projecting part 33 includes a third LED (third light source) 33b and a third LCD (third pattern light generating part) 33d arranged corresponding to the third LED33 b. The fourth light projecting part 34 includes a fourth LED (fourth light source) 34b and a fourth LCD (fourth pattern light generating part) 34d arranged corresponding to the fourth LED34 b. The third LED33b and the fourth LED34b are paired. The third LED33b and the fourth LED34b are attached to the illumination housing 30 so that the relative positions of the two can be corrected. Details of the correction of the relative position will be described later.

The display surfaces of the third LCD33d and the fourth LCD34d are located on the same plane as a plane (a plane indicated by reference numeral 200 in fig. 6) perpendicular to the central axis a of the opening portion 30a of the illumination housing 30.

The light emission range of the third LED33b of the third light projecting part 33 and the light emission range of the fourth LED34b of the fourth light projecting part 34 are set in the same relationship as the light emission ranges of the first LED31b of the first light projecting part 31 and the second LED32b of the second light projecting part 32. Specifically, the light emission range of the third LED33b of the third light projecting part 33 is set so as to reach from directly below the third LED33b at least to the fourth side portion 30D side beyond the central axis A of the opening portion 30a of the illumination housing 30. The light emission range of the fourth LED34b of the fourth light projecting part 34 is set so as to reach from directly below the fourth LED34b at least to the third side portion 30C side beyond the central axis A of the opening 30a of the illumination housing 30. Therefore, with the central axis A of the opening 30a of the illumination housing 30 as the center of symmetry, the light emission range of the third LED33b of the third light projecting part 33 and the light emission range of the fourth LED34b of the fourth light projecting part 34 are set to be vertically symmetrical.

As shown in fig. 13, a third LED driving circuit (light source driving circuit) 33e for driving the third LED33b and a third LCD driving circuit (liquid crystal panel driving circuit) 33f for driving the third LCD33d are provided in the third light projecting section 33, and the third LED driving circuit 33e and the third LCD driving circuit 33f are controlled by the light projection control section 39. Since the third LCD33d is driven in the same manner as the first LCD31d, it is possible to receive diffused light emitted from the third LED33b to sequentially generate a plurality of third measurement pattern lights having different patterns and irradiate the measurement object W with these third measurement pattern lights. The plurality of third measurement pattern lights include a pattern light for space code and a pattern light for phase shift method.

Further, a fourth LED driving circuit (light source driving circuit) 34e for driving the fourth LED34b and a fourth LCD driving circuit (liquid crystal panel driving circuit) 34f for driving the fourth LCD34d are provided in the fourth light projecting part 34, and the fourth LED driving circuit 34e and the fourth LCD driving circuit 34f are controlled by the light projection control part 39. Since the fourth LCD34d is driven in the same manner as the first LCD31d, it is possible to receive diffused light emitted from the fourth LED34b to sequentially generate a plurality of fourth measurement pattern lights having different patterns and irradiate the measurement object W with these fourth measurement pattern lights. The plurality of fourth measurement pattern lights include a pattern light for space code and a pattern light for phase shift method.

The third light projecting part 33 and the fourth light projecting part 34 are integrally supported by the illumination housing 30 in a state of being separated from each other in the circumferential direction of the central axis a so that the pattern light emitted from the third light projecting part 33 and the pattern light emitted from the fourth light projecting part 34 have substantially the same spread angle, whereby these pattern lights intersect on the central axis a of the opening part 30a of the illumination housing 30. Therefore, the relative positions of the third light projecting part 33 and the fourth light projecting part 34 within the illumination housing 30 do not change during operation. Therefore, when the separation distance between the center portion of the third LED33b and the center portion of the fourth LED34b is set to a predetermined value in advance, the separation distance between the center portion of the third LED33b and the center portion of the fourth LED34b is fixed to the predetermined value during operation. The separation distance between the center portion of the third LED33b and the center portion of the fourth LED34b is relative position information of the third light projecting part 33 and the fourth light projecting part 34 within the illumination housing 30, and may be stored in the controller part 4 or the image pickup device 2 in advance.

Since the third LCD33d is disposed on the upper side of the illumination device 3, the third LCD33d projects pattern light from this direction onto the measurement object W placed on the placement surface 100. Further, since the fourth LCD34d is disposed on the lower side of the illumination device 3, the fourth LCD34d projects pattern light from this direction onto the measurement object W placed on the placement surface 100. The third LCD33d and the fourth LCD34d are liquid crystal panels that project pattern light onto the measurement object W from different directions.

The fifth to eighth light projecting parts 35 to 38 shown in fig. 5 are configured in the same manner as the first to fourth light projecting parts 31 to 34.

(control by light projection control section 39)

As shown in fig. 13, in the present embodiment, the first LED drive circuit 31e, the second LED drive circuit 32e, the third LED drive circuit 33e, the fourth LED drive circuit 34e, the first LCD drive circuit 31f, the second LCD drive circuit 32f, the third LCD drive circuit 33f, and the fourth LCD drive circuit 34f are controlled by the common light projection control section 39, and thus these drive circuits can be accurately synchronized. Also, the fifth to eighth light projecting parts 35 to 38 have LED driving circuits and LCD driving circuits, and these driving circuits can be precisely synchronized.

The light projection control section 39 controls the first LCD31d, the second LCD32d, the third LCD33d, and the fourth LCD34d such that, when the projection of one of the plurality of pattern lights from any one of these liquid crystal panels is completed, the formation of the pattern to be projected next has already been completed on at least the other liquid crystal panel that is to project next, and such that, after the projection of the pattern light using the one liquid crystal panel is completed, the process of projecting the next pattern light from the other liquid crystal panel is repeated.

Specifically, the light projection control section 39 of the illumination device 3 is configured such that a trigger signal for starting projection of pattern light and a resynchronization trigger signal for synchronizing with the image pickup device 2 during projection of pattern light are input from the controller section 4 to the light projection control section 39. A trigger signal may also be input from the PLC 101. For example, the trigger signal may be input to the light projection control section 39 based on a detection result obtained by a photosensor or the like connected to the PLC 101. The device that generates the trigger signal need not be the PLC 101, and may be a photosensor or the like. In this case, the photosensor or the like may be directly connected to the light projection control section 39, or may be connected to the light projection control section 39 via the controller section 4.

Upon input of the trigger signal, the light projection control section 39 controls the first LCD31d via the first LCD drive circuit 31f to switch the pattern formed on the first LCD31d to a pattern different from the current display form. Here, in order to switch the pattern on the first LCD31d, the first LCD drive circuit 31f changes the voltage applied to the liquid crystal composition of each segment included in the first LCD31d by a well-known method. The time from changing the voltage applied to the liquid crystal composition until the liquid crystal composition changes its arrangement is longer than the imaging interval of the imaging device 2 to be described later. That is, switching the pattern currently formed on the first LCD31d to a different pattern requires a predetermined pattern switching time that is longer than the imaging interval of the imaging device 2. Similarly, the second LCD32d, the third LCD33d, and the fourth LCD34d each require time to switch their patterns sequentially.

When the pattern on the first LCD31d is completely formed, control is performed such that: light is emitted from the first LED31b in synchronization with the formation of the pattern, and light is not emitted from the second LED32b, the third LED33b, and the fourth LED34 b. As a result, only the pattern formed on the first LCD31d is projected as pattern light onto the measurement object W, and the patterns formed on the second LCD32d, the third LCD33d, and the fourth LCD34d will not be projected onto the measurement object W.

The time taken to form a pattern on the first LCD31d is a part of the pattern switching time taken to form a pattern on the second LCD32d. The time taken to form a pattern on the second LCD32d is longer than the time taken to form a pattern on the first LCD31d; specifically, the formation of the pattern on the second LCD32d starts before the formation of the pattern on the first LCD31d is completed.

When imaging of the pattern light projected on the measurement object W is completed, control is performed such that, once the pattern is completely formed on the second LCD32d, light is emitted from the second LED32b in synchronization with the formation of the pattern, and no light is emitted from the first LED31b, the third LED33b, and the fourth LED34b. As a result, only the pattern formed on the second LCD32d is projected as pattern light onto the measurement object W.

The time taken to form a pattern on the second LCD32d is a part of the pattern switching time taken to form a pattern on the third LCD33d. The time taken to form a pattern on the third LCD33d is longer than the time taken to form a pattern on the second LCD32d; specifically, the formation of the pattern on the third LCD33d starts before the formation of the pattern on the first LCD31d is completed.

When imaging of the pattern light projected on the measurement object W is completed, control is performed such that, once the pattern is completely formed on the third LCD33d, light is emitted from the third LED33b in synchronization with the formation of the pattern, and no light is emitted from the first LED31b, the second LED32b, and the fourth LED34b. As a result, only the pattern formed on the third LCD33d is projected as pattern light onto the measurement object W.

The time taken to form a pattern on the third LCD33d is a part of the pattern switching time taken to form a pattern on the fourth LCD34d. The time taken to form a pattern on the fourth LCD34d is longer than the time taken to form a pattern on the third LCD33d; specifically, the formation of the pattern on the fourth LCD34d starts before the formation of the pattern on the first LCD31d is completed.

When imaging of the pattern light projected on the measurement object W is completed, control is performed such that, once the pattern is completely formed on the fourth LCD34d, light is emitted from the fourth LED34b in synchronization with the formation of the pattern, and no light is emitted from the first LED31b, the second LED32b, and the third LED33b. As a result, only the pattern formed on the fourth LCD34d is projected as pattern light onto the measurement object W. A part of the time taken to form this pattern overlaps a part of the switching time taken to form the next pattern on the first LCD31d.

That is, in the present embodiment, the plurality of pattern lights are not projected sequentially and continuously through a single one of the first LCD31d, the second LCD32d, the third LCD33d, and the fourth LCD34d. Instead, these liquid crystal panels are controlled as follows: upon completion of the projection of the first pattern light by one liquid crystal panel, another liquid crystal panel projects its first pattern light; upon completion of that projection, still another liquid crystal panel projects its first pattern light; when the projection of the first pattern light is completed on all the liquid crystal panels in this manner, the one liquid crystal panel projects the second pattern light; and the cycle continues likewise for the second pattern light and the patterns that follow. As a result, the pattern to be projected next can be formed in advance on a liquid crystal panel that is not currently projecting pattern light, and thus the slow response speed of the liquid crystal panel can be masked.
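A schematic timeline of this interleaving is sketched below. The durations are arbitrary illustrative numbers (the embodiment specifies only that pattern formation takes longer than one imaging interval), and the panel names mirror the reference numerals above.

```python
FORM_TIME = 3        # slots for the liquid crystal to settle (assumed)
PROJECT_TIME = 1     # slots to project one pattern and image it (assumed)
PANELS = ["LCD31d", "LCD32d", "LCD33d", "LCD34d"]

def schedule(n_patterns=2):
    """Round-robin projection: the formation of each pattern is started
    while earlier panels are projecting, so every projection finds its
    pattern already formed (negative times are the initial warm-up)."""
    events = []
    for p in range(n_patterns):
        for k, panel in enumerate(PANELS):
            slot = (p * len(PANELS) + k) * PROJECT_TIME
            events.append((slot - FORM_TIME, f"start forming pattern {p} on {panel}"))
            events.append((slot, f"project pattern {p} via {panel} and image it"))
    return sorted(events)

for t, what in schedule():
    print(f"t={t:>3}: {what}")
```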

The above example explains the case where pattern light is projected through all of the first LCD31d, the second LCD32d, the third LCD33d, and the fourth LCD34d. However, the control is not limited thereto, and pattern light may be projected using only the first LCD31d and the second LCD32d, or only the third LCD33d and the fourth LCD34d. In the case where pattern light is projected using only the first LCD31d and the second LCD32d, the projection of the pattern light may be performed alternately. For example, while the first LCD31d is projecting the first pattern light, the formation process of the first pattern is performed on the second LCD32d. Then, while the second LCD32d is projecting the first pattern light, the formation process of the second pattern is performed on the first LCD31d. This process is repeated. The same applies to the case where pattern light is projected using only the third LCD33d and the fourth LCD34d.

The pattern light formation information is transmitted from the controller section 4 to the light projection control section 39 in addition to the trigger signal and the resynchronization trigger signal. The transmitted pattern light formation information is temporarily stored in the light projection control section 39, and the first LED31b, the second LED32b, the third LED33b, and the fourth LED34b, and the first LCD31d, the second LCD32d, the third LCD33d, and the fourth LCD34d are controlled based on the pattern light formation information.

The pattern light formation information includes, for example, the irradiation mode, the presence or absence of irradiation of pattern light for the space code, the specific patterns and number of pattern lights for the space code, the presence or absence of irradiation of pattern light for the phase shift method, the specific patterns and number of pattern lights for the phase shift method, the irradiation order of the pattern lights, and the like. The irradiation mode includes: a first irradiation mode in which pattern light is projected onto the measurement object W only through the first LCD31d and the second LCD32d; a second irradiation mode in which pattern light is projected through all of the first LCD31d, the second LCD32d, the third LCD33d, and the fourth LCD34d; and a third irradiation mode in which pattern light is projected onto the measurement object W only through the third LCD33d and the fourth LCD34d.

(construction of the image pickup device 2)

As shown in fig. 1 and the like, the imaging device 2 is provided separately from the illumination device 3. As shown in fig. 1, the imaging device 2 and the controller 4 are connected via a connection line 2 a. However, the imaging device 2 and the controller unit 4 may be wirelessly connected.

The imaging device 2 constitutes a part of the image processing apparatus 1, and thus may also be referred to as an imaging section. Since the image pickup device 2 is provided separately from the illumination device 3, the image pickup device 2 and the illumination device 3 can be separately installed. Therefore, the mounting positions of the image pickup device 2 and the lighting device 3 can be changed, and the mounting positions of the image pickup device 2 and the lighting device 3 can be separated. As a result, the degree of freedom of mounting the image pickup device 2 and the illumination device 3 is greatly improved, and the image processing apparatus 1 can be introduced into various production fields and the like.

Note that, in a scene where the installation position of the image pickup device 2 and the installation position of the illumination device 3 can be made the same, the image pickup device 2 and the illumination device 3 can be attached to the same member, and the user can arbitrarily change the installation state according to the scene. Further, the image pickup device 2 and the illumination device 3 may be attached to the same member and used integrally.

The image pickup device 2 is disposed above the illumination housing 30 of the illumination device 3, i.e., on the opposite side to the pattern light emission direction, so as to observe the opening portion 30a of the illumination housing 30. Accordingly, the imaging device 2 can receive the first measurement pattern light reflected from the measurement object W via the opening 30a of the illumination housing 30 of the illumination device 3 to generate a plurality of first pattern images, while receiving the second measurement pattern light reflected from the measurement object W via the opening 30a of the illumination housing 30 of the illumination device 3 to generate a plurality of second pattern images. In the case where the illumination device 3 includes the third light projecting part 33 and the fourth light projecting part 34, the imaging device 2 may receive the third measurement pattern light reflected from the measurement object W via the opening part 30a of the illumination housing 30 of the illumination device 3 to generate a plurality of third pattern images, while receiving the fourth measurement pattern light reflected from the measurement object W via the opening part 30a of the illumination housing 30 of the illumination device 3 to generate a plurality of fourth pattern images. Also, in the case of including the fifth to eighth light projecting parts 35 to 38, fifth to eighth pattern images may be generated.

As shown in fig. 3, the image pickup device 2 includes a lens 21 included in an optical system, and an image pickup element 22 including a light receiving element for receiving light incident from the lens 21. A camera is composed of the lens 21 and the image pickup element 22. The lens 21 is a member for forming an image of at least a height measurement region or an inspection target region of the measurement object W on the image pickup element 22. The optical axis of the lens 21 may or may not coincide with the central axis A of the opening 30a of the illumination housing 30 of the illumination device 3. In addition, the distance between the imaging device 2 and the illumination device 3 in the direction of the central axis A may be set arbitrarily within a range in which the illumination device 3 does not interfere with the imaging by the imaging device 2, allowing a high degree of freedom in mounting.

As the image pickup element 22, a CCD or CMOS sensor or the like can be used. The image pickup element 22 receives the reflected light from the measurement object W to acquire an image, and outputs the acquired image data to the data processing section 24. In this example, a high-resolution CMOS sensor is used as the image pickup element 22. An image pickup element capable of capturing color images may also be used. The image pickup element 22 can capture a normal-luminance image in addition to the pattern projection images. In the case of capturing a normal-luminance image, it is only necessary to light all the LEDs 31b, 32b, 33b, and 34b of the lighting device 3 and control all the LCDs 31d, 32d, 33d, and 34d so as not to form pattern light. In the case where the observation illumination 50 shown in figs. 5 and 6 is present, the imaging device 2 can capture a normal-luminance image using the observation illumination 50.

The imaging apparatus 2 includes, in addition to the camera, an exposure control section 23, a data processing section 24, a phase calculation section 26, an image processing section 27, an image storage section 28, and an output control section 29. The data processing section 24, the phase calculating section 26, the image processing section 27, and the image storage section 28 are connected to the built-in common bus line 25 in the image pickup device 2, and can transmit and receive data with respect to each other. The exposure control unit 23, the data processing unit 24, the phase calculation unit 26, the image processing unit 27, the image storage unit 28, and the output control unit 29 may be configured by hardware, or may be configured by software.

(configuration of Exposure control section 23)

A trigger signal for starting image capturing and a resynchronization trigger signal for synchronizing with the illumination device 3 during image capturing are input from the controller section 4 to the exposure control section 23. The input timings of the trigger signal and the resynchronization trigger signal to be input to the exposure control section 23 are set to be the same as the timings of the trigger signal and the resynchronization trigger signal to be input to the illumination device 3.

The exposure control section 23 is a section that directly controls the image pickup element 22, and controls the image pickup timing and the exposure time of the image pickup element 22 in accordance with a trigger signal and a resynchronization trigger signal input to the exposure control section 23. Information on the imaging conditions is input from the controller unit 4 to the exposure control unit 23, and is stored in the exposure control unit 23. The information on the imaging conditions includes, for example, the number of times of imaging, an imaging interval (time after imaging until next imaging is performed), an exposure time (shutter speed) at the time of imaging, and the like.

When the trigger signal transmitted from the controller portion 4 is input, the exposure control portion 23 causes the image pickup device 22 to start image pickup. In the present embodiment, it is necessary to generate a plurality of pattern images for one input of the trigger signal. Therefore, it is configured such that a resynchronization trigger signal is input from the controller portion 4 during image capturing, and synchronization with the lighting device 3 can be achieved by inputting the resynchronization trigger signal.

Specifically, the exposure control section 23 controls the image pickup element 22 so that the image pickup element 22 performs image pickup (exposure) while the pattern completely formed on the first LCD31d is being projected as pattern light onto the measurement object W. The exposure time may be set to be the same as the time when the pattern is being projected as pattern light onto the measurement object W. However, the timing for starting exposure may be set slightly later than the timing for starting projection of pattern light.
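As a small illustration of this timing rule, the sketch below derives an exposure window from a projection window; the delayed-start variant corresponds to the option just described. The numbers are arbitrary.

```python
def exposure_window(projection_start, projection_time, start_delay=0.0):
    """Exposure window for one pattern: nominally the whole projection
    window, optionally opened slightly after projection begins."""
    start = projection_start + start_delay
    duration = projection_time - start_delay
    return start, duration

print(exposure_window(10.0, 5.0))                   # full projection window
print(exposure_window(10.0, 5.0, start_delay=0.5))  # slightly delayed start
```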

After that, the exposure control section 23 controls the image pickup element 22 so that the image pickup element 22 captures an image while the pattern formed on the second LCD32d is being projected as pattern light onto the measurement object W. Upon completion of this imaging, the exposure control section 23 controls the image pickup element 22 so that it captures an image while the pattern formed on the third LCD33d is being projected as pattern light onto the measurement object W. Then, the exposure control section 23 controls the image pickup element 22 so that it captures an image while the pattern formed on the fourth LCD34d is being projected as pattern light onto the measurement object W. By repeating this operation, a plurality of first pattern images, a plurality of second pattern images, a plurality of third pattern images, and a plurality of fourth pattern images are generated.

The image pickup element 22 transmits image data to the data processing section 24 every time image capturing is completed. The image data may be stored in the image storage section 28 shown in fig. 3. That is, since the image capturing timing of the image pickup element 22 does not coincide with the image request timing of the controller section 4, the image storage section 28 functions as a buffer to absorb the time lag.

Between one image capture and the next, the image data is transferred to the data processing section 24 shown in fig. 3. However, the operation is not limited to this; for example, image capturing and data transfer may be performed in parallel. When the imaging of the measurement object W irradiated with one pattern light is completed and the imaging of the measurement object W irradiated with the next pattern light is being performed, the image data of the previous pattern is transferred to the data processing section 24. In this way, the image data captured the previous time can be transferred to the data processing section 24 during the next image capture.

Further, the measurement object W irradiated with the pattern light of a certain pattern may be imaged a plurality of times. In this case, the first LED31b may be lit only during image capturing by the image pickup element 22. The exposure time of the image pickup element 22 may be set so that the first image capture is longer than the second, or so that the second is longer than the first. When the measurement object W irradiated with the pattern light of another pattern is imaged, the imaging may likewise be performed a plurality of times. In this way, a plurality of images having different exposure times can be generated while one of the plurality of pattern lights is being projected onto the measurement object W. A plurality of images having different exposure times are used in the high dynamic range processing to be described later. The first LED31b may remain lit while the measurement object W irradiated with the pattern light of a certain pattern is being imaged a plurality of times.

(configuration of data processing section 24)

The data processing unit 24 shown in fig. 3 generates a plurality of pattern image sets based on the image data output from the image pickup device 22. When the image pickup device 22 generates a plurality of first pattern images, the data processing section 24 generates a first pattern image set including the plurality of first pattern images. Also, a second pattern image set including a plurality of second pattern images is generated, a third pattern image set including a plurality of third pattern images is generated, and a fourth pattern image set including a plurality of fourth pattern images is generated. Therefore, the imaging device 2 can receive the reflected light from the measurement object W among the plurality of pattern lights projected from the respective liquid crystal panels, and generate a plurality of pattern image sets corresponding to the respective liquid crystal panels.

In the case where pattern light is projected only through the first LCD31d and the second LCD32d, a first pattern image set and a second pattern image set are generated. In the case where the pattern light is projected only through the third LCD33d and the fourth LCD34d, a third pattern image set and a fourth pattern image set are generated.

The data processing section 24 may generate a phase shift pattern image set by projecting the pattern light according to the phase shift method, and may also generate a gray code pattern image set by projecting the pattern light according to the spatial coding method.

The pattern light according to the phase shift method is, for example, pattern light whose illuminance distribution changes in a sine wave shape; other patterns are also possible. In the present embodiment, the number of pattern lights according to the phase shift method is 8, but it is not limited thereto. The pattern light according to the spatial coding method is a striped pattern with a black-and-white duty ratio of 50% whose stripe width is successively reduced to 1/2, 1/4, … of the entire width. In the present embodiment, the number of pattern lights according to the spatial coding method is 4, but it is not limited thereto. Note that the patterns described in this example use the gray code as the spatial code; halving the stripe width is not the purpose of the gray code, but merely results from it. The gray code is a coding scheme that provides noise immunity by setting the Hamming distance between adjacent codes to 1.
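For reference, a minimal gray code encoder/decoder is sketched below; it demonstrates the property just mentioned, namely that adjacent code numbers have a Hamming distance of 1.

```python
def to_gray(n: int) -> int:
    """Convert a binary code number to its gray code."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Recover the binary code number from a gray code."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent codes differ in exactly one bit, so a misread at a stripe
# boundary shifts the decoded space number by at most 1.
for i in range(8):
    g = to_gray(i)
    print(f"{i}: gray {g:03b} (decodes back to {from_gray(g)})")
```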

As shown in fig. 15, when the first light projecting unit 31 of the illumination device 3 irradiates the measurement object W with four pattern lights according to the spatial coding method, the data processing unit 24 generates a gray code pattern image set including four different images. When the first light projecting unit 31 of the illumination device 3 irradiates the measurement object W with eight pattern lights according to the phase shift method, the data processing unit 24 generates a phase shift pattern image set including eight different images. The gray code pattern image set and the phase shift pattern image set obtained by irradiating the pattern light by the first light projecting section 31 are together the first pattern image set.

Similarly, when the second light projecting unit 32 of the illumination device 3 irradiates the measurement object W with pattern light according to the spatial coding method, a gray code pattern image set is generated. When the measurement object W is irradiated with pattern light according to the phase shift method, a phase shift pattern image set is generated. The gray code pattern image set and the phase shift pattern image set obtained by irradiating the pattern light by the second light projecting part 32 are together the second pattern image set.

Similarly, when the third light projecting unit 33 of the illumination device 3 irradiates the measurement object W with pattern light according to the spatial coding method, a gray code pattern image set is generated. When the measurement object W is irradiated with pattern light according to the phase shift method, a phase shift pattern image set is generated. The gray code pattern image set and the phase shift pattern image set obtained by irradiating the pattern light by the third light projecting section 33 together constitute the third pattern image set.

Similarly, when the fourth light projecting unit 34 of the illumination device 3 irradiates the measurement object W with pattern light according to the spatial coding method, a gray code pattern image set is generated. When the measurement object W is irradiated with pattern light according to the phase shift method, a phase shift pattern image set is generated. The gray code pattern image set and the phase shift pattern image set obtained by irradiating the pattern light by the fourth light projecting section 34 together constitute the fourth pattern image set.

Each pattern image set may be stored in the image storage section 28 shown in fig. 3.

As shown in fig. 3, the data processing unit 24 includes an HDR processing unit 24a. HDR processing is high dynamic range combining processing, in which the HDR processing unit 24a combines a plurality of images having different exposure times. That is, as described above, when the measurement object W irradiated with pattern light of a certain pattern is imaged a plurality of times at different exposure times, a plurality of luminance images with different exposure times are obtained, and an image with a dynamic range wider than that of each luminance image can be generated by combining these luminance images. Conventionally known methods can be used for the HDR synthesis. Instead of changing the exposure time, it is also possible to change the intensity of the irradiated light to obtain a plurality of luminance images with different brightness and then synthesize them.
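
The following is a minimal sketch of such a combination, assuming a linear sensor response and 8-bit luminance images; an actual implementation would use one of the conventionally known methods mentioned above, including a calibrated response curve.

import numpy as np

def hdr_combine(images, exposure_times):
    # Weighted average of radiance estimates: mid-gray pixels get the
    # largest weight, saturated or dark pixels the smallest.
    acc = np.zeros(images[0].shape, dtype=np.float64)
    wacc = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        i = img.astype(np.float64)
        w = 1.0 - np.abs(i / 255.0 - 0.5) * 2.0   # hat-shaped weighting
        acc += w * (i / t)                        # radiance ~ intensity / exposure
        wacc += w
    return acc / np.maximum(wacc, 1e-9)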

(construction of phase calculating section 26)

The phase calculation section 26 shown in fig. 3 is a section that calculates an absolute phase image, which serves as the raw data for a height image. As shown in fig. 14, in step SA1, the relative phase calculation process is performed by acquiring each image data of the phase shift pattern image set and using the phase shift method. This is expressed as a relative phase (the phase before unwrapping) in fig. 16, and a phase image is obtained by the relative phase calculation process of step SA1.

On the other hand, in step SA3 of fig. 14, the spatial code calculation process is performed, and a band number image is obtained by acquiring each image data of the gray code pattern image set and using the spatial coding method. The band number image is an image in which, when the space irradiated with light is divided into a large number of small spaces, each small space can be identified by the spatial code number from a series assigned to them. Fig. 10 illustrates how the series of spatial code numbers is assigned.

In step SA4 of fig. 14, the absolute phase calculation process is performed. In this process, an absolute phase image (intermediate image) is generated by synthesizing (unwrapping) the phase image obtained in step SA1 and the band number image obtained in step SA3. Since the phase jump correction (phase unwrapping) of the phase shift method can be performed using the spatial code numbers obtained by the spatial coding method, a high-resolution measurement result can be obtained while ensuring a wide dynamic range.
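
The following sketch illustrates steps SA1 and SA4 under common phase shift conventions: the wrapped (relative) phase is recovered by an arctangent over the N shifted images, and the band number supplies the integer number of 2π cycles. The sign convention and the assumption that each band spans exactly one cycle are illustrative, not taken from the embodiment.

import numpy as np

def relative_phase(stack):
    # stack: (n, H, W) array of images under sine patterns shifted by 2*pi*k/n
    n = stack.shape[0]
    k = np.arange(n).reshape(n, 1, 1)
    s = np.sum(stack * np.sin(2 * np.pi * k / n), axis=0)
    c = np.sum(stack * np.cos(2 * np.pi * k / n), axis=0)
    return np.mod(np.arctan2(s, c), 2 * np.pi)   # wrapped phase in [0, 2*pi)

def absolute_phase(wrapped, band):
    # band: per-pixel spatial code number from the gray code images;
    # each band is assumed to span exactly one 2*pi cycle
    return wrapped + 2 * np.pi * band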

The height measurement can also be performed by the phase shift method alone. In this case, the measurement dynamic range of the height becomes narrow, so when the difference in height is large and the phase shifts by one cycle or more, the height cannot be measured accurately. Conversely, for a measurement object W with small height variation, this approach has the advantage of faster processing, because neither the image capturing nor the band number image synthesis of the spatial coding method is performed. For example, when measuring a measurement object W with a small difference in the height direction, a wide dynamic range is unnecessary; even with the phase shift method alone, the processing time can be shortened while maintaining high-precision height measurement performance. In addition, since the absolute height is known from the spatial codes, the apparatus may be configured to measure the height only by the spatial coding method. In this case, the accuracy can be improved by increasing the number of codes.

Further, in step SA2 of fig. 14, each image data of the phase shift pattern image set is acquired, and the reliability image calculation process is performed. In the reliability image calculation process, a reliability image indicating the reliability of the phase is calculated. This is an image that can be used for the determination of invalid pixels.
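
A common reliability measure, used here as an assumed stand-in for the calculation of step SA2, is the modulation amplitude of the fitted sine wave; pixels whose amplitude falls below a threshold can be treated as invalid.

import numpy as np

def reliability(stack):
    # Amplitude of the sinusoidal component; it is near zero where the
    # pattern is not observed (shadow, saturation), so the phase there
    # is dominated by noise and the pixel should be invalidated.
    n = stack.shape[0]
    k = np.arange(n).reshape(n, 1, 1)
    s = np.sum(stack * np.sin(2 * np.pi * k / n), axis=0)
    c = np.sum(stack * np.cos(2 * np.pi * k / n), axis=0)
    return (2.0 / n) * np.hypot(s, c)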

The phase image, the band number image, and the reliability image may be stored in the image storage section 28 shown in fig. 3.

The absolute phase image generated by the phase calculation unit 26 may be referred to as an angle image in which each pixel has irradiation angle information of the measurement pattern light to the measurement object W. That is, since the first pattern image set (phase shift pattern image set) includes eight first pattern images captured while shifting the phase of the sine wave stripe pattern, each pixel acquires irradiation angle information of the measurement pattern light to the measurement object W by the phase shift method. In other words, since the phase calculation unit 26 generates, based on the plurality of first pattern images, the first angle image in which each pixel has irradiation angle information of the first measurement pattern light to the measurement object W, the phase calculation unit 26 may also be referred to as an angle image generation unit. The first angle image represents, as pixel values, the angle of light irradiated from the first LED31b onto the measurement object W.

Similarly, the phase calculation unit 26 may generate a second angle image in which each pixel has irradiation angle information of the second measurement pattern light to the measurement object W based on the plurality of second pattern images, a third angle image in which each pixel has irradiation angle information of the third measurement pattern light to the measurement object W based on the plurality of third pattern images, and a fourth angle image in which each pixel has irradiation angle information of the fourth measurement pattern light to the measurement object W based on the plurality of fourth pattern images. The second, third, and fourth angle images represent, as pixel values, the angles of light irradiated from the second LED32b, the third LED33b, and the fourth LED34b onto the measurement object W, respectively. Among the intermediate images in fig. 15, the uppermost image is the first angle image, the second from the top is the second angle image, the third from the top is the third angle image, and the lowermost image is the fourth angle image. The portions that appear all black in each angle image are shadows of the illumination (the respective LEDs) and become invalid pixels having no angle information.

(configuration of the image processing section 27)

The image processing section 27 is a section that performs image processing such as gamma correction, white balance adjustment, gain correction, and the like on each of the pattern image, the phase image, the band number image, and the reliability image. The pattern image, the phase image, the band number image, and the reliability image after the image processing may each be stored in the image storage section 28. The image processing is not limited to the above processing.

(configuration of output control section 29)

Upon receiving an image output request signal output from the controller unit 4, the output control unit 29 outputs, in accordance with that signal, only the requested image among the images stored in the image storage unit 28 to the controller unit 4 via the image processing unit 27. In this example, the pattern image, the phase image, the band number image, and the reliability image before image processing are each stored in the image storage section 28, and only the image requested by the image output request signal from the controller section 4 is subjected to image processing by the image processing section 27 and output to the controller section 4. The image output request signal may be output when the user performs various measurement operations and inspection operations.

In the present embodiment, the data processing section 24, the phase calculation section 26, and the image processing section 27 are provided in the imaging apparatus 2. However, this is not limited thereto, and these portions may be provided in the controller portion 4. In this case, the image data output from the image pickup device 22 is output to the controller section 4 and processed.

(Structure of controller portion 4)

As shown in fig. 2, the controller portion 4 includes an imaging light projection control portion 41, a height measurement portion 42, an image synthesis portion 43, an inspection portion 45, a display control portion 46, and a history storage portion 47. The controller unit 4 is provided separately from the imaging device 2 and the illumination device 3.

(construction of the image pickup light projection control section 41)

The imaging light projection control unit 41 outputs the formation information of the pattern light, the trigger signal, and the resynchronization trigger signal to the illumination device 3 at predetermined timings, and outputs the information on the imaging conditions, the trigger signal, and the resynchronization trigger signal to the imaging device 2 at predetermined timings. The trigger signal and the resynchronization trigger signal output to the illumination device 3 are synchronized with the trigger signal and the resynchronization trigger signal output to the image pickup device 2. The formation information of the pattern light and the information related to the imaging conditions may be stored in the imaging light projection control section 41 or other storage section (not shown), for example. When the user performs a predetermined operation (height measurement preparation operation, inspection preparation operation), formation information of pattern light is output to the illumination device 3 and temporarily stored in the light projection control section 39 of the illumination device 3, and information on the imaging condition is output to the imaging device 2 and temporarily stored in the exposure control section 23. In this example, the lighting device 3 is configured to control the LED and the LCD with the light projection control section 39 integrated in the lighting device 3, and thus the lighting device 3 may also be referred to as a smart lighting device. The image pickup apparatus 2 is configured to control the image pickup element 22 by the exposure control section 23 integrated in the image pickup apparatus 2, and thus the image pickup apparatus 2 may be referred to as a smart image pickup apparatus.

When the imaging device 2 and the illumination device 3 are controlled independently, there is a problem: as the number of image captures increases, the image capturing timing and the illumination timing (pattern light projection timing) drift apart, and the image obtained by the image capturing apparatus 2 becomes dark. In particular, as described above, the first pattern image set includes a total of 12 images (i.e., eight images in the phase shift pattern image set and four images in the gray code pattern image set), and the second pattern image set is configured in the same manner. When image capturing for HDR is also performed, the number of image captures increases further and the lag between the image capturing timing and the illumination timing becomes significant.

In this example, the resynchronization trigger signal is output to both the illumination device 3 and the imaging device 2, so the two devices can be resynchronized in the middle of imaging. As a result, even if the number of image captures increases, the lag between the image capturing timing and the illumination timing remains small enough not to cause a problem. Further, darkening of the image during irradiation of the phase shift pattern or the gray code pattern can be suppressed, and phase distortion and the possibility of erroneous code determination can be reduced. The resynchronization trigger signal may be output multiple times.

The imaging light projection control unit 41 includes an irradiation mode switching unit 41a. The irradiation mode may be switched to any one of the following: a first irradiation mode in which the first measurement pattern light and the second measurement pattern light are irradiated by the first light projecting part 31 and the second light projecting part 32, respectively; a second irradiation mode in which, after the first measurement pattern light and the second measurement pattern light are irradiated by the first light projecting part 31 and the second light projecting part 32, respectively, the third measurement pattern light and the fourth measurement pattern light are irradiated by the third light projecting part 33 and the fourth light projecting part 34, respectively; and a third irradiation mode in which the third measurement pattern light and the fourth measurement pattern light are irradiated by the third light projecting part 33 and the fourth light projecting part 34, respectively. The user can switch the irradiation mode by operating the console section 6 or the mouse 7 while viewing the display section 5. In addition, the controller portion 4 may be configured to switch the irradiation mode automatically.

(Structure of height measuring section 42)

The height measuring unit 42 is configured to be able to measure the height of the measurement object W in the direction of the central axis a of the illumination device 3, based on the irradiation angle information of each pixel of the first angle image and the irradiation angle information of each pixel of the second angle image generated by the phase calculating unit 26, and the relative position information between the first light projecting unit 31 and the second light projecting unit 32 within the illumination housing 30 of the illumination device 3.

A specific method for measuring the height using the height measuring section 42 will be described below. As described above, the angle from the illumination for each pixel is determined by generating an angle image using phase unwrapping. The first angle image is an image showing the angle of light irradiated from the first LED31b to the measurement object W, and the second angle image is an image showing the angle of light irradiated from the second LED32b to the measurement object W. The first LED31b and the second LED32b are integrally supported by the illumination housing 30, and as described above, the distance between the first LED31b and the second LED32b is set to I (shown in fig. 17).

Fig. 17 shows how the height at an arbitrary point H on the measurement object W is obtained. The phase directly below the first LED31b is set to 0, and the phase in the 45° direction from the first LED31b is set to 1; the right direction of fig. 17 is positive and the left direction is negative. The angle of light irradiated from the first LED31b to the point H can be obtained from the pixel corresponding to the point H in the first angle image, and the inclination of the straight line connecting the point H and the first LED31b is 1/a1. Similarly, the angle of light irradiated from the second LED32b to the point H can be obtained from the pixel corresponding to the point H in the second angle image, and the inclination of the straight line connecting the point H and the second LED32b is −1/a2. Here, a1 and a2 are phases.

Z = (1/a1)·X + 0    (Equation 1)

Z = −(1/a2)·(X − I)    (Equation 2)

The height is obtained by solving equations 1 and 2 for Z:

a1·Z = X

a2·Z = −X + I

Z = I/(a1 + a2)

X = a1·I/(a1 + a2)

Thus, the height of each point on the measurement object W can be obtained. Since no variable relating to the position of the imaging device 2 appears in the above equations, the position of the imaging device 2 is irrelevant when obtaining the height at each point on the measurement object W. However, for a pixel that is invalid in an angle image there is no angle information, so the height at that point cannot be obtained. Note that the calculated Z coordinate does not indicate the distance between the imaging device 2 and the measurement object W, but the distance to the measurement object W as seen from the illumination device 3. The Z coordinate is determined by the mounting position of the illumination device 3, regardless of the mounting position of the imaging device 2.
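
A per-pixel sketch of this triangulation follows; a1 and a2 are the normalized phases read from the two angle images, led_distance corresponds to I, and NaN is an assumed marker for invalid pixels.

import numpy as np

def height_from_angle_images(a1, a2, led_distance):
    # Z = I / (a1 + a2); note that the camera position appears nowhere.
    z = led_distance / (a1 + a2)
    # Shadowed (invalid) pixels stay invalid: NaN propagates through the
    # division, and the mask below makes this explicit.
    z[np.isnan(a1) | np.isnan(a2)] = np.nan
    return z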

Although not shown in the drawings, also, the angle of light irradiated from the third LED33b to the point H may be obtained from a pixel corresponding to the point H in the third angle image, and the angle of light irradiated from the fourth LED34b to the point H may be obtained from a pixel corresponding to the point H in the fourth angle image. Therefore, the height of each pixel can be obtained based on the third angle image and the fourth angle image.

For example, fig. 15 shows the following case: the height measuring unit 42 generates a first height image indicating the height of the measurement object W based on the irradiation angle information of each pixel of the first angle image and the irradiation angle information of each pixel of the second angle image, and the relative position information between the first light projecting unit 31 and the second light projecting unit 32 in the illumination housing 30, and also generates a second height image indicating the height of the measurement object W based on the irradiation angle information of each pixel of the third angle image and the irradiation angle information of each pixel of the fourth angle image, and the relative position information between the third light projecting unit 33 and the fourth light projecting unit 34 in the illumination housing 30.

The first height image gives the height at each pixel, and thus can be used as an inspection object image when performing various inspections. The second height image likewise gives the height at each pixel and can also be used as an inspection target image. Therefore, the height measuring unit 42 may also be referred to as an inspection target image generating unit that generates an inspection target image based on a plurality of intermediate images.

In the case shown in fig. 15, first, a first angle image is generated from the first pattern image set obtained by projecting the pattern light by the first light projecting part 31, and a second angle image is generated from the second pattern image set obtained by projecting the pattern light by the second light projecting part 32. In the first angle image, since the first light projecting part 31 irradiates light from the left side of the measurement object W, a shadow is formed on the right side of the measurement object W, and that portion becomes invalid pixels. Conversely, in the second angle image, since the second light projecting part 32 irradiates light from the right side of the measurement object W, a shadow is formed on the left side of the measurement object W, and that portion becomes invalid pixels. Since the first height image is generated using the first angle image and the second angle image, a pixel that is invalid in either angle image is also invalid in the first height image.

Likewise, a third angle image is generated from the third pattern image set obtained by projecting the pattern light by the third light projecting part 33, and a fourth angle image is generated from the fourth pattern image set obtained by projecting the pattern light by the fourth light projecting part 34. In the third angle image, since the third light projecting part 33 irradiates light from the upper side of the measurement object W (the upper side of the figure), a shadow is formed on the lower side of the measurement object W (the lower side of the figure), and that portion becomes invalid pixels. Conversely, in the fourth angle image, since the fourth light projecting part 34 irradiates light from the lower side of the measurement object W, a shadow is formed on the upper side of the measurement object W (the upper side of the figure), and that portion becomes invalid pixels. Since the second height image is generated using the third angle image and the fourth angle image, a pixel that is invalid in either angle image is also invalid in the second height image. In order to reduce invalid pixels as much as possible, in the present embodiment, as shown in fig. 2, an image synthesizing section 43 is provided in the controller section 4.

In the present embodiment, the case where the height measuring section 42 is provided in the controller section 4 has been described. However, this is not limited to this, and although not shown in the drawings, the height measuring section may be provided in the image pickup device 2.

(construction of the image synthesizing section 43)

The image synthesizing section 43 is configured to synthesize the first height image and the second height image to generate a post-synthesis height image. As a result, a portion that is an invalid pixel in the first height image but not in the second height image is represented as a valid pixel in the post-synthesis height image, and conversely, a portion that is an invalid pixel in the second height image but not in the first height image is also represented as a valid pixel. Therefore, the number of invalid pixels in the post-synthesis height image can be reduced. Alternatively, when a height with a high degree of reliability is desired, the average height can be made valid only where both the first height image and the second height image are valid and the difference between the two is small (i.e., equal to or smaller than a predetermined value).

In other words, by irradiating the measurement object W with pattern light from four different directions, the number of effective pixels in the height image can be increased and the blind spots can be reduced, and the reliability of the measurement result can also be improved. For the measurement object W in which invalid pixels are sufficiently reduced by irradiating pattern light from two directions, only one height image needs to be generated. In this case, it may be configured such that the user selects whether to generate the first height image or the second height image. In the case where only one height image is generated, there is an advantage that the measurement time is shortened.
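
A sketch of the two synthesis policies described above, again with NaN as the assumed invalid-pixel marker: the fill-in mode takes whichever height image is valid (averaging where both are), while the high-reliability mode keeps only pixels where both images agree within a tolerance.

import numpy as np

def synthesize(h1, h2, tol=None):
    v1, v2 = ~np.isnan(h1), ~np.isnan(h2)
    out = np.full_like(h1, np.nan)
    if tol is None:
        # fill-in mode: reduce invalid pixels as much as possible
        both = v1 & v2
        out[both] = (h1[both] + h2[both]) / 2.0
        out[v1 & ~v2] = h1[v1 & ~v2]
        out[v2 & ~v1] = h2[v2 & ~v1]
    else:
        # high-reliability mode: keep only mutually consistent pixels
        ok = v1 & v2 & (np.abs(h1 - h2) <= tol)
        out[ok] = (h1[ok] + h2[ok]) / 2.0
    return out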

Since the post-synthesis height image can also grasp the height of each pixel, the post-synthesis height image can be used as an inspection object image used when various inspections are performed. Therefore, the image synthesizing unit 43 may be referred to as an inspection target image generating unit for generating an inspection target image.

In the present embodiment, the case where the image synthesizing section 43 is provided in the controller section 4 has been described. However, this is not limited to this, and although not shown in the drawings, the image combining section may be provided in the image pickup device 2.

(Structure of inspection part 45)

The inspection section 45 performs inspection processing based on any of the first height image, the second height image, and the post-synthesis height image, and includes a presence/absence inspection unit 45a, an appearance inspection unit 45b, and a size inspection unit 45c. This is only an example: not all of these inspection units are indispensable, and inspection units other than these may be provided. The presence/absence inspection unit 45a is configured to determine, by image processing, the presence or absence of the measurement object W, the presence or absence of a component attached to the measurement object W, and the like. The appearance inspection unit 45b is configured to determine, by image processing, whether the outer shape or the like of the measurement object W is a predetermined shape. The size inspection unit 45c is configured to determine, by image processing, whether the size of each portion of the measurement object W is a predetermined size, or to measure the size of each portion. Since these determination methods are conventionally known, a detailed description is omitted.

(Structure of display control section 46)

The display control section 46 is configured to display the first height image, the second height image, the post-synthesis height image, and the like on the display section 5. It can also generate an operation user interface for operating the image processing apparatus 1, a setting user interface for setting the image processing apparatus 1, a height measurement result display user interface for displaying the height measurement result of the measurement object, an inspection result display user interface for displaying various inspection results of the measurement object, and the like, and display these user interfaces on the display section 5.

(Structure of History storage 47)

The history storage section 47 may be constituted by using a storage device such as a RAM. The first height image, the second height image, the post-synthesis height image, and the like output from the image pickup device 2 to the controller section 4 may be stored in the history storage section 47. The image stored in the history storage section 47 can be read out by operating the console section 6 or the mouse 7, and displayed on the display section 5.

(correction processing)

As described above, in the present embodiment, the first pattern image is generated by imaging the measurement object W in a state where the pattern light is projected from the first light projecting part 31, and the second pattern image is generated by imaging the measurement object W in a state where the pattern light is projected from the second light projecting part 32. An angle image having irradiation angle information is generated based on the first pattern image and the second pattern image. Regardless of the relative positional relationship between the image pickup device 2 and the illumination device 3, the height of the measurement object W can be measured based on the known relative position between the first LED31b and the second LED32b and the irradiation angle information.

In this case, since the relative position between the first LED31b and the second LED32b affects the height measurement result, it must be strictly defined. However, at the time of manufacture, variations in assembly and mounting position are inevitable, and it is difficult to strictly define the relative position between the first LED31b and the second LED32b. Therefore, in this example, the position of the first LED31b or the second LED32b in the lighting device 3 can be corrected.

Fig. 18 shows the overall flow of the correction process. Specific examples of the deviation will be explained with reference to figs. 19A to 19C. Fig. 19A shows a case where, although the "0" point should be located directly below the first LED31b, the mounting angle of the first LED31b deviates by θ0, so the "0" point is shifted in the negative direction (the left direction of the drawing) from directly below the first LED31b by a certain amount. Fig. 19B shows a case where the first LED31b deviates in the Z direction; in this example, the first LED31b deviates downward from its regular mounting position.

As shown in fig. 19C, these deviations are corrected so that the "0" point is located directly below the first LED31b and on the reference plane, with an irradiation angle range of 0° to 45°. Specifically, this operation can be performed according to the flow shown in fig. 18. First, the coordinates of absolute phase 0 are estimated, and the coordinates of absolute phase 1 are estimated. Further, after the position of the first LED31b is estimated, a correction coefficient is derived. The correction may be linear; for example, the equation φ′ = aφ + b may be used, where φ′ is the absolute phase after correction and φ is the absolute phase before correction. In this equation, a = tanθ1 + tanθ0 and b = −tanθ0.

Further, as shown in fig. 20, when there is a tilt angle, the intervals between the absolute phases widen as the absolute phase increases, whereas in the ideal state without a tilt angle the absolute phases are equally spaced. Comparing the length of the line segment 230 with a tilt angle to the length of the line segment 231 without one, the line segment 230 is longer, and the difference in length becomes an error. In other words, when a tilt angle exists, the value of the absolute phase φ shifts compared with the ideal state and is smaller than when there is no tilt angle.

To correct the tilt angle, the tilt is first measured directly and the conversion equation is applied. When the tilt angle cannot be measured directly, the absolute phase may be corrected twice. The conversion equation is φ′ = φ·cosθ/(1 − φ·sinθ), where φ is the observed absolute phase and φ′ is the absolute phase after conversion (the absolute phase when no tilt angle is present).
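
Transcribed directly into code, with the tilt angle theta in radians:

import numpy as np

def remove_tilt(phi, theta):
    # phi' = phi * cos(theta) / (1 - phi * sin(theta))
    return phi * np.cos(theta) / (1.0 - phi * np.sin(theta))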

Fig. 21 schematically shows a case where a height deviation occurs between the paired first LED31b and second LED32b. In this example, the second LED32b is located above the first LED31b by a dimension d. As a precondition for performing the height correction of the first LED31b and the second LED32b, the irradiation angle ranges of the first LED31b and the second LED32b must already have been normalized (adjusted) to 0° to 45°. The correction equation for the height is h = (I − d·φ1)/(φ0 + φ1).

That is, the distance I between the first LED31b and the second LED32b satisfies I = h·tanθ0 + (h + d)·tanθ1. Solving this relationship for h:

h·(tanθ0 + tanθ1) = I − d·tanθ1

h = (I − d·tanθ1)/(tanθ0 + tanθ1)

Here, since tanθ0 and tanθ1 are equal to the normalized absolute phases, h = (I − d·φ1)/(φ0 + φ1).
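
Expressed as code, with phi0 and phi1 the normalized absolute phases of the paired LEDs, led_distance the nominal spacing I, and d the height offset:

def corrected_height(phi0, phi1, led_distance, d):
    # h = (I - d * phi1) / (phi0 + phi1); with d = 0 this reduces to
    # the basic triangulation Z = I / (a1 + a2) given earlier.
    return (led_distance - d * phi1) / (phi0 + phi1)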

Fig. 22 is a diagram illustrating the flow of estimating the portions requiring correction and adjusting each portion. First, the lighting is assembled in step SB1; for example, the first LED31b and the second LED32b are attached to the illumination housing 30. Then, after θxy, θxz, and the residual error are estimated in the first estimation in step SB2, each θxy and each θxz are adjusted in step SB3. After the adjustment, the process proceeds to step SB4, and θ0, θ1, the illumination coordinates X, Y, Z, θxy, θyz, θxz, and the residual error are estimated in the second estimation. In step SB5, it is determined whether a residual error remains; if so, the process returns to step SB3 to perform each adjustment, and the second estimation is performed again. The adjustment and the second estimation are repeated until no residual error remains, at which point the process proceeds to step SB6, where the estimation result is acquired and stored. The first estimation and the second estimation may be the same process. The above-described steps may be performed by the error estimating unit 49a shown in fig. 2.

Fig. 23 is a diagram illustrating a specific method of estimating the deviation. The center coordinates of the first LED31b are (X, 0, Z). The phase P at a coordinate x is obtained as follows.

t0 = tanθ0

t1 = tanθ1

Base length: l = (Z − z)·(t0 + t1)

Distance from the end on the θ0 side to x: lx = (x − X) + (Z − z)·t0

P = lx/l = {(x − X) + (Z − z)·t0}/{(Z − z)·(t0 + t1)}

When the first LED31b is inclined, the illumination center position X can be given a Y dependency.

X(y) = X0 + a·y

P = {(x − (X0 + a·y)) + (Z − z)·t0}/{(Z − z)·(t0 + t1)}

  = {(x − (X0 + a·y))}/{(Z − z)·(t0 + t1)} + t0/(t0 + t1)

The unknown coefficients are summarized as follows.

t0: illumination width 0 of the first LED31b

t1: illumination width 1 of the first LED31b

X0: X coordinate of the illumination center

a: inclination θxy of the first LED31b (a = tanθxy)

Z: height of the first LED31b

P = {(x − (X0 + a·y))·Ti}/{(Z − z) + t0·Ti}

Here, the objective function of the least squares method is J = Σ(p − P)², where p is the measured phase.

Since P is not a linear combination of the unknown coefficients, an iterative method such as a gradient method is required; for example, the Levenberg-Marquardt method can be used.

After the respective coefficients are obtained, t0, t1, and θxy are corrected:

t0′ = t0/cosθxy

t1′ = t1/cosθxy
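
A sketch of this nonlinear fit using SciPy's Levenberg-Marquardt solver; the arrays x, y, z, p_meas (observed point coordinates and measured phases) and the initial guess are assumed inputs, and the model is the expression for P derived above.

import numpy as np
from scipy.optimize import least_squares

def residuals(params, x, y, z, p_meas):
    t0, t1, X0, a, Z = params
    P = ((x - (X0 + a * y)) + (Z - z) * t0) / ((Z - z) * (t0 + t1))
    return P - p_meas          # least_squares minimizes sum(residuals**2)

def estimate(x, y, z, p_meas, x0=(0.0, 1.0, 0.0, 0.0, 100.0)):
    fit = least_squares(residuals, x0, args=(x, y, z, p_meas), method="lm")
    t0, t1, X0, a, Z = fit.x
    theta_xy = np.arctan(a)                  # a = tan(theta_xy)
    # apply the corrections t0' = t0/cos(theta_xy), t1' = t1/cos(theta_xy)
    return t0 / np.cos(theta_xy), t1 / np.cos(theta_xy), X0, theta_xy, Z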

(Camera calibration)

Fig. 24 is a diagram of the concept of camera calibration. First, coefficients (camera parameters) for associating world coordinates with camera coordinates are estimated; "camera" here refers to the imaging device 2. In the present embodiment, the world coordinates are the illumination coordinates. In other words, the measurement pattern light can be projected onto the measurement object W from different directions through the two pairs formed by the first and second light projecting parts 31 and 32 and by the third and fourth light projecting parts 33 and 34. Therefore, the X, Y, and Z coordinates can be obtained without using a calibration plate.

The camera coordinates corresponding to the world coordinates of the measurement object W are known. The height of the measurement object W is changed, and imaging is performed a plurality of times. Sampling points (specific points) having high reliability are used, and the camera parameters are estimated by the conventionally known method of Tsai (1987). Although a flat plate is a prerequisite in Tsai's paper, in the present embodiment the heights of the first LED31b and the second LED32b are used to obtain the X, Y, and Z coordinates, so the target need not be a flat plate and may be substantially free-form. However, since the phase may deviate in places, RANSAC, a robust estimation method, is used in combination to eliminate error factors.

Fig. 25A is a diagram showing a case where X, Y and the Z coordinate of the sampling point SP are acquired with the measurement object W at the initial position (first height). As described above, the X, Y and Z coordinates of the sampling point SP may be obtained using the first pattern image set, the second pattern image set, the third pattern image set, the fourth pattern image set, the distance information of the first LED31b and the second LED32b, and the distance information of the third LED33b and the fourth LED34 b. Fig. 25B is a diagram showing a case where X, Y and the Z coordinate of the sampling point SP are acquired in a case where the measurement object W is at a position (second height) higher than the initial position. The X, Y, Z coordinates of the sampling point SP can be obtained in the same manner as in the case of fig. 25A.

Fig. 26 shows the mathematical formulas of the camera parameter matrix and the distortion model, which are used to obtain the parameters associating world coordinates with camera coordinates. Here, x and y are camera coordinates after correcting lens distortion, and are known. X, Y, and Z are world coordinates and are also known. The other parameters are estimated using X, Y, and Z. Here, s and a are the skew and aspect ratio, respectively, and may both generally be 1. tx and ty are the center coordinates of the image sensor serving as the image pickup element 22, and f is the vertical and horizontal focal length. R is the rotation matrix and T is the translation vector. In addition, k1, k2, p1, and p2 are distortion parameters.
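
The following sketch projects a world point with a pinhole model plus radial and tangential distortion in the common Brown-Conrady form, taking s = a = 1 as noted above; the exact arrangement of terms in fig. 26 may differ, so this is an assumed reading.

import numpy as np

def project(world_pt, R, T, f, tx, ty, k1, k2, p1, p2):
    # world -> camera coordinates
    Xc, Yc, Zc = R @ world_pt + T
    # normalized image coordinates
    xn, yn = Xc / Zc, Yc / Zc
    r2 = xn * xn + yn * yn
    radial = 1 + k1 * r2 + k2 * r2 * r2          # radial distortion
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
    # pixel coordinates around the sensor center (tx, ty)
    return f * xd + tx, f * yd + ty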

Fig. 27 is a flowchart showing the estimation process of each parameter. In step SC1, tx and ty are estimated, and in step SC2, R is estimated. Steps SC1 and SC2 are processes similar to 2D calibration, and tx, ty, and R are obtained relatively stably. However, a pixel with an incorrect height at the edge of the field of view or the like has a large influence, so robust estimation (the RANSAC method) is performed.

In step SC3, f and tz are roughly estimated, and in step SC4, f and tz are accurately estimated and k is estimated. Steps SC3 and SC4 are processes for 3D calibration and lens distortion estimation, which vary greatly and may not be stable with the least squares method alone; therefore, the RANSAC method is applied. Step SC5 is not essential; it fine-tunes all the estimated values.
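
A generic RANSAC skeleton of the kind applied in these steps; fit_model and reproj_error are placeholders for the per-step estimation (for example, of f and tz) and its reprojection error, and the sample size, iteration count, and inlier threshold are assumed tuning values.

import numpy as np

def ransac(points, fit_model, reproj_error, n_sample=6, n_iter=200, thresh=1.0):
    # points: numpy array of correspondences; the model with the most
    # inliers over all random minimal samples is kept.
    rng = np.random.default_rng(0)
    best, best_inliers = None, 0
    for _ in range(n_iter):
        sample = rng.choice(len(points), size=n_sample, replace=False)
        model = fit_model(points[sample])     # fit on a minimal sample
        errs = reproj_error(model, points)    # error on all points
        inliers = int(np.sum(errs < thresh))
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best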

The values of f and tz are affected by peripheral pixels (pixels far from the center of the image sensor), which are sensitive to the viewing angle. Therefore, when performing the calibration, pixels in the periphery of the image sensor should be used, and points not affected by the viewing angle can be omitted.

In order to improve the reliability of a sampling point SP, information on the X, Y, and Z coordinates of points around the sampling point SP may be taken into account. For example, an average over a neighborhood (e.g., 3 × 3) or a median of the X, Y, and Z coordinates of the sampling point SP and its surrounding points may be used as the X, Y, and Z coordinates of the sampling point SP. In this way, a highly reliable calibration object can be generated.
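
For instance, the per-pixel world coordinates can be stabilized with a small median filter before sampling; here xyz is an assumed (H, W, 3) array holding the X, Y, and Z values.

import numpy as np
from scipy.ndimage import median_filter

def smooth_coordinates(xyz, size=3):
    # Replace each coordinate by the median over its size x size
    # neighborhood to suppress outliers at the sampling points.
    return np.stack([median_filter(xyz[..., c], size=size)
                     for c in range(3)], axis=-1)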

With the first LED31b, the second LED32b, the third LED33b, and the fourth LED34b fixed and the phase P known, the world coordinates (wx, wy, wz) can be derived from the camera coordinates (xf, yf) and the phase P.

The estimation of the camera parameters described above may be performed by the controller section 4 shown in fig. 1 and the like. The camera parameters are estimated using a calibration object, which is generated by the calibration object generation section 48a shown in fig. 2. When generating the calibration object, the calibration object generation section 48a may use the X, Y, and Z coordinates of the sampling points SP on the surface of the measurement object W. That is, since the first pattern light generating portion 31d and the second pattern light generating portion 32d form a pair, and the third pattern light generating portion 33d and the fourth pattern light generating portion 34d form a pair, each pattern light can be sequentially projected onto the measurement object W from a plurality of different directions. As a result, blind spots in the pattern images are reduced, and the X, Y, and Z coordinates of the sampling points SP can be measured based on each pattern image. These measurement results become the calibration object.

The calibration execution unit 48b shown in fig. 2 executes camera calibration using the calibration object generated by the calibration object generation unit 48 a. In this way, the world coordinates are associated with the coordinates of the imaging device 2. Therefore, no calibration board is required to correlate the world coordinates with the coordinates of the imaging device 2.

(adjustment mechanism of light projection part)

Fig. 28 and 29 show a configuration example of the first light projecting part 31 including the adjustment mechanism. The illumination housing 30 of the illumination device 3 includes a base member 60, an LCD holder (first pattern light generating portion holder member) 61 to which the first LCD31d is fixed, and an LED holder (first light source holder member) 62 to which the first LED31b is fixed. The LCD holder 61 is attached to the base member 60 such that the position of the LCD holder 61 is adjustable, and the LED holder 62 is also attached to the base member 60 such that the position of the LED holder 62 is adjustable. The width direction in fig. 28 and 29 refers to the direction in which the first LEDs 31b are arranged.

That is, as shown in fig. 28, on the upper face of the base member 60, a boss portion 60a is provided so as to project upward, and a patch portion 60b is provided so as to project upward at a portion spaced from the boss portion 60a in the width direction. On the upper face of the base member 60, a first spacer 63a (indicated by downward-left hatching in fig. 28) is placed at a portion spaced from the boss portion 60a in the width direction. The LCD holder 61 is placed on the upper end surface of the boss portion 60a and the upper surface of the first spacer 63a. As shown in fig. 29, the LCD holder 61 is fastened and fixed to the base member 60 by a first screw 64, a second screw 65, and two third screws 66. The first screw 64 and the second screw 65 are screwed into the boss portion 60a of the base member 60, and the two third screws 66 pass through the first spacer 63a and are screwed into the base member 60. Therefore, by changing the thickness of the first spacer 63a, the inclination of the LCD holder 61, and hence the inclination of the LED holder 62, can be changed. In this way, θzx of the LED holder 62 can be adjusted, and θzx can be adjusted between the first light projecting part 31 and the second light projecting part 32.

An arc-shaped slit 61c into which the second screw 65 is inserted is formed in the LCD holder 61; it extends along a circular arc centered on the center line of the first screw 64. In addition, a slit 61d into which the third screws 66 are inserted is formed in the LCD holder 61. By forming the slits 61c and 61d, the LCD holder 61 can be rotated about the center line of the first screw 64 and fixed at an arbitrary rotational position. A second spacer 63b (indicated by downward-right hatching in fig. 28) is disposed between the patch portion 60b of the base member 60 and the side surface of the LCD holder 61. As a result, by changing the thickness of the second spacer 63b, the orientation of the LCD holder 61, and hence the orientation of the LED holder 62, can be changed. In this way, θxy of the LED holder 62 can be adjusted, and θxy can be adjusted between the first light projecting part 31 and the second light projecting part 32.

As shown in fig. 28, on the upper face of the LCD holder 61, a boss portion 61a is provided so as to project upward, and a patch portion 61b is provided so as to project upward at a portion spaced from the boss portion 61a in the width direction. On the upper surface of the LCD holder 61, a third spacer 63c is placed at a portion spaced from the boss portion 61a in the width direction. The LED holder 62 is placed on the upper end face of the boss portion 61a and the upper face of the third spacer 63c, and is fastened and fixed to the LCD holder 61 in the same manner as the LCD holder 61 is fixed to the base member 60. Therefore, by changing the thickness of the third spacer 63c, θzx of the LED holder 62 can be adjusted; in this case, θzx is adjusted within the first light projecting part 31.

Further, a fourth spacer 63d is disposed between the patch portion 61b of the LCD holder 61 and the side surface of the LED holder 62. Therefore, by changing the thickness of the fourth spacer 63d, θ xy of the LED holder 62 can be adjusted. In this case, θ xy may be adjusted in the first light projecting part 31. Each adjustment is made so that the error during the height measurement is minimized.

In the configuration example shown in fig. 30, one end side in the width direction of the substrate 31c on which the first LED31b is mounted is fixed to the housing 31a by a screw 70. On the other end side in the width direction of the substrate 31c, a plurality of elongated holes 71, long in the width direction of the substrate 31c, are provided at intervals. Screws 72 are screwed into portions of the housing 31a corresponding to the respective elongated holes 71. By selecting the position on the substrate 31c to be fastened with the screws 72, the angle of the substrate 31c, that is, the angle of the first LED31b, can be adjusted.

In the configuration example shown in fig. 30, the first LED31b and the first LCD31d are integrated via the housing 31a, so their mutual positional accuracy is easily improved. However, manufacturing errors are inevitable; when an error occurs, the substrate 31c can be rotated to minimize the error during the height measurement.

In addition, the above-described adjustment mechanism may be provided in the second light projecting part 32, the third light projecting part 33, and the fourth light projecting part 34.

(Changing the used section)

Each error can be estimated by the error estimating section 49a shown in fig. 2; in this case, the projection error of the pattern light projected by the first LCD31d can also be estimated. Fig. 31A shows a state in which the relative position between the first LED31b and the first LCD31d is deviated, and the light emitted from the first LED31b reaches beyond the first LCD31d. For this case, a first LCD31d larger than the irradiation range (0° to 45°) of the first LED31b is used, securing an effective section and a remaining section.

As shown in fig. 31B, the first LED31b is aligned with the designed field of view, and a projection pattern can be generated from the section located directly below the first LED31b. The used section of the first LCD31d is determined based on the estimation result of the error estimating section 49a so as to correct the projection error of the pattern light projected by the first LCD31d. This is performed by the used section determination section 49b shown in fig. 2.

The effective section in fig. 31B is the section that forms the pattern, and the remaining section does not contribute to pattern formation. The start and end positions of the effective section may be set arbitrarily, and are determined so that the projection error of the pattern light is minimized. Since changing the used section can be done in software, it is easier than adjustment by a physical adjustment mechanism. In the case of the phase shift method, it is only necessary to shift the phase while irradiating the entire region, so the start and end positions of the effective section need not coincide with the projection ends. In the case of the Gray code, since there is always an all-black pattern outside the irradiation range, the boundary between the all-black pattern and another spatial code can generally serve as the start and end positions.

The used section can also be changed in the third light projecting part 33 and the fourth light projecting part 34.

(relationship between Lighting construction error and correction method)

Most of the errors described above are caused by the configuration of the illumination. For example, θxy and θzx between the first light projecting part 31 and the second light projecting part 32, and θxy and θzx of each of the light projecting parts 31 to 34, can be corrected by the above-described adjustment mechanism. Further, the field of view (θ0, θ1) can be corrected by changing the used section. In addition, the field of view (θ0, θ1) and the tilt angle θyz can be corrected by the absolute phase correction. Further, the distance (I) between the LEDs of paired light projecting parts and the height difference (d) between them can be corrected by parameters at the time of height measurement.

(correction during operation)

When a deviation occurs in the height direction between the first LED31b and the second LED32b, or between the third LED33b and the fourth LED34b, due to the influence of the ambient temperature or the like, the deviation can be corrected during the operation of the image processing apparatus 1. For example, the height of the first LED31b is assumed to be correct, and the calibration is performed by the least squares method.

(period of operation of the image processing apparatus 1)

Next, the operation of the image processing apparatus 1 will be explained. Fig. 32 to 36 show a case where the measurement object W is a rectangular parallelepiped box and the measurement object W is measured by projecting pattern light from the first light projecting part 31 and the second light projecting part 32.

First, when a user places a measurement object W on the placement surface 100 and performs a measurement start operation or an inspection start operation, eight pattern lights for the phase shift method are sequentially generated from the first light projecting part 31 and the second light projecting part 32, respectively, and projected onto the measurement object W. The imaging device 2 captures an image at the timing of projecting each pattern light. The phase shift pattern image set shown in fig. 32 is an image obtained by imaging the pattern light projected onto the measurement object W from the first light projecting unit 31. When a phase image is generated based on the phase shift pattern image set shown in fig. 32, the phase image becomes a phase image as shown on the left side of fig. 33. When the intermediate image is generated from the phase image, the intermediate image becomes an image as shown on the right side of fig. 33. Note that the gray code pattern image is not shown.

On the other hand, the phase shift pattern image set shown in fig. 34 is an image obtained by imaging the pattern light projected onto the measurement object W from the second light projecting unit 32. When a phase image is generated based on the phase shift pattern image set shown in fig. 34, the phase image becomes a phase image as shown on the left side of fig. 35. When the intermediate image is generated from the phase image, the intermediate image becomes an image as shown on the right side of fig. 35.

When the intermediate image shown on the right side of fig. 33 and the intermediate image shown on the right side of fig. 35 are synthesized, a height image as shown on the left side of fig. 36 is generated. The sectional shape in the vertical direction of the height image can be displayed on the display section 5 through the user interface as shown on the right side of fig. 36.

When a measurement start operation or an inspection start operation is performed, eight pattern lights for the phase shift method are sequentially generated from the third light projecting part 33 and the fourth light projecting part 34, respectively, and are projected onto the measurement object W. The imaging device 2 captures an image at the timing of projecting each pattern light.

(effects of the embodiment)

As described above, according to the present embodiment, a plurality of first pattern images and a plurality of second pattern images can be generated by the first light projecting part 31 for projecting the first measurement pattern light, the second light projecting part 32 for projecting the second measurement pattern light, and the image pickup device 2. Based on the first pattern image and the second pattern image, a first angle image in which each pixel has irradiation angle information of the first measurement pattern light to the measurement object W and a second angle image in which each pixel has irradiation angle information of the second measurement pattern light to the measurement object W can be generated.

Since the relative position between the first light projecting part 31 and the second light projecting part 32 within the illumination housing 30 of the illumination device 3 is known, the height of the measurement object W in the direction of the central axis a of the illumination device 3 can be measured based on the relative position information, the irradiation angle information of each pixel of the first angle image, and the irradiation angle information of each pixel of the second angle image, regardless of the relative positional relationship between the imaging device 2 and the illumination device 3.

In summary, even though the illumination device 3 and the imaging device 2 are provided separately so that each can be mounted independently to increase the degree of freedom during installation, the absolute shape of the measurement object W can be measured without strictly adjusting the position of the imaging device 2 with respect to the illumination device 3. Therefore, the burden on the user during installation does not increase.

Further, since a liquid crystal panel is used in the illumination device 3 as the unit for generating the pattern light, neither a reflection optical system, as required when using a DMD, nor a driving system for moving a mask having a pattern is necessary. Therefore, the structure of the illumination device 3 is simplified and the illumination device 3 is miniaturized. In particular, in the case of a reflective optical system, the optical system including a lens is expensive and requires strict accuracy, and the pattern may be distorted by deformation of the lens, degrading accuracy. The present embodiment has the effect of avoiding such risks.

Further, the first LED31b and the second LED32b are disposed apart from each other in the circumferential direction of the opening portion 30a of the illumination housing 30, and the first LCD31d and the second LCD32d, to which diffused light emitted from the first LED31b and the second LED32b is incident, are disposed on the same plane perpendicular to the central axis a of the opening portion 30a of the illumination housing 30, respectively. Therefore, it is possible to miniaturize the illumination device 3 capable of projecting pattern light to the measurement object W from a plurality of directions and improve the degree of freedom in mounting the illumination device 3 while suppressing occurrence of luminance unevenness corresponding to the position of the pattern light due to the respective angular characteristics of the LCD31d and the LCD32 d.

In addition, since the relative position between the first LCD31d and the first LED31b is set so that the diffused light emitted from the first LED31b is incident on the first LCD31d within the effective angular range of the first LCD31d, it is possible to miniaturize the illumination device 3 while suppressing the occurrence of luminance unevenness corresponding to the position of the pattern light due to the angular characteristics of the first LCD31 d.

Further, in the case where the size of the light emitting face of the first LED31b is equal to or smaller than the size of the half period of the wave of the pattern light, the plurality of first LEDs 31b are arranged to be deviated from each other in the direction in which the illuminance of the pattern light changes. Therefore, the apparent size of the light emitting surface can be made longer than the size of the half period of the wave of the pattern light. In this way, it is possible to generate wavy pattern light as desired and prevent the occurrence of high-frequency ripples in the height image.

Further, the positional deviation of the first LED31b can be corrected by providing an adjustment mechanism in the first light projecting part 31 and making it possible to change the use section. In this way, when the height of the measurement object W is measured based on the irradiation angle information of the first LED31b and the second LED32b and the relative position between the two LEDs 31b and 32b, an accurate measurement result can be obtained.

In addition, since the calibration object can be generated based on the pattern images obtained by receiving the measurement pattern light, the world coordinates can be associated with the coordinates of the imaging device 2 without using a calibration board.

The above-described embodiments are merely examples in all respects and should not be construed as limiting. Further, all changes and modifications that come within the scope of the appended claims are to be embraced within their scope.

For example, a program configured to be able to implement the above-described processes and functions on a computer may be provided to implement the above-described processes and functions on a user's computer.

Further, the form of providing the program is not particularly limited. For example, there are a method of providing by using a network line such as the internet, a method of providing a recording medium storing a program, and the like. In any of these providing methods, the above-described processes and functions may be implemented by installing a program on a computer of a user.

Further, the devices for realizing the above-described processes and functions include general-purpose or special-purpose devices in which the above-described program is installed in an executable state in the form of software, firmware, or the like. A part of the above-described processes and functions may also be realized in a mixed form with hardware, such as a predetermined gate array (FPGA, ASIC), or as a hardware module that implements a part of the elements of the program software.

Further, the above-described processes and functions may be realized by a combination of steps (processes). In this case, the user executes the image processing method.

In the present embodiment, the case where the pattern light generating section is a liquid crystal panel has been described. However, this is not limited to this, and for example, the pattern light generating section may be a pattern light generating section using a DMD (digital micromirror device), or a pattern light generating section that moves a mask on which a pattern is physically formed. In addition, the light source is not limited to the light emitting diode.

As described above, the image processing apparatus according to the present invention can be used, for example, in the case of measuring the height of a measurement object or in the case of inspecting a measurement object.
