Method, apparatus, system, and program for setting lighting condition, and storage medium

Document No.: 1102547  Publication date: 2020-09-25

Note: This technology, "Method, apparatus, system, and program for setting lighting condition, and storage medium" (用于设定照明条件的方法、装置、系统及程序以及存储介质), was devised by 成瀬洋介 and 栗田真嗣 on 2018-03-06. Abstract: The present disclosure relates to a method, apparatus, system, and program for setting illumination conditions when inspecting an object, and a storage medium. The method comprises: illuminating the object by a light source capable of changing an illumination parameter, and capturing the object under such illumination parameters with an image sensor to obtain captured images, wherein the object has known tag data; and applying some or all of the captured images of the object and the corresponding tag data to learning of a machine learning model, and setting both the illumination condition and the inspection algorithm parameters of the machine learning model by simultaneously optimizing the illumination parameters and the inspection algorithm parameters, based on a comparison between the estimation results of the machine learning model and the tag data. The operation is thereby simplified.

1. A method for setting lighting conditions when inspecting an object, wherein an inspection module comprises a machine learning model for inspecting the object, the machine learning model being generated by using learning data comprising images, the method being characterized by comprising:

illuminating the object by a light source capable of changing an illumination parameter, the illumination parameter specifying the illumination condition under which the object is photographed, and photographing the object by an image sensor with a plurality of illumination parameters to obtain photographed images corresponding to the plurality of illumination parameters, wherein the object has known tag data; and

applying part or all of the photographed images of the object corresponding to the plurality of illumination parameters and the corresponding tag data to learning of the machine learning model, and setting both the illumination condition and the inspection algorithm parameter of the machine learning model by simultaneously optimizing both the illumination parameter and the inspection algorithm parameter based on a comparison result between an estimation result of the machine learning model and the tag data of the object.

2. A method for setting lighting conditions when inspecting an object, wherein an inspection module comprises a machine learning model for inspecting the object, the machine learning model being generated by using learning data comprising images, the method being characterized by comprising:

illuminating the object by a light source capable of changing an illumination parameter that specifies the illumination condition under which the object is captured, and capturing the object by an image sensor with a plurality of illumination parameters to obtain a plurality of captured images corresponding to the plurality of illumination parameters, wherein the object has known tag data; and

applying part or all of the photographed images corresponding to the plurality of lighting parameters to the machine learning model having carried out learning, and setting the lighting condition by optimizing only selection of predetermined lighting parameters based on a result of comparison between an estimation result of the machine learning model and the tag data of the object.

3. The method of claim 2, wherein applying some or all of the captured images corresponding to the plurality of lighting parameters to the machine learning model on which learning has been carried out comprises:

applying learning data comprising the captured image of the object and corresponding tag data to additional learning of the machine learning model to update some or all inspection algorithm parameters of the machine learning model, wherein the tag data represents inspection features of the object; and

optimizing the selection of both the illumination parameters and some or all of the inspection algorithm parameters of the machine learning model to reconcile the estimation of the machine learning model with the tag data.

4. The method of claim 2 or 3,

when the lighting condition is set, the number of captured images applied to the machine learning model that has carried out learning to find an optimum lighting condition is less than the number of captured images applied to the learning of the machine learning model.

5. The method of claim 1 or 2,

the lighting parameters comprise the light-emitting position and the light-emitting intensity of the light source.

6. The method of claim 1 or 2, wherein setting the lighting conditions under which the object is inspected using the inspection module comprises:

selecting the illumination parameter that minimizes a loss function representing the comparison result, wherein the illumination parameter is a variable of the loss function, wherein the selecting comprises: selecting the illumination parameter that minimizes the average of the losses of the loss function for a predetermined range of the illumination parameter.

7. An apparatus for setting lighting conditions when inspecting an object, wherein an inspection module includes a machine learning model for inspecting the object, the machine learning model being generated by using learning data including images, the apparatus characterized by comprising:

an acquisition unit that acquires captured images of the object, wherein the object is illuminated by a light source capable of changing an illumination parameter that specifies the illumination condition when the object is captured, and the object is captured by an image sensor with a plurality of illumination parameters to obtain the captured images corresponding to the plurality of illumination parameters, wherein the object has known tag data; and

a setting unit that applies part or all of the photographed images of the object corresponding to the plurality of illumination parameters and the corresponding tag data to learning of the machine learning model, and sets both the illumination conditions and the inspection algorithm parameters of the machine learning model by optimizing both the illumination parameters and the inspection algorithm parameters simultaneously based on a comparison result between an estimation result of the machine learning model and the tag data of the object.

8. An apparatus for setting lighting conditions when inspecting an object, wherein an inspection module includes a machine learning model for inspecting the object, the machine learning model being generated by using learning data including images, the apparatus characterized by comprising:

an acquisition unit that acquires captured images of the object, wherein the object is illuminated by a light source capable of changing an illumination parameter that specifies the illumination condition when the object is captured, and the object is captured by an image sensor with a plurality of illumination parameters to obtain the captured images corresponding to the plurality of illumination parameters, wherein the object has known tag data; and

a setting unit that applies part or all of the photographed images corresponding to the plurality of illumination parameters to the machine learning model on which learning has been carried out, and sets the illumination condition by optimizing only selection of a predetermined illumination parameter based on a result of comparison between an estimation result of the machine learning model and the tag data of the object.

9. The apparatus according to claim 8, wherein the setting unit:

applying learning data comprising the captured image of the object and corresponding tag data to additional learning of the machine learning model to update some or all inspection algorithm parameters of the machine learning model, wherein the tag data represents inspection features of the object; and

optimizing the selection of both the illumination parameters and some or all of the inspection algorithm parameters of the machine learning model to reconcile the estimation of the machine learning model with the tag data.

10. The apparatus of claim 7 or 8,

when the lighting condition is set, the number of captured images applied to the machine learning model that has carried out learning to find an optimum lighting condition is less than the number of captured images applied to the learning of the machine learning model.

11. The apparatus of claim 7 or 8,

the lighting parameters comprise the light-emitting position and the light-emitting intensity of the light source.

12. The apparatus according to claim 7 or 8, wherein the setting unit:

selecting the illumination parameter that minimizes a loss function representing the comparison result, wherein the illumination parameter is a variable of the loss function, wherein the selecting comprises: selecting the illumination parameter that minimizes the average of the losses of the loss function for a predetermined range of the illumination parameter.

13. A system for setting illumination conditions while inspecting an object, comprising: a processing unit performing the method of any one of claims 1 to 6.

14. A program, characterized in that it, when executed, performs the method of any one of claims 1 to 6.

15. A storage medium, characterized in that the storage medium has stored thereon a program which, when executed, performs the method of any one of claims 1 to 6.

Technical Field

The present disclosure relates to a method, apparatus, system, and program for setting lighting conditions during industrial inspection, and a storage medium.

Background

Visual inspection of products at production sites is one of the fields where robots have been least deployed, and automating it is an important technical problem that must be solved to reduce labor in the future. In recent years, with the development of artificial intelligence and machine learning techniques typified by deep learning, inspection automation technology has improved dramatically. However, in visual inspection, machine vision, and the like, the most cumbersome part of setting up an inspection system is the design of the imaging system, including the optimal design of the illumination. There is the following problem: when an operator carries out the illumination design manually, then in order to cope with individual differences between workpieces, the operator must swap in different target workpieces and alternately repeat manual illumination optimization and inspection algorithm adjustment until the intended detection performance is achieved, which is very time-consuming. Further, there is also the following problem: when the illumination is adjusted so that it is easy for the operator to observe, the optimum inspection accuracy is not always achieved.

Among conventional techniques addressing these problems, a method has been reported that calculates an evaluation criterion from a captured image and iteratively adjusts the imaging and illumination parameters to maximize or minimize it (patent document 1). However, that method can only optimize an evaluation value calculated from the captured image of the single workpiece currently being imaged; it cannot optimize a recognizer based on machine learning (e.g., learn the differences between a plurality of workpieces).

Further, there is also the following problem: the number of combinations of imaging and illumination parameters can be large, and changing the imaging and illumination conditions, performing the imaging, and performing the optimization together can take a relatively long time.

Further, the above-described problems exist not only during product appearance inspection in a production site, but also in other determination devices (such as a face recognition system) that can use a captured image of an illuminated object as an input to make various determinations by means of machine learning.

Disclosure of Invention

Technical problems to be solved

The present disclosure is directed to addressing at least some or all of the foregoing problems.

Means for solving the problems

The present disclosure describes a method for optimizing the parameters of a machine-learning-based inspection algorithm, on the premise that those parameters include the design parameters of the lighting. The user can therefore optimize the lighting and the inspection algorithm directly and jointly, in a manner that minimizes the loss value of the inspection algorithm (i.e., maximizes accuracy under the judgment conditions).

(1) According to an aspect of the present disclosure, a method for setting a lighting condition when inspecting an object is disclosed, wherein the object is inspected using an inspection module including a machine learning model, the machine learning model being generated by using learning data including an image, and the method is characterized by comprising: illuminating the object by a light source capable of changing an illumination parameter that specifies the illumination condition under which the object is captured, and capturing the object by an image sensor with a plurality of such illumination parameters to obtain a plurality of captured images corresponding to the plurality of illumination parameters, wherein the object has known tag data; and applying part or all of the plurality of photographed images of the object corresponding to the plurality of illumination parameters and the corresponding tag data to learning of the machine learning model, and setting both the illumination condition and the inspection algorithm parameter of the machine learning model by simultaneously optimizing both the illumination parameter and the inspection algorithm parameter based on a comparison result between an estimation result of the machine learning model and the tag data of the object.

Thus, setting the illumination conditions employed when inspecting the object using the inspection module based on the comparison result between the estimation result of the machine learning model and the tag data of the object can simultaneously carry out optimization of the illumination parameters and the inspection algorithm parameters of the machine learning model, and operation can be simplified.

(2) According to another aspect of the present disclosure, a method for setting a lighting condition when inspecting an object is disclosed, wherein the object is inspected using an inspection module including a machine learning model, the machine learning model is generated by using learning data including an image, and the method is characterized by comprising: illuminating the object by a light source capable of changing an illumination parameter that specifies the illumination condition under which the object is captured, and capturing the object by an image sensor with a plurality of such illumination parameters to obtain a plurality of captured images corresponding to the plurality of illumination parameters, wherein the object has known tag data; and applying part or all of the plurality of obtained images corresponding to the plurality of illumination parameters to the machine learning model on which learning has been carried out, and setting the illumination condition by optimizing only selection of a predetermined illumination parameter based on a result of comparison between an estimation result of the machine learning model and the tag data of the object.

In the foregoing method, in a manner of determining the inspection algorithm parameters first and then determining the inspection lighting parameters, the system computation amount during the machine learning model learning can be reduced, the system load can be reduced, and the setting operation of the lighting parameters can be simplified.

(3) In the foregoing method, the operation of applying part or all of the plurality of captured images corresponding to the plurality of lighting parameters to the machine learning model on which learning has been carried out includes: applying learning data comprising the captured image of the object and corresponding tag data to additional learning of the machine learning model to update some or all inspection algorithm parameters of the machine learning model, wherein the tag data represents inspection features of the object; and optimizing selection of both the illumination parameters and some or all of the inspection algorithm parameters of the machine learning model to reconcile the estimation of the machine learning model with the tag data.

In this way, some or all of the inspection algorithm parameters and the inspection illumination parameters may be optimized simultaneously in the second step of the method, so that the machine learning model may obtain better learning results.

(4) In the foregoing method, when the lighting condition is set, the number of the captured images applied to the machine learning model that has carried out learning to find an optimum lighting condition is smaller than the number of the captured images applied to learning of the machine learning model.

Therefore, the learning time can be shortened.

(5) In the foregoing method, the illumination parameters include a light emitting position and a light emitting intensity of the light source.

Therefore, both the light emitting position and the light emitting intensity of the light source can be changed to change the illumination parameter.

(6) In the foregoing method, the operation of setting the lighting condition employed when inspecting the subject using the inspection module includes: selecting an illumination parameter that minimizes a loss function representing the comparison result, wherein the illumination parameter is a variable of the loss function, wherein the selecting comprises: selecting the illumination parameter that minimizes the average of the losses of the loss function for a predetermined range of the illumination parameter.

Here, a predetermined range of illumination parameters is taken into account, so that robustness against environmental changes when inspecting an object can be improved.

(7) According to another aspect of the present disclosure, various apparatuses corresponding to each of the aforementioned methods and for setting lighting conditions when inspecting an object, which can achieve the same effects as each of the aforementioned methods, are also disclosed.

(8) According to another aspect of the present disclosure, a system for setting lighting conditions while inspecting an object is also disclosed, which may include a processing unit and may be configured to perform any of the foregoing methods.

(9) According to another aspect of the present disclosure, a program is also disclosed that is executed to perform any of the foregoing methods.

(10) According to another aspect of the present disclosure, there is also disclosed a storage medium having a program stored therein, the program being executed to perform any of the foregoing methods.

The system, the program, and the storage medium can also achieve the same effects as each of the aforementioned methods.

Technical effects

Essentially, two effects of the present disclosure can be listed. First, design time can be shortened, a reproducible system design can be obtained, and reliance on individual skills (performance that depends on a particular employee's expertise) is avoided. Second, from a performance perspective, the entire capture system and image processing system can be optimized directly and solely from the viewpoint of inspection accuracy (accepted/rejected product judgment or measurement): e.g., an illumination design well suited to the inspection algorithm, together with the inspection algorithm best suited to that illumination.

Drawings

The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of this application. The exemplary embodiments of the present disclosure and their description are intended to be illustrative of the disclosure and should not be construed as unduly limiting the disclosure. In the drawings:

fig. 1 is a schematic diagram of a system composition example of an inspection system according to an implementation mode of the present disclosure.

Fig. 2 is a schematic diagram of a hardware composition of a defect inspection apparatus according to an implementation mode of the present disclosure.

Fig. 3 is a schematic diagram of functional modules of a defect inspection apparatus according to an implementation mode of the present disclosure.

Fig. 4 (a) and (b) are flowcharts of a learning phase and an inspection phase, respectively, of a machine learning model in a method for setting lighting conditions according to an implementation mode of the present disclosure.

Fig. 5 is a flow chart of a method for setting lighting conditions according to an implementation mode of the present disclosure.

Fig. 6 is a schematic flow chart of the method for setting the lighting conditions shown in fig. 4 and 5.

Fig. 7 (a) and (b) are flowcharts of a learning phase and an inspection phase, respectively, of a machine learning model in a method for setting lighting conditions according to another implementation mode of the present disclosure.

Fig. 8 (a) and (b) are flowcharts of a learning phase and an inspection phase, respectively, of the machine learning model in one modification of the method for setting lighting conditions shown in fig. 7.

Fig. 9 is a flowchart of a method for setting lighting conditions according to another mode of implementation of the present disclosure.

Fig. 10 is a flowchart of a method for setting lighting conditions according to another mode of implementation of the present disclosure.

Detailed Description

In order that those skilled in the art will better understand the present disclosure, the mode of implementation of the present disclosure is described below clearly and completely in conjunction with the accompanying drawings of the present disclosure. Obviously, the described implementation modes are only a part of the implementation modes of the present disclosure, and not all implementation modes. All other modes of realization that can be obtained by a person skilled in the art without inventive effort based on the modes of realization in the present disclosure shall fall within the scope of protection of the present disclosure.

In the present disclosure, for example, a subject may be photographed in advance under all available imaging conditions, and the appropriate images may then be selected according to the desired setting conditions. This has the advantage that no restriction is imposed as long as the imaging conditions are discretized. Besides the illumination pattern, the aperture, shutter speed, and the like may also be varied, and continuous imaging conditions may be discretized. Further, when the number of workpieces is M and the number of imaging conditions is N, M × N images are obtained, which may be stored in a memory, on a server, or in the cloud.

In the present disclosure, an object is illuminated by a plurality of light sources having variable illumination parameters. The illumination parameters may include, for example, the light emission position, the light emission intensity, and the chromaticity of the light source. Under these lighting conditions, the object is photographed by an image sensor (e.g., a camera) to obtain captured images. The machine learning model is trained using the captured images of the object, the corresponding illumination parameters, and the label data, giving the machine learning model the ability to inspect the object. The captured images may be associated with their illumination parameters in advance, so that the illumination parameters and the inspection algorithm parameters can be adjusted simultaneously while training the machine learning model. Here, "inspection algorithm parameters" refers to the parameters of the inspection algorithm used when the object is inspected by the machine learning model. Compared with adjusting the illumination parameters separately and re-capturing images for learning under each adjustment, the approach of the present disclosure simplifies the operation and reduces the system load.
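As a concrete illustration of this data layout, the following sketch shows one way the M × N captured images $u_{i,j}$ and their tag data might be organized; `capture_image`, the parameter encoding, and all sizes are hypothetical stand-ins for the camera and light source, not part of the disclosure.

```python
import numpy as np

# Hypothetical sketch of the M x N capture grid: M workpieces, each
# photographed under N candidate illumination parameters.
M = 5                                 # workpieces
H, W = 64, 64                         # assumed image size

# Each candidate condition: (light position index, emission intensity).
candidate_params = [(pos, inten) for pos in range(3) for inten in (0.5, 1.0)]
N = len(candidate_params)             # imaging conditions

def capture_image(workpiece_id, illum_param):
    """Stand-in for the light source + image sensor; returns a dummy image."""
    rng = np.random.default_rng(abs(hash((workpiece_id, illum_param))) % 2**32)
    return rng.random((H, W), dtype=np.float32)

# images[i, j] is u_{i,j}: workpiece i photographed under condition j.
images = np.stack([
    np.stack([capture_image(i, p) for p in candidate_params])
    for i in range(M)
])
labels = np.zeros(M, dtype=np.int64)  # tag data v_i (e.g. 0 = accepted)
print(images.shape)                   # (M, N, H, W) -> (5, 6, 64, 64)
```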

Modes of implementation of the present disclosure are described in detail with reference to the accompanying drawings. It is important to note that identical or corresponding parts in the drawings are labeled with identical labels, and the description will not be repeated.

First, a system composition example of the inspection system 1 according to an implementation mode of the present disclosure is described. The inspection system 1 according to the implementation mode inspects an object to be inspected based on a captured image generated by capturing the object to be inspected. The inspected object may be a workpiece on a production line. The inspection may be, for example, visual inspection or appearance inspection of the workpiece.

Fig. 1 is a schematic diagram of a system composition example of the inspection system 1 according to an implementation mode. Referring to fig. 1, the inspection system 1 performs image analysis processing on an input image obtained by imaging an object to be inspected (i.e., a workpiece 4) conveyed on, for example, a belt conveyor 2, thereby completing appearance inspection or appearance measurement of the workpiece 4. In the description below, checking whether there is a defect in the surface of the workpiece 4 is described as a typical application example of the image analysis processing. However, this is not limiting; the inspection system may also be applied to identifying defect types, measuring appearance shapes, or the like.

An upper portion of the belt conveyor 2 is provided with a camera 102 serving as an imaging portion, and the imaging view 6 of the camera 102 covers a predetermined area including the belt conveyor 2. Image data generated by the camera 102 (hereinafter referred to as an "input image") is transmitted to the defect inspection apparatus 100. The camera 102 captures images periodically or in response to an event.

The defect inspection apparatus 100 may be provided with a learner, and the learner may be provided with a Convolutional Neural Network (CNN) engine. By the CNN engine, a feature detection image of each level is generated from an input image. Whether a defect exists in the target workpiece is judged based on the generated one or more feature detection images. Alternatively, the size, location, etc. of the defect may be detected.

The defect inspection apparatus 100 is connected to a Programmable Logic Controller (PLC) 10, a database apparatus 12, and the like via an upper network 8. The detection results in the defect inspection apparatus 100 may also be transmitted to the PLC 10 and/or the database apparatus 12. It is important to note that any device other than the PLC 10 and database device 12 can also be connected to the upper network 8.

A display 104 for displaying a processing state, a detection result, and the like, and a keyboard 106 and a mouse 108 serving as input portions for receiving user operations may be further connected to the defect inspection apparatus 100.

Next, the hardware composition of the defect inspection apparatus 100 included in the inspection system 1 according to the implementation mode of the present disclosure is described.

Fig. 2 is a schematic diagram of the hardware composition of the defect inspection apparatus 100 according to the implementation mode. The defect inspection apparatus 100 may be an example of the "system for setting illumination conditions when inspecting an object" in the present disclosure. Referring to fig. 2, as an example, the defect inspection apparatus 100 may be implemented according to a general-purpose computer configured by a general-purpose computer rack. The defect inspection apparatus 100 includes a processor 110, a main memory 112, a camera interface 114, an input interface 116, a display interface 118, a communication interface 120, and a memory 130. Typically, these components are connected by an internal bus 122 to communicate with each other.

The processor 110 loads programs stored in the memory 130 into the main memory 112 and executes them, thereby realizing the functions and processes described below. The main memory 112 is formed of volatile memory and serves as the work memory required for program execution by the processor 110.

The camera interface 114 is connected to the camera 102, and acquires an input image obtained by imaging by the camera 102. The camera interface 114 may further indicate imaging timing and the like to the camera 102.

The input interface 116 is connected with input portions such as the keyboard 106 and the mouse 108, and acquires an instruction represented by an operation of the input portions or the like by the user.

The display interface 118 is connected to the display 104, and outputs various processing results generated by program execution of the processor 110 to the display 104.

The communication interface 120 is responsible for handling communications with the PLC 10, the database device 12, and the like through the upper network 8.

Computer-executable programs, such as an image processing program 132 that implements the functions of the defect inspection apparatus 100 and an Operating System (OS) 134, are stored in the memory 130. Learner parameters 136 configured for the image detection process mentioned below, input images acquired from the camera 102 (i.e., captured images 138), and illumination parameters 140 of the light source used when the workpiece 4 is photographed may also be stored in the memory 130. The learner parameters 136 may include, for example, various parameters applied in the learning stage and the inspection stage of the machine learning model, such as illumination parameters and inspection algorithm parameters.

The image processing program 132 stored in the memory 130 may be installed in the defect inspection apparatus 100 via an optical recording medium such as a Digital Versatile Disc (DVD) or a semiconductor recording medium such as a Universal Serial Bus (USB) memory. Alternatively, the image processing program 132 may also be downloaded from a server apparatus on a network or the like.

When implemented on such a general-purpose computer, processing may be carried out by calling the necessary software modules provided by the operating system 134 in a predetermined order and/or at predetermined timings, thereby implementing part of the functions according to the implementation mode. That is, the image processing program 132 according to the implementation mode does not necessarily include all the software modules for implementing those functions, but may provide the necessary functions by cooperating with the operating system.

The image processing program 132 according to the implementation mode may also be provided by being combined in a part of another program. In this condition, the image processing program 132 does not include a module included in the other programs to be combined, but cooperates with the other programs to perform processing. Therefore, the image processing program 132 according to the implementation mode may also be combined in other programs.

Fig. 2 shows an example in which the defect inspection apparatus 100 is implemented by means of a general-purpose computer. However, this is not limiting; some or all of its functions may also be implemented by a dedicated circuit (e.g., an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA)). In addition, an external device connected to the network may be responsible for part of the processing.

Fig. 3 is a schematic diagram of functional modules of the defect inspection apparatus 100 according to an implementation mode of the present disclosure. As shown in fig. 3, the defect inspection apparatus 100 may include a photographing part 141, a setting part 142, an inspecting part 143, and a storage part 144.

The photographing part 141, the setting part 142, and the inspecting part 143 of the defect inspection apparatus 100 may be implemented by means of one or more general-purpose processors. However, this is not limiting; some or all of the functionality may also be implemented in a dedicated circuit, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA). In addition, an external device connected to the network may be responsible for part of the processing of these parts.

Here, the photographing part 141 is a specific example of the "acquiring section" in the present disclosure. As another example, the defect inspection apparatus 100 may receive captured images of the workpiece 4 from the outside, without including the photographing part 141. The setting part 142 is a specific example of the "setting portion" in the present disclosure. The combination of the photographing part 141 and the setting part 142 is a specific example of the "device for setting an illumination condition at the time of inspecting an object" in the present disclosure.

Further, the inspection portion 143 is a specific example of "inspection module" in the present disclosure. Learner 1431 is a specific example of an implementation mode of a "machine learning model" in this disclosure. The inspection portion 143 outputs the final inspection result with respect to the workpiece 4. For example, under the condition that the learner 1431 is a CNN configured to generate features extracted from an image, the inspection portion 143 may further include, for example, a determination device that applies a determination reference to the features extracted by the learner 1431 to generate a final inspection result.

The setting section 142 applies a part or all of the plurality of captured images of the workpiece 4 corresponding to the plurality of illumination parameters and the corresponding tag data to learning by the learner 1431, and sets the illumination parameters used when the inspection section 143 inspects the workpiece 4 by means of the learner 1431 which has carried out the learning, based on a result of comparison between an estimation result of the learner 1431 and the tag data of the workpiece. The setting method will be described in detail below.

The "tag data" herein is configured to represent an inspection feature of the workpiece 4. For example, the tag data may be data indicating whether the workpiece 4 is an accepted product or a rejected product, and may also be data indicating appearance characteristics (e.g., scratches and size) of the workpiece 4. The content of the tag data is not particularly limited as long as it represents the intended inspection feature of the workpiece 4.

The inspection section 143 inspects the workpiece 4 on the belt conveyor 2. The inspection section 143 can include a learner 1431 to inspect the workpiece 4 through a trained machine learning model.

The photographing part 141 photographs the workpiece 4 by means of an image sensor. The image sensor may be, for example, a camera; there may be one or more of them, and their shooting parameters (such as aperture size and shutter speed) are variable.

The storage section 144 is configured to store programs or data necessary for the operation of the defect inspection apparatus 100. The defect inspection apparatus 100 may not include the storage section 144.

A method for setting lighting conditions according to an implementation mode of the present disclosure will be summarized below with reference to fig. 4. Fig. 4 (a) and (b) are flowcharts of a learning phase and an inspection phase, respectively, of a machine learning model in a method for setting lighting conditions according to an implementation mode of the present disclosure.

As shown in (a) of fig. 4, in step S410, the photographing part 141 photographs the workpiece. The workpiece may be photographed multiple times under illumination with different illumination parameters to obtain multiple captured images of the workpiece. Each captured image corresponds to a set of illumination parameters, and each set of illumination parameters may include, for example, which light sources are turned on and the brightness of each light source that is turned on.

For example, assuming that the index identifying a workpiece is $i$ ($1 \le i \le M$), the $N$ candidate sets of imaging conditions (illumination parameters, camera parameters) are represented as

$$\theta_L^{(j)}, \quad 1 \le j \le N$$

Thus, all workpieces are photographed under all imaging conditions and their images are stored in the storage section 144. The stored $M \times N$ images are represented as $u_{i,j}$ ($1 \le i \le M$, $1 \le j \le N$).

Here, the illumination parameters of each light source may be changed. For example, the light emission position and/or the light emission intensity of each light source may be varied so that different illumination may be provided to the workpiece 4. Further, the workpiece 4 has corresponding tag data, and the tag data is configured to represent an inspection feature of the workpiece 4. For example, the tag data may be data indicating whether the workpiece 4 is an accepted product or a rejected product, and may also be data indicating appearance characteristics (e.g., scratches and size) of the workpiece 4. The content of the tag data is not particularly limited as long as it represents the intended inspection feature of the workpiece 4.

In step S412, the learner 1431 carries out learning by means of these captured images and the corresponding tag data. Since each captured image has an associated illumination parameter, through learning, the evaluation function can be optimized to obtain the optimum illumination parameter and inspection algorithm parameter, i.e., the optimum illumination parameter and inspection algorithm parameter are selected to optimize the accuracy of the inspection result output by the inspection section 143. Here, the "illumination parameter" refers to an illumination parameter used when the inspection portion 143 inspects the workpiece, and the "inspection algorithm parameter" refers to a parameter of an inspection algorithm used when the inspection portion 143 inspects the workpiece.

In the present disclosure, the inspection algorithm is implemented by the learner 1431. The training evaluation criterion of the learner 1431 is generally referred to as a loss value; for an accepted/rejected product judgment problem, the correct rate of pass/fail judgment is expressed by cross entropy or the like. If the inspection content is a regression problem, such as measuring the length of the workpiece 4, the error distribution is modeled by a multidimensional normal distribution and its log-likelihood function is used for the loss value.
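To make these two loss choices concrete, here is a minimal sketch assuming a class-probability output for the pass/fail case and a scalar normal error model for the regression case; the function names are illustrative, and the regression loss is written as a negative log-likelihood so that smaller values are better.

```python
import numpy as np

def cross_entropy(class_probs, label):
    """Pass/fail judgment: negative log-probability of the correct class."""
    return -np.log(class_probs[label] + 1e-12)

def gaussian_nll(y_pred, y_true, sigma=1.0):
    """Regression (e.g. measured length of the workpiece): negative
    log-likelihood under a normal error model, used as the loss value."""
    return 0.5 * np.log(2.0 * np.pi * sigma**2) + (y_pred - y_true)**2 / (2.0 * sigma**2)

print(cross_entropy(np.array([0.9, 0.1]), label=0))  # correct estimate -> small loss
print(gaussian_nll(10.2, 10.0))                      # small measurement error
```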

Under general conditions, during learning of a machine-learning-based inspection algorithm, teacher data (pass/fail for a judgment problem, correct values for a regression problem) and learning sample images are input in advance, and parameter optimization is carried out so as to minimize the loss value, which measures whether these learning sample images are correctly estimated.

As shown in (b) of fig. 4, in step S420, illumination is carried out with the optimized illumination parameters obtained in step S412, and the photographing part 141 photographs the workpiece 4 to obtain a photographed image. In step S422, the learner 1431 inspects the workpiece 4 for defects by means of these images.
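A minimal sketch of this operational flow (steps S420 and S422), reusing the hypothetical `capture_image` and `candidate_params` from the capture sketch above; `model.predict` is an assumed interface, not an API defined by the disclosure.

```python
def inspect_workpiece(workpiece_id, j_star, model, capture_image, candidate_params):
    """Inspection phase: illuminate with the optimized parameters (step S420)
    and let the trained learner judge the captured image (step S422)."""
    image = capture_image(workpiece_id, candidate_params[j_star])
    return model.predict(image)  # e.g. accepted / rejected, or defect features
```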

In the method for setting the lighting condition shown in fig. 4, the inspection lighting parameters and the inspection algorithm parameters are adjusted at the same time; that is, the lighting parameters are in effect added to the algorithm parameters to be adjusted. This reduces labor time, lowers the dependence on workers, reduces the computational load of the whole system, and improves inspection efficiency.

Fig. 5 is a flowchart elaborating step S412 of the method for setting the lighting conditions shown in (a) of fig. 4. Fig. 6 is a schematic flow chart of the method for setting the lighting conditions shown in fig. 4 and 5.

Referring to fig. 5, in step S50, learning data including the captured image of the workpiece 4 and corresponding tag data representing the inspection feature of the workpiece 4 is applied to learning by the learner 1431.

In step S52, the parameters of the checking algorithm of the learner 1431 are optimized so that the estimation result of the learner 1431 coincides with the tag data.

During appearance inspection, the loss function of a machine-learning-based inspection algorithm is typically written as $L(u, v \mid \theta_D)$, where $\theta_D$ is the vector of inspection algorithm parameters (for deep learning, for example, it includes all the connection weights), $u$ is a captured image, and $v$ is the label of the image. In general, when the data set for learning is represented as $\{(u_i, v_i)\}_{1 \le i \le M}$, the optimum learning parameter is calculated by formula (1):

$$\theta_D^{*} = \operatorname*{arg\,min}_{\theta_D} \sum_{i=1}^{M} L(u_i, v_i \mid \theta_D) \tag{1}$$

Then, using the $M \times N$ images $u_{i,j}$, the optimal imaging condition and the optimal inspection algorithm parameters are calculated jointly by formula (2):

$$(j^{*}, \theta_D^{*}) = \operatorname*{arg\,min}_{j,\, \theta_D} \sum_{i=1}^{M} L(u_{i,j}, v_i \mid \theta_D) \tag{2}$$

Using these results, the optimal imaging parameter is determined to be $\theta_L^{(j^{*})}$, the imaging condition corresponding to index $j^{*}$. In the operational stage, for example in step S422, the workpiece may be inspected using the calculated optimal imaging parameters.
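As a hedged illustration of formula (2), the exhaustive variant below trains the model once per candidate condition index $j$ and keeps the pair with the lowest summed loss; `train_model` and `total_loss` are placeholders for the learner 1431 and the loss $L$. The disclosure optimizes both simultaneously during learning, so this brute-force loop is only one possible realization.

```python
def optimize_jointly(images, labels, train_model, total_loss):
    """Brute-force reading of formula (2): images has shape (M, N, ...);
    returns (j*, theta*) minimizing the loss summed over all workpieces."""
    n_conditions = images.shape[1]
    best = None
    for j in range(n_conditions):
        column = images[:, j]                # all workpieces under condition j
        theta = train_model(column, labels)  # inner argmin over theta_D
        loss = total_loss(theta, column, labels)
        if best is None or loss < best[0]:
            best = (loss, j, theta)
    _, j_star, theta_star = best
    return j_star, theta_star
```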

In the methods for setting lighting parameters according to the implementation modes of the present disclosure described with reference to fig. 4 and fig. 5, the inspection lighting parameters and the inspection algorithm parameters are optimized simultaneously. Labor time can thus be reduced, and the evaluation criterion can be optimized directly, solely from the viewpoint of inspection accuracy (accepted/rejected product judgment or measured values). Furthermore, the photographing system and the image processing system can be optimized directly as a whole: e.g., an illumination design well suited to the inspection algorithm, together with the inspection algorithm best suited to that illumination.

Further, this method for setting the illumination condition imposes no restriction as long as the imaging conditions are discretized. Moreover, it does not require a method of estimating an image from illumination parameters, such as an "illumination simulator".

In the present disclosure, each workpiece is photographed in advance under all of the discretized imaging conditions, and thus the number of photographed images is large. Further, when the imaging conditions consist of many independent factors (illumination parameters, shutter speed, aperture, etc.), the number of combinations, and hence of required shots, increases further. Therefore, as a modification of the present disclosure, several approximate imaging conditions may be merged into one and replaced with a representative condition. In particular, this can be achieved by vector quantization compression. The candidate imaging conditions can be determined by appropriately defining a function measuring the closeness between imaging conditions and using an arbitrary clustering technique such as K-Means.
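A sketch of this modification, assuming each imaging condition is encoded as a numeric vector and using scikit-learn's K-Means; the encoding, the cluster count, and the nearest-real-condition selection rule are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical encoding: one row per raw imaging condition, e.g.
# (light x-position, light y-position, intensity, shutter speed).
raw_conditions = np.random.default_rng(0).random((200, 4))

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(raw_conditions)

# Replace each cluster by the real condition nearest to its center, shrinking
# 200 raw conditions to 16 representative candidates for capture.
representatives = np.stack([
    raw_conditions[np.argmin(np.linalg.norm(raw_conditions - c, axis=1))]
    for c in kmeans.cluster_centers_
])
print(representatives.shape)  # (16, 4)
```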

As shown in fig. 6, for example, in the case where the learner 1431 carries out deep learning using a neural network, the output of the neural network is made to coincide with the tag data, and thus the inspection algorithm parameters and the lighting parameters are optimized at the same time, and thereby a neural network which has carried out learning and has the best lighting parameters can be obtained. The learner 1431, having performed learning, may select the illumination parameters to increase the accuracy of the test results for the workpiece 4.

Fig. 7 (a) and (b) are flowcharts of a learning phase and an inspection phase, respectively, of a machine learning model in a method for setting lighting conditions according to another implementation mode of the present disclosure. The principle of the method is first described below.

Under general conditions, learning a machine-learning-based inspection algorithm requires a large number of training images. This problem is particularly evident in methods with a large number of parameters, such as deep learning, where M × N images would need to be obtained; when M is very large, a great number of images must be taken.

As a solution to this problem, for an inspection algorithm that requires a large number of training images, the inspection algorithm parameters may be calculated in advance from training images obtained under a fixed imaging condition. In other words, instead of varying $j$ and $\theta_D$ simultaneously to compute the optimal solution, $\theta_D$ is determined first, and the imaging condition is then determined by varying the imaging-condition index $j$ over a smaller number of workpieces. Here, the "imaging condition" includes not only the illumination condition but also the photographing condition and the like. In general, this approach may be considered reasonable because the number of lighting parameters is comparatively small.

Specifically, two data sets are prepared: $D_1 = \{(u_i, v_i)\}_{1 \le i \le M_1}$, captured under a fixed imaging condition, and $D_2 = \{(u_{i,j}, v_i)\}_{1 \le i \le M_2,\ 1 \le j \le N}$. For the two-stage optimization, the optimal parameters are calculated by formulas (3) and (4):

$$\theta_D^{*} = \operatorname*{arg\,min}_{\theta_D} \sum_{i=1}^{M_1} L(u_i, v_i \mid \theta_D) \tag{3}$$

and

$$j^{*} = \operatorname*{arg\,min}_{j} \sum_{i=1}^{M_2} L(u_{i,j}, v_i \mid \theta_D^{*}) \tag{4}$$

In the foregoing method, with $M_1 > M_2$, the number of captured images required for learning can be reduced from $M \times N$ to $M_1 + M_2 \times N$. In the foregoing method, existing fixed inspection algorithm parameters can also be used, in which case only the lighting parameters are calculated.

The flow of the method is described in detail here with reference to fig. 7. As shown in fig. 7 (a), in step S710, $M_1$ workpieces are photographed with a fixed illumination parameter to obtain $M_1$ captured images. In step S712, the learner 1431 carries out learning with the $M_1$ captured images to optimize the inspection algorithm parameters. In step S714, $M_2$ of the $M_1$ workpieces are photographed under the $N$ sets of illumination parameters to obtain $M_2 \times N$ captured images.

In step S716, the $M_2 \times N$ captured images are applied to the learner 1431 that has carried out learning, and the illumination condition employed when the inspection section 143 inspects the workpiece 4 is set based on the result of comparison between the estimation result of the learner 1431 and the tag data. As an example of the comparison, the workpiece images under the $N$ illumination parameters are input to the learner 1431 as test images, and the illumination parameter that maximizes the accuracy of the learner 1431's estimation results is selected as the illumination parameter used when inspecting the workpiece 4.

After the learning stage shown in (a) of fig. 7, as shown in (b) of fig. 7, in step S720, the workpiece is photographed under the inspection illumination parameters to obtain a photographed image. In step S722, the inspection portion 143 analyzes the captured image to obtain a detection result about the workpiece.

According to the inspection method described with reference to fig. 7, the amount of system computation during machine learning model learning can be reduced, the system load can be reduced, and the setting operation of the lighting parameters can be simplified.

Further, optionally, in the foregoing method, the number of captured images required for learning can also be reduced, thereby reducing labor time and simplifying the parameter optimization procedure.

There may be various modifications to the method for setting the lighting conditions described with reference to fig. 7, and the modifications will be described in detail below with reference to fig. 8. Fig. 8 (a) and (b) are flowcharts of a learning phase and an inspection phase, respectively, of the machine learning model in one modification of the method for setting lighting conditions shown in fig. 7. In fig. 8, the same or similar steps as those of fig. 7 are denoted by the same or similar reference numerals, and repeated explanation thereof is omitted.

During the learning in step S816, in addition to $\theta_L$, a portion of the inspection algorithm parameters (denoted $\theta_D'$) is treated as variable and readjusted, as shown in formulas (5) and (6). Additional learning can therefore be performed with the captured images of only a few workpiece samples, while partially addressing the problem that a learner best suited to the illumination could not otherwise be formed:

$$\theta_D^{*} = \operatorname*{arg\,min}_{\theta_D} \sum_{i=1}^{M_1} L(u_i, v_i \mid \theta_D) \tag{5}$$

and

$$(j^{*}, \theta_D'^{*}) = \operatorname*{arg\,min}_{j,\, \theta_D'} \sum_{i=1}^{M_2} L(u_{i,j}, v_i \mid \theta_D', \theta_D^{*}) \tag{6}$$

where the inspection algorithm parameters other than $\theta_D'$ remain fixed at $\theta_D^{*}$. Similarly, the number of captured images required for learning can be reduced, and moreover a learner that is, to some extent, better suited to the illumination can be formed.

Further, in the inspection method described with reference to fig. 8, all parameters of the inspection algorithm may also be readjusted during the learning in step S816. In that case, the learning in step S712 serves as pre-training.

Fig. 9 is a flowchart of an inspection method according to another mode of implementation of the present disclosure. According to the inspection method described with reference to fig. 9, robustness to small variations of the illumination and imaging system can be improved.

When the same inspection is performed in parallel on a production line, it is difficult to make an identical copy of the entire imaging system, including the illumination. Under general conditions, there may be individual differences due to variations in the mounting positions of cameras, lighting, or the like.

When the optimal illumination parameters calculated by the method of the present disclosure are applied to a replicated environment that differs from the original photographing environment, performance may be impaired by these individual differences in the imaging system. To prevent this, parameters that are stable against slight variations can be calculated by averaging the evaluation function after adding noise to the illumination parameters. Specifically, the aforementioned loss function $L$ is replaced by the $\bar{L}$ defined in formula (7), and the lighting parameters and the inspection algorithm parameters are calculated from it:

$$\bar{L}(u_{i,j}, v_i \mid \theta_D) = \frac{1}{|N(j)|} \sum_{j' \in N(j)} L(u_{i,j'}, v_i \mid \theta_D) \tag{7}$$

where $N(j)$ is the index set of imaging conditions that approximate imaging condition $j$. Here, "approximate" may be defined arbitrarily, e.g., as adjacent values within the discretized imaging conditions, or by means of a Euclidean distance. This is a direct application, to the illumination parameters, of the concept of "augmentation" (of the input image) used in deep learning.
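A sketch of formula (7), taking $N(j)$ (as one possible choice) to be the adjacent indices in the discretized condition list; `loss_table[k]` is assumed to already hold the loss summed over workpieces for condition $k$.

```python
import numpy as np

def neighborhood(j, n_conditions, radius=1):
    """One possible N(j): indices within `radius` of j in the condition list."""
    return range(max(0, j - radius), min(n_conditions, j + radius + 1))

def averaged_loss(loss_table, j):
    """Formula (7): mean of L over the imaging conditions approximating j."""
    idx = list(neighborhood(j, len(loss_table)))
    return float(np.mean([loss_table[k] for k in idx]))

loss_table = np.array([1.2, 0.05, 1.5, 0.5, 0.6, 0.55])  # toy per-condition losses
print(int(np.argmin(loss_table)))  # 1: best raw loss, but its neighbors are bad
j_star = min(range(len(loss_table)), key=lambda j: averaged_loss(loss_table, j))
print(j_star)                      # 4: robust choice, whole neighborhood is good
```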

The flow of the method is described in detail here with reference to fig. 9. As shown in fig. 9, in step S90, the workpiece is photographed to obtain captured images (this step may proceed as in step S410). In step S92, the captured images are input into the learner 1431, and the illumination variation conditions are additionally input for learning. The "illumination variation conditions" referred to here may include slight variations in the environment, as well as slight variations in the light source and the image sensor. In step S94, the average value of the loss function of the learner 1431, of which the lighting parameters are variables, is minimized over the lighting parameters within the predetermined range to obtain the optimized lighting parameters.

In the inspection method described with reference to fig. 9, the environmental impact is taken into account, so that robustness to small variations of the illumination parameters can be improved and the performance of the entire inspection system is improved.

Fig. 10 is a flowchart of an inspection method according to another mode of implementation of the present disclosure. According to the inspection method described with reference to fig. 10, robustness against workpiece variations can be improved based on two evaluation functions.

To ensure the robustness of inspection performance against workpiece variations, a common modification is to add workpiece sample images, or to add sample images generated by augmentation techniques.

The present disclosure has the advantage that the accuracy of the inspection algorithm can be maximized (the loss value minimized) directly. On the other hand, no criterion for direct visual evaluation of the captured image is included, so it cannot be distinguished whether an illumination parameter is genuinely suitable for inspecting defects of the object or is an anomalous parameter over-fitted to the given workpiece configuration. When an inspection image is instead evaluated by eye, the performance of the inspection algorithm may not be maximized directly, but the evaluation draws on prior knowledge from human senses and experience, which helps ensure robustness. Therefore, as shown in formula (8), a subjective, human-based robustness evaluation reference $h(u)$ (e.g., the contrast within a region) may be added to the loss function for the optimization:

$$L'(u_{i,j}, v_i \mid \theta_L, \theta_D) = L(u_{i,j}, v_i \mid \theta_L, \theta_D) + \lambda\, h(u_{i,j}) \tag{8}$$

where $\lambda$ is a balance parameter that determines the relative emphasis placed on the performance (loss value) of the inspection algorithm versus the human evaluation reference.
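A sketch of formula (8), instantiating the subjective reference $h(u)$ (purely as an assumption) as an inverse-contrast penalty over a region of interest, so that minimizing $L'$ favors images a human would also judge clearly visible; $\lambda$ and the form of $h$ are illustrative.

```python
import numpy as np

def h(image, roi=None):
    """Example subjective reference: penalize low contrast in a region of
    interest (the disclosure leaves the concrete form of h(u) open)."""
    patch = image if roi is None else image[roi]
    return 1.0 / (float(np.std(patch)) + 1e-6)

def combined_loss(base_loss, image, lam=0.1):
    """Formula (8): L' = L + lambda * h(u)."""
    return base_loss + lam * h(image)
```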

The flow of the method is described in detail here with reference to fig. 10. As shown in fig. 10, in step S100, a workpiece is photographed to obtain captured images (this step may proceed as in step S410). In step S102, an estimated image obtained based on the captured image is input into the learner 1431, and an evaluation reference is additionally input for learning. The "evaluation reference" referred to here may include the aforementioned reference based on prior human sensory knowledge and experience, and may also include a reference based on an existing mathematical algorithm for image analysis or the like. In step S104, the loss function of the learner 1431 is minimized to obtain the optimized inspection lighting parameters.

This method admits the following variations when applied. For example, in the simplified method that calculates $\theta_D$ first and then calculates $j$, different values of $\lambda$, or different evaluation references $h(u)$, may be used in the step that calculates each parameter. For instance, $\theta_D$ may be calculated from the accuracy alone ($\lambda = 0$), while for $\theta_L$ the weight of the human evaluation reference is increased ($\lambda$ set to a relatively large value), and so on.

In the inspection method described with reference to fig. 10, such evaluation references are taken into account, and thus robustness against workpiece variations can be improved, improving the performance of the entire inspection system.

If implemented in the form of software functional units and sold or used as stand-alone products, the devices and systems for setting lighting conditions when inspecting objects, or parts thereof, may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the present disclosure, in essence or in the part contributing beyond the prior art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a piece of computer equipment (which may be a personal computer, a server, or network equipment) to perform all or some of the steps of the methods according to the examples of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disc, and may also include a data stream downloaded from a server or the cloud.

The foregoing is merely a preferred mode of practicing the present disclosure and it should be noted that certain improvements and modifications may be made by those skilled in the art without departing from the principles of the present disclosure. Such improvements and modifications are intended to be within the scope of this disclosure.

Description of the symbols

1: inspection system

2: belt conveyor

4: workpiece

6: imaging view

8: upper network

10: programmable logic controller

12: database device

100: defect inspection device

102: camera with a camera module

104: display device

106: keyboard with a keyboard body

108: mouse (Saggar)

110: processor with a memory having a plurality of memory cells

112: main memory

114: camera interface

116: input interface

118: display interface

120: communication interface

122: internal bus

130: memory device

132: image processing program

134: operating system

136: learner parameters

138: captured image

140: parameters of illumination

141: photographing part

142: setting part

143: inspection section

144: storage section

1431: learning device
