Defect detection method, system and device

Document No.: 1566344    Publication date: 2020-01-24

Note: This technique, Defect detection method, system and device (缺陷检测方法、系统和装置), was designed and created by 苏业, 矫函哲, 聂磊 and 刘明浩 on 2019-10-22. Its main content is as follows: The embodiments of the application disclose a defect detection method, system and device. The method can be applied to the field of cloud computing. Specifically, one implementation of the method comprises: inputting images shot under a plurality of shooting conditions into a target detection model respectively to obtain a plurality of first detection results output from the target detection model; for each of the images shot under the plurality of shooting conditions, judging, based on the first detection result output by the target detection model for the image, whether the target object presented by the image is defective, and if so, taking the image as a suspected-defect image; and inputting the suspected-defect image into a pre-trained neural network model for detection to obtain a second detection result. According to the embodiments of the application, suspected-defect images can be screened out by a fast unsupervised model, so that the number of images input into the neural network model is reduced and the defect detection efficiency is improved.

1. A method of defect detection, the method comprising:

acquiring images of a target object shot under a plurality of shooting conditions;

inputting the images shot under the plurality of shooting conditions into a target detection model respectively to obtain a plurality of first detection results output from the target detection model, wherein the target detection model comprises an unsupervised model, and the first detection results are used for representing whether the target object presented by the images is defective or not;

for each image in the images shot under the plurality of shooting conditions, judging whether the target object presented by the image is defective or not based on the first detection result for the image output by the target detection model, and if so, taking the image as a suspected-defect image;

and inputting the suspected defect image into a pre-trained neural network model for detection to obtain a second detection result, wherein the second detection result is used for representing whether the target object presented by the suspected defect image has defects or not.

2. The method of claim 1, wherein the shooting conditions include a shooting angle and/or lighting.

3. The method of claim 1, wherein the target detection model comprises at least two target detection models that are different from each other.

4. The method according to claim 3, wherein the determining, for each of the images captured under the plurality of capturing conditions, whether the target item represented by the image is defective based on the first detection result for the image output by the target detection model comprises:

and judging that the target object presented by the image is defective in response to the number of first detection results representing that the target object is defective being greater than or equal to the number of first detection results representing that the target object is not defective, among the first detection results respectively output by the at least two target detection models for the image.

5. The method according to claim 3, wherein the determining, for each of the images captured under the plurality of capturing conditions, whether the target item represented by the image is defective based on the first detection result for the image output by the target detection model comprises:

and inputting the first detection results for the image respectively output by the at least two target detection models into a result summarizing model to obtain whether the target object presented by the image is defective, wherein the result summarizing model is used for representing a correspondence between the first detection results output by the at least two target detection models for the image and whether the target object presented by the image is defective.

6. The method according to claim 3, wherein the determining, for each of the images captured under the plurality of capturing conditions, whether the target item represented by the image is defective based on the first detection result for the image output by the target detection model comprises:

acquiring preset values and weights corresponding to the first detection results respectively output by the at least two target detection models for the image, wherein the preset value corresponding to a first detection result representing that the target object is defective is different from the preset value corresponding to a first detection result representing that the target object is not defective;

weighting, with the weights, the preset values corresponding to the first detection results respectively output by the at least two target detection models to obtain a weighted sum; and

in response to determining that the weighted sum is greater than or equal to a preset weighted sum threshold, determining that the target item represented by the image is defective.

7. The method of claim 1, wherein after said obtaining a second detection result, the method further comprises:

and determining that the target article is defective in response to the second detection result of at least one of the suspected-defect images indicating that the target article presented by that suspected-defect image is defective.

8. The method of claim 1, wherein the neural network model is a deformable convolutional network;

the second detection result comprises defect information and the confidence of the defect information, wherein the defect information comprises a defect type and a defect position.

9. The method of claim 8, wherein the neural network model is obtained by retraining the trained neural network model;

retraining the trained neural network model by the following steps:

acquiring a sample image, wherein defect information in the sample image is different from defect information in the images used for the prior training of the neural network model; and

training the neural network model using the sample image.

10. The method of claim 1, wherein the method further comprises:

for each suspected defect image, determining at least two sub-images into which the suspected defect image is divided, wherein the number of the sub-images is greater than a preset number threshold; and

the inputting the suspected defect image into a pre-trained neural network model for detection to obtain a second detection result comprises:

respectively inputting the at least two sub-images into the neural network model to obtain a detection result of each sub-image in the at least two sub-images, wherein the detection result of each sub-image comprises defect information and confidence of the defect information, and the defect information comprises a defect type and a defect position;

and generating a second detection result for representing that the target object presented by the suspected defect image is defective, in response to the confidence of the defect information in the detection result of any one of the sub-images being greater than or equal to a preset confidence threshold.

11. A defect detection system, the system comprising a defect detection module;

the defect detection module being configured to perform the method according to any one of claims 1 to 10.

12. The system of claim 11, wherein the system further comprises an acquisition module, a control module, and a distribution module;

the acquisition module is used for acquiring images of the target article shot under a plurality of shooting conditions;

the control module is used for acquiring the image acquired by the acquisition module and sending the image to the defect detection module;

the defect detection module is further configured to determine a detection conclusion based on the second detection result for each suspected defect image, where the detection conclusion is used to characterize whether the target article is defective;

the control module is further used for generating a material distribution instruction according to the detection conclusion and sending the material distribution instruction to a material distribution module, wherein different material distribution instructions indicate that the target object is placed at different positions;

the material distribution module is used for receiving the material distribution instruction and placing the target article at a target position by using a mechanical arm, wherein the target position is the position indicated by the material distribution instruction, or the mechanical arm performs the operation indicated by the material distribution instruction and places the target article at the corresponding position.

13. The system of claim 11, wherein the system further comprises a training module;

the training module is used for carrying out the retraining.

14. A defect detection apparatus, the apparatus comprising:

an acquisition unit configured to acquire images of a target item taken under a plurality of photographing conditions;

a first detection unit, configured to input the images taken under the plurality of shooting conditions into an object detection model respectively, and obtain a plurality of first detection results output from the object detection model, wherein the object detection model comprises an unsupervised model, and the first detection results are used for representing whether the object presented by the images is defective or not;

a judging unit configured to judge, for each of the images captured under the plurality of capturing conditions, whether a target article represented by the image is defective or not based on a first detection result for the image output by the target detection model, and if judged to be defective, take the image as a suspected-defective image;

and the second detection unit is configured to input the suspected defect image into a pre-trained neural network model for detection to obtain a second detection result, wherein the second detection result is used for representing whether the target object represented by the suspected defect image has defects or not.

15. The apparatus of claim 14, wherein the photographing condition includes a photographing angle and/or illumination.

16. The apparatus of claim 14, wherein the object detection model comprises at least two object detection models that are different from each other.

17. The apparatus of claim 16, wherein the judging unit is further configured to:

judge that the target object presented by the image is defective in response to the number of first detection results representing that the target object is defective being greater than or equal to the number of first detection results representing that the target object is not defective, among the first detection results respectively output by the at least two target detection models for the image.

18. The apparatus of claim 16, wherein the judging unit is further configured to:

input the first detection results for the image respectively output by the at least two target detection models into a result summarizing model to obtain whether the target object presented by the image is defective, wherein the result summarizing model is used for representing a correspondence between the first detection results output by the at least two target detection models for the image and whether the target object presented by the image is defective.

19. The apparatus of claim 16, wherein the judging unit is further configured to:

acquire preset values and weights corresponding to the first detection results respectively output by the at least two target detection models for the image, wherein the preset value corresponding to a first detection result representing that the target object is defective is different from the preset value corresponding to a first detection result representing that the target object is not defective;

weight, with the weights, the preset values corresponding to the first detection results respectively output by the at least two target detection models to obtain a weighted sum; and

in response to determining that the weighted sum is greater than or equal to a preset weighted sum threshold, determine that the target item presented by the image is defective.

20. The apparatus of claim 14, wherein the apparatus further comprises:

a determining unit configured to determine, after the second detection result is obtained, that the target article is defective in response to the second detection result of at least one of the suspected-defect images indicating that the target article presented by that suspected-defect image is defective.

21. The apparatus of claim 14, wherein the neural network model is a deformable convolutional network;

the second detection result comprises defect information and the confidence of the defect information, wherein the defect information comprises a defect type and a defect position.

22. The apparatus of claim 21, wherein the neural network model is obtained by retraining the trained neural network model;

retraining the trained neural network model by the following steps:

acquiring a sample image, wherein defect information in the sample image is different from defect information in the images used for the prior training of the neural network model; and

training the neural network model using the sample image.

23. The apparatus of claim 14, wherein the apparatus further comprises:

the segmentation unit is configured to determine, for each suspected defect image, at least two sub-images into which the suspected defect image is segmented, wherein the number of the sub-images is greater than a preset number threshold; and

the second detection unit includes:

the detection module is configured to input the at least two sub-images into the neural network model respectively to obtain a detection result of each sub-image of the at least two sub-images, wherein the detection result of each sub-image comprises defect information and a confidence coefficient of the defect information, and the defect information comprises a defect type and a defect position;

and the generating module is configured to generate a second detection result for representing that the target object presented by the suspected defect image is defective, in response to the confidence of the defect information in the detection result of any one of the sub-images being greater than or equal to a preset confidence threshold.

24. An electronic device, comprising:

one or more processors;

a storage device for storing one or more programs,

wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-10.

25. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method according to any one of claims 1-10.

Technical Field

The embodiments of the application relate to the field of computer technology, in particular to the field of internet technology, and more particularly to a defect detection method, system and device.

Background

In traditional production activities of the manufacturing industry, detection of the state of a product, such as detection of its appearance, is an important link for a manufacturer to control shipment quality. Through product state detection, a traditional manufacturer can determine whether a product has flaws and defects, and thereby judge whether a produced product, such as a notebook computer, is qualified.

Current methods of quality inspection for notebook computers include manual inspection and inspection with optical instruments. Manual inspection is inefficient and has poor accuracy. The detection rules of an optical instrument are often hard-coded in the machine; if the detection rules change, for example because the inspected object changes, even slightly, the hardware of the optical instrument needs to be upgraded. This process is time-consuming, labor-intensive, and difficult to carry out.

Disclosure of Invention

The embodiment of the application provides a defect detection method, a system and a device.

In a first aspect, an embodiment of the present application provides a defect detection method, including: acquiring images of a target object shot under a plurality of shooting conditions; respectively inputting the images shot under the plurality of shooting conditions into a target detection model to obtain a plurality of first detection results output from the target detection model, wherein the target detection model comprises an unsupervised model, and the first detection results are used for representing whether the target object presented by the images is defective or not; for each image in the images shot under the plurality of shooting conditions, judging whether the target object presented by the image is defective or not based on the first detection result for the image output by the target detection model, and if so, taking the image as a suspected-defect image; and inputting the suspected-defect image into a pre-trained neural network model for detection to obtain a second detection result, wherein the second detection result is used for representing whether the target object presented by the suspected-defect image is defective or not.

In some embodiments, the shooting conditions include shooting angle and/or illumination.

In some embodiments, the object detection models comprise at least two object detection models that are different from each other.

In some embodiments, for each of the images captured under the plurality of capturing conditions, determining whether the target item represented by the image is defective based on the first detection result for the image output by the target detection model includes: judging that the target item presented by the image is defective in response to the number of first detection results representing that the target item is defective being greater than or equal to the number of first detection results representing that the target item is not defective, among the first detection results respectively output by the at least two target detection models for the image.

In some embodiments, for each of the images captured under the plurality of capturing conditions, determining whether the target item represented by the image is defective based on the first detection result for the image output by the target detection model, includes: and inputting first detection results aiming at the image and output by the at least two target detection models respectively into a result summarizing model to obtain whether the target object presented by the image is defective, wherein the result summarizing model is used for representing the corresponding relation between the first detection results output by the at least two target detection models aiming at the image and whether the target object presented by the image is defective.

In some embodiments, for each of the images captured under the plurality of capturing conditions, determining whether the target item represented by the image is defective based on the first detection result for the image output by the target detection model includes: acquiring preset values and weights corresponding to the first detection results respectively output by the at least two target detection models for the image, wherein the preset value corresponding to a first detection result representing that the target item is defective is different from the preset value corresponding to a first detection result representing that the target item is not defective; weighting, with the weights, the preset values corresponding to the first detection results respectively output by the at least two target detection models to obtain a weighted sum; and in response to determining that the weighted sum is greater than or equal to a preset weighted sum threshold, determining that the target item represented by the image is defective.

In some embodiments, after obtaining the second detection result, the method further comprises: determining that the target article is defective in response to the second detection result of at least one of the suspected-defect images indicating that the target article presented by that suspected-defect image is defective.

In some embodiments, the neural network model is a deformable convolutional network; the second detection result comprises defect information and confidence of the defect information, wherein the defect information comprises a defect type and a defect position.

In some embodiments, the neural network model is obtained by retraining the trained neural network model; the trained neural network model is retrained by the following steps: acquiring a sample image, wherein defect information in the sample image is different from defect information in the images used for the prior training of the neural network model; and training the neural network model using the sample image.

In some embodiments, the method further comprises: for each suspected-defect image, determining at least two sub-images obtained by dividing the suspected-defect image, wherein the number of the sub-images is greater than a preset number threshold. Inputting the suspected-defect image into a pre-trained neural network model for detection to obtain a second detection result includes: respectively inputting the at least two sub-images into the neural network model to obtain a detection result of each of the at least two sub-images, wherein the detection result of each sub-image comprises defect information and a confidence of the defect information, and the defect information comprises a defect type and a defect position; and generating a second detection result for representing that the target object presented by the suspected-defect image is defective in response to the confidence of the defect information in the detection result of any one of the sub-images being greater than or equal to a preset confidence threshold.

In a second aspect, an embodiment of the present application provides a defect detection system, which includes a defect detection module; the defect detection module is configured to perform the method as described in any one of the implementations of the first aspect.

In some embodiments, the system further comprises an acquisition module, a control module, and a distribution module; the acquisition module is used for acquiring images of the target article shot under a plurality of shooting conditions; the control module is used for acquiring the images acquired by the acquisition module and sending the images to the defect detection module; the defect detection module is further used for determining a detection conclusion based on the second detection result for each suspected-defect image, the detection conclusion being used for representing whether the target article is defective; the control module is also used for generating a material distribution instruction according to the detection conclusion and sending the material distribution instruction to the material distribution module, wherein different material distribution instructions indicate that the target article is placed at different positions; and the material distribution module is used for receiving the material distribution instruction and placing the target article at a target position by using a mechanical arm, wherein the target position is the position indicated by the material distribution instruction, or the mechanical arm performs the operation indicated by the material distribution instruction and places the target article at the corresponding position.

In some embodiments, the system further comprises a training module; and the training module is used for performing retraining.

In a third aspect, an embodiment of the present application provides a defect detection apparatus, including: an acquisition unit configured to acquire images of a target item taken under a plurality of photographing conditions; the first detection unit is configured to input images shot under a plurality of shooting conditions into a target detection model respectively to obtain a plurality of first detection results output from the target detection model, wherein the target detection model comprises an unsupervised model, and the first detection results are used for representing whether a target object presented by the images is defective or not; a judging unit configured to judge, for each of images captured under a plurality of capturing conditions, whether a target article represented by the image is defective or not based on a first detection result for the image output by a target detection model, and if judged to be defective, take the image as a suspected-defective image; and the second detection unit is configured to input the suspected defect image into a pre-trained neural network model for detection to obtain a second detection result, wherein the second detection result is used for representing whether the target object represented by the suspected defect image is defective or not.

In some embodiments, the shooting conditions include shooting angle and/or illumination.

In some embodiments, the object detection models comprise at least two object detection models that are different from each other.

In some embodiments, the judging unit is further configured to: judge that the target object presented by the image is defective in response to the number of first detection results representing that the target object is defective being greater than or equal to the number of first detection results representing that the target object is not defective, among the first detection results respectively output by the at least two target detection models for the image.

In some embodiments, the judging unit is further configured to: input the first detection results for the image respectively output by the at least two target detection models into a result summarizing model to obtain whether the target object presented by the image is defective, wherein the result summarizing model is used for representing the correspondence between the first detection results output by the at least two target detection models for the image and whether the target object presented by the image is defective.

In some embodiments, the judging unit is further configured to: acquire preset values and weights corresponding to the first detection results respectively output by the at least two target detection models for the image, wherein the preset value corresponding to a first detection result representing that the target object is defective is different from the preset value corresponding to a first detection result representing that the target object is not defective; weight, with the weights, the preset values corresponding to the first detection results respectively output by the at least two target detection models to obtain a weighted sum; and in response to determining that the weighted sum is greater than or equal to a preset weighted sum threshold, determine that the target item represented by the image is defective.

In some embodiments, the apparatus further comprises: a determining unit configured to determine that the target article is defective in response to the second detection result of at least one of the suspected-defect images indicating that the target article presented by that suspected-defect image is defective.

In some embodiments, the neural network model is a deformable convolutional network; the second detection result comprises defect information and confidence of the defect information, wherein the defect information comprises a defect type and a defect position.

In some embodiments, the neural network model is obtained by retraining the trained neural network model; the trained neural network model is retrained by the following steps: acquiring a sample image, wherein defect information in the sample image is different from defect information in the images used for the prior training of the neural network model; and training the neural network model using the sample image.

In some embodiments, the apparatus further comprises: the segmentation unit is configured to determine, for each suspected defect image, at least two sub-images into which the suspected defect image is segmented, wherein the number of the sub-images is greater than a preset number threshold; and a second detection unit including: the detection module is configured to input the at least two sub-images into the neural network model respectively to obtain a detection result of each sub-image of the at least two sub-images, wherein the detection result of each sub-image comprises defect information and confidence of the defect information, and the defect information comprises a defect type and a defect position; and the generating module is configured to generate a second detection result for representing that the target object presented by the suspected defect image is defective in response to that the confidence coefficient of the defect information in the detection result of any sub-image in the sub-images is greater than or equal to a preset confidence coefficient threshold value.

In a fourth aspect, an embodiment of the present application provides an electronic device, including: one or more processors; a storage device for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement a method as in any one of the embodiments of the defect detection method.

In a fifth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a method as in any one of the embodiments of the defect detection method.

According to the defect detection scheme provided by the embodiments of the application, images shot for a target object under a plurality of shooting conditions are first obtained. The images shot under the plurality of shooting conditions are then respectively input into a target detection model to obtain a plurality of first detection results output from the target detection model, wherein the target detection model comprises an unsupervised model, and the first detection results are used for representing whether the target object presented by the images is defective or not. Then, for each of the images shot under the plurality of shooting conditions, whether the target object presented by the image is defective or not is judged based on the first detection result for the image output by the target detection model, and if the target object is judged to be defective, the image is taken as a suspected-defect image. Finally, the suspected-defect image is input into a pre-trained neural network model for detection to obtain a second detection result, wherein the second detection result is used for representing whether the target object presented by the suspected-defect image is defective or not. The embodiments of the application screen out suspected-defect images with a fast unsupervised model, so that the number of images input into the high-precision but time-consuming neural network model is reduced and the defect detection efficiency is improved. In addition, the unsupervised model and the neural network model jointly participate in detection, which can improve the accuracy of defect detection and make it possible to accurately judge whether the target object is defective.

Drawings

Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:

FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present application may be applied;

FIG. 2 is a flow diagram of one embodiment of a defect detection method according to the present application;

FIG. 3 is a schematic diagram of an application scenario of a defect detection method according to the present application;

FIG. 4 is a schematic block diagram of one embodiment of a defect detection system according to the present application;

FIG. 5 is a schematic block diagram of one embodiment of a defect detection apparatus according to the present application;

FIG. 6 is a schematic block diagram of a computer system suitable for use in implementing an electronic device according to some embodiments of the present application.

Detailed Description

The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.

It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.

Fig. 1 shows an exemplary system architecture 100 to which embodiments of the defect detection method or defect detection apparatus of the present application may be applied.

As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.

The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. Various communication client applications, such as an image editing application, a video application, a live application, an instant messaging tool, a mailbox client, social platform software, and the like, may be installed on the terminal devices 101, 102, and 103.

Here, the terminal apparatuses 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen, including but not limited to smart phones, tablet computers, e-book readers, laptop portable computers, desktop computers, and the like. When the terminal apparatuses 101, 102, 103 are software, they can be installed in the electronic apparatuses listed above. It may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules to provide distributed services) or as a single piece of software or software module. And is not particularly limited herein.

The server 105 may be a server providing various services, such as a background server providing support for the terminal devices 101, 102, 103. The background server may analyze and perform other processing on the received data such as the image, and feed back a processing result (e.g., a second detection result) to the terminal device.

It should be noted that the defect detection method provided in the embodiment of the present application may be executed by the server 105 or the terminal devices 101, 102, and 103, and accordingly, the defect detection apparatus may be disposed in the server 105 or the terminal devices 101, 102, and 103.

It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.

With continued reference to FIG. 2, a flow 200 of one embodiment of a defect detection method according to the present application is shown. The defect detection method comprises the following steps:

step 201, images of a target object shot under a plurality of shooting conditions are acquired.

In this embodiment, an executing subject of the defect detection method (for example, a server or a terminal device shown in fig. 1) may acquire images photographed on a target article under a plurality of photographing conditions. The target object may be various physical objects such as a notebook computer, an automobile, a machine tool, a glass, and the like. The photographing condition may be a factor that may affect a photographed image, for example, a photographing apparatus.

In some optional implementations of the present embodiment, the shooting condition may include a shooting angle and/or illumination.

In these alternative implementations, the photographing condition may include at least one of a photographing angle and a lighting condition. Here, the photographing angle may be various. For example, if the target item is a notebook computer, an image of the top cover of the notebook computer, an image of each corner, an image of the side of the product, and the like may be acquired. Different lighting conditions can be obtained by illuminating the target item with a plurality of light sources of different angles, different colors and/or different intensities.

These implementations can, to a certain extent, avoid the problem that the defect detection result is one-sided because the images are shot under a single shooting condition. By shooting the target item under various shooting conditions, more comprehensive and accurate shooting results are obtained.

Step 202, respectively inputting the images shot under the plurality of shooting conditions into a target detection model to obtain a plurality of first detection results output from the target detection model, wherein the target detection model comprises an unsupervised model, and the first detection results are used for representing whether the target object presented by the images is defective or not.

In this embodiment, the executing body may input each acquired image into the target detection model and obtain the first detection result for that image output from the target detection model. Each image input into the target detection model has a first detection result in one-to-one correspondence with it. That is, a first detection result is the result output by one target detection model for one image. The first detection result may use characters such as "1" or "yes" to indicate that the target item is defective, and characters such as "0" or "no" to indicate that the target item is not defective. In practice, if the target item is a laptop, the defect may be the presence of water spots, fuzz, and/or black marks on its surface, etc.

The number of target detection models is at least one, and the at least one target detection model includes an unsupervised model. An unsupervised model has a high processing speed and can detect images efficiently. The execution subject may perform detection using various unsupervised models. For example, the unsupervised model may be at least one of the following: a Histogram-Based Outlier Score (HBOS) algorithm, an Isolation Forest algorithm, a k-Nearest Neighbors (KNN) algorithm, a Principal Component Analysis (PCA) algorithm, and/or a Minimum Covariance Determinant (MCD) algorithm, and the like.
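
As an illustration of this first-stage screening, the following is a minimal sketch that assumes scikit-learn's IsolationForest as the unsupervised model and a simple downsampled-pixel feature; the feature extraction, the contamination value, and the function names are illustrative choices rather than anything prescribed by the embodiment. Any of the other listed detectors (HBOS, KNN, PCA, MCD) could be substituted behind the same interface.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def to_feature(image: np.ndarray, size: int = 32) -> np.ndarray:
    """Downsample an image by index sampling and flatten it into a feature vector."""
    h, w = image.shape[:2]
    ys = np.linspace(0, h - 1, size).astype(int)
    xs = np.linspace(0, w - 1, size).astype(int)
    return image[np.ix_(ys, xs)].astype(np.float32).ravel()

def fit_screening_model(reference_images) -> IsolationForest:
    """Fit an unsupervised detector on features of (mostly) defect-free reference images."""
    features = np.stack([to_feature(img) for img in reference_images])
    model = IsolationForest(contamination=0.05, random_state=0)
    model.fit(features)
    return model

def first_detection_result(model: IsolationForest, image: np.ndarray) -> int:
    """Return 1 if the image is flagged as anomalous (suspected defect), else 0."""
    pred = model.predict(to_feature(image).reshape(1, -1))[0]  # +1 = normal, -1 = anomaly
    return 1 if pred == -1 else 0
```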

And step 203, for each image in the images shot under the plurality of shooting conditions, judging whether the target object presented by the image is defective or not based on a first detection result output by the target detection model and aiming at the image, and if the target object is judged to be defective, taking the image as a suspected-defect image.

In this embodiment, for each of the acquired images, the executing body may determine whether the target item represented by the image is defective or not based on the first detection result for the image output by the target detection model. If the execution body judges that the image has a defect, the execution body can take the image as a suspected defect image.

In practice, the executing entity may determine, for the image, whether the target item presented therein is defective based on the first detection result in various ways. For example, if the number of target detection models is one and the output first detection result indicates that the target item represented by the image is defective, the executing entity may determine that the target item represented by the image is defective. Accordingly, if the output first detection result represents that the target item has no defect, the executing entity may determine that the target item represented by the image has no defect. In addition, when the number of target detection models is two or more, if the first detection result output by any one of the target detection models indicates that the target item represented by the image is defective, the execution main body may determine that the target item represented by the image is defective.

In some optional implementations of this embodiment, the object detection model includes at least two object detection models that are different from each other.

In these alternative implementations, the executing body may use at least two object detection models to detect the respective images, so as to obtain first detection results respectively output by the at least two object detection models. Here, each of the at least two object detection models is different.

According to the alternative implementation modes, different target detection models can be utilized to jointly detect the image, so that the detection accuracy is improved.

In some optional application scenarios of these implementations, the determining, in step 203, for each of the images captured under the multiple capturing conditions, whether the target item represented by the image is defective or not based on the first detection result for the image output by the target detection model may include: judging that the target item presented by the image is defective in response to the number of first detection results representing that the target item is defective being greater than or equal to the number of first detection results representing that the target item is not defective, among the first detection results respectively output by the at least two target detection models for the image.

In these optional application scenarios, the execution subject may adopt, as the determination result, the conclusion indicated by the majority of the first detection results.

In these application scenarios, the judgment is made according to the first detection results of a plurality of target detection models, which prevents the judgment result from being skewed by an obviously deviating result from an individual target detection model.
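
A minimal sketch of this majority rule follows, with a tie counted as defective so as to match the "greater than or equal to" condition above; the 0/1 encoding of the first detection results is an assumed convention.

```python
def is_defective_by_majority(first_results) -> bool:
    """first_results: one 0/1 flag per target detection model for the same image
    (1 = that model's first detection result says the target item is defective).
    Ties count as defective, matching the greater-than-or-equal rule."""
    defective_votes = sum(first_results)
    return defective_votes >= len(first_results) - defective_votes
```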

In some optional application scenarios of these implementations, the determining, in step 203, for each of the images captured under the multiple capturing conditions, whether the target item represented by the image is defective or not based on the first detection result for the image output by the target detection model may include: and inputting first detection results aiming at the image and output by the at least two target detection models respectively into a result summarizing model to obtain whether the target object presented by the image is defective, wherein the result summarizing model is used for representing the corresponding relation between the first detection results output by the at least two target detection models aiming at the image and whether the target object presented by the image is defective.

In these optional application scenarios, the executing entity may input the first detection results output by the at least two target detection models into the result summarizing model, so as to process all the first detection results by using the result summarizing model to determine whether the target object represented by the image has defects. The first detection result here is obtained by detecting the image. The corresponding relation represented by the result summarizing model refers to a corresponding relation between a first detection result obtained by detecting a certain image by using the target detection model and whether a target article represented by the image is defective or not.

In practice, the result summarizing model may be a correspondence table that characterizes the above-described correspondence. Alternatively, the result summarizing model may be a trained neural network model, such as a convolutional neural network. For example, the correspondence may include not only a correspondence between each combination of first detection results obtained by detecting the image with the at least two target detection models and a conclusion that the target item shown in the image is defective, but also a correspondence between each combination of first detection results and a conclusion that the target item shown in the image is not defective. Here, the combinations of first detection results corresponding to "defective" and "not defective" are different.

In these application scenarios, the result summarizing model can be used to make a quick and accurate judgment based on the first detection results, making the defect detection process more efficient and accurate.
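
The correspondence-table form of the result summarizing model can be sketched as a plain lookup. The two-model table below, in which any single "defective" first detection result makes the item defective, is only an illustrative assumption; the embodiment equally allows a trained neural network in this role.

```python
# Keys: tuple of first detection results in a fixed model order (1 = defective).
# Values: whether the target item presented by the image is judged defective.
# The concrete mapping below is illustrative, not prescribed by the embodiment.
SUMMARY_TABLE = {
    (0, 0): False,
    (0, 1): True,
    (1, 0): True,
    (1, 1): True,
}

def summarize(first_results) -> bool:
    """Look up the judgment corresponding to the combination of first detection results."""
    return SUMMARY_TABLE[tuple(first_results)]
```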

In some optional application scenarios of these implementations, the determining, in step 203, for each of the images captured under the multiple capturing conditions, whether the target item represented by the image is defective or not based on the first detection result for the image output by the target detection model may include: acquiring preset values and weights corresponding to the first detection results respectively output by the at least two target detection models for the image, wherein the preset value corresponding to a first detection result representing that the target item is defective is different from the preset value corresponding to a first detection result representing that the target item is not defective; weighting, with the weights, the preset values corresponding to the first detection results respectively output by the at least two target detection models to obtain a weighted sum; and in response to determining that the weighted sum is greater than or equal to a preset weighted sum threshold, determining that the target item represented by the image is defective.

In these optional application scenarios, two preset values and a weight are preset for the first detection result output by each target detection model. The two possible first detection results, namely that the target item presented by the image is defective and that it is not defective, each correspond to one of the two preset values. For example, if the first detection result indicates that the target item is defective, the preset value is 1, and if the first detection result indicates that the target item is not defective, the preset value is 0. The preset values corresponding to the first detection results of different target detection models may be the same as or different from each other. Specifically, the execution main body may weight the preset value corresponding to each first detection result to obtain a weighted sum of the preset values, and then obtain the judgment result by comparing the weighted sum with the threshold.

The execution body can accurately and properly utilize each first detection result through weighting, so that a more accurate defect detection process is realized.
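
A minimal sketch of this weighted-sum rule is given below; the preset values (1 for defective, 0 for not defective), the per-model weights, and the threshold of 0.5 are illustrative assumptions.

```python
def is_defective_by_weighted_sum(first_results, weights,
                                 value_if_defective: float = 1.0,
                                 value_if_ok: float = 0.0,
                                 threshold: float = 0.5) -> bool:
    """Map each model's first detection result to its preset value, weight it,
    and compare the weighted sum with the preset weighted-sum threshold."""
    weighted_sum = sum(
        w * (value_if_defective if r == 1 else value_if_ok)
        for r, w in zip(first_results, weights)
    )
    return weighted_sum >= threshold

# Example: two models with weights 0.6 and 0.4; only the first flags a defect.
# is_defective_by_weighted_sum([1, 0], [0.6, 0.4])  ->  True (0.6 >= 0.5)
```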

And 204, inputting the suspected defect image into a pre-trained neural network model for detection to obtain a second detection result, wherein the second detection result is used for representing whether the target object presented by the suspected defect image has defects.

In this embodiment, the execution subject may input the suspected-defect image into a neural network model for detection and obtain a second detection result output from the neural network model. The neural network model may be any of various neural network models for detecting an image; for example, it may be a convolutional neural network such as a Faster Region-based Convolutional Neural Network (Faster R-CNN). The neural network model here is pre-trained. The samples used for training may include images of articles annotated with defect information.

In practice, after obtaining the second detection results, the execution body may determine a detection conclusion based on the second detection result for each suspected-defect image, wherein the detection conclusion is used for indicating whether the target article is defective. For example, if the target article has only one suspected-defect image, then it also has only one second detection result, and the execution body may directly take whether that second detection result characterizes the target article as defective as the detection conclusion on whether the target article is defective. If the target article has at least two suspected-defect images, there are correspondingly at least two second detection results. The execution body may determine that the target article is defective in a case where all of the at least two second detection results indicate that the target article is defective.
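
For concreteness, the second-stage check can be sketched with torchvision's Faster R-CNN detector. This is only a sketch under the assumption that the model has already been fine-tuned on labeled defect images (the fine-tuning itself is not shown); num_classes=4, meaning background plus three defect classes such as water spot, fuzz and black mark, is an illustrative value.

```python
import torch
import torchvision

# Assumed to be fine-tuned elsewhere on defect images; here it is only constructed.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=4)
model.eval()

def second_detection_result(image_tensor: torch.Tensor, confidence_threshold: float = 0.5):
    """image_tensor: float tensor of shape (3, H, W) scaled to [0, 1].
    Returns (defective, detections), where detections carry defect class,
    defect position (bounding box) and confidence."""
    with torch.no_grad():
        output = model([image_tensor])[0]  # dict with 'boxes', 'labels', 'scores'
    keep = output["scores"] >= confidence_threshold
    detections = {
        "boxes": output["boxes"][keep],    # defect positions
        "labels": output["labels"][keep],  # defect classes
        "scores": output["scores"][keep],  # confidences
    }
    return bool(keep.any()), detections
```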

In some optional implementation manners of this embodiment, after the second detection result is obtained, the method may further include: determining that the target article is defective in response to the second detection result of at least one of the suspected-defect images indicating that the target article presented by that suspected-defect image is defective.

In these alternative implementations, once the second detection result of any one of the suspected-defect images of the target article indicates that the target article presented by that suspected-defect image is defective, the executing body may determine that the target article is defective. The target article may have one or more suspected-defect images indicating that the target article presented is defective. In practice, the executing subject may apply this implementation in a case where each of the suspected-defect images is shot under a different shooting condition.

These implementations can determine that the target item is defective in response to the image of the target item shot under any shooting condition presenting a defect, thereby efficiently determining whether the target item is a qualified article.
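
Under this rule the detection conclusion for the target item reduces to a single pass over the per-image second detection results, as in the small sketch below (the boolean encoding of each second detection result is an assumption).

```python
def detection_conclusion(second_results) -> bool:
    """second_results: one boolean per suspected-defect image of the target item
    (True = that image's second detection result says the item is defective).
    The item is judged defective as soon as any image says so."""
    return any(second_results)
```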

In some optional implementations of this embodiment, the neural network model is a deformable convolutional network; the second detection result comprises defect information and confidence of the defect information, wherein the defect information comprises a defect type and a defect position.

In these alternative implementations, the convolutional layers in the neural network model are deformable convolutional layers, so that the neural network model is a deformable convolutional network, for example one built on a residual network (ResNet) of a specified depth, such as ResNet-50. In practice, a deformable convolution layer adds an offset during convolution. After the neural network model learns these offsets, the size and position of the deformable convolution kernel can be dynamically adjusted according to the image content to be identified. Visually, the sampling points of the convolution kernels at different positions adaptively shift according to the image content, so that the network adapts to geometric deformations such as the shapes and sizes of different objects.

And if the confidence coefficient of the defect information in the detection result of any sub-image in each sub-image is greater than or equal to a preset confidence coefficient threshold, the generated second detection result is used for representing that the target article presented by the suspected defect image is defective. That is, as long as at least one sub-image exists, and the confidence of the defect information in the detection result for the sub-image is greater than or equal to the preset confidence threshold, a second detection result indicating that the target article presented by the suspected defect image is defective is generated.

Specifically, the preset manner of division may be various. For example, the execution main body may acquire a preset range of the number, the length, and the width of the sub-images, so that the execution main body may divide the sub-images according to the preset content. Alternatively, the execution subject may acquire the size of each sub-image set according to the size of the suspected defect image, and divide the image. In addition, the execution body may further acquire a position of each preset dividing line in the suspected defect image, and perform the division.

In practice, different defect classes may be represented by different words, letters, and/or numbers, etc. Defect classes may include water spots, fuzz, and/or black marks, among others. The defect location may include a coordinate location of a certain vertex or center point of a rectangular box indicating the defect, and may further include a size of the rectangular box. For example, the size of the defect can be represented by the width and height of a rectangular frame.

These implementations may utilize a deformable convolutional network to improve defect recall and detection accuracy. In addition, the execution subject can more accurately represent the defect in the image through the defect type and the defect position.
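
The following is a minimal sketch of one deformable convolution block built on torchvision.ops.DeformConv2d, in which a small regular convolution predicts the per-location offsets that shift the kernel's sampling points; the channel sizes are illustrative and the block is not tied to any particular backbone.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """A regular conv predicts 2 offsets (dy, dx) per kernel sampling point;
    DeformConv2d then samples the input at those shifted positions."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x, self.offset(x))

# Example: a drop-in replacement for a standard 3x3 convolution in a backbone.
x = torch.randn(1, 64, 56, 56)
y = DeformableBlock(64, 64)(x)  # shape (1, 64, 56, 56)
```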

In some optional application scenarios of these implementation manners, the neural network model is obtained by retraining the trained neural network model; the trained neural network model is retrained by the following steps: acquiring a sample image, wherein defect information in the sample image is different from defect information in the images used for the prior training of the neural network model; and training the neural network model using the sample image.

In these alternative application scenarios, the sample images used to retrain the neural network model may include defect information different from that of the sample images used to train the neural network model before. Specifically, the sample images employed for retraining may be images containing defects of different defect classes, images containing different defects of the same defect class, and/or images of the same defect of the same defect class taken under different shooting conditions.

In these application scenarios, the executing agent may retrain the already-trained neural network model, training it iteratively so as to update it, so that the neural network model can detect more accurately or detect more types of defects. Furthermore, compared with the prior art, these application scenarios do not require a time-consuming and labor-intensive upgrade of hardware equipment; instead, the defect detection equipment can be upgraded quickly and accurately through iteration of the model.
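
A minimal retraining sketch is shown below, assuming a torchvision-style detection model that returns a dict of losses in training mode and a dataloader yielding (list of image tensors, list of target dicts); the optimizer and hyperparameters are illustrative.

```python
import torch

def retrain(model, dataloader, epochs: int = 5, lr: float = 1e-4):
    """Fine-tune an already-trained detection model on new sample images whose
    defect information differs from the earlier training data (e.g. new defect
    classes or new appearances of known classes)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, targets in dataloader:
            loss_dict = model(images, targets)   # dict of loss tensors
            loss = sum(loss_dict.values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```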

In some optional implementations of this embodiment, the method further includes: for each suspected-defect image, determining at least two sub-images obtained by dividing the suspected-defect image, wherein the number of the sub-images is greater than a preset number threshold. Inputting the suspected-defect image into a pre-trained neural network model for detection to obtain a second detection result includes: respectively inputting the at least two sub-images into the neural network model to obtain a detection result of each of the at least two sub-images, wherein the detection result of each sub-image comprises defect information and a confidence of the defect information, and the defect information comprises a defect type and a defect position; and generating a second detection result for representing that the target object presented by the suspected-defect image is defective in response to the confidence of the defect information in the detection result of any one of the sub-images being greater than or equal to a preset confidence threshold.

In these optional implementations, the execution body may segment the suspected defect image in a preset manner and input each resulting sub-image into the neural network model, thereby obtaining a detection result for each sub-image. That is, the execution body inputs the suspected defect image into the neural network model by inputting each of the segmented sub-images into the model.

These implementations segment the suspected defect image and detect each sub-image, so that the details in every region of the suspected defect image are examined and defects are not missed because details in the image were overlooked.
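
The sketch below shows one way to combine the per-sub-image detection results into a second detection result, reusing the division helper sketched earlier. The model interface and the confidence threshold are assumptions made for illustration.

```python
def detect_suspected_image(model, suspected_image, confidence_threshold: float = 0.5):
    """Mark the suspected defect image as defective if any sub-image yields
    defect information whose confidence reaches the preset threshold."""
    sub_images = split_into_subimages(suspected_image)   # from the earlier sketch
    defects = []
    for sub in sub_images:
        # The model is assumed to return a list of (DefectInfo, confidence) pairs per sub-image.
        for defect_info, confidence in model(sub):
            if confidence >= confidence_threshold:
                defects.append((defect_info, confidence))
    return {"defective": len(defects) > 0, "defects": defects}
```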

With continuing reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the defect detection method according to the present embodiment. In the application scenario of fig. 3, the execution body 301 may acquire 2 images 302 of a target object (such as a laptop) captured under 2 shooting conditions (such as strong light and weak light). The execution body 301 inputs the 2 images 302 into the target detection model respectively to obtain 2 first detection results 303 output from the target detection model, where the target detection model includes an unsupervised model and a first detection result represents whether the target object presented by an image is defective. For each image 302, the execution body 301 determines, based on the first detection result 303 output by the target detection model for that image, whether the target object presented by the image is defective; if it is determined to be defective, the image is taken as a suspected defect image 304. The suspected defect image is then input into a pre-trained neural network model for detection to obtain a second detection result 305, where the second detection result 305 represents whether the target object presented by the suspected defect image is defective.

According to the method provided by this embodiment of the application, suspected defect images can be screened out by a fast unsupervised model, which reduces the number of images input into the neural network model and improves the efficiency of defect detection. In addition, because the unsupervised model and the neural network model participate in detection together, the accuracy of defect detection is improved, so that whether the target object is defective can be judged accurately.
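
The scenario of fig. 3 can be summarized by the two-stage pipeline sketched below. The callables `unsupervised_model` and `neural_network_model` are placeholders for the models described above, and both are assumed to return a boolean meaning "defective".

```python
def two_stage_defect_detection(images, unsupervised_model, neural_network_model):
    """Stage 1: fast unsupervised screening; stage 2: neural network confirmation."""
    suspected = []
    for image in images:                 # images of one target object under several shooting conditions
        if unsupervised_model(image):    # first detection result: True means "defective"
            suspected.append(image)      # keep only suspected defect images
    second_results = [neural_network_model(image) for image in suspected]
    # The target object is judged defective if any second detection result reports a defect.
    return any(second_results), second_results
```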

With further reference to FIG. 4, a defect detection system 400 is illustrated. The defect detection system includes a defect detection module 410.

The defect detection module 410 is configured to perform the method of any of claims 1-10.

In this embodiment, the defect detection module 410 may be a module running the neural network model in a terminal device, or a module running the neural network model in a server.

The defect detection module 410 may be configured to perform the following method:

The photographing conditions include a photographing angle and/or illumination.

The object detection models include at least two object detection models different from each other.

For each of the images captured under the plurality of shooting conditions, determining, based on the first detection result output by the target detection model for the image, whether the target object presented by the image is defective includes: determining that the target object presented by the image is defective in response to the number of first detection results representing that the target object is defective, among the first detection results respectively output by the at least two target detection models for the image, being greater than or equal to the number of first detection results representing that the target object is not defective.
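
A minimal sketch of this majority rule, assuming each first detection result is a boolean where True means the target object is defective:

```python
def judge_by_majority(first_results: list[bool]) -> bool:
    """Defective if the count of 'defective' results is at least the count of 'not defective' results."""
    defective = sum(first_results)
    not_defective = len(first_results) - defective
    return defective >= not_defective

# Example: two target detection models disagree -> 1 >= 1, so the image is treated as defective.
print(judge_by_majority([True, False]))   # True
```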

For each of the images captured under the plurality of shooting conditions, determining, based on the first detection result output by the target detection model for the image, whether the target object presented by the image is defective includes: inputting the first detection results respectively output by the at least two target detection models for the image into a result summarizing model to obtain whether the target object presented by the image is defective, where the result summarizing model represents the correspondence between the first detection results output by the at least two target detection models for an image and whether the target object presented by that image is defective.
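
One possible form of such a result summarizing model is a small learned classifier over the vector of first detection results. The logistic-regression choice and the toy training data below are assumptions for illustration, not the model specified by the application.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row holds the first detection results of the two target detection models
# for one image (1 = defective, 0 = not defective); labels say whether the object was defective.
X_train = np.array([[1, 1], [1, 0], [0, 1], [0, 0]])
y_train = np.array([1, 1, 0, 0])

summarizing_model = LogisticRegression().fit(X_train, y_train)

# At detection time, feed the first detection results obtained for a new image.
is_defective = bool(summarizing_model.predict(np.array([[1, 0]]))[0])
```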

For each of the images captured under the plurality of shooting conditions, determining, based on the first detection result output by the target detection model for the image, whether the target object presented by the image is defective includes: acquiring preset values and weights corresponding to the first detection results respectively output by the at least two target detection models for the image, where the preset value corresponding to a first detection result representing that the target object is defective is different from the preset value corresponding to a first detection result representing that the target object is not defective; weighting the preset values corresponding to the first detection results respectively output by the at least two target detection models to obtain a weighted sum; and in response to determining that the weighted sum is greater than or equal to a preset weighted-sum threshold, determining that the target object presented by the image is defective.
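
A minimal sketch of this weighted-sum rule; the preset values, weights, and threshold below are chosen arbitrarily for illustration.

```python
def judge_by_weighted_sum(first_results: list[bool],
                          weights: list[float],
                          value_defective: float = 1.0,
                          value_not_defective: float = 0.0,
                          threshold: float = 0.5) -> bool:
    """Weight the preset value of each model's first detection result and
    compare the weighted sum with the preset weighted-sum threshold."""
    weighted_sum = sum(
        w * (value_defective if r else value_not_defective)
        for r, w in zip(first_results, weights)
    )
    return weighted_sum >= threshold

# Example: two models with weights 0.6 and 0.4; only the first reports a defect.
print(judge_by_weighted_sum([True, False], [0.6, 0.4]))   # 0.6 >= 0.5 -> True
```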

After the second detection results are obtained, the method further includes: determining that the target object is defective in response to the second detection result of at least one of the suspected defect images representing that the target object presented by that suspected defect image is defective.

The neural network model is a deformable convolution network; the second detection result comprises defect information and confidence of the defect information, wherein the defect information comprises a defect type and a defect position.
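
For reference, a deformable convolution layer can be assembled from torchvision's DeformConv2d, with the sampling offsets predicted by an ordinary convolution; this is a generic building block under the stated assumptions, not the specific network architecture used in the application.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    """A 3x3 deformable convolution whose sampling offsets are predicted
    from the input by a regular convolution (a common construction)."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # Two offset values (dx, dy) per position of the 3x3 kernel -> 18 offset channels.
        self.offset_conv = nn.Conv2d(in_channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_channels, out_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_conv(x)
        return self.deform_conv(x, offsets)

# Example: a 64-channel feature map of size 32x32 -> output of shape (1, 128, 32, 32).
block = DeformableConvBlock(64, 128)
out = block(torch.randn(1, 64, 32, 32))
```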

The neural network model is obtained by retraining a previously trained neural network model. The retraining includes the following steps: acquiring a sample image whose defect information differs from that of the images used to train the prior neural network model; and training the neural network model with the sample image.

The method further includes: for each suspected defect image, determining at least two sub-images into which the suspected defect image is divided, where the number of the sub-images is greater than a preset number threshold. Inputting the suspected defect image into the pre-trained neural network model for detection to obtain the second detection result then includes: inputting the at least two sub-images into the neural network model respectively to obtain a detection result for each of the at least two sub-images, where the detection result of each sub-image includes defect information and a confidence of the defect information, and the defect information includes a defect type and a defect position; and generating, in response to the confidence of the defect information in the detection result of any sub-image being greater than or equal to a preset confidence threshold, a second detection result representing that the target object presented by the suspected defect image is defective.

The system provided by the application can screen out suspected defect images through a quick unsupervised model, so that the number of images input into a neural network model is reduced, and the defect detection efficiency is improved. In addition, the unsupervised model and the neural network model are adopted to participate in detection together, so that the accuracy of defect detection can be improved, and whether the target object has defects or not can be accurately judged.

In some optional implementations of this embodiment, the system further includes an acquisition module, a control module, and a material distribution module. The acquisition module is configured to acquire images of the target object captured under a plurality of shooting conditions. The control module is configured to obtain the images acquired by the acquisition module and send them to the defect detection module 410. The defect detection module 410 is configured to determine a detection conclusion based on the second detection result for each suspected defect image, where the detection conclusion represents whether the target object is defective. The control module is further configured to generate a material distribution instruction according to the detection conclusion and send the instruction to the material distribution module, where different material distribution instructions indicate that the target object is to be placed at different positions. The material distribution module is configured to receive the material distribution instruction and place the target object at a target position using a mechanical arm, where the target position is the position indicated by the material distribution instruction, or the mechanical arm performs the operation indicated by the material distribution instruction so as to place the target object at that position.

In these optional implementations, the acquisition module performs image acquisition and may specifically include a camera. The acquisition module can acquire the images of the target object captured under the plurality of shooting conditions. The control module coordinates the modules and schedules information to initiate or terminate the operation of the models, for example by obtaining the images captured under the plurality of shooting conditions from the acquisition module. The control module may receive the images collected by the acquisition module and send them to the defect detection module 410, so that the defect detection module 410 performs detection and obtains the second detection results. Thereafter, the defect detection module 410 may determine a detection conclusion based on the second detection result for each suspected defect image. The process of determining the detection conclusion is set forth in step 204 and is not described in detail here.

In practice, the control module may generate the material distribution instruction in various ways according to the detection conclusion. For example, when the detection conclusion indicates that the target object is defective, the control module may generate an instruction directing that the target object be placed into a bin that collects defective items; when the detection conclusion indicates that the target object is not defective, it may generate an instruction directing that the target object be placed into a bin that collects qualified items. The control module may also, based on different detection conclusions, generate operation instructions for the material distribution module to place the target object at different target positions, or instructions that indicate the target positions directly.

Upon receiving the material distribution instruction, the material distribution module can place the target object at the target position using the mechanical arm it controls.
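
A minimal sketch of this dispatch flow, with hypothetical instruction names and module interfaces (the application does not define a concrete instruction format):

```python
from enum import Enum

class DistributionInstruction(Enum):
    TO_DEFECTIVE_BIN = "place in the bin collecting defective items"
    TO_QUALIFIED_BIN = "place in the bin collecting qualified items"

def control_module_dispatch(images, defect_detection_module, material_distribution_module):
    """Send images to detection, turn the detection conclusion into a material
    distribution instruction, and forward it to the material distribution module."""
    is_defective = defect_detection_module(images)   # assumed to return True if the target object is defective
    instruction = (DistributionInstruction.TO_DEFECTIVE_BIN
                   if is_defective
                   else DistributionInstruction.TO_QUALIFIED_BIN)
    material_distribution_module(instruction)        # the mechanical arm places the item accordingly
    return instruction
```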

These implementations coordinate the modules through the control module, so that the whole process from defect detection to material distribution is handled automatically. This automation improves the efficiency and accuracy of defect detection. In addition, the execution body can distribute materials according to the accurate defect detection results, so that defective products and qualified products are distinguished reliably and items of different quality are not mixed among the shipped goods.

In some optional implementations of this embodiment, the system further includes a training module, and the training module is configured to perform the retraining.

In these optional implementations, the system may further include a training module that trains the neural network model. In particular, the training module may include a training engine with which the neural network model is trained and retrained as described above. The control module may obtain the trained neural network model and send it to the defect detection module 410.

These implementations use a dedicated training module to train the neural network model, so that defect detection and retraining of the neural network model can proceed at the same time, which improves the iteration efficiency of the model.

With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of a defect detection apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which may include the same or corresponding features or effects as the embodiment of the method shown in fig. 2, in addition to the features described below. The device can be applied to various electronic equipment.

As shown in fig. 5, the defect detecting apparatus 500 of the present embodiment includes: an acquisition unit 501, a first detection unit 502, a judgment unit 503, and a second detection unit 504. The acquiring unit 501 is configured to acquire images of a target object captured under a plurality of capturing conditions; a first detection unit 502 configured to input images captured under a plurality of capturing conditions into a target detection model respectively, and obtain a plurality of first detection results output from the target detection model, wherein the target detection model comprises an unsupervised model, and the first detection results are used for representing whether a target object represented by the images is defective or not; a judging unit 503 configured to judge, for each of the images captured under the plurality of capturing conditions, whether or not a target article represented by the image is defective based on a first detection result for the image output by the target detection model, and if judged to be defective, take the image as a suspected-defective image; the second detecting unit 504 is configured to input the suspected-defect image into a pre-trained neural network model for detection, so as to obtain a second detection result, where the second detection result is used to represent whether the target object represented by the suspected-defect image is defective.

In some embodiments, the acquisition unit 501 of the defect detection apparatus 500 may acquire images of the target object captured under a plurality of shooting conditions. The target object may be any of various physical objects, such as a notebook computer, an automobile, a machine tool, or a piece of glass. A shooting condition may be any factor that can affect the captured image, for example the shooting apparatus.

In some embodiments, the first detection unit 502 may input each acquired image into the target detection model to obtain a first detection result for that image output from the target detection model. Each input image thus has a first detection result in one-to-one correspondence with it; that is, a first detection result is the output of one target detection model for one image.

In some embodiments, the judging unit 503 may determine, based on the first detection result output by the target detection model for the image, whether the target object presented by the image is defective. If the judging unit 503 determines that the target object is defective, it may take the image as a suspected defect image.

In some embodiments, the second detecting unit 504 may input the suspected defect image into a neural network model for detection, and obtain a second detection result output from the neural network model. The neural network model here may be various neural network models for detecting an image.

In some optional implementations of the present embodiment, the shooting condition includes a shooting angle and/or illumination.

In some optional implementations of this embodiment, the object detection model includes at least two object detection models that are different from each other.

In some optional implementations of this embodiment, the judging unit is further configured to: determine that the target object presented by the image is defective in response to the number of first detection results representing that the target object is defective, among the first detection results respectively output by the at least two target detection models for the image, being greater than or equal to the number of first detection results representing that the target object is not defective.

In some optional implementations of this embodiment, the judging unit is further configured to: input the first detection results respectively output by the at least two target detection models for the image into a result summarizing model to obtain whether the target object presented by the image is defective, where the result summarizing model represents the correspondence between the first detection results output by the at least two target detection models for an image and whether the target object presented by that image is defective.

In some optional implementations of this embodiment, the judging unit is further configured to: acquire preset values and weights corresponding to the first detection results respectively output by the at least two target detection models for the image, where the preset value corresponding to a first detection result representing that the target object is defective is different from the preset value corresponding to a first detection result representing that the target object is not defective; weight the preset values corresponding to the first detection results respectively output by the at least two target detection models to obtain a weighted sum; and in response to determining that the weighted sum is greater than or equal to a preset weighted-sum threshold, determine that the target object presented by the image is defective.

In some optional implementations of this embodiment, the apparatus further includes: a determining unit configured to determine that the target object is defective in response to the second detection result of at least one of the suspected defect images representing that the target object presented by that suspected defect image is defective.

In some optional implementations of this embodiment, the neural network model is a deformable convolutional network; the second detection result comprises defect information and confidence of the defect information, wherein the defect information comprises a defect type and a defect position.

In some optional implementations of this embodiment, the neural network model is obtained by retraining a previously trained neural network model. The retraining includes the following steps: acquiring a sample image whose defect information differs from that of the images used to train the prior neural network model; and training the neural network model with the sample image.

In some optional implementations of this embodiment, the apparatus further includes: the segmentation unit is configured to determine, for each suspected defect image, at least two sub-images into which the suspected defect image is segmented, wherein the number of the sub-images is greater than a preset number threshold; and a second detection unit including: the detection module is configured to input the at least two sub-images into the neural network model respectively to obtain a detection result of each sub-image of the at least two sub-images, wherein the detection result of each sub-image comprises defect information and confidence of the defect information, and the defect information comprises a defect type and a defect position; and the generating module is configured to generate a second detection result for representing that the target object presented by the suspected defect image is defective in response to that the confidence coefficient of the defect information in the detection result of any sub-image in the sub-images is greater than or equal to a preset confidence coefficient threshold value.

As shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.

Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.

In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium of the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes an acquisition unit, a first detection unit, a judgment unit, and a second detection unit. The names of these units do not in some cases constitute a limitation of the unit itself, and for example, the acquisition unit may also be described as a "unit that acquires images of a target object taken under a plurality of photographing conditions".

As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be present separately and not assembled into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring images of a target object shot under a plurality of shooting conditions; respectively inputting images shot under a plurality of shooting conditions into a target detection model to obtain a plurality of first detection results output from the target detection model, wherein the target detection model comprises an unsupervised model, and the first detection results are used for representing whether a target object presented by the images is defective or not; for each image in the images shot under the plurality of shooting conditions, judging whether a target object presented by the image is defective or not based on a first detection result output by a target detection model and aiming at the image, and if so, taking the image as a suspected defect image; and inputting the suspected defect image into a pre-trained neural network model for detection to obtain a second detection result, wherein the second detection result is used for representing whether the target object presented by the suspected defect image has defects or not.

The above description is only a preferred embodiment of the application and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.
