Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium

Document No.: 1102130    Publication date: 2020-09-25

Description: This technology, "Evaluation system, evaluation device, evaluation method, evaluation program, and recording medium," was created by 中野可也 on 2018-10-04. Abstract: An evaluation system (1) is a system for evaluating the coverage of an evaluation target object by using a captured image (G) of the evaluation target object, and comprises: an image acquisition unit (11) that acquires the captured image (G); a correction unit (13) that generates an evaluation image by correcting the captured image (G); an evaluation unit (22) that evaluates the coverage on the basis of the evaluation image; and an output unit (16) that outputs the evaluation result of the evaluation unit. The correction unit (13) extracts an evaluation region (Re) from the captured image (G) on the basis of the size of a dent region (De) included in the captured image (G), and generates the evaluation image on the basis of the evaluation region (Re), the dent region (De) being an image of a dent generated in the evaluation target object.

1. An evaluation system that evaluates a coverage area of an evaluation target object using a captured image of the evaluation target object, comprising:

an image acquisition unit that acquires the captured image;

a correction unit that generates an evaluation image by correcting the captured image;

an evaluation unit that evaluates the coverage area based on the evaluation image; and

an output unit that outputs an evaluation result of the evaluation unit,

the correction unit extracts an evaluation area from the captured image based on the size of a dent area included in the captured image, and generates the evaluation image based on the evaluation area,

the dent area being an image of a dent generated in the evaluation target object.

2. The evaluation system of claim 1,

the correction unit extracts the evaluation area from the captured image such that the evaluation area increases as the size of the dent area increases.

3. The evaluation system of claim 2,

the correction unit sets the size of the evaluation area by multiplying the size of the dent area by a predetermined constant, and extracts the evaluation area from the captured image.

4. The evaluation system according to any one of claims 1 to 3, wherein,

the correction unit enlarges or reduces the evaluation area so that the size of the dent area matches a predetermined size.

5. The evaluation system according to any one of claims 1 to 4, wherein,

the correction unit corrects the color of the evaluation area based on the color of a reference area included in the captured image,

the reference region is an image of a reference body to which a specific color is attached.

6. The evaluation system according to any one of claims 1 to 5, wherein,

the correction unit removes specular reflection from the evaluation area.

7. The evaluation system according to any one of claims 1 to 6, wherein,

the evaluation unit evaluates the coverage using a neural network.

8. An evaluation device that evaluates a coverage area of an evaluation object using a captured image of the evaluation object, comprising:

an image acquisition unit that acquires the captured image;

a correction unit that generates an evaluation image by correcting the captured image;

an evaluation unit that evaluates the coverage area based on the evaluation image; and

an output unit that outputs an evaluation result of the evaluation unit,

the correction unit extracts an evaluation area from the captured image based on the size of a dent area included in the captured image, and generates the evaluation image based on the evaluation area,

the dent area being an image of a dent generated in the evaluation object.

9. An evaluation method for evaluating a coverage area of an evaluation object using a captured image of the evaluation object, comprising:

acquiring the captured image;

generating an evaluation image by correcting the captured image;

evaluating the coverage area based on the evaluation image; and

a step of outputting the evaluation result of the step of evaluating the coverage area,

in the step of generating the evaluation image, an evaluation area is extracted from the captured image based on the size of a dent area included in the captured image, and the evaluation image is generated based on the evaluation area,

the dent area being an image of a dent generated in the evaluation object.

10. An evaluation program for causing a computer to execute:

acquiring a captured image of an evaluation object;

generating an evaluation image by correcting the captured image;

evaluating a coverage of the evaluation object based on the evaluation image; and

a step of outputting the evaluation result of the step of evaluating the coverage,

in the step of generating the evaluation image, an evaluation area is extracted from the captured image based on the size of a dent area included in the captured image, and the evaluation image is generated based on the evaluation area,

the dent area being an image of a dent generated in the evaluation object.

11. A recording medium which is a computer-readable recording medium on which an evaluation program is recorded,

the evaluation program causes a computer to execute:

acquiring a captured image of an evaluation object;

generating an evaluation image by correcting the captured image;

evaluating a coverage of the evaluation object based on the evaluation image; and

a step of outputting the evaluation result of the step of evaluating the coverage,

in the step of generating the evaluation image, an evaluation area is extracted from the captured image based on the size of a dent area included in the captured image, and the evaluation image is generated based on the evaluation area,

the dent area being an image of a dent generated in the evaluation object.

Technical Field

The present disclosure relates to an evaluation system, an evaluation device, an evaluation method, an evaluation program, and a recording medium.

Background

In order to improve the strength of machine parts and the like, shot peening may be applied to their surfaces. A coverage measuring apparatus for evaluating the degree of completion of such shot peening is known. For example, patent document 1 discloses a coverage measuring device that calculates a coverage based on an image obtained by imaging the processed surface and displays the coverage.

Patent document 1, Japanese patent laid-open publication No. 2011-152603

Shot peening can be performed with shot media of various sizes, so the size of the dents formed on the processed surface varies with the size of the shot. However, if a surface of the same area is used as the evaluation target for shots of different sizes, the shot size may affect the evaluation of the coverage. For example, if the evaluated surface does not have a sufficient area relative to the shot size, the influence of a single dent on the coverage becomes large, and the coverage of the surface as a whole (the average coverage) may not be evaluated properly.

Disclosure of Invention

In the art, it is desired to improve the evaluation accuracy of the coverage.

An evaluation system according to an aspect of the present disclosure is a system that evaluates a coverage area of an evaluation target object using a captured image of the evaluation target object. The evaluation system includes: an image acquisition unit that acquires a captured image; a correction unit that generates an image for evaluation by correcting the captured image; an evaluation unit that evaluates the coverage area based on the evaluation image; and an output unit that outputs the evaluation result of the evaluation unit. The correction unit extracts an evaluation area from the captured image based on the size of the dent area included in the captured image, and generates an image for evaluation based on the evaluation area. The dent region is an image of a dent generated in the evaluation object.

An evaluation device according to another aspect of the present disclosure is a device that evaluates a coverage area of an evaluation target object using a captured image of the evaluation target object. The evaluation device is provided with: an image acquisition unit that acquires a captured image; a correction unit that generates an image for evaluation by correcting the captured image; an evaluation unit that evaluates the coverage area based on the evaluation image; and an output unit that outputs the evaluation result of the evaluation unit. The correction unit extracts an evaluation area from the captured image based on the size of the dent area included in the captured image, and generates an image for evaluation based on the evaluation area. The dent region is an image of a dent generated in the evaluation object.

An evaluation method according to still another aspect of the present disclosure is a method of evaluating a coverage area of an evaluation target object using a captured image of the evaluation target object. The evaluation method comprises: acquiring a shot image; a step of generating an image for evaluation by correcting the captured image; a step of evaluating the coverage based on the evaluation image; and outputting the evaluation result in the step of evaluating the coverage. In the step of generating the image for evaluation, an evaluation area is extracted from the captured image based on the size of the dent area included in the captured image, and the image for evaluation is generated based on the evaluation area. The dent region is an image of a dent generated in the evaluation object.

An evaluation program according to still another aspect of the present disclosure is a program for causing a computer to execute: acquiring a captured image of an evaluation object; a step of generating an image for evaluation by correcting the captured image; evaluating a coverage of the evaluation object based on the evaluation image; and outputting the evaluation result in the step of evaluating the coverage. In the step of generating the image for evaluation, an evaluation area is extracted from the captured image based on the size of the dent area included in the captured image, and the image for evaluation is generated based on the evaluation area. The dent region is an image of a dent generated in the evaluation object.

A recording medium according to still another aspect of the present disclosure is a computer-readable recording medium having an evaluation program recorded thereon, the evaluation program causing a computer to execute: acquiring a captured image of an evaluation object; a step of generating an image for evaluation by correcting the captured image; evaluating a coverage of the evaluation object based on the evaluation image; and outputting the evaluation result in the step of evaluating the coverage. In the step of generating the image for evaluation, an evaluation area is extracted from the captured image based on the size of the dent area included in the captured image, and the image for evaluation is generated based on the evaluation area. The dent region is an image of a dent generated in the evaluation object.

In these evaluation system, evaluation device, evaluation method, evaluation program, and recording medium, an evaluation area is extracted from a captured image of the evaluation target object, and an image for evaluation is generated based on the evaluation area. Then, the coverage is evaluated based on the evaluation image, and the evaluation result is output. The evaluation area is extracted from the captured image based on the size of the dent area, which is an image of a dent generated in the evaluation object. Therefore, for example, when the dent region is large, the evaluation region can be extracted so that the area of the evaluation region becomes large. Thus, the coverage is evaluated over a range corresponding to the size of the dent region. As a result, the evaluation accuracy of the coverage can be improved.

The correction unit may extract the evaluation area from the captured image such that the larger the size of the dent area, the larger the evaluation area. In this case, the error in the coverage due to the size of the dent can be reduced. As a result, the evaluation accuracy of the coverage can be further improved.

The correction unit may set the size of the evaluation area by multiplying the size of the dent area by a predetermined constant, and extract the evaluation area from the captured image. In this case, since the range (area) to be evaluated can be made sufficiently large relative to the size of the dent area, the influence of a single dent on the coverage can be reduced. As a result, the evaluation accuracy of the coverage can be further improved.

The correction unit may enlarge or reduce the evaluation area so that the size of the dent area matches a predetermined size. In this case, the evaluation using the neural network can be performed appropriately.

The correction unit may correct the color of the evaluation area based on the color of the reference area included in the captured image. The reference region may be an image of a reference body to which a specific color is attached. Even with the same evaluation object, the color tone of the captured image may change depending on the color tone of the light source used for capturing. In some cases, the brightness of the captured image differs depending on the amount of light irradiation even with the same evaluation target object. In the above configuration, when the reference region has a color different from the specific color, it is considered that the color in the captured image is affected by light. Therefore, for example, the influence of light can be reduced by correcting the color of the evaluation region so that the color of the reference region becomes a specific color (for example, the original color). This can further improve the accuracy of evaluation of the coverage.

The correction unit may remove specular reflection from the evaluation area. When strong light is irradiated onto the evaluation object, specular reflection may occur, and when the evaluation object is photographed in this state, overexposure may occur in the captured image. In areas where overexposure occurs, color information is lost. Thus, by removing the specular reflection (overexposure), the color information can be recovered. This can further improve the accuracy of evaluation of the coverage.

The evaluation unit may evaluate the coverage using a neural network. In this case, the accuracy of evaluation of the coverage can be further improved by training the neural network.

According to the aspects and embodiments of the present disclosure, the evaluation accuracy of the coverage can be improved.

Drawings

Fig. 1 is a schematic diagram showing the configuration of an evaluation system including an evaluation device according to a first embodiment.

Fig. 2 is a hardware configuration diagram of the user terminal shown in fig. 1.

Fig. 3 is a hardware configuration diagram of the evaluation device shown in fig. 1.

Fig. 4 is a sequence diagram showing an evaluation method performed by the evaluation system shown in fig. 1.

Fig. 5 is a flowchart showing the correction process shown in fig. 4 in detail.

Fig. 6 (a) to (f) are diagrams showing examples of the marker.

Fig. 7 is a diagram for explaining distortion correction.

Fig. 8 (a) and (b) are diagrams for explaining the extraction of the evaluation region.

Fig. 9 (a) and (b) are diagrams for explaining color correction.

Fig. 10 is a diagram showing an example of a neural network.

Fig. 11 is a diagram showing an example of the evaluation result.

Fig. 12 (a) and (b) are diagrams showing examples of the evaluation results.

Fig. 13 (a) and (b) are diagrams showing a modified example of the evaluation result.

Fig. 14 is a schematic diagram showing the configuration of an evaluation system including an evaluation device according to the second embodiment.

Fig. 15 is a sequence diagram showing an evaluation method performed by the evaluation system shown in fig. 14.

Fig. 16 is a schematic diagram showing the configuration of an evaluation system including an evaluation device according to a third embodiment.

Fig. 17 is a flowchart showing an evaluation method performed by the evaluation system shown in fig. 16.

Fig. 18 (a) to (d) are diagrams showing modifications of the marker.

Fig. 19 is a diagram for explaining a modification of the method of extracting an evaluation region.

Fig. 20 is a diagram for explaining a modification of the evaluation area extraction method.

Detailed Description

Hereinafter, embodiments of the present disclosure will be described in detail with reference to the drawings. In the description of the drawings, the same elements are denoted by the same reference numerals, and redundant description is omitted.

(first embodiment)

Fig. 1 is a schematic diagram showing the configuration of an evaluation system including an evaluation device according to a first embodiment. The evaluation system 1 shown in fig. 1 is a system for evaluating the coverage of an evaluation target object. Examples of the evaluation target object include an Almen strip, a gear, and a spring. The coverage is the ratio of the area of the dents produced by shot impacts to the total area of the surface to be measured.

The evaluation system 1 includes one or more user terminals 10 and an evaluation device 20. The user terminal 10 and the evaluation device 20 are communicably connected to each other via a network NW. The network NW may be configured by any one of wired and wireless. Examples of the Network NW include the internet, a mobile communication Network, and a WAN (Wide Area Network).

The user terminal 10 is a terminal device used by a user. The user terminal 10 generates a captured image of the evaluation target object by imaging the evaluation target object, and transmits the captured image to the evaluation device 20. The user terminal 10 also receives the evaluation result from the evaluation device 20 and outputs the evaluation result to the user. The user terminal 10 may be a mobile terminal with a built-in imaging device, or a device capable of communicating with an imaging device. In the present embodiment, a mobile terminal with a built-in imaging device is used as the user terminal 10. Examples of the mobile terminal include a smartphone, a tablet terminal, and a notebook PC (Personal Computer).

Fig. 2 is a hardware configuration diagram of the user terminal shown in fig. 1. As shown in fig. 2, the user terminal 10 may be physically configured as a computer having one or more pieces of hardware such as a processor 101, a main storage device 102, an auxiliary storage device 103, a communication device 104, an input device 105, an output device 106, and an imaging device 107. A processor with a high processing speed is used as the processor 101. Examples of the processor 101 include a GPU (Graphics Processing Unit) and a CPU (Central Processing Unit). The main storage device 102 includes a RAM (Random Access Memory) and a ROM (Read Only Memory). Examples of the auxiliary storage device 103 include a semiconductor memory and a hard disk device.

The communication device 104 is a device that transmits and receives data to and from other devices via the network NW. An example of the communication device 104 is a network card. The transmission and reception of data via the network NW may use encryption. In other words, the communication device 104 may encrypt data and send the encrypted data to other devices. The communication device 104 may also receive encrypted data from other devices and decrypt it. The encryption may be performed by a common-key (symmetric) encryption scheme such as Triple DES (Data Encryption Standard) or Rijndael, or by a public-key encryption scheme such as RSA or ElGamal.
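As an illustration of the common-key case only, the sketch below uses the Python cryptography package's Fernet recipe (AES-based) rather than the Triple DES or Rijndael schemes named above; the file name and the idea of a pre-shared key are assumptions for the example, not part of the patent.

```python
from cryptography.fernet import Fernet

# Pre-shared symmetric key, assumed here to be distributed to both the user
# terminal 10 and the evaluation device 20 in advance.
key = Fernet.generate_key()
cipher = Fernet(key)

image_bytes = open("evaluation_image.png", "rb").read()  # hypothetical file

token = cipher.encrypt(image_bytes)   # user terminal side, before transmission
restored = cipher.decrypt(token)      # evaluation device side, after reception
assert restored == image_bytes
```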

The input device 105 is a device used when the user operates the user terminal 10. Examples of the input device 105 include a touch panel, a keyboard, and a mouse. The output device 106 is a device that outputs various information to the user of the user terminal 10. Examples of the output device 106 include a display, a speaker, and a vibrator.

The imaging device 107 is a device for photographing (imaging). The imaging device 107 is, for example, a camera module. Specifically, the imaging device 107 includes optical-system components such as lenses and an imaging element, control-system circuits that drive and control those components, and a signal-processing circuit that converts the electric signal of the captured image generated by the imaging element into a digital image signal.

The functions of the user terminal 10 shown in fig. 1 are realized by loading one or more predetermined computer programs into hardware such as the main storage device 102, operating each piece of hardware under the control of the one or more processors 101, and reading and writing data in the main storage device 102 and the auxiliary storage device 103.

The user terminal 10 functionally includes an image acquisition unit 11, a correction unit 13, a transmission unit 14, a reception unit 15, an output unit 16, and a correction information acquisition unit 17.

The image acquisition unit 11 is a part that acquires a captured image including the evaluation target object. The image acquisition unit 11 is realized by, for example, the imaging device 107. The captured image may be a still image or a moving image. The captured image is acquired, for example, as image data representing the pixel value of each pixel, but is referred to as a captured image for convenience of explanation. In a case where the user terminal 10 does not have the imaging device 107, the image acquisition unit 11 acquires a captured image by receiving a captured image captured by another device (for example, a terminal having a camera function) from that device. For example, when the image acquisition unit 11 receives a captured image from another device via the network NW, the part that performs the reception processing of the captured image (the communication device 104 or the like in fig. 2) functions as the image acquisition unit 11. The image acquisition unit 11 outputs the captured image to the correction unit 13.

The correction unit 13 is a part for generating an evaluation image by correcting the captured image. The correction unit 13 extracts an evaluation area from the captured image, and generates an image for evaluation based on the evaluation area. The evaluation area is defined based on the size of the dent area, which is the image of the dent included in the captured image. The correction unit 13 performs, for example, size correction, distortion correction, color correction, specular reflection removal, noise removal, and vibration correction on the captured image. The details of each correction process will be described later. The correction unit 13 outputs the evaluation image to the transmission unit 14.

The transmission unit 14 is a part for transmitting the evaluation image to the evaluation device 20. The transmission unit 14 transmits the evaluation image to the evaluation device 20 via the network NW. The transmission unit 14 also transmits the correction information acquired by the correction information acquisition unit 17 to the evaluation device 20. The transmission unit 14 is realized by, for example, the communication device 104. The receiving unit 15 is a part for receiving the evaluation result from the evaluation device 20. The receiving unit 15 receives the evaluation result from the evaluation device 20 via the network NW. The receiving unit 15 is realized by, for example, the communication device 104.

The output unit 16 is a unit for outputting the evaluation result. The output unit 16 is realized by an output device 106, for example. When the evaluation result is output from an output device such as a display provided in another device, the output unit 16 transmits the evaluation result to the other device via the network NW, for example. In this case, a part (such as the communication device 104 in fig. 2) that performs the transmission processing of the evaluation result functions as the output unit 16.

The correction information acquisition unit 17 is a part for acquiring correction information of the evaluation result. For example, the user may use the input device 105 to correct the evaluation result after confirming the evaluation result output by the output unit 16. At this time, the correction information acquisition unit 17 acquires the evaluation result after correction as the correction information. The correction information acquisition unit 17 outputs the correction information to the transmission unit 14.

The evaluation device 20 is a device that evaluates the coverage of an evaluation object using a captured image (evaluation image) of the evaluation object. The evaluation device 20 is constituted by an information processing device (server device) such as a computer.

Fig. 3 is a hardware configuration diagram of the evaluation device shown in fig. 1. As shown in fig. 3, the evaluation device 20 may be physically configured as a computer including one or more hardware such as a processor 201, a main storage device 202, an auxiliary storage device 203, and a communication device 204. As the processor 201, a processor having a high processing speed is used. Examples of the processor 201 include a GPU and a CPU. The main storage 202 is constituted by a RAM, a ROM, and the like. Examples of the auxiliary storage device 203 include a semiconductor memory and a hard disk device.

The communication device 204 is a device that transmits and receives data to and from other devices via the network NW. An example of the communication device 204 is a network card. The transmission and reception of data via the network NW may use encryption. In other words, the communication device 204 may encrypt data and transmit the encrypted data to other devices. The communication device 204 may also receive encrypted data from other devices and decrypt it. The encryption may be performed by a common-key (symmetric) encryption method such as Triple DES or Rijndael, or by a public-key encryption method such as RSA or ElGamal.

Further, the communication device 204 may perform user authentication for determining whether the user of the user terminal 10 is an authorized user or an unauthorized user. In this case, the evaluation device 20 may evaluate the coverage when the user is an authorized user, and may refrain from evaluating the coverage when the user is an unauthorized user. The user authentication uses, for example, a user ID (identifier) and a password registered in advance. The user authentication may also use a one-time password.

The functions of the evaluation device 20 shown in fig. 1 are realized as follows: the hardware such as the main storage 202 is caused to read one or more predetermined computer programs, whereby each hardware is caused to operate under the control of the one or more processors 201, and data in the main storage 202 and the auxiliary storage 203 is read and written.

The evaluation device 20 functionally includes a receiving unit 21, an evaluation unit 22, and a transmitting unit 23.

The receiving unit 21 is a part for receiving an image for evaluation from the user terminal 10. The receiving unit 21 receives an image for evaluation from the user terminal 10 via the network NW. The receiving unit 21 also receives correction information from the user terminal 10. The receiving unit 21 is realized by, for example, the communication device 204. The receiving unit 21 outputs the evaluation image and the correction information to the evaluating unit 22.

The evaluation unit 22 is a portion for evaluating the coverage of the object based on the evaluation image. The evaluation unit 22 evaluates the coverage of the evaluation target object using a neural network. The Neural Network may be a Convolutional Neural Network (CNN) or a Recurrent Neural Network (RNN). The evaluation unit 22 outputs the evaluation result to the transmission unit 23.

The transmitter 23 is a part for transmitting the evaluation result to the user terminal 10. The transmission unit 23 transmits the evaluation result to the user terminal 10 via the network NW. The transmission unit 23 is realized by, for example, the communication device 204. The transmission unit 23 outputs (transmits) the evaluation result to the user terminal 10, and thus may be regarded as an output unit.

Next, an evaluation method performed by the evaluation system 1 will be described with reference to fig. 4 to 13 (b). Fig. 4 is a sequence diagram showing an evaluation method performed by the evaluation system shown in fig. 1. Fig. 5 is a flowchart showing the correction process shown in fig. 4 in detail. Fig. 6 (a) to (f) are diagrams showing examples of the marker. Fig. 7 is a diagram for explaining distortion correction. Fig. 8 (a) and (b) are diagrams for explaining the extraction of the evaluation region. Fig. 9 (a) and (b) are diagrams for explaining color correction. Fig. 10 is a diagram showing an example of a neural network. Fig. 11 is a diagram showing an example of the evaluation result. Fig. 12 (a) and (b) are diagrams showing examples of the evaluation results. Fig. 13 (a) and (b) are diagrams showing a modified example of the evaluation result.

The series of processing in the evaluation method shown in fig. 4 starts, for example, when the user of the user terminal 10 uses the imaging device 107 to image the evaluation target object. First, the image acquiring unit 11 acquires a captured image of the evaluation target (step S01). For example, the image acquiring unit 11 acquires an image of the evaluation target object generated by the imaging device 107 as a captured image. Then, the image acquisition unit 11 outputs the acquired captured image to the correction unit 13.

Before acquiring the captured image of the evaluation object, a marker MK may be attached to the evaluation object. The marker MK is used to correct the captured image in the image processing described later. The marker MK has a shape that enables the orientation of the marker MK to be determined. The marker MK is asymmetric in at least one of the vertical direction and the width direction. Specifically, as shown in (a) to (f) of fig. 6, the marker MK includes a white region Rw and a black region Rb. The marker MK has a square side F1 to facilitate the image processing described later; the side F1 is a side of the region Rb. As shown in fig. 6 (b) to (f), the marker MK may be enclosed by a frame F2, and a gap Rgap may be provided between the frame F2 and the region Rb.

The marker MK is drawn on a sheet member. For example, the user of the user terminal 10 attaches a sheet member bearing the marker MK directly to the evaluation target object. The user may attach the sheet member bearing the marker MK to the evaluation target object using a UAV (Unmanned Aerial Vehicle), a telescopic stick, or the like.

Note that the marker MK may be composed of two or more regions of mutually different colors. For example, the region Rw may be a color other than white, such as gray. The region Rb may be a color other than black, for example a chromatic color. In this embodiment, the marker MK shown in fig. 6 (a) is used.

Next, the correction unit 13 corrects the captured image (step S02). As shown in fig. 5, in the correction processing in step S02, the correction unit 13 first performs distortion correction to correct distortion of the captured image (step S21). The captured image may be distorted as compared with an image obtained by capturing an evaluation object from the front. For example, when the imaging device 107 is a depth camera, the distances between the imaging device 107 and the positions of the evaluation target object are obtained. In this case, the correction unit 13 performs distortion correction by converting the captured image into an image obtained by capturing an image of the evaluation target object from the front, based on the distance between the imaging device 107 and each position of the evaluation target object. When the evaluation object is a structure having a curved surface such as a spring, the correction unit 13 may perform the curved surface correction as the distortion correction.

The correction unit 13 may perform distortion correction using the marker MK. The captured image of the evaluation target to which the marker MK is attached includes a marker region Rm, which is an image (image region) of the marker MK. In this case, the correction unit 13 first extracts the marker region Rm from the captured image. The correction unit 13 extracts the marker region Rm by, for example, performing object detection processing or edge detection processing on the captured image. When the marker MK has a simple shape, edge detection processing is preferably used, since its detection accuracy is higher and its processing speed faster than those of object detection processing.

Then, the correction unit 13 checks whether or not the extracted marker region Rm is an image of the marker MK. The correction unit 13 performs, for example, histogram equalization on the marker region Rm, and then binarizes the marker region Rm. The correction unit 13 compares the binarized marker region Rm with the marker MK, and determines that the marker region Rm is an image of the marker MK when the two match. The vertex coordinates of the marker MK in the captured image are thereby acquired. When the two do not match, the correction unit 13 determines that the marker region Rm is not an image of the marker MK, and extracts a marker region Rm again.

Then, the correction unit 13 calculates the orientation of the marker MK in the captured image using the marker region Rm. Since the marker MK is asymmetric in at least one of the vertical direction and the width direction, the orientation of the marker MK in the captured image can be calculated. As shown in fig. 7, the correction unit 13 performs a projective transformation on the captured image so as to restore the original shape of the marker MK from the vertex coordinates and orientation of the marker MK in the captured image, thereby transforming the captured image into an image obtained by capturing the evaluation target object from the front. Specifically, the correction unit 13 sets the vertex Pm1 as the origin, sets the direction from the vertex Pm1 to the vertex Pm2 as the X1-axis direction, and sets the direction from the vertex Pm1 to the vertex Pm4 as the Y1-axis direction. The correction unit 13 converts the X1-Y1 coordinate system into an X-Y orthogonal coordinate system to restore the shape of the marker MK. Distortion correction is thereby performed.
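A sketch of this step with OpenCV's perspective transform, assuming the four vertex coordinates Pm1 to Pm4 of the marker region Rm have already been found (the coordinate values below are placeholders) and that the marker is mapped back to a square of a chosen pixel size; the file name and sizes are illustrative only.

```python
import cv2
import numpy as np

captured = cv2.imread("captured.jpg")           # hypothetical captured image G
h, w = captured.shape[:2]

# Vertex coordinates Pm1..Pm4 of the marker region Rm in the captured image
# (placeholder values; in practice they come from the marker detection step).
src = np.float32([[412, 300], [655, 318], [640, 560], [398, 545]])

# Target positions if the marker were viewed from the front: Pm1 at the origin,
# Pm1->Pm2 along the X axis, Pm1->Pm4 along the Y axis, a square of side mk_px.
mk_px = 240
dst = np.float32([[0, 0], [mk_px, 0], [mk_px, mk_px], [0, mk_px]])

H = cv2.getPerspectiveTransform(src, dst)       # projective transformation
frontal = cv2.warpPerspective(captured, H, (w, h))
```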

Next, the correction unit 13 extracts the evaluation region Re from the captured image (step S22). Because the shot media used in a single shot peening operation are of a uniform size, the dents produced in that operation are of about the same size. However, the shot used may have a diameter (particle diameter) of, for example, about 0.1 mm to 1 mm, so the shot size can differ from one operation to another. If the coverage is evaluated over the same area for shots of different sizes, the influence of a single dent on the coverage evaluation differs depending on the size (diameter) of the shot. Therefore, as shown in fig. 8 (a) and (b), the correction unit 13 extracts the evaluation region Re from the captured image G based on the size of the dent region De included in the captured image G, and generates the evaluation image based on the evaluation region Re. The dent region De is an image of a dent generated in the evaluation object.

As the size of the dent region De, for example, the average size (e.g., average diameter) of the plurality of dent regions De included in the captured image G is used. The correction unit 13 detects the plurality of dent regions De included in the captured image G by, for example, object detection. The correction unit 13 calculates the average size (for example, the average diameter) of the plurality of dent regions De included in the captured image G, and extracts the evaluation region Re from the captured image G such that the evaluation region Re becomes larger as the average size of the dent regions De becomes larger. Specifically, the correction unit 13 sets the size of the evaluation region Re by multiplying the average size (average diameter) of the dent regions De by a predetermined magnification (for example, 5 to 10 times). For example, the correction unit 13 extracts from the captured image, as the evaluation region Re, a square region whose side length equals the multiplication result.
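A minimal sketch of this extraction step, assuming the dent diameters have already been measured in pixels and that the square is cropped around the image centre (the patent does not specify where the square is placed); the function name and the default magnification of 8 are illustrative.

```python
import numpy as np

def extract_evaluation_region(image, dent_diameters_px, magnification=8):
    """Crop a square evaluation region Re whose side is the average dent
    diameter multiplied by a constant (roughly 5-10x per the description)."""
    avg_d = float(np.mean(dent_diameters_px))   # average diameter of dent regions De
    side = int(round(avg_d * magnification))    # side length of Re in pixels
    h, w = image.shape[:2]
    side = min(side, h, w)                      # clamp to the captured image G
    cy, cx = h // 2, w // 2                     # assumption: crop around the centre
    y0, x0 = cy - side // 2, cx - side // 2
    return image[y0:y0 + side, x0:x0 + side]
```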

Next, the correction unit 13 corrects the size of the evaluation region Re (step S23). The size of the evaluation region Re varies depending on the size of the dent region De. Therefore, in the size correction, the correction unit 13 scales (enlarges or reduces) the evaluation region Re so that the size of the dent region De matches a predetermined size (reference grain size). As a result, the size of the evaluation region Re agrees with a predetermined evaluation size. The evaluation size is the size of the reference images (teacher data) used for training the neural network NN.

In the scaling process, the correction unit 13 first compares the size (average diameter) of the dent region De with the reference grain size to determine whether to perform enlargement or reduction. The correction unit 13 performs enlargement when the average diameter of the dent region De is smaller than the reference grain size, and performs reduction when the average diameter of the dent region De is larger than the reference grain size. In other words, the correction unit 13 enlarges or reduces the evaluation region Re so that the size of the evaluation image matches the evaluation size. The enlargement uses, for example, bilinear interpolation. The reduction uses, for example, an area-averaging (average pixel) method. Other scaling algorithms may be used for the enlargement and reduction, but it is preferable that the character of the image be preserved through the scaling.
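A sketch of the scaling step using OpenCV's resize, where bilinear interpolation handles enlargement and area averaging handles reduction, matching the interpolation choices named above; the function name and arguments are assumptions.

```python
import cv2

def normalize_dent_size(region, avg_dent_px, reference_dent_px):
    """Scale the evaluation region Re so the average dent diameter matches
    the reference grain size used by the training (teacher) images."""
    scale = reference_dent_px / avg_dent_px
    h, w = region.shape[:2]
    new_size = (int(round(w * scale)), int(round(h * scale)))
    if scale > 1.0:
        # Dents smaller than the reference -> enlarge with bilinear interpolation.
        return cv2.resize(region, new_size, interpolation=cv2.INTER_LINEAR)
    # Dents larger than the reference -> reduce with area averaging.
    return cv2.resize(region, new_size, interpolation=cv2.INTER_AREA)
```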

Next, the correction unit 13 corrects the color of the evaluation region Re (step S24). Even for the same evaluation object, the brightness of the image may vary depending on the imaging environment, and if the colors of the light sources used for imaging differ, the colors of the images may also differ. Color correction is performed to reduce the influence of the imaging environment. The correction unit 13 corrects the color of the evaluation region Re based on the color of a reference region included in the captured image. The reference region is an image (image region) of a reference body having a specific color.

As shown in fig. 9 (a), the region Rw of the marker MK is used as the reference body. In this case, the color of the region Rw of the marker MK is measured in advance with a colorimeter or the like, and a reference value indicating the measured color is stored in a memory (not shown). As the value representing the color, an RGB value, an HSV value, or the like is used. As shown in fig. 9 (b), the correction unit 13 acquires the color value of the region Rw in the marker region Rm included in the captured image (evaluation region Re), compares the acquired value with the reference value, and performs color correction so that the difference between them becomes small (for example, zero). The color correction uses gamma correction or the like. As the color correction, the difference may instead be added to each pixel value (offset processing).

The marker MK need not be used as the reference body. In that case, a sample whose color has been measured in advance (for example, a gray plate) may be used as the reference body: the reference body is photographed together with the evaluation target, and the color of the evaluation region Re is corrected in the same manner as when the marker MK is used. The correction unit 13 then performs the color correction based on the gray plate.
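A sketch of the gamma-correction variant, assuming the reference color is given per channel as 8-bit values; solving reference = 255·(measured/255)^gamma per channel is one common way to realize the correction described above, and the function and parameter names are illustrative.

```python
import numpy as np

def gamma_correct_to_reference(region, measured_rgb, reference_rgb):
    """Adjust the evaluation region Re so that the reference area (the white
    region Rw of the marker MK, or a gray plate) matches its pre-measured color."""
    measured = np.clip(np.asarray(measured_rgb, dtype=np.float64), 1.0, 254.0)
    reference = np.clip(np.asarray(reference_rgb, dtype=np.float64), 1.0, 254.0)
    # Per-channel gamma such that 255 * (measured/255) ** gamma == reference.
    gamma = np.log(reference / 255.0) / np.log(measured / 255.0)
    out = 255.0 * (region.astype(np.float64) / 255.0) ** gamma
    return np.clip(out, 0, 255).astype(np.uint8)
```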

Next, the correction unit 13 removes the specular reflection from the evaluation region Re (step S25). Specular reflection may occur when the evaluation target object has a metallic luster. There are cases where specular reflection occurs depending on the state of the coating film of the object to be evaluated. In an image, a portion where specular reflection occurs generally appears as a strong white color. In other words, the portion causing the specular reflection generates overexposure in the image. Since a portion causing the specular reflection may be detected as a white portion after the color correction, the correcting section 13 removes the specular reflection using the image after the color correction (evaluation region Re).

The correction unit 13 identifies the specular reflection portion based on the pixel value of each pixel included in the evaluation region Re. For example, when all of the RGB pixel values of a pixel are larger than a predetermined threshold value, the correction unit 13 determines that the pixel is part of the specular reflection portion. The correction unit 13 may instead convert the pixel values to HSV and apply the same threshold processing to the luminance (V), or to both the luminance (V) and the saturation (S), to identify the specular reflection portion.

Then, the correction unit 13 removes the specular reflection from the specular reflection portion to restore the original image information (pixel values). The correction unit 13 automatically interpolates (restores) the image information of the specular reflection portion from the image information in its vicinity, for example by a method based on the Navier-Stokes equations or by Alexandru Telea's fast marching method. The correction unit 13 may instead restore the image information of the specular reflection portion using a model trained in advance, through machine learning, on images having various coverage values. The machine learning uses, for example, a GAN (Generative Adversarial Network). Further, the correction unit 13 may restore the image information over a region obtained by enlarging the outer edge of the specular reflection portion (in other words, a region that contains the specular reflection portion and is larger than it).
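A sketch of the threshold-and-inpaint step using OpenCV, where cv2.INPAINT_NS corresponds to the Navier-Stokes-based method and cv2.INPAINT_TELEA to Telea's fast marching method; the V-channel threshold of 250 and the small dilation of the mask are assumptions.

```python
import cv2
import numpy as np

def remove_specular(region_bgr, v_thresh=250):
    """Detect blown-out (specular) pixels on the V channel and inpaint them
    from their surroundings."""
    hsv = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2HSV)
    mask = (hsv[:, :, 2] >= v_thresh).astype(np.uint8) * 255
    # Enlarge the mask slightly so the restored area covers the highlight's edge.
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
    # Inpaint radius 3 px; pass cv2.INPAINT_TELEA for the fast marching method.
    return cv2.inpaint(region_bgr, mask, 3, cv2.INPAINT_NS)
```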

Next, the correction unit 13 removes noise from the evaluation region Re (step S26). The correction unit 13 removes noise from the evaluation region Re using, for example, a noise-suppression filter such as a Gaussian filter or a low-pass filter.

Next, the correction unit 13 corrects the evaluation region Re for vibration (step S27). When the user performs imaging with the user terminal 10, vibration such as hand shake may occur. The correction unit 13 corrects the image for vibration using, for example, a Wiener filter or a blind deconvolution algorithm.
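A sketch combining steps S26 and S27, assuming a small Gaussian blur for noise suppression and scikit-image's Wiener deconvolution with a crude, fixed box blur kernel as a stand-in for shake correction; a real implementation would estimate the blur kernel (for example by blind deconvolution), and all names and parameters here are illustrative.

```python
import cv2
import numpy as np
from skimage import restoration

def denoise_and_deblur(region_bgr, psf_size=5):
    """Gaussian smoothing for sensor noise, then Wiener deconvolution as a
    simple example of vibration (blur) correction on the grayscale image."""
    smoothed = cv2.GaussianBlur(region_bgr, (3, 3), 0)          # step S26
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    gray = gray.astype(np.float64) / 255.0
    psf = np.ones((psf_size, psf_size)) / psf_size**2           # assumed blur kernel
    deblurred = restoration.wiener(gray, psf, balance=0.1)      # step S27
    return np.clip(deblurred * 255.0, 0, 255).astype(np.uint8)
```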

The correction process shown in fig. 5 is an example, and the correction process performed by the correction unit 13 is not limited to this. A part or all of steps S21, S23 to S27 may be omitted. Steps S21 to S27 may be performed in any order. In the case where the specular reflection removal is performed after the color correction as described above, the specular reflection portion appears as a strong white color in the image, and therefore the accuracy of specifying the specular reflection portion is improved.

As shown in fig. 7, the marker region Rm (marker MK) may be composed of a plurality of blocks arranged in a grid pattern, in which case the correction unit 13 can determine the coordinates of the vertices of each block from the coordinates of the four vertices of the marker region Rm. The correction unit 13 can thus divide the marker region Rm into a plurality of blocks and process them individually. For example, the correction unit 13 may determine whether or not the marker region Rm is an image of the marker MK using each block. The correction unit 13 may use any one of the blocks as the reference region for color correction. The correction unit 13 may also calculate the degree of distortion of the captured image from the coordinates of each block and perform calibration of the imaging device 107.

Next, the correction unit 13 outputs the captured image corrected by the correction processing of step S02 to the transmission unit 14 as an image for evaluation, and the transmission unit 14 transmits the image for evaluation to the evaluation device 20 via the network NW (step S03). At this time, the transmission unit 14 transmits the evaluation image to the evaluation device 20 together with a terminal ID that can uniquely identify the user terminal 10. As the terminal ID, for example, an IP (Internet Protocol) address can be used. The receiving unit 21 receives the evaluation image transmitted from the user terminal 10, and outputs the evaluation image to the evaluating unit 22. In addition, when the sharpness of the evaluation image is insufficient, the correction unit 13 may not output the evaluation image to the transmission unit 14. The transmission unit 14 may encrypt the evaluation image as described above and transmit the encrypted evaluation image to the evaluation device 20. In this case, the receiving unit 21 receives the encrypted image for evaluation from the user terminal 10, decrypts the encrypted image for evaluation, and outputs the image for evaluation to the evaluating unit 22.

Next, the evaluation unit 22 evaluates the coverage of the evaluation target object based on the evaluation image (step S04). In this example, the evaluation unit 22 evaluates the coverage of the evaluation target object using the neural network NN shown in fig. 10. Upon receiving the evaluation image, the evaluation unit 22 assigns an image ID that can uniquely identify the evaluation image to the evaluation image.

The neural network NN receives the evaluation image as input and outputs a matching rate for each category. As a category, a value obtained by binning the coverage into a predetermined scale unit may be used. For example, when the coverage is expressed as a percentage, the categories are set from 0 to 98% in units of 10%. Examples of specifications relating to coverage include JIS B2711 and SAE J2277. As an example, SAE J2277 sets the upper limit of measurable coverage at 98% (full coverage). The categories are not limited to units of 10%, and may be set in units of 5% or 1%.

As shown in fig. 11, in the present embodiment, values obtained by binning the coverage from 0 to 98% in units of 10% are used as the categories. In this example, for convenience of explanation, a category "100%" is also used. The matching rate indicates the probability that the coverage of the evaluation target object belongs to the category: the larger the matching rate, the higher the possibility that the coverage of the evaluation target object belongs to that category.

The evaluation unit 22 may separate the evaluation image into one or more channels, and may input image information (pixel values) of each channel as an input to the neural network NN. The evaluation unit 22 separates the image for evaluation into components of a color space, for example. When the RGB color space is used as the color space, the evaluation unit 22 separates the evaluation image into a pixel value of the R component, a pixel value of the G component, and a pixel value of the B component. When the HSV color space is used as the color space, the evaluation unit 22 separates the image for evaluation into a pixel value of the H component, a pixel value of the S component, and a pixel value of the V component. The evaluation unit 22 may convert the evaluation image into a grayscale, and may input the converted image to the neural network NN.

As shown in fig. 10, the neural network NN has an input layer L1, an intermediate layer L2, and an output layer L3. The input layer L1 is located at the entrance of the neural network NN, and M input values x_i (i is an integer of 1 to M) are input to the input layer L1. The input layer L1 has a plurality of neurons 41. The neurons 41 are provided in correspondence with the input values x_i, and the number of neurons 41 is equal to the total number M of the input values x_i. In other words, the number of neurons 41 is equal to the total number of pixels included in each channel of the evaluation image. The i-th neuron 41 outputs the value x_i to each neuron 421 of the first intermediate layer L21 of the intermediate layer L2. The input layer L1 also includes a node 41b. The node 41b outputs a bias value b_j (j is an integer of 1 to M1) to each neuron 421.

The intermediate layer L2 is located between the input layer L1 and the output layer L3. The intermediate layer L2 is referred to as a hidden layer, since it is hidden from the outside of the neural network NN. The intermediate layer L2 includes one or more layers. In the example shown in fig. 10, the intermediate layer L2 includes a first intermediate layer L21 and a second intermediate layer L22. The first intermediate layer L21 has M1 neurons 421. In this case, as shown in formula (1), the j-th neuron 421 weights each input value x_i by the weight coefficient w_ij, adds the bias b_j to the sum of the weighted values, and thereby obtains a calculated value z_j. In addition, when the neural network NN is a convolutional neural network, the neuron 421 sequentially performs, for example, calculation using convolution, an activation function, and pooling. In this case, the activation function is, for example, a ReLU function.

[Formula 1]

z_j = Σ_{i=1}^{M} w_ij · x_i + b_j    …(1)

Then, the j-th neuron 421 outputs the calculated value z_j to each neuron 422 of the second intermediate layer L22. The first intermediate layer L21 also includes a node 421b. The node 421b outputs a bias value to each neuron 422. Thereafter, each neuron performs the same calculation as the neuron 421 and outputs its calculated value to each neuron of the subsequent layer. The neurons in the final stage of the intermediate layer L2 (here, the neurons 422) output their calculated values to each neuron 43 of the output layer L3.

The output layer L3 is located at the exit of the neural network NN and outputs the output values y_k (k is an integer of 1 to N). Each output value y_k is a value corresponding to the matching rate of the category assigned to it. The output layer L3 has a plurality of neurons 43. The neurons 43 are provided in correspondence with the output values y_k, and the number of neurons 43 is equal to the total number N of the output values y_k. In other words, the number of neurons 43 is equal to the number of categories representing the coverage. Each neuron 43 performs the same calculation as the neuron 421, and then evaluates an activation function with the calculation result as its argument to obtain the output value y_k. Examples of the activation function include the softmax function, the ReLU function, the hyperbolic tangent function, the sigmoid function, the identity function, and the step function. In the present embodiment, the softmax function is used. Thus, each output value y_k is normalized so that the total of the N output values y_k becomes 1. In other words, the matching rate (%) is obtained by multiplying each output value y_k by 100.
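A minimal NumPy sketch of the fully connected case described above, following formula (1) with ReLU in the hidden layers and a softmax output whose values sum to 1 (multiplying by 100 gives the matching rate in %); the actual embodiment may instead be a convolutional network, and the layer sizes and random weights below are placeholders.

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def forward(x, weights, biases):
    """weights/biases: lists of (W, b) per layer; the last W has N rows,
    one per coverage category."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.maximum(W @ a + b, 0.0)            # formula (1) followed by ReLU
    y = softmax(weights[-1] @ a + biases[-1])     # output layer L3, sums to 1
    return y * 100.0                              # matching rate per category (%)

# Placeholder example: M = 16 inputs, M1 = 8 hidden neurons, N = 11 categories.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 16)), rng.normal(size=(11, 8))]
biases = [np.zeros(8), np.zeros(11)]
print(forward(rng.normal(size=16), weights, biases))
```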

Subsequently, the evaluation unit 22 outputs the N output values y_k, as the evaluation result for the evaluation image, to the transmission unit 23 together with the image ID of the evaluation image. The order of the N output values y_k is determined in advance, and each output value y_k is associated with one of the N categories. The evaluation unit 22 may instead output, as the evaluation result, the largest of the N output values y_k together with the category name or index (corresponding to the "number" shown in fig. 11) associated with that output value. Here, the array of output values corresponding to the matching rates shown in fig. 11 is output to the transmission unit 23 as the evaluation result. In this case, the user terminal 10 can decide how the result is presented to the user.

Then, the transmission unit 23 transmits the evaluation result to the user terminal 10 via the network NW (step S05). At this time, the transmission unit 23 identifies the user terminal 10 of the transmission destination based on the terminal ID transmitted from the user terminal 10 together with the evaluation image, and transmits the evaluation result to the user terminal 10. Then, the receiving unit 15 receives the evaluation result transmitted from the evaluation device 20, and outputs the evaluation result to the output unit 16. The transmission unit 23 may encrypt the evaluation result as described above and transmit the encrypted evaluation result to the user terminal 10. In this case, the receiving unit 15 receives the encrypted evaluation result from the evaluation device 20, decrypts the encrypted evaluation result, and outputs the evaluation result to the output unit 16.

Next, the output unit 16 generates output information for notifying the user of the evaluation result, and outputs the evaluation result to the user based on the output information (step S06). The output unit 16 displays, for example, the name (coverage value) of the category having the highest matching rate together with that matching rate. The output unit 16 may instead calculate the coverage by summing the products of each category's value and its matching rate, and display the calculation result as the evaluation result. In the example of fig. 11, the coverage is 45% (= 40% × 0.5 + 50% × 0.5).
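A small sketch of that weighted-sum calculation on the user terminal side, reproducing the 45% example; the category values and rates below come from the worked example, and the array layout is an assumption.

```python
import numpy as np

# Category values (coverage in %) and their matching rates, as in the 45% example:
# only the 40% and 50% categories have a non-zero rate (0.5 each).
categories = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 98, 100], dtype=float)
rates = np.zeros_like(categories)
rates[categories == 40] = 0.5
rates[categories == 50] = 0.5

coverage = float(np.sum(categories * rates))   # 40*0.5 + 50*0.5 = 45.0
print(f"result: coverage {coverage:.0f}%")
```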

As shown in fig. 12 (a) and (b), the output unit 16 may display the evaluation result of the coverage on a graph using an arrow Pa. The output unit 16 may also display the evaluation result as text, for example as "result: coverage 45%". The output unit 16 may display the names of all categories and their matching rates as text.

The output unit 16 may notify the user of whether the shot peening process is acceptable or unacceptable using the evaluation result. The output unit 16 may output the evaluation result by sound or may output the evaluation result by vibration. The manner of output by the output unit 16 may be set by the user.

Next, the correction information acquisition unit 17 determines whether or not the user has performed the correction operation of the evaluation result. For example, after confirming the evaluation result output by the output unit 16, the user operates the input device 105 to display a screen for correcting the evaluation result.

For example, as shown in fig. 13 (a) and (b), the user operates the input device 105 to move the arrow Pa using the pointer MP, thereby designating the coverage on the graph. In other words, the user determines the coverage by visually inspecting the evaluation object, and the user moves the arrow Pa to indicate the numerical value corresponding to the coverage determined by the user.

To specify coverage, the user may use a text box. The user may use objects such as radio buttons, drop down menus, or sliders to select a category.

When the correction information acquisition unit 17 determines that the correction operation is not performed, a series of processes of the evaluation method of the evaluation system 1 is ended. On the other hand, when determining that the correction operation has been performed by the input device 105, the correction information acquisition unit 17 acquires information indicating the type after the correction as the correction information together with the image ID of the evaluation image on which the correction operation has been performed (step S07).

Then, the correction information acquisition unit 17 outputs the correction information to the transmission unit 14, and the transmission unit 14 transmits the correction information to the evaluation device 20 via the network NW (step S08). The receiving unit 21 receives the correction information transmitted from the user terminal 10, and outputs the correction information to the evaluation unit 22. The transmission unit 14 may encrypt the correction information as described above and transmit the encrypted correction information to the evaluation device 20. In this case, the reception unit 21 receives the encrypted correction information from the user terminal 10, decrypts the encrypted correction information, and outputs the correction information to the evaluation unit 22.

Subsequently, the evaluation unit 22 performs learning based on the correction information (step S09). Specifically, the evaluation unit 22 uses the pair of the corrected category and the corresponding evaluation image as teacher data. The evaluation unit 22 may train the neural network NN by any one of an online learning method, a mini-batch learning method, and a batch learning method. Online learning is a method of performing learning with each new teacher datum as soon as it is acquired. Mini-batch learning is a method in which a fixed amount of teacher data is treated as one unit and learning is performed one unit at a time. Batch learning is a method of performing learning using all of the teacher data. The learning uses an algorithm such as backpropagation. Here, learning of the neural network NN means updating the weight coefficients and bias values used by the neural network NN to better values.
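
A minimal sketch of these three update strategies is shown below, assuming a generic train_step(images, labels) callable that performs one backpropagation update of the network NN; the buffer handling, the unit size of 32, and all names are assumptions made only for illustration.

```python
# Hedged sketch: three ways of feeding teacher data (evaluation image,
# corrected category) to a generic train_step callable. All names and the
# unit size are assumptions, not the prescribed implementation.

teacher_buffer = []  # accumulated (evaluation_image, corrected_category) pairs

def learn_from_correction(image, corrected_category, train_step,
                          mode="online", unit=32):
    teacher_buffer.append((image, corrected_category))
    if mode == "online":
        # update immediately with each newly acquired teacher datum
        train_step([image], [corrected_category])
    elif mode == "mini_batch" and len(teacher_buffer) % unit == 0:
        # update once a fixed unit of teacher data has accumulated
        images, labels = zip(*teacher_buffer[-unit:])
        train_step(list(images), list(labels))
    elif mode == "batch":
        # update using all teacher data collected so far
        images, labels = zip(*teacher_buffer)
        train_step(list(images), list(labels))
```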

As described above, the series of processes of the evaluation method of the evaluation system 1 is completed.

Each of the functional units in the user terminal 10 and the evaluation device 20 is realized by executing a program module for realizing each function in a computer constituting the user terminal 10 and the evaluation device 20. The evaluation program including these program modules is provided by a computer-readable recording medium such as a ROM or a semiconductor memory. The evaluation program may be provided as a data signal via a network.

In the evaluation system 1, the evaluation device 20, the evaluation method, the evaluation program, and the recording medium described above, the evaluation region Re is extracted from the captured image of the evaluation object, and an evaluation image is generated based on the evaluation region Re. The coverage is then evaluated based on the evaluation image, and the evaluation result is output. The evaluation region Re is extracted from the captured image based on the size of the dent region De, which is an image of a dent generated in the evaluation object. Specifically, the evaluation region Re is extracted from the captured image such that the larger the dent region De, the larger the (area of the) evaluation region Re. The coverage is thus evaluated over a range corresponding to the size of the dent region De, so the influence of any single dent on the coverage can be reduced. As a result, the evaluation accuracy of the coverage can be improved.

More specifically, the size of the evaluation region Re is set by multiplying the size (e.g., the average diameter) of the dent region De by a predetermined constant. Therefore, the range (area) of the evaluation region Re can be made sufficiently large relative to the size of the dent region De, so the influence of any single dent on the coverage can be reduced. As a result, the evaluation accuracy of the coverage can be improved.
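
As a rough illustration, such an extraction could look like the sketch below; the constant k = 10, the crop position handling, and the assumption that the image is a NumPy array are illustrative assumptions, not values from the embodiment.

```python
# Sketch: crop a square evaluation region whose side is a predetermined
# constant times the measured dent size. The image is assumed to be a
# NumPy array of shape (height, width, channels); k and top_left are
# assumptions for illustration.

def extract_evaluation_region(image, dent_diameter_px, top_left, k=10):
    side = int(round(k * dent_diameter_px))
    y, x = top_left
    height, width = image.shape[:2]
    side = min(side, height - y, width - x)  # keep the crop inside the image
    return image[y:y + side, x:x + side]
```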

The evaluation region Re is enlarged or reduced so that the size of the dent region De matches a predetermined size (for example, a reference grain size). Therefore, the neural network NN can perform the evaluation appropriately. In addition, since the coverage can be evaluated on a common basis for shots having different grain diameters, the evaluation accuracy of the coverage can be improved.
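
A minimal sketch of this scale normalization, assuming OpenCV is available and the dent size is measured in pixels, might look like the following; the library choice and interpolation mode are assumptions.

```python
import cv2  # OpenCV; the library choice is an assumption

# Sketch: rescale the evaluation region so that the measured dent size matches
# a predetermined reference size (e.g., a reference grain diameter in pixels).

def normalize_dent_scale(evaluation_region, dent_diameter_px, reference_diameter_px):
    scale = reference_diameter_px / float(dent_diameter_px)
    return cv2.resize(evaluation_region, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_LINEAR)
```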

Even with the same evaluation object, the color tone of the captured image may change depending on the color tone of the light source used for capturing. In addition, even with the same evaluation object, the brightness of the captured image may vary depending on the amount of light irradiation. Therefore, the color of the evaluation region Re is corrected based on the color of the reference region (for example, the region Rw in the marker region Rm) included in the captured image. In the case where the color of the region Rw in the marker region Rm is different from the color (white) of the region Rw in the marker MK, it is considered that the color in the captured image is affected by light. Therefore, the color of the evaluation region Re is corrected so that the color of the region Rw in the marker region Rm becomes the color of the region Rw in the marker MK. This can reduce the influence of light. As a result, the evaluation accuracy of the coverage can be further improved.
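
One simple way to realize such a correction is a per-channel gain that maps the observed color of the white region Rw back to white, as in the sketch below; the assumption that Rw is pure white (255 in every channel) and the clipping are illustrative choices, not the prescribed method.

```python
import numpy as np

# Sketch: per-channel white balance using the observed color of the reference
# region (assumed to be the pure-white region Rw of the marker MK).

def correct_color(evaluation_region, reference_patch):
    region = evaluation_region.astype(np.float32)
    observed_white = reference_patch.reshape(-1, 3).mean(axis=0)  # color of Rw
    gains = 255.0 / np.maximum(observed_white, 1.0)  # avoid division by zero
    return np.clip(region * gains, 0, 255).astype(np.uint8)
```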

When the evaluation object is illuminated with strong light, specular reflection may occur, and when the evaluation object is photographed in this state, overexposure may occur in the captured image. Color information is lost in the areas where overexposure occurs. Therefore, the color information can be restored by removing the specular reflection (overexposure) from the evaluation region Re. This can further improve the evaluation accuracy of the coverage.
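
One possible (non-prescribed) way to remove such overexposed areas is to mask near-saturated pixels and fill them in by inpainting, as sketched below with OpenCV; the threshold of 250 and the Telea inpainting method are assumptions.

```python
import cv2
import numpy as np

# Sketch: mask pixels close to saturation (overexposure) and fill them by
# inpainting. The threshold and the inpainting method are assumptions.

def remove_overexposure(evaluation_region, threshold=250):
    gray = cv2.cvtColor(evaluation_region, cv2.COLOR_BGR2GRAY)
    mask = (gray >= threshold).astype(np.uint8) * 255
    return cv2.inpaint(evaluation_region, mask, 3, cv2.INPAINT_TELEA)
```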

The coverage is evaluated using the neural network NN. The pattern generated on the surface of the evaluation object by shot blasting has no fixed form. General object detection therefore has difficulty determining the position and state of such an irregular object. In addition, pattern recognition is not suited to recognizing patterns that exist in countless variations. In contrast, by training the neural network NN, the coverage can be evaluated, and the evaluation accuracy of the coverage can be further improved.
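
For concreteness, a small convolutional classifier that outputs a match rate per coverage category could be sketched as follows; the use of PyTorch, the layer sizes, and the number of categories (five) are all assumptions and do not describe the structure of the network NN itself.

```python
import torch.nn as nn

# Hedged sketch of a small classifier that outputs one match rate per coverage
# category; architecture and framework are illustrative assumptions only.

def build_coverage_classifier(num_categories=5):
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_categories),
        nn.Softmax(dim=1),  # match rates over the coverage categories
    )
```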

(second embodiment)

Fig. 14 is a schematic diagram showing the configuration of an evaluation system including an evaluation device according to the second embodiment. The evaluation system 1A shown in fig. 14 is different from the evaluation system 1 mainly in that a user terminal 10A is provided instead of the user terminal 10, and an evaluation device 20A is provided instead of the evaluation device 20.

The user terminal 10A is different from the user terminal 10 mainly in that it does not include the correction unit 13 and in that it transmits a captured image to the evaluation device 20A instead of an image for evaluation. Further, in the user terminal 10A, the image acquisition unit 11 outputs the captured image to the transmission unit 14. The transmission unit 14 outputs the captured image to the evaluation device 20A.

The evaluation device 20A is different from the evaluation device 20 mainly in that it receives a captured image from the user terminal 10A instead of an evaluation image, and in that it further includes a correction unit 24. The receiving unit 21 receives the captured image from the user terminal 10A and outputs the captured image to the correction unit 24. Since the receiving unit 21 acquires the captured image from the user terminal 10A, it can be regarded as an image acquisition unit. The correction unit 24 has the same function as the correction unit 13. In other words, the correction unit 24 extracts the evaluation area from the captured image and generates an evaluation image based on the evaluation area. The correction unit 24 then outputs the evaluation image to the evaluation unit 22.

Next, an evaluation method performed by the evaluation system 1A will be described with reference to fig. 15. Fig. 15 is a sequence diagram showing an evaluation method performed by the evaluation system shown in fig. 14. First, the image acquiring unit 11 acquires a captured image of the evaluation target (step S31). For example, the image acquiring unit 11 acquires an image of the evaluation object generated by the image capturing device 107 as a captured image in the same manner as in step S01.

Then, the image acquiring unit 11 outputs the acquired captured image to the transmitting unit 14, and the transmitting unit 14 transmits the captured image to the evaluation device 20A via the network NW (step S32). At this time, the transmission unit 14 transmits the captured image to the evaluation device 20A together with a terminal ID that can uniquely identify the user terminal 10A. The receiving unit 21 receives the captured image transmitted from the user terminal 10A, and outputs the captured image to the correcting unit 24. As described above, the transmission unit 14 may encrypt the captured image and transmit the encrypted captured image to the evaluation device 20A. In this case, the reception unit 21 receives the encrypted captured image from the user terminal 10A, decrypts the encrypted captured image, and outputs the captured image to the correction unit 24.

Next, the correction unit 24 corrects the captured image (step S33). Since the processing of step S33 is the same as the processing of step S02, detailed description thereof is omitted. The correction unit 24 outputs the captured image corrected by the correction process of step S33 to the evaluation unit 22 as an evaluation image. Hereinafter, the processing of steps S34 to S39 is the same as the processing of steps S04 to S09, and therefore detailed description thereof is omitted. As described above, the series of processes of the evaluation method of the evaluation system 1A is completed.

Each of the functional units in the user terminal 10A and the evaluation device 20A is realized by executing a program module for realizing each function in a computer constituting the user terminal 10A and the evaluation device 20A. The evaluation program including these program modules is provided by a computer-readable recording medium such as a ROM or a semiconductor memory. The evaluation program may be provided as a data signal via a network.

The evaluation system 1A, the evaluation device 20A, the evaluation method, the evaluation program, and the recording medium according to the second embodiment also exhibit the same effects as the evaluation system 1, the evaluation device 20, the evaluation method, the evaluation program, and the recording medium according to the first embodiment. In the evaluation system 1A, the evaluation device 20A, the evaluation method, the evaluation program, and the recording medium according to the second embodiment, since the user terminal 10A does not include the correction unit 13, the processing load of the user terminal 10A can be reduced.

(third embodiment)

Fig. 16 is a schematic diagram showing the configuration of an evaluation system including an evaluation device according to a third embodiment. The evaluation system 1B shown in fig. 16 is different from the evaluation system 1 mainly in that the user terminal 10B is provided instead of the user terminal 10 and the evaluation device 20 is not provided. The user terminal 10B is different from the user terminal 10 mainly in that it further includes the evaluation unit 18 and does not include the transmission unit 14 and the reception unit 15. In this case, the user terminal 10B may be a stand-alone system type evaluation device.

In the user terminal 10B, the correction unit 13 outputs the evaluation image to the evaluation unit 18. The correction information acquisition unit 17 outputs the correction information to the evaluation unit 18. The evaluation unit 18 has the same function as the evaluation unit 22. In other words, the evaluation unit 18 evaluates the coverage of the evaluation target object based on the evaluation image. Then, the evaluation unit 18 outputs the evaluation result to the output unit 16.

Next, an evaluation method performed by the evaluation system 1B (user terminal 10B) will be described with reference to fig. 17. Fig. 17 is a flowchart showing an evaluation method performed by the evaluation system shown in fig. 16.

First, the image acquiring unit 11 acquires a captured image of the evaluation target in the same manner as in step S01 (step S41). Then, the image acquisition unit 11 outputs the captured image to the correction unit 13. Next, the correction unit 13 corrects the captured image (step S42). Since the processing of step S42 is the same as the processing of step S02, detailed description thereof is omitted. Then, the correction unit 13 outputs the captured image corrected by the correction process of step S42 to the evaluation unit 18 as an evaluation image.

Next, the evaluation unit 18 evaluates the coverage of the evaluation target object based on the evaluation image (step S43). Since the processing of step S43 is the same as the processing of step S04, detailed description thereof is omitted. Then, the evaluation unit 18 outputs the evaluation result to the output unit 16. Next, the output unit 16 generates output information for notifying the user of the evaluation result, and outputs the evaluation result to the user based on the output information (step S44). Since the processing of step S44 is the same as the processing of step S06, detailed description thereof is omitted.

Next, the correction information acquisition unit 17 determines whether or not the user has performed a correction operation on the evaluation result (step S45). When the correction information acquisition unit 17 determines that no correction operation has been performed (no in step S45), the series of processes of the evaluation method of the evaluation system 1B ends. On the other hand, when determining that a correction operation has been performed (yes in step S45), the correction information acquisition unit 17 acquires, as the correction information, information indicating the corrected category together with the image ID of the evaluation image on which the correction operation was performed. The correction information acquisition unit 17 then outputs the correction information to the evaluation unit 18.

Subsequently, the evaluation unit 18 performs learning based on the correction information (step S46). Since the processing of step S46 is the same as the processing of step S09, detailed description thereof is omitted. As described above, the series of processing of the evaluation method of the evaluation system 1B is completed.

Each functional unit in the user terminal 10B is realized by executing a program module for realizing each function in a computer constituting the user terminal 10B. The evaluation program including these program modules is provided by a computer-readable recording medium such as a ROM or a semiconductor memory. The evaluation program may be provided as a data signal via a network.

The evaluation system 1B, the user terminal 10B, the evaluation method, the evaluation program, and the recording medium according to the third embodiment also provide the same effects as the evaluation system 1, the evaluation device 20, the evaluation method, the evaluation program, and the recording medium according to the first embodiment. In the evaluation system 1B, the user terminal 10B, the evaluation method, the evaluation program, and the recording medium according to the third embodiment, since data transmission and reception via the network NW are not required, a time lag associated with communication via the network NW does not occur, and the response speed can be improved. In addition, the traffic and communication costs of the network NW can be reduced.

The evaluation system, evaluation device, evaluation method, evaluation program, and recording medium according to the present disclosure are not limited to the above-described embodiments.

For example, when the user does not correct the evaluation result, the user terminals 10, 10A, and 10B may not include the correction information acquiring unit 17.

In addition, batch normalization may be performed in the neural network NN. Batch normalization is a process of transforming the output values of each layer so that their variance becomes constant. In this case, since the bias value is not required, the nodes that output the bias value (the node 41b, the node 421b, and the like) can be omitted.
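
As a minimal sketch of this point (assuming a PyTorch-style layer stack), the bias of a layer followed by batch normalization can simply be disabled, since the normalization layer's own shift parameter takes its place; the layer sizes are arbitrary assumptions.

```python
import torch.nn as nn

# Sketch: when batch normalization follows a convolution, the convolution's
# bias (the node that would output the bias value) can be omitted.

conv_block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=False),  # no bias node
    nn.BatchNorm2d(16),  # keeps the per-channel output variance stable
    nn.ReLU(),
)
```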

The evaluation units 18 and 22 may evaluate the coverage based on the evaluation image by a method other than the neural network.

The output unit 16 may output the evaluation result to a memory (storage device), not shown, and store the evaluation result in the memory. The output unit 16 may create management data in which a management number that can uniquely identify the evaluation result, the date on which the evaluation was performed, and the like are associated with the evaluation result, and store the management data.

The shape of marker MK is not limited to a square. The shape of marker MK may be rectangular.

In the above-described embodiment, marker MK has a shape from which the orientation of marker MK can be determined, but the shape of marker MK is not limited to a shape having directivity. The shape of marker MK may be an omnidirectional shape. For example, as shown in fig. 18 (a), the region Rb may have a square shape, and the region Rw may have a square shape one size smaller than the region Rb. The center point of the region Rb and the center point of the region Rw coincide, and the sides of the region Rb and the sides of the region Rw are parallel to each other. When marker MK has an omnidirectional shape, the shape of marker MK is simple, so marker MK can be created easily. Further, since the orientation of marker MK does not matter, the user can easily photograph the evaluation object.

As shown in fig. 18 (b), marker MK may have an opening Hm. The opening Hm is a through hole penetrating the sheet member on which the marker MK is drawn. The opening area of the opening Hm is sufficiently larger than the area of the evaluation region Re to be extracted. Therefore, the correction units 13 and 24 may extract the region exposed through the opening Hm from the captured image as pre-processing for extracting the evaluation region Re. The correction units 13 and 24 may then extract the evaluation region Re from the extracted region based on the size of the dent region De included in the extracted region.

When a marker MK that is not surrounded by the frame F2 is used, the boundary between the marker region Rm and the region of the evaluation object may be unclear due to reflection of light or the like. In such a case, an edge may not be detected in the edge detection process. In object detection, if the determination threshold is lowered too far, false detections increase, and if it is raised too far, missed detections increase. In addition, the orientation (angle) of the mark region Rm cannot be obtained by object detection itself. Further, when the mark region Rm is extracted by object detection and edge enhancement and edge detection are then performed, the detection accuracy improves; however, when the color of the outer edge portion of the mark region Rm hardly differs from the color around the mark region Rm, missed detections may still occur.

On the other hand, in marker MK shown in fig. 6 (b) to (F), and fig. 18 (c) and (d), marker MK is surrounded by frame F2, and gap Rgap is provided between frame F2 and region Rb. The gap Rgap surrounds the region Rb along the edge F1. The color of the gap Rgap is different from the color of the outer edge portion of the mark MK (in other words, the region Rb). Therefore, even if the color of the periphery of the mark region Rm (outside the frame F2) is similar to the color of the outer edge portion (region Rb) of the mark region Rm, the outer edge of the mark region Rm (edge F1) is clear, and the outer edge of the mark region Rm can be detected. For example, when the edge enhancement processing is performed after the marker region Rm is extracted by the object detection processing, and the edge detection processing is further performed, the vertices (vertices Pm1 to Pm4) of the region Rb can be detected more reliably. Therefore, the mark region Rm can be extracted at high speed and with high accuracy. As a result, the evaluation accuracy of the coverage can be further improved. In order to secure the gap Rgap, the distance between the frame F2 and the region Rb (the width of the gap Rgap) may be one tenth or more of the side of the mark MK. For example, in consideration of ease of use of the marker MK, the distance between the frame F2 and the region Rb (the width of the gap Rgap) may be equal to or less than half of one side of the marker MK.

As shown in fig. 18 (c) and (d), frame F2 need not be a frame that completely encloses marker MK. In other words, a break portion Fgap may be provided in frame F2. For example, frame F2 is not limited to a solid line and may be a broken line. In this case, frame F2 has a shape in which the frame line of frame F2 is interrupted partway. When the break portion Fgap is provided in frame F2, the probability that the region surrounded by frame F2 is detected as the mark region Rm by edge detection processing or the like can be reduced, so the detection accuracy of the mark region Rm is improved. In other words, since the possibility of detecting a vertex of frame F2 can be reduced, the vertices of the mark region Rm (region Rb) can be detected more reliably. As a result, the evaluation accuracy of the coverage can be further improved.

As shown in fig. 19, when extracting the evaluation region Re of a size set based on the dent region De from the captured image G, the correction units 13 and 24 may randomly determine the evaluation region Re in the captured image G and extract the determined evaluation region Re. In this case, first, the correction units 13 and 24 calculate the maximum values of the coordinates of the reference point Pr of the evaluation region Re. The reference point Pr is one of the four vertices of the evaluation region Re; here, it is the vertex closest to the origin of the X-Y coordinates among the four vertices of the evaluation region Re. For example, when the length of one side of the evaluation region Re is 100 pixels, the maximum value x_crop_max of the x coordinate and the maximum value y_crop_max of the y coordinate of the reference point Pr are expressed by the following formula (2). Here, the vertex Pg1 of the captured image G is located at the origin (0, 0), the vertex Pg2 at (X_g, 0), the vertex Pg3 at (X_g, Y_g), and the vertex Pg4 at (0, Y_g).

[Equation 2]

(x_crop_max, y_crop_max) = (X_g - 100, Y_g - 100) … (2)

The correction units 13 and 24 randomly determine the coordinates (x_crop, y_crop) of the reference point Pr of the evaluation region Re using formula (3). The function random(minimum value, maximum value) returns an arbitrary value in the range from the minimum value to the maximum value.

[Equation 3]

(x_crop, y_crop) = (random(0, x_crop_max), random(0, y_crop_max)) … (3)

When the determined evaluation region Re and the mark region Rm overlap, the correction units 13 and 24 may determine the coordinates of the reference points of the evaluation region Re again.
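
A minimal sketch of this random placement, including the re-draw when the candidate position overlaps the marker region Rm, is shown below; the retry limit and the rectangle representation of Rm are assumptions made for illustration.

```python
import random

# Sketch of formulas (2) and (3): draw a random reference point Pr for a
# 100-pixel evaluation region and re-draw if it overlaps the marker region Rm.
# marker_rect = (x, y, width, height) of Rm; the retry limit is an assumption.

def choose_reference_point(image_width, image_height, marker_rect,
                           side=100, max_tries=100):
    x_crop_max = image_width - side    # formula (2)
    y_crop_max = image_height - side
    mx, my, mw, mh = marker_rect
    for _ in range(max_tries):
        x_crop = random.randint(0, x_crop_max)   # formula (3)
        y_crop = random.randint(0, y_crop_max)
        overlaps = (x_crop < mx + mw and mx < x_crop + side and
                    y_crop < my + mh and my < y_crop + side)
        if not overlaps:
            return x_crop, y_crop
    return None  # no non-overlapping position found within the retry limit
```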

As shown in fig. 20, the correction units 13 and 24 may specify an extraction direction with respect to the marker region Rm and extract the evaluation region Re from the captured image G. In this case, first, the correction units 13 and 24 calculate the coordinates (x_cg, y_cg) of the center position Cg of the captured image G and the coordinates (x_cm, y_cm) of the center position Cm of the mark region Rm. Then, as shown in formula (4), the correction units 13 and 24 calculate a vector V from the center position Cm toward the center position Cg.

[Equation 4]

V = (x_cg - x_cm, y_cg - y_cm) = (x_v, y_v) … (4)

The correction units 13 and 24 determine the position of the evaluation region Re in the direction indicated by the vector V from the mark region Rm. For example, the correction units 13 and 24 determine the position of the evaluation region Re such that the reference point Pr of the evaluation region Re is located in the direction of the vector V from the center position Cm. Here, the reference point Pr is the vertex closest to the marker region Rm among the four vertices of the evaluation region Re. The correction units 13 and 24 determine the position of the evaluation region Re so that it does not overlap the mark region Rm, for example. Specifically, the correction units 13 and 24 calculate, among the candidate coordinates of the reference point Pr, the coordinates (x_crop_max, y_crop_max) of the reference point Pr_max farthest from the mark region Rm and the coordinates (x_crop_min, y_crop_min) of the reference point Pr_min nearest to the mark region Rm. The correction units 13 and 24 then determine the position of the evaluation region Re so that the reference point Pr is located on the line segment connecting these two points.
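
A rough sketch of this direction-based placement is given below; the interpolation parameter t and the way Pr_min and Pr_max are supplied are assumptions made only to illustrate placing Pr on the segment between them along the vector V.

```python
# Sketch of formula (4) and of placing the reference point Pr on the segment
# between Pr_min and Pr_max; t = 0.5 is an arbitrary assumption.

def place_reference_point(center_image, center_marker, pr_min, pr_max, t=0.5):
    x_cg, y_cg = center_image
    x_cm, y_cm = center_marker
    v = (x_cg - x_cm, y_cg - y_cm)  # vector V of formula (4)
    x = pr_min[0] + t * (pr_max[0] - pr_min[0])
    y = pr_min[1] + t * (pr_max[1] - pr_min[1])
    return (int(round(x)), int(round(y))), v
```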

Description of reference numerals

1, 1A, 1B … evaluation system; 10, 10A, 10B … user terminal; 11 … image acquisition unit; 13, 24 … correction unit; 16 … output unit; 17 … correction information acquisition unit; 18, 22 … evaluation unit; 20, 20A … evaluation device; 21 … receiving unit (image acquisition unit); 23 … transmission unit (output unit); De … dent region; G … captured image; NN … neural network; Re … evaluation region.
