Apparatus and method for automatic failure threshold detection of images

Document No.: 1026939 Publication date: 2020-10-27

Description: This disclosure, "Apparatus and method for automatic failure threshold detection of images," was created by J. Garcia on 2018-03-13. Its main content is summarized as follows: In at least one embodiment, a computer program product embodied in a non-transitory computer readable medium is provided that is programmed to detect a performance threshold of one or more cameras. The computer program product includes instructions for: capturing a plurality of images from one or more cameras; comparing an object within each captured image to a predetermined object to determine whether the object has been correctly identified; and extracting the object from each captured image. The computer program product includes instructions for: applying at least one gradient to each extracted object to generate a plurality of gradient images. The computer program product includes instructions for: comparing the extracted object with the predetermined object; and determining whether the extracted object modified by the at least one gradient has been correctly identified. The computer program product includes instructions for: establishing a performance threshold for the one or more cameras.

1. A computer program product embodied in a non-transitory computer readable medium, the computer program product programmed to detect a performance threshold of one or more cameras, the computer program product comprising instructions for:

capturing a plurality of images from one or more cameras;

comparing an object within each captured image to a predetermined object to determine whether the object has been correctly identified;

extracting the object from each captured image determined to be correctly recognized;

applying at least one gradient to each extracted object to generate a plurality of gradient images, wherein each gradient image comprises extracted objects modified by the at least one gradient;

comparing the extracted object modified by the at least one gradient with the predetermined object;

determining whether the extracted object modified by the at least one gradient has been correctly identified based on a comparison of the extracted object modified by the at least one gradient with the predetermined object; and

establishing a performance threshold for the one or more cameras after determining that the extracted object modified by the at least one gradient has been correctly identified.

2. The computer program product of claim 1, wherein the instructions for establishing the performance threshold of the one or more cameras further comprise instructions for: establishing the performance threshold for a performance gradient of the one or more cameras after determining whether the extracted object modified by the at least one gradient has been correctly identified.

3. The computer program product of claim 2, wherein the performance gradient comprises one of saturation, distortion, contrast, blur, sharpness, and brightness.

4. The computer program product of claim 1, further comprising instructions for: filtering each object from each captured image prior to extracting the object from each captured image determined to be correctly recognized.

5. The computer program product of claim 4, further comprising instructions for: applying a bounding box around each object from each captured image prior to filtering the object from each captured image determined to be correctly recognized.

6. The computer program product of claim 5, further comprising instructions for: extracting the object before applying the at least one gradient.

7. The computer program product of claim 1, further comprising instructions for: receiving the at least one gradient via a user interface.

8. The computer program product of claim 1, further comprising instructions for: displaying the performance threshold of the one or more cameras via a user interface.

9. The computer program product of claim 1, wherein the instructions for capturing the plurality of images from the one or more cameras further comprise instructions for: capturing a plurality of images of an exterior of the vehicle from the one or more cameras.

10. An apparatus for detecting performance thresholds of one or more cameras, the apparatus comprising:

a memory device; and

an image detector comprising the memory device and configured to:

capture a plurality of images from one or more cameras;

compare an object within each captured image with a predetermined object stored in the memory device to determine whether the object has been correctly identified;

extract the object from each captured image determined to be correctly identified;

apply at least one gradient to each extracted object to generate a plurality of gradient images, wherein each gradient image comprises extracted objects modified by the at least one gradient;

compare the extracted object modified by the at least one gradient with the predetermined object;

determine whether the extracted object modified by the at least one gradient has been correctly identified based on a comparison of the extracted object modified by the at least one gradient with the predetermined object; and

establish a performance threshold for the one or more cameras after determining that the extracted object modified by the at least one gradient has been correctly identified.

11. The apparatus of claim 10, wherein the image detector is configured to establish the performance threshold of performance gradients of the one or more cameras after determining whether the extracted object modified by the at least one gradient has been correctly identified.

12. The apparatus of claim 11, wherein the performance gradient comprises one of saturation, distortion, contrast, blur, sharpness, and brightness.

13. The apparatus of claim 10, wherein the image detector is further configured to filter each object from each captured image prior to extracting the object from each captured image determined to be correctly identified.

14. The apparatus of claim 13, wherein the image detector is further configured to apply a bounding box around each object from each captured image prior to extracting the object from each captured image.

15. The apparatus of claim 14, wherein the image detector is further configured to extract the object prior to applying the at least one gradient.

16. The apparatus of claim 10, wherein the image detector is configured to receive the at least one gradient via a user interface.

17. The apparatus of claim 10, wherein the image detector is further configured to display the performance threshold of the one or more cameras via a user interface.

18. The apparatus of claim 10, wherein the image detector is further configured to capture a plurality of images of the exterior of the vehicle from the one or more cameras.

19. An apparatus for detecting performance thresholds of one or more cameras, the apparatus comprising:

a memory device; and

an image detector comprising the memory device and configured to:

capture a plurality of images of an exterior of the vehicle from one or more cameras;

compare an object within each captured image with a predetermined object stored in the memory device to determine whether the object has been correctly identified;

extract the object from each captured image determined to be correctly identified;

apply at least one gradient to each extracted object to generate a plurality of gradient images, wherein each gradient image comprises extracted objects modified by the at least one gradient;

compare the extracted object modified by the at least one gradient with the predetermined object;

determine whether the extracted object modified by the at least one gradient has been correctly identified based on a comparison of the extracted object modified by the at least one gradient with the predetermined object; and

establish a performance threshold for the one or more cameras after determining that the extracted object modified by the at least one gradient has been correctly identified.

20. The apparatus of claim 19, wherein the image detector is further configured to establish the performance threshold of performance gradients for the one or more cameras after determining whether the extracted object modified by the at least one gradient has been correctly identified.

Technical Field

Aspects disclosed herein relate generally to an apparatus and method for providing automatic failure threshold detection of images. These and other aspects will be discussed in greater detail herein.

Background

Vehicles often need to recognize images of the exterior of the vehicle for a number of functions performed by the vehicle. Deep learning/neural network techniques may be trained to identify images captured by a camera on a vehicle. However, it may be difficult to determine the threshold points at which such deep learning/neural network techniques or other techniques fail. For example, a hammer may be tested to determine its strength by incrementally increasing the applied force to find the point at which the hammer fails. However, there is no equivalent method on the market to detect the performance threshold of camera systems that use deep learning techniques to detect objects.

Today, test engineers may use camera footage from road tests to detect objects. In a particular situation, a large amount of video is required to find the edge cases. In many cases, road testing must be conducted over millions of miles to capture as many scenes as possible. In some cases, road tests may be conducted over billions or trillions of miles to capture every possible road condition. Typically, deep learning techniques require thousands of edge-case scenarios (or training data) to know what the correct result is (i.e., to know what the correct image is).

Disclosure of Invention

In at least one embodiment, a computer program product embodied in a non-transitory computer readable medium is provided that is programmed to detect a performance threshold of one or more cameras. The computer program product includes instructions for: capturing a plurality of images from one or more cameras; comparing an object within each captured image to a predetermined object to determine whether the object has been correctly identified; and extracting the object determined to be correctly recognized from each captured image. The computer program product includes instructions for: applying at least one gradient to each extracted object to generate a plurality of gradient images. Each gradient image includes the extracted object modified by the at least one gradient. The computer program product includes instructions for: comparing the extracted object modified by the at least one gradient with the predetermined object; and determining whether the extracted object modified by the at least one gradient has been correctly identified based on the comparison of the extracted object modified by the at least one gradient with the predetermined object. The computer program product includes instructions for: establishing a performance threshold for the one or more cameras after determining whether the extracted object modified by the at least one gradient has been correctly identified.

In at least another embodiment, an apparatus for detecting a performance threshold of one or more cameras is provided. The apparatus includes a memory device and an image detector. The image detector includes the memory device and is configured to: capturing a plurality of images from one or more cameras; and comparing the object within each captured image with a predetermined object stored on the memory device to determine whether the object has been correctly identified. The image detector is further configured to: extracting an object from each captured image determined to be correctly recognized; and applying at least one gradient to each extracted object to generate a plurality of gradient images, wherein each gradient image comprises the extracted object modified by the at least one gradient. The image detector is further configured to: comparing the extracted object modified by the at least one gradient with the predetermined object; and determining whether the extracted object modified by the at least one gradient has been correctly identified based on the comparison of the extracted object modified by the at least one gradient with the predetermined object. The image detector is further configured to establish a performance threshold for the one or more cameras after determining whether the extracted object modified by the at least one gradient has been correctly identified.

In at least another embodiment, an apparatus for detecting a performance threshold of one or more cameras is provided. The apparatus includes a memory device and an image detector. The image detector includes the memory device and is configured to: capture a plurality of images of an exterior of the vehicle from one or more cameras; and compare the object within each captured image with a predetermined object stored on the memory device to determine whether the object has been correctly identified. The image detector is further configured to: extract an object from each captured image determined to be correctly recognized; and apply at least one gradient to each extracted object to generate a plurality of gradient images, wherein each gradient image comprises the extracted object modified by the at least one gradient. The image detector is further configured to: compare the extracted object modified by the at least one gradient with the predetermined object; and determine whether the extracted object modified by the at least one gradient has been correctly identified based on the comparison of the extracted object modified by the at least one gradient with the predetermined object. The image detector is further configured to establish a performance threshold for the one or more cameras after determining whether the extracted object modified by the at least one gradient has been correctly identified.

Drawings

Embodiments of the present disclosure are particularly pointed out in the appended claims. However, other features of the various embodiments will become more apparent and will be best understood by referring to the following detailed description in conjunction with the accompanying drawings, in which:

FIG. 1 depicts an apparatus for automatic failure threshold detection of one or more cameras according to one embodiment;

FIG. 2 depicts a set of images detected by a device according to one embodiment;

FIG. 3 depicts an extracted image according to one embodiment;

FIG. 4 depicts a plurality of gradient images with corresponding gradient levels according to one embodiment;

FIG. 5 generally depicts a composite image generated by a device according to one embodiment;

FIG. 6 depicts a method of performing automatic failure threshold detection for one or more cameras according to one embodiment;

FIG. 7 depicts a correctly identified image from the captured images according to one embodiment;

FIG. 8 depicts a plurality of filtered captured images according to one embodiment;

FIG. 9 depicts an image of an object extracted from a bounding box according to one embodiment;

FIG. 10 depicts a plurality of generated gradient images according to one embodiment;

FIG. 11 depicts a composite image of each generated gradient image according to one embodiment; and

FIG. 12 depicts a new set of gradient images relative to the initial image, in accordance with one embodiment.

Detailed Description

Detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention, which may be embodied in various and alternative forms. The figures are not necessarily to scale; some features may be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the present invention.

Embodiments of the present disclosure generally provide a plurality of circuits or other electrical devices. All references to such circuits and other electrical devices and the functionality they provide are not intended to be limited to encompassing only what is illustrated and described herein. While specific labels may be assigned to the various circuits or other electrical devices disclosed, such labels are not intended to limit the operating range of the circuits and other electrical devices. Such circuits and other electrical devices may be combined and/or separated from one another in any manner based on the particular type of electrical implementation desired. It should be recognized that any circuit or other electrical device disclosed herein may include any number of microcontrollers, Graphics Processor Units (GPUs), integrated circuits, memory devices (e.g., FLASH memory (FLASH), Random Access Memory (RAM), Read Only Memory (ROM), Electrically Programmable Read Only Memory (EPROM), Electrically Erasable Programmable Read Only Memory (EEPROM), or other suitable variations thereof), and software that cooperate with one another to perform the operations disclosed herein. Additionally, any one or more of the electrical devices may be configured to execute a computer program embodied in a non-transitory computer readable medium programmed to perform any number of the disclosed functions.

Aspects disclosed herein generally provide an apparatus and method that establish a failure threshold for any given object detection algorithm (or computer-based technique, when executed on an electrical or electronic device) to determine the failure rate or range for any number of cameras on a vehicle. For example, the device may use a target object detector to evaluate a given set of images captured from at least one camera on the vehicle. The device may filter the captured images and separate correctly identified captured images from incorrectly identified images. A correctly identified image may correspond to, for example, a stop sign (or any street or road sign, for that matter) that is correctly identified by the device as a stop sign. An erroneously identified image may correspond to, for example, a sign that was initially identified as a yield sign but is actually a stop sign. The device may generate a bounding box around the object of each correctly identified image and extract the content of the object enclosed by the bounding box. The device may then apply any number of special effects or gradients (e.g., saturation, distortion, contrast, blur, sharpness, brightness, etc.) at different levels (e.g., 0 to 100) to the correctly identified images to generate a plurality of gradient images. The device may composite the plurality of gradient images back into the initial set of correctly identified images. The device re-evaluates the plurality of gradient images against the initially captured images and then determines a pass or fail rate to establish a threshold range (or performance threshold) at which the camera may fail (or, alternatively, at which the camera provides sufficient performance). For example, the device may determine that a particular camera can successfully identify objects within an image for a certain performance gradient (or gradient of interest), such as a contrast level from 56 to 87 (e.g., on a range of 0 to 100). Thus, in this case, the device may determine that an image of an object outside the vehicle can be successfully detected in any image captured by the camera when the contrast level of the image is 56 to 87. Any system-level engineer may therefore associate a corresponding contrast level (or any other performance gradient) with a particular camera and essentially understand the performance characteristics of the camera.

Fig. 1 depicts an apparatus 10 for automatic failure threshold detection of one or more cameras 20 a-20 n ("20"), according to one embodiment. The apparatus 10 generally includes an image detector 11 and may be positioned in a vehicle 13. The image detector 11 comprises an object detection block 12, a controller 14, a memory 16 and a user interface 18. The object detection block 12 is electrically coupled to the controller 14. One or more cameras 20 (hereinafter "cameras 20") may be electrically coupled to the image detector 11. The camera 20 is generally configured to capture an image of the exterior of the vehicle 13 and transmit it to the image detector 11. Although not shown, it should be appreciated that the image detector 11 may be implemented in any of the cameras 20. Alternatively, the image detector 11 may be located in any hardware-based electronic control unit located in the vehicle 13. In addition, the image detector 11 may be distributed over any number of integrated circuits. The image detector 11 may be located outside the vehicle (i.e., on a server) and in wireless communication with the vehicle 13 via a cloud-based implementation. It should also be appreciated that one or more elements of the object detector block 12 may be located outside the vehicle 13, such as on a server remote from the vehicle 13, and in wireless communication with the vehicle 13 via a cloud-based implementation. Similarly, it should also be appreciated that the controller 14 and/or memory 16 may be located outside of the vehicle, such as on a server remote from the vehicle 13, and in wireless communication with the vehicle 13 via a cloud-based implementation.

The image detector 11 is generally configured to determine various thresholds for any of the cameras 20. For example, the image detector 11 may determine that a camera 20 has the ability to successfully identify images of objects within a particular gradient range (or performance gradient). It is recognized that the gradient may correspond to any one or more of saturation, distortion, contrast, blur, sharpness, etc. For example, the specified gradient range may correspond to any saturation level of 0 to 100, any distortion level of 0 to 100, any contrast level of 0 to 100, any blur level of 0 to 100, any sharpness level of 0 to 100, and so on. Specifically, the image detector 11 may determine, for example, that the camera 20 can successfully identify objects exterior to the vehicle 13 at contrast levels of 36 to 67, and likewise for any other performance gradient of interest (such as saturation, distortion, blur, sharpness, brightness, etc.). Such information may be used by a camera system designer to understand the performance characteristics of each camera 20 on the vehicle 13.

The image detector 11 comprises a detector block 30, a filter 32, an extraction block 34, a gradient generator 36, a synthesizer block 38, an evaluation block 40 and an analyzer block 42. The particular camera 20 provides the captured image to the detector block 30. The detector block 30 executes an object detection algorithm to detect corresponding objects captured by the camera 20 in a predetermined number of images. For example, detector block 30 may determine that the corresponding object in the captured image is, but is not limited to, a road sign, such as a stop sign, speed limit sign, yield sign, and the like. The detector block 30 may also determine whether an object detected in the corresponding image has been correctly or incorrectly identified by comparing the object to a predetermined object.

For example, assume that the camera 20 has provided the detector block 30 with 100 captured images of a speed limit sign object having a certain value corresponding to a speed limit. Further assume that the actual speed limit on the speed limit sign is 25 mph. A captured image may correctly identify the 25 mph speed limit or, alternatively, incorrectly identify a different speed limit. The detector block 30 executes an object detection algorithm and assigns a score to each corresponding object in the captured images. The score typically corresponds to whether an object (e.g., a speed limit sign) detected in each of the 100 captured images has been correctly or incorrectly identified by the detector block 30. The detector block 30 may assign a value of "1" to each of the 100 captured images in which the 25 mph speed limit sign has been correctly recognized. The detector block 30 may assign a value of "0" to each of the 100 captured images in which the 25 mph speed limit sign has been erroneously identified (i.e., perhaps as 35 mph). The detector block 30 determines whether a detected object (e.g., a speed limit sign) in the corresponding image (or image under test (IUT)) has been correctly or incorrectly identified by: comparing the object in the image to a previously stored object (or predetermined object; for example, the previously stored object may correspond to a speed limit sign or other object established with a set of ground truth images, as established by another device or person), and then determining whether the object in the image is similar to the previously stored object based on the comparison. FIG. 2 generally illustrates a correctly recognized image 50 and a misrecognized image 52. The apparatus 10 provides a bounding box 35 around each image before determining whether the object is correctly or incorrectly identified. As shown in FIG. 2, the correctly recognized image 50 illustrates a 25 mph speed limit sign, and the incorrectly recognized image 52 illustrates a 35 mph speed limit sign. For purposes of illustration, it will be appreciated that the actual speed limit sign corresponds to 25 mph and, for some reason, the detector block 30 determines that the speed limit sign corresponds to 35 mph due to distortion or other problems associated with the particular camera 20.
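
The 1/0 scoring step described above can be sketched as follows; this is a minimal illustration, not the patent's implementation, and the `score_detections` name and label strings are assumptions.

```python
# Minimal sketch of the scoring performed by detector block 30: each captured
# image's detected label is compared against the ground truth label and
# assigned "1" (correctly identified) or "0" (misidentified). The function
# and label names are illustrative, not from the patent.

def score_detections(detected_labels, ground_truth_label):
    """Return a 1/0 score for each image, as detector block 30 does."""
    return [1 if label == ground_truth_label else 0
            for label in detected_labels]

# Example: 100 captured images of a 25 mph speed limit sign, half of which
# the detector misread as a 35 mph sign.
detections = ["speed_limit_25"] * 50 + ["speed_limit_35"] * 50
scores = score_detections(detections, "speed_limit_25")
print(sum(scores))  # prints 50
```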

Referring back to FIG. 1, the filter 32 separates the correctly recognized images (e.g., images in which the detected object has a value of "1") from the incorrectly recognized images (e.g., images in which the detected object has a value of "0"). Again referring to the example above in which 100 images were captured, assume that the detector block 30 determines that 50 images have correctly recognized objects and 50 images have incorrectly recognized objects. In this case, the filter 32 separates the 50 correctly recognized images from the 50 incorrectly recognized images.
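
The separation performed by the filter 32 amounts to partitioning the scored images; a minimal sketch (file names and the 1/0 score convention are assumptions for illustration):

```python
# Minimal sketch of filter 32: split the captured images into correctly
# recognized (score 1) and incorrectly recognized (score 0) sets.

def filter_images(images, scores):
    """Return (correct, incorrect) lists of images based on 1/0 scores."""
    correct = [img for img, s in zip(images, scores) if s == 1]
    incorrect = [img for img, s in zip(images, scores) if s == 0]
    return correct, incorrect

images = [f"img_{i:03d}.png" for i in range(100)]   # hypothetical file names
scores = [1] * 50 + [0] * 50
correct, incorrect = filter_images(images, scores)
print(len(correct), len(incorrect))  # prints 50 50
```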

The extraction block 34 evaluates the bounding box 35 of each correctly identified object. In the above example, there are 50 bounding boxes 35 for the 50 correctly identified images. Alternatively, the extraction block 34 may place four coordinate points around each correctly identified image. Next, the extraction block 34 extracts the object within the bounding box 35. FIG. 3 generally illustrates an extracted image 54 corresponding to the object extracted by the extraction block 34.
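
The extraction step can be sketched as a simple crop; this is an illustration only, and the `(x_min, y_min, x_max, y_max)` pixel-coordinate convention is an assumption rather than something the patent specifies.

```python
import numpy as np

# Minimal sketch of extraction block 34: crop the object enclosed by the
# bounding box out of a captured image, represented here as an H x W x C
# NumPy array.

def extract_object(image, bbox):
    """Crop the (x_min, y_min, x_max, y_max) region from the image array."""
    x_min, y_min, x_max, y_max = bbox
    return image[y_min:y_max, x_min:x_max]

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in captured image
crop = extract_object(frame, (100, 50, 220, 170))
print(crop.shape)  # prints (120, 120, 3)
```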

Referring back to FIG. 1, the gradient generator 36 applies any number of special effects or gradients (e.g., saturation, distortion, contrast, blur, sharpness, etc.) at different levels (e.g., 0 to 100) to the extracted objects of the correctly identified images to generate a plurality of gradient images. In this case, consider an example in which it is desirable to understand the corresponding contrast level for a particular camera 20. Specifically, it may be desirable to determine the corresponding contrast levels at which a particular camera 20 successfully (or unsuccessfully) recognizes an object in an image. The user may instruct the image detector 11 via the user interface 18 to determine the corresponding contrast level for a particular camera 20. In this case, the controller 14 controls the gradient generator 36 to apply contrast levels of 0 to 100 to each correctly identified image output from the extraction block 34. For example, consider again the 50 bounding boxes 35 of the 50 correctly identified images generated by the extraction block 34. In this case, the gradient generator 36 applies 100 different contrast levels to each of the 50 bounding boxes. Thus, the gradient generator 36 generates 5000 gradient images (e.g., 100 contrast levels applied to each of the 50 correctly identified bounding box images provided by the extraction block 34). It should be appreciated that a user may control the image detector 11 via the user interface 18 to apply any one or more gradients at different levels to determine a gradient of interest (or performance gradient) for a particular camera 20. FIG. 4 depicts an example of one correctly identified image 56 at corresponding contrast levels. FIG. 4 also depicts an example of one correctly identified image 58 at corresponding distortion levels.
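
Sweeping a contrast gradient over one extracted object can be sketched as below. The mapping from a 0-100 level to a contrast factor (with level 50 leaving the image unchanged) is an assumption for illustration; the patent does not define the mapping.

```python
import numpy as np

# Minimal sketch of gradient generator 36 for a contrast gradient: apply
# levels 0..100 to one extracted object, yielding one gradient image per
# level. Level-to-factor mapping is an assumption (level 50 = unchanged).

def apply_contrast(obj, level):
    """Scale each pixel's deviation from mid-gray by a level-derived factor."""
    factor = level / 50.0
    out = 128.0 + factor * (obj.astype(np.float64) - 128.0)
    return np.clip(out, 0, 255).astype(np.uint8)

extracted = np.full((64, 64, 3), 200, dtype=np.uint8)  # stand-in extracted object
gradient_images = [apply_contrast(extracted, lvl) for lvl in range(101)]
print(len(gradient_images))  # prints 101
```

Repeating this sweep over each of the 50 extracted objects yields the plurality of gradient images described above.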

The synthesizer block 38 composites the plurality of gradient images back into the initial set of correctly identified images. For the example noted above, the synthesizer block 38 composites the 5000 gradient images back into the initial set of correctly identified images generated by the filter 32. In this case, the synthesizer block 38 resizes each gradient image to a predetermined size (e.g., 1" x 1"). In other words, the synthesizer block 38 inserts the newly parameterized gradient image into the initially captured image at the coordinates established by the bounding box and flattens the image layers to provide a composite image. The device 10 may perform a resizing operation based on the initial "base" image. Each base image may have its own bounding box coordinates, and the extracted bounding box images may be of different scales (W x H). These different scales must be resized to fit the bounding box dimensions (or coordinates) prior to compositing. FIG. 5 depicts an image inserted within the coordinates of the bounding box (note that the bounding box itself does not actually exist in the composite), with the image layers flattened to provide a composite image. The synthesizer block 38 inserts the newly parameterized gradient image (e.g., a speed limit sign) into the image initially captured by the camera 20. This is performed for each of the gradient images generated by the gradient generator 36. In the example noted above, the synthesizer block 38 inserts each such new bounding box 60 into the initially captured image to provide a composite image. Each of the plurality of gradient images (i.e., gradient levels of 0 to 100) is placed as an overlay on the initial base image at the bounding box coordinates (e.g., gradient value 0 + base image = composite image 0, gradient value 1 + base image = composite image 1, ..., up to gradient value 100 + base image). Next, the synthesizer block 38 flattens each image to form a base image having a single gradient object.
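
The resize-overlay-flatten sequence above can be sketched as follows; the nearest-neighbor resize and the coordinate convention are assumptions for illustration, not details from the patent.

```python
import numpy as np

# Minimal sketch of synthesizer block 38: resize one gradient image to the
# bounding box dimensions, overlay it on the base image at the bounding box
# coordinates, and flatten the layers into a single composite array.

def resize_nearest(img, h, w):
    """Nearest-neighbor resize so the gradient image fits the bounding box."""
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def composite(base, gradient_obj, bbox):
    """Overlay the resized gradient object on the base image at bbox."""
    x_min, y_min, x_max, y_max = bbox
    out = base.copy()
    out[y_min:y_max, x_min:x_max] = resize_nearest(
        gradient_obj, y_max - y_min, x_max - x_min)
    return out

base = np.zeros((480, 640, 3), dtype=np.uint8)    # initially captured image
obj = np.full((64, 64, 3), 200, dtype=np.uint8)   # one gradient image
flat = composite(base, obj, (100, 50, 220, 170))
print(flat.shape)  # prints (480, 640, 3)
```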

The evaluation block 40 compares the object (e.g., with the applied gradient level) in each composite image with previously stored information corresponding to the object (or with a predetermined object, which may correspond to a speed limit sign or other object established with a set of ground truth images, as established by another device or person) to determine whether the content of the gradient image can be correctly identified (e.g., whether a speed limit sign with a corresponding gradient can be correctly identified). Similar to the detector block 30 described above, the evaluation block 40 executes an object detection algorithm and assigns a score to each corresponding object in the composite images. The score typically corresponds to whether an object (e.g., a speed limit sign) in the composite image has been correctly or incorrectly identified by the evaluation block 40. For example, the evaluation block 40 may assign a value of "1" to each of the 5000 composite images in which the 25 mph speed limit sign has been correctly identified. The evaluation block 40 may assign a value of "0" to each of the 5000 composite images in which the 25 mph speed limit sign has been erroneously identified (i.e., perhaps as 35 mph). The evaluation block 40 records the particular gradient levels (e.g., saturation, distortion, contrast, blur, sharpness, etc.) applied by the gradient generator 36 to the images that have been correctly identified and to the images that have been incorrectly identified.

The evaluation block 40 determines whether the content of each gradient image (e.g., the speed limit sign) in the corresponding new bounding box 60 has been correctly or incorrectly identified by comparing the content of each gradient image to a previously stored representation of the object (e.g., a previously stored image of the speed limit sign) and then determining, based on the comparison, whether the content of the gradient image is similar to the previously stored object.

The analyzer block 42 determines the failure rate of the corresponding camera 20 by determining which of the particular gradient images has been misidentified by the evaluation block 40. The analyzer block 42 receives information corresponding to the particular gradient levels of the erroneously identified gradient images and provides a failure rate. For example, consider that the gradient of interest is the contrast level of a particular camera 20, and the analyzer block 42 determines that the camera fails at a contrast level of 56 or less (on a scale from 0 to 100) and at a contrast level of 87 or more. Next, the analyzer block 42 may determine that images captured by the camera 20 having contrast levels of 56 or lower and 87 or higher will not be correctly recognized by the camera 20. For example, if there are 10 base images (and 1000 composite images) and the gradient of interest is contrast, there will be 10 minimum gradient pass levels (minimum GPLs) and 10 maximum gradient pass levels (maximum GPLs). The minimum GPLs in this example are 10, 15, 20, 25, 30, and 30. The camera's minimum contrast GPL can then be interpreted as 20 with a standard deviation of 6.7.
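The summary statistic over the per-image minimum GPLs can be computed as in the sketch below. The exact statistic the analyzer block uses is not specified, so the population standard deviation is an assumption.

```python
import statistics

def gpl_summary(min_gpls):
    """Summarize the minimum gradient pass levels across base images
    as a mean and a (population) standard deviation."""
    return statistics.mean(min_gpls), statistics.pstdev(min_gpls)
```

For example, `gpl_summary([10, 20, 30])` yields a mean of 20 and a standard deviation of about 8.16.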

The analyzer block 42 may also determine that images captured by the camera of interest 20 having contrast levels between 57 and 86 may be correctly identified. The analyzer block 42 may then provide the failure rate information and/or success rate information for the contrast level to the user interface 18 to provide such information to the user. Thus, any system-level engineer may associate a corresponding contrast level with a particular camera and essentially understand the performance characteristics of the camera (i.e., the failure rate or success rate at correctly identifying images for a particular gradient of interest).
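The pass band implied by the two failure thresholds can be derived as in this sketch; the inclusive-range convention and the function name are assumptions.

```python
def pass_band(highest_fail_low, lowest_fail_high):
    """Given the highest failing gradient level at the low end and the
    lowest failing level at the high end, return the inclusive band of
    levels expected to be correctly recognized."""
    return highest_fail_low + 1, lowest_fail_high - 1
```

With the contrast example above, `pass_band(56, 87)` returns `(57, 86)`.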

Fig. 2 depicts a method 80 of performing automatic fault threshold detection for one or more cameras 20 according to one embodiment.

In operation 82, the image detector 11 receives any number of captured images from each camera 20 positioned on the vehicle 13.

In operation 84, the detector block 30 executes an object-based algorithm to detect corresponding objects (e.g., speed limit signs as present in a field of view outside the vehicle 13) positioned within the predetermined number of captured images. The detector block 30 identifies objects within the captured image and performs an object detection algorithm to determine whether the identified objects in the captured image have been correctly identified. The detector block 30 determines whether the content of the detected object (e.g., speed limit sign) in the corresponding image has been correctly or incorrectly identified by: the object is compared to a previously stored representation of the object (e.g., a previously stored image of the speed limit sign) and then a determination is made as to whether the object in the image is similar to the previously stored object based on the comparison. The detector block 30 assigns a value of "1" to each image in which an object in the image is correctly recognized, and assigns a value of "0" to each image in which an object in the image is erroneously recognized. Fig. 7 illustrates two separate images 102a and 102b that have been captured by camera 20 and that each include an object that has been correctly identified (i.e., in this case the speed limit is 45 mph).

In operation 86, the filter 32 separates the correctly recognized images (e.g., images in which the detected object has a value of "1") from the incorrectly recognized images (e.g., images in which the detected object has a value of "0"). For example, assume that the detector block 30 determines that 50 images have correctly identified objects and that 50 images have incorrectly identified objects. In this case, the filter 32 separates the 50 correctly recognized images from the 50 incorrectly recognized images. FIG. 8 illustrates a plurality of captured images that have been filtered from any images that were misrecognized. As noted above, the device 10 generates and applies a bounding box 35 to each image before determining whether the object was correctly or incorrectly identified.
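Operations 84 and 86 together amount to scoring and partitioning, which can be sketched as below; the image-id/score pairing is an assumption for illustration.

```python
def separate_images(scored_images):
    """Split images into correctly (score 1) and incorrectly (score 0)
    recognized sets, as the filter 32 does in operation 86."""
    correct = [img for img, score in scored_images if score == 1]
    incorrect = [img for img, score in scored_images if score == 0]
    return correct, incorrect
```

Only the `correct` set flows on to the extraction and gradient-generation operations that follow.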

In operation 88, the extraction block 34 extracts the object from within the bounding box 35. As shown in FIG. 9, the bounding box 35 is placed around the object of the correctly recognized image 102a. However, it should be appreciated that the bounding box 35 may also be placed around the correctly identified image 102b of FIG. 8.

In operation 90, the gradient generator 36 applies any number of special effects or gradients (e.g., saturation, distortion, contrast, blur, sharpness, etc.) at different levels (e.g., 0 to 100) to the extracted objects of the correctly identified images to generate a plurality of gradient images. The multiple gradient images are images in which a single image parameter (or a single gradient) is varied, which may be, for example, brightness. Fig. 10 illustrates corresponding gradient levels (e.g., 0, 25, 50, 75, 100) applied to the correctly identified image 102a for contrast (see generally 110) and corresponding Gaussian noise (see generally 112). As shown generally at 112, corresponding Gaussian noise is applied at each contrast level (e.g., 0, 25, 50, 75, and 100) of the image 102a. In general, an image parameter transformation can be defined as a scalable modification to one of many parameters (or gradients) of an image (e.g., brightness, contrast, etc.); Gaussian noise is considered such an image parameter transformation and can be treated similarly to, for example, changing brightness.
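One way to sketch the gradient generator for a contrast sweep is shown below. The mapping of levels 0-100 to a contrast factor (identity at level 50) is an assumption, since the patent does not define the scale.

```python
import numpy as np

def apply_contrast(img, level):
    """Scale a grayscale image's contrast by a level in 0..100.

    Level 50 is treated as the identity; 0 removes contrast entirely
    and 100 doubles the deviation from the mean (an assumed mapping).
    """
    factor = level / 50.0
    mean = img.mean()
    out = mean + (img.astype(float) - mean) * factor
    return np.clip(out, 0, 255).astype(np.uint8)

def gradient_sweep(obj_img, levels=range(0, 101)):
    """Generate one gradient image per level for a single parameter."""
    return [apply_contrast(obj_img, lv) for lv in levels]
```

A sweep over levels 0 to 100 thus yields 101 gradient images per extracted object, matching the per-object gradient counts used in the examples above.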

In operation 92, the synthesizer block 38 composites the multiple gradient images back onto the initial set of correctly identified images. For example, the synthesizer block 38 composites the multiple gradient images back onto the initial set of correctly identified images generated by the filter 32. Each of the 100 gradient images is placed as an overlay on the initial base image at the bounding box coordinates (e.g., gradient value 0 + base image = composite image 0, gradient value 1 + base image = composite image 1, etc.). The synthesizer block 38 then flattens each image to form a base image with a single gradient object. Fig. 11 depicts the composite image produced for each generated gradient image of image 102a. The synthesizer block 38 inserts each new image-parameter gradient image into the initially captured image at the bounding box coordinates and flattens each image layer to provide a composite image. In addition, the device 10 performs a resizing operation based on the initial "base" image. This may be performed in operation 92. Each base image has its own bounding box coordinates, and the extracted bounding box images may have different scales (e.g., W x H) before compositing. These different scales are resized to fit the bounding box dimensions. Typically, the composite image will not have a bounding box injected into the image. As new composite images are generated, there should not be any aid to the method 80 in locating the object (e.g., the sign in this case).

In operation 94, the evaluation block 40 compares the object (e.g., with the applied gradient level) in each composite image with previously stored information corresponding to the object to determine whether the content of the gradient image can be correctly recognized (e.g., to determine whether the speed limit sign with the corresponding gradient can be correctly recognized). As noted above, the evaluation block 40 may assign a value of "1" to each image for which an object with an applied gradient level has been correctly identified from the entire set of composite images. The evaluation block 40 may assign a value of "0" to each image for which an object with an applied gradient level has not been correctly identified from the entire set of composite images. The evaluation block 40 records the particular gradient levels (e.g., saturation, distortion, contrast, blur, sharpness, etc.) as applied by the gradient generator 36 to the images that have been correctly identified and the images that have been incorrectly identified.

In operation 96, the analyzer block 42 determines the failure rate of the corresponding camera 20 by determining which of the particular gradient images has been misidentified by the evaluation block 40. As noted above, the analyzer block 42 receives information corresponding to the particular gradient levels of the misrecognized gradient images and provides a failure rate (or range of failure rates) for the camera 20. Again, for example, consider that the gradient of interest is the contrast level of a particular camera 20, and the analyzer block 42 determines that the camera fails at a contrast level of 56 or less (on a scale from 0 to 100) and at a contrast level of 87 or more. Next, the analyzer block 42 may determine that images captured by the camera 20 having contrast levels of 56 or lower and 87 or higher will not be correctly recognized by the camera 20. The analyzer block 42 may also determine that images captured by the camera of interest 20 having contrast levels between 57 and 86 may be correctly identified. Alternatively, the analyzer block 42 may receive information corresponding to the particular gradient levels of the correctly identified gradient objects and provide a success rate (or range of success rates) for a particular camera 20. The analyzer block 42 may determine a failure rate range for a particular gradient of interest after determining the success rate range. The analyzer block 42 may then provide the failure rate information and/or success rate information for the contrast level to the user interface 18 to provide such information to the user. Thus, any system-level engineer may associate a corresponding contrast level with a particular camera and essentially understand the performance characteristics of the camera (i.e., the failure rate or success rate at correctly identifying images for a particular gradient of interest).

In operation 98, the user interface 18 provides the failure rate information and/or success rate information for the contrast level (or gradient of interest) to the user.

In general, it may be advantageous to work with correctly identified images, since object detection is classically defined as detecting valid objects within an image. A real-world image may not contain a valid object. Properly removing or cleaning images requires a priori knowledge of how invalid an object is relative to the algorithm executed by the hardware, and such knowledge may not be available. Data, however, can easily be added or injected into an image. Thus, it is simple to inject data into an image to gradually transform a valid object into an invalid one. Finally, it may be advantageous to determine the threshold at which the performance of the camera 20 degrades, i.e., the point at which an image transitions from passing to failing.

Fig. 12 generally depicts a new set of gradient images relative to an initial image and various classifications as identified based on Image Parameter Transformation (IPT) values. For example, assume that for all IPTs, "0" corresponds to the initial image and "100" corresponds to the full parameter transformation. Under this definition, the darkest image (the leftmost image, as generally shown at 110) has an IPT value of 100 and the unmodified image (the rightmost image) has an IPT value of 0.

For completeness, an error classification corresponds to an estimated error in the type of the object as determined by the method 80 (e.g., the sign's speed limit is read as 40 mph instead of 45 mph), and an error bounding box indicates that the method 80 determined a different location for the sign (or object) in the image.

Element 120 generally indicates the lowest IPT value for a misclassification, which generally corresponds to the minimum value of the image parameter transformation that causes a misclassification. Thus, for element 120, the IPT value is 25, and all other misclassifications occur at higher IPT values (these are not shown in Fig. 12).

Element 122 generally indicates the highest IPT value with a correct classification, regardless of any other misclassifications. In this case, the highest value of the image parameter transformation that is correctly classified is 31. All other correct classifications occur at lower IPT values (these are not shown in Fig. 12).

Element 124 generally indicates the lowest IPT value of an error bounding box within a specified tolerance of coordinate error. For example, the element 124 corresponds to the minimum value of IPT that causes the method 80 to falsely detect the sign (or object) elsewhere in the image. In this case, the lowest IPT value of the error bounding box is 60, and all other error bounding boxes occur at higher IPT values (these are not shown in Fig. 12).

Element 126 generally indicates the highest IPT value of a correct bounding box that is within a specified tolerance of coordinate error, independent of any other error bounding boxes. For example, element 126 corresponds to the maximum value of IPT that causes the method 80 to correctly detect the sign in the image. In this case, the highest IPT value is 90, and all other correct bounding boxes occur at lower IPT values.
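The four threshold quantities described for elements 120, 122, 124, and 126 can be computed as in this sketch; the dict-of-booleans input format is an assumption.

```python
def ipt_thresholds(classified, boxed):
    """classified/boxed: dicts mapping IPT value -> True if the object
    was correctly classified / correctly bounded at that IPT value."""
    def split(d):
        right = [v for v, ok in d.items() if ok]
        wrong = [v for v, ok in d.items() if not ok]
        return right, wrong
    c_right, c_wrong = split(classified)
    b_right, b_wrong = split(boxed)
    return {
        "lowest_misclassified_ipt": min(c_wrong) if c_wrong else None,
        "highest_correct_ipt": max(c_right) if c_right else None,
        "lowest_wrong_bbox_ipt": min(b_wrong) if b_wrong else None,
        "highest_correct_bbox_ipt": max(b_right) if b_right else None,
    }
```

With misclassifications first appearing at IPT 25, correct classifications up to 31, wrong bounding boxes from 60, and correct boxes up to 90, this reproduces the element 120/122/124/126 values from the example.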

While exemplary embodiments are described above, it is not intended that these embodiments describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and it is understood that various changes may be made without departing from the spirit and scope of the invention. In addition, features of various implementing embodiments may be combined to form further embodiments of the invention.
