Three-dimensional detection method based on structured light and multi-light-field camera

Document No.: 780248    Publication date: 2021-04-09

Reading note: this technology, "Three-dimensional detection method based on structured light and multi-light-field camera" (一种基于结构光和多光场相机的三维检测方法), was designed and created by 金欣 (Jin Xin) and 周思瑶 (Zhou Siyao) on 2020-12-07. Its main content is as follows: The invention discloses a three-dimensional detection method based on structured light and a multi-light field camera, which comprises the following steps: building a three-dimensional detection system comprising a structured light source and a plurality of light field cameras; placing a reference object of a target object to be detected in the working range of the three-dimensional detection system and carrying out three-dimensional reconstruction based on structured light on the reference object to obtain a reference object three-dimensional model; then placing the target object to be detected in the working range of the three-dimensional detection system and carrying out three-dimensional reconstruction based on structured light on the target object to be detected to obtain a target object three-dimensional model; and then carrying out three-dimensional detection on the reference object three-dimensional model and the target object three-dimensional model to be detected, and outputting the three-dimensional position of the key points of the target object to be detected. The invention fully utilizes the advantages of the light field camera and the structured light in the field of close-range three-dimensional reconstruction, and can accurately and efficiently complete the three-dimensional detection of the target object in the working range.

1. A three-dimensional detection method based on structured light and a multi-light field camera is characterized by comprising the following steps: the method comprises the steps of building a three-dimensional detection system comprising a structured light source and a plurality of light field cameras, placing a reference object of a target object to be detected in the working range of the three-dimensional detection system, carrying out three-dimensional reconstruction based on structured light on the reference object to obtain a reference object three-dimensional model, then placing the target object to be detected in the working range of the three-dimensional detection system, and carrying out three-dimensional reconstruction based on structured light on the target object to be detected to obtain a target object three-dimensional model; and then carrying out three-dimensional detection on the reference object three-dimensional model and the target object three-dimensional model to be detected, and outputting the three-dimensional position of the key point of the target object to be detected.

2. The three-dimensional detection method according to claim 1, wherein the structured light-based three-dimensional reconstruction step specifically comprises:

s1: correspondingly collecting a plurality of light fields through a plurality of light field cameras, and calibrating homography matrixes of light rays emitted by the structured light source to the plurality of light fields respectively;

s2: dividing the working range of the three-dimensional detection system into a plurality of sub-fields, and registering the corresponding light field for each sub-field by using the homography matrix obtained in the step S1 to obtain a plurality of sub-field light fields;

s3: and performing three-dimensional reconstruction based on structured light on each sub-field light field.

3. The three-dimensional inspection method according to claim 1, wherein the step S1 of calibrating the homography matrix of the light rays emitted from the structured light source respectively reaching the plurality of light fields specifically comprises:

calculating the position and pose parameters of the light field camera in the space relative to the calibration plate and the internal parameters of the light field camera, and determining the corresponding relation between the three-dimensional space coordinate of the surface of the target object to be detected and the four-dimensional coordinate point in the light field collected by the light field camera to obtain homography matrixes of the light rays emitted by the structured light source to the plurality of light fields respectively.

4. The three-dimensional inspection method according to claim 1, wherein the step S1 of calibrating the homography matrix of the light rays emitted from the structured light source respectively reaching the plurality of light fields specifically comprises:

collecting a calibration plate image with structured light stripes, extracting the corner points of the calibration plate image and the center feature points of the structured light stripes, screening and matching them, and then solving the conversion relation between the world coordinate system and the light field biplane coordinate system according to the following first to third conversion relations, to obtain the homography matrices from the light rays emitted by the structured light source to the plurality of light fields respectively.

The first conversion relation is an intersection relation of the ray and a space point under the camera coordinate system:

X_C = i + (x − i)·Z_C/f,  Y_C = j + (y − j)·Z_C/f

wherein (i, j, x, y) are ray coordinates parameterized by the physical biplane coordinate system in free space, (X_C, Y_C, Z_C) are the coordinates of the object point in free space under the corresponding camera coordinate system, and f is the focal length of the light field camera;

the second conversion relation is the object point (X) in free space under the world coordinate systemW,YW,ZW) Corresponding to the object point (X) in free space under the coordinate system of the cameraC,YC,ZC) The conversion relation between:

wherein R is a rotation matrix and T is a translation vector;

the third conversion relation is the conversion relation from the decoded light field biplane coordinate system to the physical biplane coordinate system, wherein (u, v, s, t) are the coordinates of a light field pixel under the light field biplane coordinate system and k_i, k_j, k_u, k_v, u_0, v_0 are 6 independent camera intrinsic parameters.

5. The three-dimensional inspection method according to claim 1, wherein the step S2 is to divide the working range of the three-dimensional inspection system into a plurality of sub-fields, and the step S1 of registering the corresponding light field with the homography matrix for each of the sub-fields specifically comprises:

dividing the working range of the three-dimensional detection system into M sub-fields of view; for the N light fields collected by the N light field cameras, the distribution is represented by a logic matrix [a_mn] of dimension M × N, whose element a_mn is defined as:

a_mn = 1 if FoV_m ⊆ FoV_n, and a_mn = 0 otherwise,

wherein FoV_n is the field of view of the nth light field and FoV_m is the range of the mth sub-field of view; a_mn = 1 indicates that the light field collected by the nth light field camera corresponds to the mth sub-field, and a_mn = 0 indicates that it does not;

for the mth subfield FmIn the corresponding ofSelecting a reference light field L from the individual light fieldsrRegistering the light fields corresponding to the sub-fields of view using the homography matrix obtained in step S1, transforming each light field to a reference light field LrThe biplane coordinate system of the light field is as follows:

Ln'=HnHr -1Ln

wherein the content of the first and second substances,the number of light field cameras corresponding to the mth sub-field of view; l isn' is the n-th light field LnLight field after homography matrix registration, HnIs the homography matrix corresponding to the nth light field, HrIs a homography matrix corresponding to the reference light field.

6. The three-dimensional inspection method according to claim 5, wherein step S2 further comprises traversing each pixel of the registered light field to remove highlights after the corresponding light field is registered with the homography matrix obtained in step S1 for each of the sub-fields of view.

7. The three-dimensional detection method of claim 6, wherein traversing each pixel of the registered light field to remove highlights comprises: evaluating whether a highlight exists at each pixel position according to whether the mean square error of the pixel values across the registered light fields is larger than a preset threshold T:

λ_m(u_0, v_0, s_0, t_0) = 1 if the mean square error at (u_0, v_0, s_0, t_0) is greater than T, and 0 otherwise,

wherein λ_m(u_0, v_0, s_0, t_0) indicates whether the pixel position (u_0, v_0, s_0, t_0) in the registered light field of the mth sub-field contains a highlight: the value 1 means a highlight exists at that pixel position and the value 0 means none exists;

if a highlight exists, the pixel values larger than the mean of the non-zero pixel values are removed, the remaining non-zero pixel values of the other light fields are averaged, and the result is assigned to that pixel position, thereby completing the light field fusion that removes the highlight.

8. The three-dimensional detection method according to claim 1, wherein the performing of the three-dimensional reconstruction based on the structured light for each of the sub-field light fields in step S3 specifically includes:

projecting the pixel point of each sub-field light field to a world coordinate system to generate a sub-field space point cloud, and then reconstructing the three-dimensional surface geometric texture of the key part of the target object to be detected in the sub-field by using a Delaunay triangulation method, wherein the coordinates of the sub-field space point cloud are calculated by using the following formula:

[X_W, Y_W, Z_W, 1]^T = H_r^{-1}·[u, v, s, t, 1]^T

wherein (X_W, Y_W, Z_W) are the coordinates of the object point in free space under the world coordinate system, (u, v, s, t) are the coordinates of the light field pixel under the light field biplane coordinate system, and H_r is the homography matrix corresponding to the reference light field.

9. The three-dimensional inspection method of claim 1, wherein the three-dimensional inspection step comprises model differencing and keypoint extraction;

further, the model difference step specifically includes: respectively extracting characteristic points of the reference object three-dimensional model and the target object three-dimensional model to be detected, matching and screening to obtain characteristic point pairs, calculating a homography matrix between the reference object three-dimensional model and the target object three-dimensional model to be detected according to the characteristic point pairs, registering the reference object three-dimensional model and the target object three-dimensional model to be detected according to the homography matrix between the reference object three-dimensional model and the target object three-dimensional model to be detected, and then taking a difference model of the reference object three-dimensional model and the target object three-dimensional model to be detected;

further, the key point extracting step specifically includes: and extracting the three-dimensional position of the key point of the target object to be detected by using morphological processing and adaptive threshold segmentation according to the difference model.

10. A computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to perform the steps of the three-dimensional inspection method of any one of claims 1 to 9.

Technical Field

The invention relates to the field of computer vision and digital image processing, in particular to a three-dimensional detection system and a three-dimensional detection method based on structured light and a multi-light-field camera.

Background

Optical three-dimensional detection is an important non-contact detection technology; it is non-contact, efficient, and of moderate precision, and is widely applied in fields such as industrial inspection, aerospace, and agricultural production. Optical three-dimensional detection can be divided into active and passive methods according to the system illumination mode. Structured light three-dimensional detection is a commonly used active method: structured light is projected onto the surface of a target object, and the object surface information is then reconstructed from the two-dimensional images acquired by a camera. Multi-view stereo vision detection is a typical passive method: it solves the three-dimensional coordinates of an object from the positional relation among multiple cameras combined with the parallax principle. This method has strong universality but relatively low detection precision.

The above background disclosure is only for the purpose of assisting understanding of the concept and technical solution of the present invention and does not necessarily belong to the prior art of the present patent application, and should not be used for evaluating the novelty and inventive step of the present application in the case that there is no clear evidence that the above content is disclosed at the filing date of the present patent application.

Disclosure of Invention

In order to solve the technical problems, the invention provides a three-dimensional detection system and a three-dimensional detection method based on structured light and a multi-light-field camera, which fully utilize the advantages of the light-field camera and the structured light in the field of close-range three-dimensional reconstruction and can accurately and efficiently complete the three-dimensional detection of a target object in a working range.

In order to achieve the purpose, the invention adopts the following technical scheme:

one embodiment of the invention discloses a three-dimensional detection method based on structured light and a multi-light-field camera, which comprises the following steps: the method comprises the steps of building a three-dimensional detection system comprising a structured light source and a plurality of light field cameras, placing a reference object of a target object to be detected in the working range of the three-dimensional detection system, carrying out three-dimensional reconstruction based on structured light on the reference object to obtain a reference object three-dimensional model, then placing the target object to be detected in the working range of the three-dimensional detection system, and carrying out three-dimensional reconstruction based on structured light on the target object to be detected to obtain a target object three-dimensional model; and then carrying out three-dimensional detection on the reference object three-dimensional model and the target object three-dimensional model to be detected, and outputting the three-dimensional position of the key point of the target object to be detected.

Preferably, the structured light-based three-dimensional reconstruction step specifically includes:

s1: correspondingly collecting a plurality of light fields through a plurality of light field cameras, and calibrating homography matrixes of light rays emitted by the structured light source to the plurality of light fields respectively;

s2: dividing the working range of the three-dimensional detection system into a plurality of sub-fields, and registering the corresponding light field for each sub-field by using the homography matrix obtained in the step S1 to obtain a plurality of sub-field light fields;

s3: and performing three-dimensional reconstruction based on structured light on each sub-field light field.

Preferably, the step S1 of calibrating the homography matrix of the light rays emitted by the structured light source to the plurality of light fields respectively specifically includes:

calculating the position and pose parameters of the light field camera in the space relative to the calibration plate and the internal parameters of the light field camera, and determining the corresponding relation between the three-dimensional space coordinate of the surface of the target object to be detected and the four-dimensional coordinate point in the light field collected by the light field camera to obtain homography matrixes of the light rays emitted by the structured light source to the plurality of light fields respectively.

Preferably, the step S1 of calibrating the homography matrix of the light rays emitted by the structured light source to the plurality of light fields respectively specifically includes:

collecting a calibration plate image with structured light stripes, extracting angular points of the calibration plate image and central characteristic points of the structured light stripes, screening and matching, and solving a conversion relation between a world coordinate system and a light field biplane coordinate system according to a first to a third conversion relation formulas to obtain a homography matrix from light rays emitted by a structured light source to a plurality of light fields respectively

The first conversion relation is an intersection relation of the ray and a space point under the camera coordinate system:

X_C = i + (x − i)·Z_C/f,  Y_C = j + (y − j)·Z_C/f

wherein (i, j, x, y) are ray coordinates parameterized by the physical biplane coordinate system in free space, (X_C, Y_C, Z_C) are the coordinates of the object point in free space under the corresponding camera coordinate system, and f is the focal length of the light field camera;

the second conversion relation is the relation between the object point (X_W, Y_W, Z_W) in free space under the world coordinate system and the corresponding object point (X_C, Y_C, Z_C) in free space under the camera coordinate system:

[X_C, Y_C, Z_C]^T = R·[X_W, Y_W, Z_W]^T + T

wherein R is a rotation matrix and T is a translation vector;
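This second conversion relation can be sketched numerically. The following minimal numpy example applies the rigid world-to-camera transform; the R and T values are made-up placeholders for illustration, not calibrated parameters:

```python
import numpy as np

def world_to_camera(p_world, R, T):
    # Second conversion relation: [X_C, Y_C, Z_C]^T = R·[X_W, Y_W, Z_W]^T + T
    return R @ p_world + T

# Made-up extrinsics: 90-degree rotation about the Z axis plus a shift in X.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T = np.array([1.0, 0.0, 0.0])

p_c = world_to_camera(np.array([1.0, 0.0, 0.0]), R, T)
# (1, 0, 0) rotates to (0, 1, 0) and then shifts to (1, 1, 0)
```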

the third conversion relation is the conversion relation from the decoded light field biplane coordinate system to the physical biplane coordinate system, wherein (u, v, s, t) are the coordinates of a light field pixel under the light field biplane coordinate system and k_i, k_j, k_u, k_v, u_0, v_0 are 6 independent camera intrinsic parameters.

Preferably, the step S2 of dividing the working range of the three-dimensional detection system into a plurality of sub-fields, and the registering, for each of the sub-fields, the corresponding light field by using the homography matrix obtained in the step S1 specifically includes:

dividing the working range of the three-dimensional detection system into M sub-fields of view; for the N light fields collected by the N light field cameras, the distribution is represented by a logic matrix [a_mn] of dimension M × N, whose element a_mn is defined as:

a_mn = 1 if FoV_m ⊆ FoV_n, and a_mn = 0 otherwise,

wherein FoV_n is the field of view of the nth light field and FoV_m is the range of the mth sub-field of view; a_mn = 1 indicates that the light field collected by the nth light field camera corresponds to the mth sub-field, and a_mn = 0 indicates that it does not;
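The logic matrix [a_mn] can be sketched as follows, assuming (hypothetically) that each sub-field of view and each camera field of view is an axis-aligned rectangle (xmin, ymin, xmax, ymax), with a_mn set to 1 when the mth sub-field lies entirely inside the nth field of view:

```python
import numpy as np

def contains(fov, sub):
    # True if rectangle `sub` lies entirely inside rectangle `fov`;
    # rectangles are given as (xmin, ymin, xmax, ymax).
    return (fov[0] <= sub[0] and fov[1] <= sub[1] and
            fov[2] >= sub[2] and fov[3] >= sub[3])

def logic_matrix(subfields, fovs):
    # Build the M x N logic matrix [a_mn].
    A = np.zeros((len(subfields), len(fovs)), dtype=int)
    for m, sub in enumerate(subfields):
        for n, fov in enumerate(fovs):
            A[m, n] = 1 if contains(fov, sub) else 0
    return A

# Made-up geometry: two sub-fields, two camera fields of view.
subfields = [(0, 0, 1, 1), (1, 0, 2, 1)]
fovs = [(0, 0, 1.5, 1.5), (0.5, 0, 2, 1.5)]
A = logic_matrix(subfields, fovs)
# sub-field 0 is covered only by camera 0; sub-field 1 only by camera 1
```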

for the mth subfield FmIn the corresponding ofSelecting a reference light field L from the individual light fieldsrRegistering the light fields corresponding to the sub-fields of view using the homography matrix obtained in step S1, transforming each light field to a reference light field LrThe biplane coordinate system of the light field is as follows:

Ln′=HnHr -1Ln

wherein the content of the first and second substances,the number of light field cameras corresponding to the mth sub-field of view; l isn' is the n-th light field LnLight field after homography matrix registration, HnIs the homography matrix corresponding to the nth light field, HrIs a homography matrix corresponding to the reference light field.
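Treating each light field as an array of homogeneous ray coordinates, the registration formula L_n' = H_n·H_r^{-1}·L_n can be sketched as below. The 5×5 homographies are made-up placeholders (the reference is the identity and camera n differs by a pure shift of the (u, v) plane), not calibrated values:

```python
import numpy as np

def register_to_reference(L_n, H_n, H_r):
    # Resample light field n into the reference biplane coordinate system.
    # L_n: (K, 5) array of homogeneous ray coordinates (u, v, s, t, 1).
    # Applies L_n' = H_n · H_r^{-1} · L_n per the registration formula.
    M = H_n @ np.linalg.inv(H_r)
    return (M @ L_n.T).T

# Made-up homographies for illustration.
H_r = np.eye(5)
H_n = np.eye(5)
H_n[0, 4] = 2.0   # shift of the u coordinate
H_n[1, 4] = -1.0  # shift of the v coordinate

rays = np.array([[0.0, 0.0, 0.0, 0.0, 1.0],
                 [1.0, 1.0, 1.0, 1.0, 1.0]])
rays_reg = register_to_reference(rays, H_n, H_r)
```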

Preferably, step S2 further includes traversing each pixel of the registered light field to remove highlights after the corresponding light field is registered using the homography matrix obtained in step S1 for each of the sub-fields of view.

Preferably, traversing each pixel of the registered light field to remove highlights comprises: evaluating whether a highlight exists at each pixel position according to whether the mean square error of the pixel values across the registered light fields is larger than a preset threshold T:

λ_m(u_0, v_0, s_0, t_0) = 1 if the mean square error at (u_0, v_0, s_0, t_0) is greater than T, and 0 otherwise,

wherein λ_m(u_0, v_0, s_0, t_0) indicates whether the pixel position (u_0, v_0, s_0, t_0) in the registered light field of the mth sub-field contains a highlight: the value 1 means a highlight exists at that pixel position and the value 0 means none exists;

if a highlight exists, the pixel values larger than the mean of the non-zero pixel values are removed, the remaining non-zero pixel values of the other light fields are averaged, and the result is assigned to that pixel position, thereby completing the light field fusion that removes the highlight.
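A minimal sketch of this highlight-removal fusion, under two stated assumptions the text does not fix: zero marks a pixel a camera did not observe, and the plain mean over cameras serves as the fused value where no highlight is detected:

```python
import numpy as np

def fuse_remove_highlights(stack, T):
    # stack: (N_m, ...) array, one registered light field per camera.
    # T: mean-square-error threshold for declaring a highlight.
    mean = stack.mean(axis=0)
    mse = ((stack - mean) ** 2).mean(axis=0)
    fused = mean.copy()
    # Only positions whose MSE exceeds T are treated as highlights.
    for idx in zip(*np.nonzero(mse > T)):
        vals = stack[(slice(None),) + idx]
        nz = vals[vals > 0]
        if nz.size == 0:
            continue
        kept = nz[nz <= nz.mean()]  # drop values above the non-zero mean
        if kept.size:
            fused[idx] = kept.mean()
    return fused

# Made-up example: three registered light fields observe one pixel;
# the third camera sees a specular highlight (value 10).
stack = np.array([[1.0], [1.0], [10.0]])
fused = fuse_remove_highlights(stack, T=5.0)
# the highlight value is rejected, leaving the average of 1.0 and 1.0
```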

Preferably, the step S3 of performing three-dimensional reconstruction based on structured light on each of the sub-field-of-view light fields specifically includes:

projecting the pixel point of each sub-field light field to a world coordinate system to generate a sub-field space point cloud, and then reconstructing the three-dimensional surface geometric texture of the key part of the target object to be detected in the sub-field by using a Delaunay triangulation method, wherein the coordinates of the sub-field space point cloud are calculated by using the following formula:

[X_W, Y_W, Z_W, 1]^T = H_r^{-1}·[u, v, s, t, 1]^T

wherein (X_W, Y_W, Z_W) are the coordinates of the object point in free space under the world coordinate system, (u, v, s, t) are the coordinates of the light field pixel under the light field biplane coordinate system, and H_r is the homography matrix corresponding to the reference light field.
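The projection and meshing step can be sketched as follows, assuming (hypothetically) that H_r is an invertible 5×5 matrix acting on homogeneous (u, v, s, t, 1) coordinates; scipy's Delaunay stands in for the Delaunay triangulation named above:

```python
import numpy as np
from scipy.spatial import Delaunay

def pixels_to_points(pixels, H_r):
    # Project light field pixels (u, v, s, t) to world coordinates via
    # [X_W, Y_W, Z_W, 1]^T = H_r^{-1}·[u, v, s, t, 1]^T; the result is
    # dehomogenized by its last component.
    homog = np.hstack([pixels, np.ones((pixels.shape[0], 1))])  # (K, 5)
    pts = (np.linalg.inv(H_r) @ homog.T).T
    return pts[:, :3] / pts[:, 4:5]

# Made-up pixel samples with an identity homography for illustration.
H_r = np.eye(5)
pixels = np.array([[0.0, 0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0],
                   [1.0, 2.0, 0.0, 0.0]])
cloud = pixels_to_points(pixels, H_r)   # sub-field space point cloud
tri = Delaunay(cloud[:, :2])            # triangulate the (X_W, Y_W) footprint
```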

Preferably, the three-dimensional detection step comprises model differencing and key point extraction;

further, the model difference step specifically includes: respectively extracting characteristic points of the reference object three-dimensional model and the target object three-dimensional model to be detected, matching and screening to obtain characteristic point pairs, calculating a homography matrix between the reference object three-dimensional model and the target object three-dimensional model to be detected according to the characteristic point pairs, registering the reference object three-dimensional model and the target object three-dimensional model to be detected according to the homography matrix between the reference object three-dimensional model and the target object three-dimensional model to be detected, and then taking a difference model of the reference object three-dimensional model and the target object three-dimensional model to be detected;

further, the key point extracting step specifically includes: and extracting the three-dimensional position of the key point of the target object to be detected by using morphological processing and adaptive threshold segmentation according to the difference model.
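A minimal sketch of the key point extraction, with the difference model rendered as a 2D height map; the relative threshold and the binary opening below are illustrative choices standing in for the adaptive threshold segmentation and morphological processing, not the patent's exact procedure:

```python
import numpy as np
from scipy import ndimage

def extract_keypoints(diff, rel_thresh=0.5):
    # Threshold the difference model relative to its maximum, clean the
    # mask with a morphological binary opening, then return one centroid
    # per remaining blob as a key point position.
    mask = diff > rel_thresh * diff.max()
    mask = ndimage.binary_opening(mask)
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(diff, labels, list(range(1, n + 1)))

# Made-up difference map: a single 3x3 raised region centred at (3, 3).
diff = np.zeros((7, 7))
diff[2:5, 2:5] = 1.0
pts = extract_keypoints(diff)
```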

Another embodiment of the invention discloses a computer-readable storage medium storing computer-executable instructions that, when invoked and executed by a processor, cause the processor to implement the steps of the three-dimensional inspection method described above.

Compared with the prior art, the invention has the beneficial effects that: according to the three-dimensional detection method based on the structured light and the multiple light field cameras, the three-dimensional detection system comprising the structured light source and the multiple light field cameras is built, and the three-dimensional reconstruction based on the structured light is carried out on the reference object and the target object to be detected in the working range, wherein the light field cameras can provide a large number of accurate key points for the close-range three-dimensional reconstruction, the structured light has the advantages of large information amount, rapid processing and the like, and the detection precision can be improved; thereby realizing accurate three-dimensional detection of the surface of the target object.

In a further scheme, after the corresponding light field is registered to the sub-field, each pixel position of the registered light field is traversed to further remove highlight, so that a complete and high-quality sub-field light field can be obtained, and the precision of three-dimensional detection is further improved.

In a further scheme, the three-dimensional detection comprises model differencing and key point extraction: the reference object three-dimensional model and the target object three-dimensional model to be detected are registered and their difference model is taken; because the difference model contains little information, the three-dimensional positions of the key points can be extracted quickly and accurately using morphological processing and adaptive threshold segmentation.

Drawings

FIG. 1 is a block diagram of a hybrid multi-light field camera and structured light three-dimensional inspection system for single-sided inspection of an object;

FIG. 2 is a three-dimensional inspection system architecture diagram of a hybrid multi-light field camera and structured light for two-sided inspection of an object;

fig. 3 is a flow chart of the steps of the structured light based three-dimensional reconstruction of the preferred embodiment of the present invention.

Detailed Description

In order to make the technical problems, technical solutions and advantageous effects to be solved by the embodiments of the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and the embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention; the particular methods employed in the practice are illustrative only and the scope of the invention includes, but is not limited to, the following methods.

The light field camera realizes the simultaneous recording of the direction and intensity information of light rays in one shot by inserting a micro lens array between a main lens and an image sensor. The light field camera can be regarded as a camera array, light field data can be decoded into a sub-view image array, parallax exists between sub-view images, and depth information of a shot object can be acquired according to the parallax, so that the light field is more accurate and efficient when applied to close-range three-dimensional reconstruction than common images.

The preferred embodiment of the invention discloses a three-dimensional detection method for a hybrid multi-light-field camera and structured light, which comprises the following steps: the method comprises the steps of building a three-dimensional detection system comprising a structured light source and a plurality of light field cameras, placing a reference object of a target object to be detected in the working range of the three-dimensional detection system, carrying out three-dimensional reconstruction on the reference object based on structured light to obtain a reference object three-dimensional model, taking the reference object away from the working range, placing the target object to be detected in the working range of the three-dimensional detection system, and carrying out three-dimensional reconstruction on the target object to be detected based on structured light to obtain the target object three-dimensional model; and then carrying out surface three-dimensional detection on the reference object three-dimensional model and the target object three-dimensional model to be detected, and outputting the three-dimensional position of the key point of the target object to be detected.

The three-dimensional detection comprises model difference and key point extraction.

The model difference step specifically comprises: respectively extracting three-dimensional SIFT feature points of a reference object three-dimensional model and a target object three-dimensional model to be detected, performing rapid nearest neighbor matching and screening to obtain feature point pairs, calculating a homography matrix between the two three-dimensional models according to the feature point pairs, registering the reference object three-dimensional model and the target object three-dimensional model to be detected according to the homography matrix between the two three-dimensional models, and then taking a difference model of the two three-dimensional models.
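The registration inside the model difference step can be sketched in a simplified rigid form: an SVD-based least-squares (Procrustes) alignment standing in for the transform estimated from matched feature point pairs. The point pairs below are made up for illustration:

```python
import numpy as np

def estimate_rigid(src, dst):
    # Least-squares rigid transform (R, t) aligning src to dst via the
    # SVD-based Procrustes solution; src, dst are (K, 3) arrays of
    # matched feature-point pairs.
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T        # rotation, with reflections excluded
    t = cd - R @ cs
    return R, t

# Made-up matched pairs: dst is src rotated 90 degrees about Z and shifted.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
t_true = np.array([2.0, 0.0, 0.0])
dst = src @ Rz.T + t_true
R, t = estimate_rigid(src, dst)
```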

The key point extraction step specifically comprises the following steps: and extracting the three-dimensional position of the key point of the target object to be detected by using morphological processing and adaptive threshold segmentation according to the difference model. In this embodiment, since the difference model contains less information, the three-dimensional position of the key point can be quickly and accurately extracted by using morphological processing and adaptive threshold segmentation.

In some embodiments, if three-dimensional detection needs to be performed on a single side of a target object to be detected, a single-side three-dimensional detection system as shown in fig. 1 may be set up, where the three-dimensional detection system includes light field cameras 1, 2, 3, 4, a structured light source 5, and an optical strut 6, where the light field cameras 1, 2, 3, 4 are connected and fixed by the optical strut 6 and are all disposed on one side of the target object 7 to be detected, the target object 7 to be detected is placed in a working range 8 of the three-dimensional detection system, and when the above steps are performed, a reference object is also correspondingly placed in the working range 8. In other embodiments, when three-dimensional detection needs to be performed on both sides of an object to be detected, a two-sided three-dimensional detection system as shown in fig. 2 may be set up, where the three-dimensional detection system includes light field cameras 9, 10, 11, 12, structured light sources 13, 14, and optical struts 15, 16, the light field cameras 9, 11 are connected and fixed by the optical strut 15 and are disposed on a first side of an object 17 to be detected, the light field cameras 10, 12 are connected and fixed by the optical strut 16 and are disposed on a second side of the object 17 to be detected, the structured light sources 13, 14 are also disposed on both sides of the object 17 to be detected, the object 17 to be detected is placed in a working range 18 of the three-dimensional detection system, and when the above steps are performed, a reference object is also correspondingly placed in the working range 18. In the schematic diagrams of the three-dimensional inspection systems of fig. 1 and 2, the light field cameras are not limited to the number shown in the drawings, and more light field cameras may be disposed along the optical struts as needed.

In this embodiment, structured light-based three-dimensional reconstruction is performed on a reference object and a target object to be detected respectively to obtain a reference object three-dimensional model and a target object three-dimensional model to be detected respectively, as shown in fig. 3, where the structured light-based three-dimensional reconstruction specifically includes:

S1: correspondingly collecting a plurality of light fields through the plurality of light field cameras, and calibrating the homography matrices from the light rays emitted by the structured light source to the plurality of light fields respectively; specifically, the homography matrices are calibrated by a multi-light-field calibration algorithm combined with the structured light;

Taking N light field cameras as an example, the multi-light-field calibration algorithm determines the correspondence between the three-dimensional space coordinates of the surface of the target object to be detected (i.e. coordinates in the world coordinate system) and the four-dimensional coordinate points in the light fields acquired by the light field cameras (i.e. coordinates in the light field biplane coordinate system), by calculating the position and pose parameters of each light field camera relative to a calibration plate in space together with the camera intrinsic parameters.

Firstly, collecting a calibration plate image with structured light stripes, and expressing the intersection relationship of light rays and space points under a camera coordinate system according to a light field imaging principle as follows:

X_C = i + (Z_C/f)·(x − i),  Y_C = j + (Z_C/f)·(y − j)   (1)

wherein (i, j, x, y) are the ray coordinates parameterized by the physical biplane coordinate system in free space, (X_C, Y_C, Z_C) are the coordinates of the object point in free space under the corresponding camera coordinate system, and f is the focal length of the light field camera.

The conversion relation between the object point coordinates (X_W, Y_W, Z_W) in free space under the world coordinate system and (X_C, Y_C, Z_C) is as follows:

[X_C Y_C Z_C]^T = R·[X_W Y_W Z_W]^T + T   (2)

where R is the rotation matrix and T is the translation vector.

The conversion relation from the light field biplane coordinate system to the physical biplane coordinate system after decoding is as follows:

i = k_i·(u − u_0),  j = k_j·(v − v_0),  x = k_u·s,  y = k_v·t   (3)

wherein (u, v, s, t) represents the coordinates of a light field pixel point under the light field biplane coordinate system, its position corresponding to the decoded light field; k_i, k_j, k_u, k_v, u_0 and v_0 are the 6 independent camera intrinsic parameters.

The corner points of the calibration plate image and the central feature points of the structured light stripes are extracted, screened and matched, and the homography matrix between the three-dimensional world coordinate system and the light field biplane coordinate system is solved according to the conversion relations of formulas (1), (2) and (3). In this step, compared with an ordinary image, the light field provides more, and more accurate, corner point pairs, and the central features of the structured light stripes are added, so that the calibration result is more robust and accurate.
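The final solving step can be sketched as a least-squares fit. The sketch below approximates the world-to-light-field mapping by a single linear map fitted from synthetic point correspondences; this is a toy stand-in (assumed affine model, fabricated ground-truth matrix) for the corner-and-stripe-feature calibration the text describes:

```python
import numpy as np

def fit_homography(world_pts, lf_pts):
    """Least-squares fit of a matrix H mapping homogeneous world points
    [X, Y, Z, 1] to 4-D light-field coordinates (u, v, s, t).
    Affine approximation for illustration only."""
    Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])  # N x 4
    # Solve Xh @ H.T ~= lf_pts for H (4 x 4) in the least-squares sense.
    H_T, *_ = np.linalg.lstsq(Xh, lf_pts, rcond=None)
    return H_T.T

# Synthetic correspondences generated from a known ground-truth mapping.
rng = np.random.default_rng(0)
world = rng.uniform(-1, 1, size=(30, 3))
H_true = np.array([[2.0, 0.0, 0.1, 5.0],
                   [0.0, 2.0, 0.2, 6.0],
                   [1.0, 0.0, 0.5, 0.0],
                   [0.0, 1.0, 0.5, 0.0]])
lf = np.hstack([world, np.ones((30, 1))]) @ H_true.T
H_est = fit_homography(world, lf)  # recovers H_true exactly (no noise)
```

In practice the correspondences would come from the matched calibration-plate corners and stripe-centre features, and the fit would be over-determined and noisy rather than exact.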

S2: dividing the working range of the three-dimensional detection system into a plurality of sub-fields, and registering the corresponding light field for each sub-field by using the homography matrix obtained in the step S1 to obtain a plurality of sub-field light fields;

specifically, the working range of the system is divided into a plurality of subfields. Registering a plurality of corresponding input light fields for each sub-field by using the homography matrix obtained in the step S1, traversing each pixel of the registered light fields, and removing highlight to obtain a complete and high-quality sub-field light field;

in this embodiment, the system working interval is divided into M sub-fieldsFor N light fields acquired by N light field cameras, a light field whose field of view covers more than 70% of the field of view is considered to correspond to the sub-field of view FmThe light field, the light field camera distribution case is composed of a logic matrix [ a ] with the dimension of M multiplied by Nmn]M×NExpressed, the matrix elements are defined as:

wherein, FoVnFor the field of view of the nth light field (i.e. the light field captured by the nth light field camera), FoVmIs the range of the mth subfield. a ismnWhen 1, the light field collected by the nth light field camera corresponds to the mth subfield, amnA value of 0 indicates that the overlapping area between the field of view range of the nth light field camera and the mth subfield is too small, and this embodiment considers that it does not correspond to the mth subfield.Can be described as the number of light field cameras corresponding to the mth sub-field of view.
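A minimal sketch of building the logic matrix of equation (4); the fields of view are simplified to 1-D intervals and the helper name is an assumption for illustration:

```python
import numpy as np

def coverage_matrix(sub_fields, cam_fovs, ratio=0.7):
    """Logic matrix [a_mn]: a_mn = 1 when the n-th camera's field of view
    covers more than `ratio` of the m-th sub-field.
    Fields of view are modelled as 1-D intervals (lo, hi) for brevity."""
    M, N = len(sub_fields), len(cam_fovs)
    A = np.zeros((M, N), dtype=int)
    for m, (slo, shi) in enumerate(sub_fields):
        for n, (clo, chi) in enumerate(cam_fovs):
            overlap = max(0.0, min(shi, chi) - max(slo, clo))
            if overlap / (shi - slo) > ratio:
                A[m, n] = 1
    return A

# Two sub-fields, three cameras.
subs = [(0.0, 1.0), (1.0, 2.0)]
cams = [(-0.2, 1.1), (0.8, 2.2), (0.5, 1.2)]
A = coverage_matrix(subs, cams)
# Number of cameras per sub-field: N_m = sum_n a_mn
N_m = A.sum(axis=1)
```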

For the sub-field F_m, a reference light field L_r is selected from the N_m corresponding light fields. Using the light field homography matrices obtained in step S1, the N_m light fields corresponding to the sub-field are registered, and each light field is transformed into the biplane coordinate system of the reference light field L_r as follows:

L_n' = H_n·H_r^{-1}·L_n   (5)

wherein L_n' is the nth light field L_n after homography matrix registration, and H_r is the homography matrix corresponding to the reference light field L_r. Through the registration by H_n·H_r^{-1}, all light fields are transformed into the same coordinate system.
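Equation (5) can be sketched as a coordinate transformation, treating each homography as an invertible 5×5 matrix acting on homogeneous [u, v, s, t, 1] coordinates (the matrix size and the coordinate-level interpretation are illustrative assumptions):

```python
import numpy as np

def register_coords(coords_n, H_n, H_r):
    """Transform homogeneous light-field coordinates [u, v, s, t, 1] of the
    n-th light field into the biplane coordinate system of the reference
    light field L_r, via the composed mapping of equation (5).
    H_n and H_r are 5x5 homography matrices (assumed invertible);
    coords_n has shape (N, 5)."""
    T = H_n @ np.linalg.inv(H_r)     # per equation (5): H_n H_r^{-1}
    out = coords_n @ T.T
    return out / out[:, -1:]         # re-normalise the homogeneous component

# Sanity check: registering the reference light field against itself
# leaves the coordinates unchanged.
H = np.eye(5)
pts = np.array([[1.0, 2.0, 3.0, 4.0, 1.0]])
reg = register_coords(pts, H, H)
```

Note that a uniform scaling of H_n only rescales the homogeneous component, so the re-normalisation step leaves the registered coordinates unchanged.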

In a further embodiment, the registered light fields are subjected to highlight removal and then fused to obtain a large-view-angle light field, because three-dimensional detection methods have difficulty with target objects exhibiting surface specular reflection, and in practical applications the surfaces of many target objects to be detected are specularly reflective.

Specifically, the registered light fields are converted to grayscale and each pixel position of the registered light fields is traversed. If no highlight reflection exists at a given pixel position, the differences among the gray values at that pixel position across the N_m registered light fields are small. In this embodiment, whether highlight reflection exists at each pixel position is evaluated by whether the mean square error (MSE) of these gray values is greater than a given threshold T:

λ_m(u_0, v_0, s_0, t_0) = 1 if MSE(u_0, v_0, s_0, t_0) > T, and λ_m(u_0, v_0, s_0, t_0) = 0 otherwise   (6)

wherein λ_m(u_0, v_0, s_0, t_0) characterizes whether highlight reflection exists at pixel position (u_0, v_0, s_0, t_0) in the registered light fields of the mth sub-field: λ_m(u_0, v_0, s_0, t_0) = 1 indicates that highlight reflection exists at that pixel position, and λ_m(u_0, v_0, s_0, t_0) = 0 indicates that it does not.

If highlight reflection exists at a pixel position, the non-zero pixel values from the light fields are first filtered by removing those larger than the non-zero pixel average, the remaining values are averaged, and the result is assigned to that pixel position. This completes the light field fusion and realizes fast and accurate highlight removal.
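A per-pixel sketch of this fusion rule, combining the MSE test of equation (6) with the rejection of above-mean values (the threshold value, array layout, and use of 0 for unobserved pixels are illustrative assumptions):

```python
import numpy as np

def fuse_pixel(values, T=0.01):
    """Fuse the gray values observed at one pixel position across the N_m
    registered light fields of a sub-field, removing highlights per the
    MSE test of equation (6). `values` is a 1-D array, one entry per
    light field (0 = position not observed in that light field)."""
    nz = values[values > 0]
    if nz.size == 0:
        return 0.0
    mse = np.mean((nz - nz.mean()) ** 2)
    if mse <= T:                 # lambda_m = 0: no highlight, plain mean
        return float(nz.mean())
    # lambda_m = 1: highlight present -> drop values above the non-zero
    # mean, then average the remaining values.
    kept = nz[nz <= nz.mean()]
    return float(kept.mean())

# Three cameras agree while one sees a specular highlight: the outlier
# exceeds the non-zero mean and is rejected before averaging.
vals = np.array([0.30, 0.32, 0.31, 0.95])
fused = fuse_pixel(vals)  # ~0.31
```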

S3: performing three-dimensional reconstruction based on structured light on each sub-field light field;

and obtaining the registered light field of each sub-field after the highlight of each sub-field is removed through the steps. Compared with a common image, the light field can provide more key points for three-dimensional reconstruction, can generate dense point cloud, and improves reconstruction accuracy to a certain extent. In addition, structured light is adopted for illumination, and the homography transformation relation among all light fields can be accurately calculated by using the linear structured light stripe characteristics in the calibration process, so that the three-dimensional reconstruction precision is improved.

Specifically, for each sub-field, the highlight-free registered light field pixel points are projected into the three-dimensional world coordinate system using the reference light field homography matrix, generating a sub-field space point cloud. The point cloud coordinates are calculated using the following formula:

[X_W, Y_W, Z_W, 1]^T = H_r^{-1}·[u, v, s, t, 1]^T   (7)

In this embodiment, a Delaunay triangulation method is used to reconstruct the three-dimensional surface geometric texture of the key components of the target object to be detected in each sub-field. The triangulation links the point cloud into triangular patches that describe the topological structure of the spatial three-dimensional point cloud, thereby effectively representing the three-dimensional surface geometric texture of the target object to be detected. Since Delaunay triangulation is unique and optimal, this embodiment uses it to reconstruct the sub-field three-dimensional point cloud for further use in three-dimensional detection.
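A sketch of the projection of equation (7) followed by Delaunay triangulation; the 5×5 form of H_r and the triangulation over a 2-D projection of the cloud are simplifying assumptions for illustration:

```python
import numpy as np
from scipy.spatial import Delaunay

def subfield_pointcloud(lf_coords, H_r):
    """Project registered, highlight-free light-field pixels into the world
    coordinate system per equation (7), with H_r modelled as an invertible
    5x5 matrix whose first four output components form the homogeneous
    world point [X_W, Y_W, Z_W, W]."""
    world_h = lf_coords @ np.linalg.inv(H_r).T   # rows are H_r^{-1} [u,v,s,t,1]
    return world_h[:, :3] / world_h[:, 3:4]      # de-homogenise

# Illustrative H_r whose inverse maps (u, v, s, t) to the world point (u, v, s).
H_r = np.eye(5)[[0, 1, 2, 4, 3]]
rng = np.random.default_rng(1)
uvst = rng.uniform(0.0, 1.0, size=(20, 4))
coords = np.hstack([uvst, np.ones((20, 1))])
cloud = subfield_pointcloud(coords, H_r)

# Delaunay triangulation links the points into triangular patches that
# describe the topology of the reconstructed surface.
tri = Delaunay(cloud[:, :2])
```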

The preferred embodiment of the invention provides a three-dimensional detection system and method based on structured light and multiple light field cameras. Combining the multi-light-field cameras with structured light improves detection precision: the light field cameras provide a large number of accurate key points for close-range three-dimensional reconstruction, while structured light offers advantages such as large information quantity and rapid processing. First, a hybrid three-dimensional detection system of multiple light field cameras and structured light is built from a plurality of light field cameras, a structured light source and a plurality of optical struts. The homography matrices from the light rays emitted by the structured light source to the respective light fields are calibrated by a multi-light-field calibration algorithm combined with the structured light. The system working range is divided into a plurality of sub-fields; for each sub-field, the corresponding input light fields are registered using the homography matrices, each pixel of the registered light fields is traversed, and highlights are removed, yielding complete, high-quality sub-field light fields. Structured-light-based three-dimensional reconstruction is then performed on each sub-field light field. Finally, surface three-dimensional detection is performed on the three-dimensional reconstruction results of the target object to be detected and the reference object, and the accurate three-dimensional positions of the key points in the large-view-angle light field are output.

An embodiment of the present invention further provides a computer-readable storage medium storing computer-executable instructions which, when called and executed by a processor, cause the processor to implement the three-dimensional detection method described above; for specific implementation, reference may be made to the method embodiments, which are not repeated here.

The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the spirit of the invention, and all such substitutions or modifications are considered to be within the scope of the invention.
