Depth measurement method based on the combination of deep learning and structured light

Document No.: 1626272    Published: 2020-01-14

Reading note: This technique, "A depth measurement method based on the combination of deep learning and structured light", was designed and created by 常婉 (Chang Wan), 向森 (Xiang Sen), 邓慧萍 (Deng Huiping), and 吴谨 (Wu Jin) on 2019-09-16. Abstract: The invention belongs to the technical field of three-dimensional reconstruction and discloses a depth measurement method based on the combination of deep learning and structured light, comprising the following steps: obtaining a wrapped deformation phase through structured light vision; obtaining an initial depth value through deep learning; obtaining a first phase compensation coefficient from the initial depth value; and obtaining a true deformation phase from the wrapped deformation phase and the first phase compensation coefficient, then performing phase unwrapping on the true deformation phase to obtain the measured depth value. The invention solves the problem of low depth-measurement accuracy in the prior art and can effectively improve measurement accuracy.

1. A depth measurement method based on the combination of deep learning and structured light, characterized by comprising the following steps:

obtaining a wrapped deformation phase through structured light vision;

obtaining an initial depth value through deep learning;

obtaining a first phase compensation coefficient according to the initial depth value;

and obtaining a true deformation phase from the wrapped deformation phase and the first phase compensation coefficient, and performing phase unwrapping on the true deformation phase to obtain a measured depth value.

2. The depth measurement method based on the combination of the deep learning and the structured light as claimed in claim 1, comprising the steps of:

S1, constructing a structured light system, and calibrating the structured light system to obtain a reference phase φr(x,y);

S2, sequentially projecting N sinusoidal grating fringe patterns of the same frequency with equal phase shifts onto a measured object, and capturing them with a camera to obtain N deformation patterns;

S3, obtaining a wrapped phase φw(x,y) from the N deformation patterns, and comparing the wrapped phase φw(x,y) with the reference phase φr(x,y) to obtain a wrapped deformation phase Δφw(x,y);

S4, based on step S2, performing sine-wave elimination processing on the deformation patterns to obtain a corrected gray-scale image;

S5, based on step S4, performing depth estimation on the corrected gray-scale image through a convolutional neural network to obtain an initial depth value Z(x,y) for each pixel of the corrected gray-scale image;

S6, based on step S5, obtaining a first phase compensation coefficient from the initial depth value Z(x,y);

S7, calculating a true deformation phase Δφ(x,y) from the wrapped deformation phase Δφw(x,y) obtained in step S3 and the first phase compensation coefficient obtained in step S6, and obtaining the measured depth value therefrom.

3. The depth measurement method based on the combination of deep learning and structured light according to claim 2, wherein in step S1 the structured light system comprises: a camera, a projector, a reference plane, and a measured object; the baseline distance between the camera and the projector is d, and the depth from the camera to the reference plane is Z0.

4. The depth measurement method based on the combination of deep learning and structured light according to claim 2, wherein in step S4 the sine-wave elimination processing is specifically: averaging the gray values of the N deformation patterns at each pixel position.

5. The depth measurement method based on the combination of deep learning and structured light according to claim 2, wherein in step S5 the convolutional neural network specifically employs a self-supervised monocular depth estimation network.

6. The depth measurement method based on the combination of deep learning and structured light according to claim 2, wherein step S6 is specifically implemented as follows:

S6.1, obtaining a second depth value ΔZ(x,y) from the first depth value Z0 and the initial depth value Z(x,y):

ΔZ(x,y) = Z0 − Z(x,y)    (1)

wherein the first depth value Z0 is the depth from the camera in the structured light system to the reference plane, and the second depth value ΔZ(x,y) is the depth from the measured object to the reference plane initially estimated by the depth network;

S6.2, using the second depth value ΔZ(x,y) as a guide, obtaining the first phase compensation coefficient;

wherein the depth value from the measured object to the reference plane calculated with the first phase compensation coefficient is the one closest to the second depth value ΔZ(x,y).

7. The depth measurement method based on the combination of deep learning and structured light according to claim 6, wherein the specific calculation in step S6.2 is as follows:

h(x,y)(m) = Z0·(Δφw(x,y) + 2πm) / (Δφw(x,y) + 2πm + 2π·f0·d)    (2)

L(m) = |h(x,y)(m) − ΔZ(x,y)|    (3)

m = argmin_m L(m)    (4)

sorted in descending order, the error values satisfy L(m0) > L(m1) > L(m2) > …;

when L(m) is minimum, the corresponding m is the first phase compensation coefficient;

wherein h(x,y)(m) is the depth value from the measured object to the reference plane calculated with phase compensation coefficient m, denoted the third depth value; L(m) is the error value between the third depth value and the second depth value; L(mi) is the (i+1)-th error value in descending order; f0 is the frequency of the sinusoidal grating fringe pattern; and d is the baseline distance between the projector and the camera in the structured light system.

8. The depth measurement method based on the combination of deep learning and structured light according to claim 7, wherein in step S7 the true deformation phase is calculated as:

Δφ(x,y) = Δφw(x,y) + 2πm    (5)

wherein m in the formula (5) is the first phase compensation coefficient.

9. The depth measurement method based on the combination of deep learning and structured light according to claim 8, wherein in step S7 the measured depth value is specifically calculated as:

h(x,y) = Z0·Δφ(x,y) / (Δφ(x,y) + 2π·f0·d)    (6)

wherein h (x, y) in the formula (6) is a measured depth value of a surface point p (x, y) of the measured object relative to the reference plane.

Technical Field

The invention relates to the technical field of three-dimensional reconstruction, in particular to a depth measurement method based on combination of deep learning and structured light.

Background

Traditional two-dimensional imaging technology can no longer meet present needs. Because an image is the projection of three-dimensional space into an optical system, recognition at the image level alone is insufficient; the higher level of computer vision must accurately obtain the shape, position, and posture of objects in three-dimensional space, and detection, recognition, tracking, and interaction with objects in three-dimensional space are realized through three-dimensional reconstruction technology. Nowadays more and more technologies based on three-dimensional images are emerging, and in order to acquire a true three-dimensional image, depth perception is generally performed on the scene or object.

Among the technologies for acquiring three-dimensional information, structured light is a depth-perception technique with a complete theory and proven feasibility. Typically, several fixed sine-wave templates are first projected onto the measured object; the templates deform on the object's surface, and a camera captures the deformed images. The depth of the scene is then accurately calculated by triangulation, from the deformation of the template as seen in the scene or from the correspondence between template points and image points. In recent years deep learning has developed rapidly, making particularly great progress in the field of computer vision. Deep-learning methods can also realize depth estimation of a scene: by designing a complex multilayer convolutional neural network (CNN) structure and using various network structures to fit the relationship between image and depth, depth estimation of a single image can be realized quickly through complete end-to-end training.

However, both of these depth-perception techniques have limitations. Phase-based structured light suffers from phase wrapping: the measured phase is the true phase modulo 2π, and the depth values and three-dimensional information of the scene can be obtained correctly only after the wrapped phase is unwrapped to recover the true phase. Deep learning, for its part, faces the problem that efficient local algorithms result in poor matching accuracy.
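The phase-wrapping problem described above can be illustrated with a short sketch (illustrative only, not part of the patented method): the measured phase is the true phase reduced modulo 2π into a principal interval, so the integer number of 2π cycles (the fringe order) is lost and must be recovered by unwrapping.

```python
import math

def wrap(phi):
    """Reduce a true phase to its principal value in [-pi, pi)."""
    return phi - 2.0 * math.pi * math.floor((phi + math.pi) / (2.0 * math.pi))

# A true phase of 2.5*pi is measured as 0.5*pi: the two full cycles
# (the fringe order) disappear and must be restored by unwrapping.
```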

Disclosure of Invention

The depth measurement method based on the combination of deep learning and structured light provided by the present application solves the problem of low depth-measurement accuracy in the prior art.

The embodiment of the application provides a depth measurement method based on combination of deep learning and structured light, which comprises the following steps:

obtaining a wrapped deformation phase through structured light vision;

obtaining an initial depth value through deep learning;

obtaining a first phase compensation coefficient according to the initial depth value;

and obtaining a true deformation phase from the wrapped deformation phase and the first phase compensation coefficient, and performing phase unwrapping on the true deformation phase to obtain a measured depth value.

Preferably, the method comprises the following steps:

S1, constructing a structured light system, and calibrating the structured light system to obtain a reference phase φr(x,y);

S2, sequentially projecting N sinusoidal grating fringe patterns of the same frequency with equal phase shifts onto the measured object, and capturing them with a camera to obtain N deformation patterns;

S3, obtaining a wrapped phase φw(x,y) from the N deformation patterns, and comparing the wrapped phase φw(x,y) with the reference phase φr(x,y) to obtain a wrapped deformation phase Δφw(x,y);

S4, based on step S2, performing sine-wave elimination processing on the deformation patterns to obtain a corrected gray-scale image;

S5, based on step S4, performing depth estimation on the corrected gray-scale image through a convolutional neural network to obtain an initial depth value Z(x,y) for each pixel of the corrected gray-scale image;

S6, based on step S5, obtaining a first phase compensation coefficient from the initial depth value Z(x,y);

S7, calculating a true deformation phase Δφ(x,y) from the wrapped deformation phase Δφw(x,y) obtained in step S3 and the first phase compensation coefficient obtained in step S6, and calculating the measured depth value from the true deformation phase Δφ(x,y).

Preferably, in step S1, the structured light system comprises: a camera, a projector, a reference plane, and a measured object; the baseline distance between the camera and the projector is d, and the depth from the camera to the reference plane is Z0.

Preferably, in step S4, the sine-wave elimination processing is specifically: averaging the gray values of the N deformation patterns at each pixel position.

Preferably, in step S5, the convolutional neural network specifically employs a self-supervised monocular depth estimation network.

Preferably, the specific manner of step S6 is:

S6.1, obtaining a second depth value ΔZ(x,y) from the first depth value Z0 and the initial depth value Z(x,y):

ΔZ(x,y) = Z0 − Z(x,y)    (1)

wherein the first depth value Z0 is the depth from the camera in the structured light system to the reference plane, and the second depth value ΔZ(x,y) is the depth from the measured object to the reference plane initially estimated by the depth network;

S6.2, using the second depth value ΔZ(x,y) as a guide, obtaining the first phase compensation coefficient;

wherein the depth value from the measured object to the reference plane calculated with the first phase compensation coefficient is the one closest to the second depth value ΔZ(x,y).

Preferably, the specific calculation of step S6.2 is as follows:

h(x,y)(m) = Z0·(Δφw(x,y) + 2πm) / (Δφw(x,y) + 2πm + 2π·f0·d)    (2)

L(m) = |h(x,y)(m) − ΔZ(x,y)|    (3)

m = argmin_m L(m)    (4)

sorted in descending order, the error values satisfy L(m0) > L(m1) > L(m2) > …;

when L(m) is minimum, the corresponding m is the first phase compensation coefficient;

wherein h(x,y)(m) is the depth value from the measured object to the reference plane calculated with phase compensation coefficient m, denoted the third depth value; L(m) is the error value between the third depth value and the second depth value; L(mi) is the (i+1)-th error value in descending order; f0 is the frequency of the sinusoidal grating fringe pattern; and d is the baseline distance between the projector and the camera in the structured light system.

Preferably, in step S7, the true deformation phase Δφ(x,y) is specifically calculated as:

Δφ(x,y) = Δφw(x,y) + 2πm    (5)

wherein m in the formula (5) is the first phase compensation coefficient.

Preferably, in step S7, the measured depth value is specifically calculated as:

h(x,y) = Z0·Δφ(x,y) / (Δφ(x,y) + 2π·f0·d)    (6)

wherein h (x, y) in the formula (6) is a measured depth value of a surface point p (x, y) of the measured object relative to the reference plane.

One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:

In the embodiments of the present application, a deep-learning method is used for depth estimation, and the estimated depth value further guides the solution of the optimal phase compensation coefficient, solving the phase-unwrapping problem and finally yielding an accurate measured depth value. The invention combines structured light with deep learning: the initial depth value estimated by the deep-learning method is provided for phase unwrapping, and this estimate further guides the solution of the optimal phase compensation coefficient, making the depth measurement result more accurate. Existing phase-unwrapping techniques are limited in measurement accuracy and degree of automation, and existing deep-learning depth estimation has low accuracy, particularly at the edges of the measured object and at occluded pixels; the present method improves on both.

Drawings

In order to illustrate the technical solution of the present embodiment more clearly, the drawings needed in the description of the embodiment are briefly introduced below. The drawings in the following description show one embodiment of the present invention; those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a flowchart of a depth measurement method based on deep learning and structured light combination according to an embodiment of the present invention;

fig. 2 is a three-step phase-shift sinusoidal grating fringe pattern in the depth measurement method based on the combination of deep learning and structured light provided by the embodiment of the present invention;

fig. 3 is a schematic diagram of a structured light system in a depth measurement method based on the combination of deep learning and structured light according to an embodiment of the present invention.

Detailed Description

In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.

This embodiment provides a depth measurement method based on the combination of deep learning and structured light: two branches perform processing separately, and the depth value is finally solved by combining them. A structural block diagram is shown in fig. 1.

Specifically, the method comprises a first branch, a second branch, and a combining part.

Branch one: calibrate the structured light system to obtain a reference phase, project several sinusoidal grating fringe patterns onto the measured object and capture them with a camera, calculate the wrapped phase, and compare it with the reference phase to obtain the wrapped deformation phase.

Branch two: perform depth estimation with a depth estimation network on the gray-scale image freed from the sine-wave influence, obtaining an initial depth value for each pixel.

Combining part: for each pixel, the estimated initial depth value guides the solution of the optimal phase compensation coefficient m, i.e., when the calculated depth value is closest to the estimated initial depth value, that value of m is the optimal phase compensation coefficient. Phase unwrapping can then be carried out accurately, solving the phase-unwrapping problem, and the accurate depth value is finally solved from the true phase difference.

The invention provides a depth measurement method based on combination of deep learning and structured light, which mainly comprises the following steps:

S1, constructing a structured light system, and calibrating the structured light system to obtain a reference phase φr(x,y);

S2, sequentially projecting N sinusoidal grating fringe patterns of the same frequency with equal phase shifts onto the measured object, and capturing them with a camera to obtain N deformation patterns;

S3, obtaining a wrapped phase φw(x,y) from the N deformation patterns, and comparing the wrapped phase φw(x,y) with the reference phase φr(x,y) to obtain a wrapped deformation phase Δφw(x,y);

S4, based on step S2, performing sine-wave elimination processing on the deformation patterns to obtain a corrected gray-scale image;

S5, based on step S4, performing depth estimation on the corrected gray-scale image through a convolutional neural network to obtain an initial depth value Z(x,y) for each pixel of the corrected gray-scale image;

S6, based on step S5, obtaining a first phase compensation coefficient from the initial depth value Z(x,y);

S7, calculating the true deformation phase Δφ(x,y) from the wrapped deformation phase Δφw(x,y) obtained in step S3 and the first phase compensation coefficient obtained in step S6, and calculating the measured depth value from the true deformation phase Δφ(x,y).

The invention combines the advantages of structured-light measurement and deep-learning depth estimation while compensating for the limitations of each. Existing deep learning alone can estimate only the approximate contour of an object, and its depth values are relatively inaccurate.

The present invention will be further described with reference to specific examples.

Referring to fig. 1 to 3, the present embodiment provides a depth measurement method based on deep learning and structured light combination, including the following steps:

(1) Construct a structured light system and calibrate it to obtain a reference phase φr(x,y).

Wherein the structured light system comprises: a camera, a projector, a reference plane, and a measured object. The baseline distance between the camera and the projector is d, and the depth from the camera to the reference plane is Z0. The structured light system may be calibrated using known techniques (e.g., Zhang's calibration method); see fig. 3.

(2) Three sinusoidal grating fringe patterns of the same frequency with equal phase shifts are projected onto the measured object by the projector (the fringe patterns are shown in fig. 2) and captured by the camera, yielding three deformation patterns modulated by the measured object.

(3) The wrapped phase φw(x,y) is calculated using the three-step phase-shift method; it varies periodically between −π and π. Subtracting the reference phase φr(x,y) from the wrapped phase φw(x,y) gives the wrapped deformation phase:

Δφw(x,y) = φw(x,y) − φr(x,y)
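The three-step phase-shift calculation can be sketched as follows (a minimal sketch, assuming phase shifts of −2π/3, 0, +2π/3 and the standard fringe model I_k = A + B·cos(φ + δ_k); the patent itself does not fix these conventions):

```python
import numpy as np

def wrapped_phase_three_step(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by -2pi/3, 0, +2pi/3.

    With I_k = A + B*cos(phi + delta_k), the background A and modulation B
    cancel, leaving tan(phi) = sqrt(3)*(I1 - I3) / (2*I2 - I1 - I3).
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# The wrapped deformation phase is then the difference between the wrapped
# phase computed for the object scene and that of the reference plane.
```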

(4) Based on step (2), for each pixel position, the gray values of the three captured deformation patterns at that position are averaged to eliminate the sine-wave influence; after this operation is applied to every pixel, a corrected gray-scale image, i.e., the recovered gray-scale image, is obtained.
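The averaging step can be sketched as follows (illustrative only: for N equally phase-shifted sinusoids the cosine terms sum to zero over a full cycle, so the pixel-wise mean recovers the fringe-free background intensity):

```python
import numpy as np

def remove_fringe(deformation_maps):
    """Pixel-wise mean of N equally phase-shifted fringe images.

    For I_k = A + B*cos(phi + 2*pi*k/N), the cosine terms cancel,
    so the mean approximates the fringe-free gray image A.
    """
    return np.mean(np.stack(deformation_maps), axis=0)
```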

It should be noted that, besides the averaging method provided above, the sine-wave influence may also be removed by filtering the deformation patterns in the frequency domain or by using a convolutional neural network.

(5) Based on (4), the convolutional neural network specifically selects a self-supervised monocular depth estimation network to perform depth estimation on the corrected gray-scale image, obtaining an initial depth value Z(x,y) for each pixel of the corrected gray-scale image.

(6) Based on (5), with the initial depth value Z(x,y) estimated by the depth network and the known depth Z0 from the camera to the reference plane (denoted the first depth value), the depth ΔZ(x,y) from the measured object to the reference plane initially estimated by the depth network (denoted the second depth value) is:

ΔZ(x,y) = Z0 − Z(x,y)    (1)

The second depth value ΔZ(x,y) is used as a guide to solve for the first phase compensation coefficient (i.e., the optimal phase compensation coefficient m).

That is, with the depth ΔZ(x,y) from the measured object to the reference plane initially estimated by the depth network known, the objective is to solve for the value of the phase compensation coefficient m whose corresponding depth is closest to the second depth value ΔZ(x,y). The mathematical description is as follows:

h(x,y)(m) = Z0·(Δφw(x,y) + 2πm) / (Δφw(x,y) + 2πm + 2π·f0·d)    (2)

L(m) = |h(x,y)(m) − ΔZ(x,y)|    (3)

m = argmin_m L(m)    (4)

That is, sorted in descending order, the error values satisfy L(m0) > L(m1) > L(m2) > …; when L(m) is minimum, the corresponding m is the optimal phase compensation coefficient, i.e., the first phase compensation coefficient.

wherein h(x,y)(m) is the depth value from the measured object to the reference plane calculated with phase compensation coefficient m, denoted the third depth value; L(m) is the error value between the third depth value and the second depth value; L(mi) is the (i+1)-th error value in descending order; f0 is the frequency of the sinusoidal grating fringe pattern; and d is the baseline distance between the projector and the camera in the structured light system.
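The search for the optimal compensation coefficient can be sketched as a one-dimensional discrete minimization (a sketch assuming the standard phase-to-height relation h = Z0·Δφ/(Δφ + 2π·f0·d) of fringe projection profilometry; the search range for m and the parameter names are illustrative assumptions, since the original figures are not reproduced here):

```python
import math

def height_from_phase(dphi, z0, f0, d):
    """Standard phase-to-height relation of a fringe projection setup."""
    return z0 * dphi / (dphi + 2.0 * math.pi * f0 * d)

def best_compensation(dphi_w, dz, z0, f0, d, m_range=range(-50, 51)):
    """Integer m minimizing L(m) = |h(m) - dz|, where h(m) is the height
    computed from the candidate true phase dphi_w + 2*pi*m."""
    return min(m_range,
               key=lambda m: abs(height_from_phase(dphi_w + 2.0 * math.pi * m,
                                                   z0, f0, d) - dz))
```

Here `dz` plays the role of the network estimate ΔZ(x,y); even a coarse estimate is enough to select the correct fringe order, since the candidate depths h(m) are widely spaced.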

(7) Based on (3) and (6), with the wrapped deformation phase Δφw(x,y) of each pixel and the first phase compensation coefficient m known, the true deformation phase Δφ(x,y) is calculated as:

Δφ(x,y) = Δφw(x,y) + 2πm    (5)

Further, the depth value corresponding to the true deformation phase Δφ(x,y) is calculated as:

h(x,y) = Z0·Δφ(x,y) / (Δφ(x,y) + 2π·f0·d)    (6)

Here h(x,y) is the final depth of the surface point p(x,y) of the measured object relative to the reference plane, i.e., the depth obtained by substituting the optimal phase compensation coefficient m into equations (5) and (6).
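The final depth computation of equations (5) and (6) can be sketched as follows (a minimal self-contained sketch, again assuming the standard phase-to-height relation of fringe projection profilometry; parameter names are assumptions):

```python
import math

def measured_depth(dphi_w, m, z0, f0, d):
    """Depth of a surface point from its wrapped deformation phase.

    dphi_w : wrapped deformation phase at the pixel
    m      : optimal phase compensation coefficient for that pixel
    z0     : camera-to-reference-plane depth
    f0, d  : fringe frequency and camera-projector baseline
    """
    dphi = dphi_w + 2.0 * math.pi * m                    # eq. (5): true phase
    return z0 * dphi / (dphi + 2.0 * math.pi * f0 * d)   # eq. (6): depth
```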

In summary, the present invention adopts a new phase-unwrapping method that converts the solution of the phase compensation coefficient into a regression problem. Specifically, a rough depth estimate of the captured patterns is obtained with a deep neural network, and this initial depth value guides the solution of the optimal phase compensation coefficient so that phase unwrapping can be carried out accurately, the phase-unwrapping problem is solved, and an accurate depth value is finally measured. The method combines the advantages of deep-learning depth estimation and structured-light depth measurement and can greatly improve the accuracy and speed of depth measurement.

The depth measurement method based on the combination of the deep learning and the structured light provided by the embodiment of the invention at least comprises the following technical effects:

(1) Because the rough depth value estimated by the deep-learning method is provided for phase unwrapping, the workload of phase unwrapping is reduced and the measurement accuracy is improved.

(2) The structured light technology is combined with deep learning, and the estimated depth value is utilized to further guide and solve the optimal phase compensation coefficient, so that the depth measurement result is more accurate.

Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention has been described in detail with reference to examples, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention, which should be covered by the claims of the present invention.
