Binocular depth estimation method based on pseudo label fusion

Document No.: 1891130  Publication date: 2021-11-26

Reading note: This technology, "Binocular depth estimation method based on pseudo label fusion" (一种基于伪标签融合的双目深度估计方法), was designed and created by 张颖, 魏杰科, 曹豫 and 成二康 on 2021-10-29. Its main content is as follows: The invention relates to the technical field of automatic driving, and in particular to a binocular depth estimation method based on pseudo label fusion, which comprises obtaining left and right images through a binocular camera, performing stereo matching on the left and right images, and computing a depth map as a pseudo label. The method further comprises: training a depth network and a confidence network on the left and right images to obtain a depth map and a confidence map; combining the depth map, the confidence map and the stereo-matching depth map and feeding them into a fusion network; and obtaining the fused depth map, computing a loss function against the ground truth, and back-propagating to train the model. The binocular depth estimation method based on pseudo label fusion provided by the invention retains the strong generalization ability of the deep learning method while combining the high accuracy of the traditional method, so that the depth map system of the invention maintains good accuracy and robustness in various scene environments.

1. A binocular depth estimation method based on pseudo label fusion, comprising: obtaining left and right images through a binocular camera; performing stereo matching on the left and right images and computing a depth map as a pseudo label; characterized in that the method further comprises: training a depth network and a confidence network on the left and right images to obtain a depth map and a confidence map; combining the depth map, the confidence map and the stereo-matching depth map and feeding them into a fusion network; and obtaining the fused depth map, computing a loss function against the ground truth, and back-propagating to train the model.

2. The binocular depth estimation method based on pseudo label fusion according to claim 1, wherein the method specifically comprises: a, acquiring left and right views captured by a binocular camera; b, the deep-learning neural network comprises a depth map model and a confidence model; the depth of each pixel in the image is obtained by feeding the left and right images into the depth map model, and a loss function is constructed from this depth and the ground-truth depth map; c, obtaining a depth map from the left and right views using a binocular stereo matching method; d, stacking the deep-learning depth map, the traditional-method depth map and the confidence map, feeding the stacked maps into a fusion network to obtain a fused depth map, and constructing a loss function from the fused result and the ground truth; e, weighting the two loss functions and back-propagating to train the model over multiple rounds of training to obtain the final output model; and f, in the inference stage, predicting the depth map and the confidence map with the trained model while computing the traditional depth map, and fusing them to obtain the final result.

3. The binocular depth estimation method based on pseudo label fusion according to claim 2, wherein step b comprises: after the depth map model and the confidence model are obtained through deep-learning training, the left and right images are fed into the depth map model to obtain the depth map D1 of the current left frame and its corresponding confidence map, and a loss function is then constructed from the predicted depth map and the ground-truth depth map: Loss1 = L1(D1, Dgt), where L1() denotes the L1 loss, Dgt denotes the ground-truth depth map corresponding to the frame, and D1 denotes the depth map produced by the depth map model.

4. The binocular depth estimation method based on pseudo label fusion according to claim 2, wherein step d comprises: stacking the depth map D1 obtained in step b, the depth map D2 obtained in step c and the confidence map C1 produced by the confidence network along the channel dimension, feeding the stacked maps into a fusion network to obtain a fused depth map D3, and constructing a supervised loss function between this depth map and the ground truth: Loss2 = L1(D3, Dgt), where L1() denotes the L1 loss, Dgt denotes the ground-truth depth map corresponding to the frame, and D3 denotes the fused depth map.

Technical Field

The invention relates to the technical field of automatic driving, and in particular to a binocular depth estimation method based on pseudo label fusion.

Background

The NHTSA divides autonomous driving technology into six levels, 0 through 5. L0 is a conventional vehicle under full human control; L1, also called assisted driving, can perform simple acceleration and deceleration; L2, also called partial automation, adds functions such as automatic parking on top of everything in L1; L4 and L5 both achieve fully automatic driving, the difference being that L4 does so only on specific roads and in specific weather, while L5 adapts to all terrain and all climates. Depth estimation is an important technology in autonomous driving, with important applications in obstacle detection, distance measurement and 3D object detection. Distance information about obstacles can be acquired through various sensors such as lidar, infrared sensors, monocular cameras and binocular cameras. The camera is the most common sensor in autonomous driving. A monocular camera has difficulty obtaining scale information through depth estimation, and dynamic objects pose a great challenge to monocular depth estimation; a binocular camera can recover scale using the left-right baseline information, but in the prior art, vision-based binocular depth map estimation is often not robust enough due to the influence of lighting and scene.

The traditional method searches for matching points between the left and right frames of a binocular image, calculates the disparity of the matching points, and then obtains depth from the disparity and the baseline distance. For example, patent No. TW1069348288B extracts matching points from the binocular image and then calculates disparity to recover the depth map, but this approach is sensitive to illumination and scene texture and is not robust enough in scenes with poor lighting or sparse texture. Binocular depth estimation based on deep learning mainly obtains a disparity map of the left image through a neural network, then projects the left image onto the right image through the disparity map and trains the model on the difference from the real right image; however, this scheme is not robust enough for scenes absent from the training data, and because of projection errors the accuracy of the depth map suffers. It is therefore necessary to provide a method, building on the prior art, that combines the accuracy of the traditional method with the robustness of the deep learning method.

Disclosure of Invention

The invention aims to remedy the defects of the prior art by providing a binocular depth estimation method based on pseudo label fusion that improves the accuracy and robustness of the depth map by combining the traditional method with deep learning.

To achieve this purpose, a binocular depth estimation method based on pseudo label fusion is designed, comprising the following steps: obtaining left and right images through a binocular camera; performing stereo matching on the left and right images and computing a depth map as a pseudo label.

The method further comprises: training a depth network and a confidence network on the left and right images to obtain a depth map and a confidence map; combining the depth map, the confidence map and the stereo-matching depth map and feeding them into a fusion network; and obtaining the fused depth map, computing a loss function against the ground truth, and back-propagating to train the model.

The invention also provides the following preferred technical solutions:

Further, the method specifically comprises the following steps:

a, acquiring left and right views captured by a binocular camera; b, the deep-learning neural network comprises a depth map model and a confidence model; the depth of each pixel in the image is obtained by feeding the left and right images into the depth map model, and a loss function is constructed from this depth and the ground-truth depth map; c, obtaining a depth map from the left and right views using a binocular stereo matching method; d, stacking the deep-learning depth map, the traditional-method depth map and the confidence map, feeding the stacked maps into a fusion network to obtain a fused depth map, and constructing a loss function from the fused result and the ground truth; e, weighting the two loss functions and back-propagating to train the model over multiple rounds of training to obtain the final output model; and f, in the inference stage, predicting the depth map and the confidence map with the trained model while computing the traditional depth map, and fusing them to obtain the final result.

Further, step b comprises: after the depth map model and the confidence model are obtained through deep-learning training, the left and right images are fed into the depth map model to obtain the depth map D1 of the current left frame and its corresponding confidence map, and a loss function is then constructed from the predicted depth map and the ground-truth depth map: Loss1 = L1(D1, Dgt), where L1() denotes the L1 loss, Dgt denotes the ground-truth depth map corresponding to the frame, and D1 denotes the depth map produced by the depth map model.

Further, step d comprises: stacking the depth map D1 obtained in step b, the depth map D2 obtained in step c and the confidence map C1 produced by the confidence network along the channel dimension, feeding the stacked maps into a fusion network to obtain a fused depth map D3, and constructing a supervised loss function between this depth map and the ground truth: Loss2 = L1(D3, Dgt), where L1() denotes the L1 loss, Dgt denotes the ground-truth depth map corresponding to the frame, and D3 denotes the fused depth map.

Advantageous effects of the invention

The binocular depth estimation method based on pseudo label fusion provided by the invention has the following advantages: for the left and right images obtained by a binocular camera, a depth network and a confidence network are trained on the left and right images to obtain a depth map and a confidence map; stereo matching is performed on the left and right images by the traditional method, and a depth map is computed as a pseudo label; the deep-learning depth map is then combined with the confidence map and the stereo-matching depth map and fed into a fusion network, the fused depth map is obtained, a loss function is computed against the ground truth, and the model is trained by back-propagation. This retains the strong generalization ability of the deep learning method while combining the high accuracy of the traditional method, so that the depth map system of the invention maintains good accuracy and robustness in various scene environments.

Drawings

Fig. 1 illustrates an exemplary binocular depth estimation method based on pseudo label fusion according to the present invention;

Fig. 2 illustrates a vehicle on-ramp scene captured in one embodiment;

Fig. 3 illustrates the depth map result obtained for Fig. 2 using a conventional stereo matching method;

Fig. 4 illustrates the depth map result obtained for Fig. 2 using the method of the present invention.

Detailed Description

The invention is further explained below with reference to the accompanying drawings. Referring to Fig. 1, the binocular depth estimation method based on pseudo label fusion specifically comprises the following steps:

a, acquiring left and right views captured by a binocular camera;

b, the deep-learning neural network comprises a depth map model and a confidence model; the depth of each pixel in the image is obtained by feeding the left and right images into the depth map model, and a loss function is constructed from this depth and the ground-truth depth map;

c, obtaining a depth map from the left and right views using a binocular stereo matching method;

d, stacking the deep-learning depth map, the traditional-method depth map and the confidence map, feeding the stacked maps into a fusion network to obtain a fused depth map, and constructing a loss function from the fused result and the ground truth;

e, weighting the two loss functions and back-propagating to train the model over multiple rounds of training to obtain the final output model;

and f, in the inference stage, predicting the depth map and the confidence map with the trained model while computing the traditional depth map, and fusing them to obtain the final result.

Among the above steps, step b comprises: after the depth map model and the confidence model are obtained through deep-learning training, the left and right images are fed into the depth map model to obtain the depth map D1 of the current left frame and its corresponding confidence map, and a loss function is then constructed from the predicted depth map and the ground-truth depth map: Loss1 = L1(D1, Dgt), where L1() denotes the L1 loss, Dgt denotes the ground-truth depth map corresponding to the frame, and D1 denotes the depth map produced by the depth map model.
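To make step b concrete, below is a minimal PyTorch-style sketch. The network classes DepthNet and ConfidenceNet, their architectures and the tensor sizes are hypothetical placeholders, since the patent does not specify them; only the input/output roles and the loss Loss1 = L1(D1, Dgt) follow the description above.

```python
import torch
import torch.nn as nn

class DepthNet(nn.Module):
    """Hypothetical depth model: takes the left/right RGB images stacked into
    6 channels and outputs a 1-channel depth map D1."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, left, right):
        return self.net(torch.cat([left, right], dim=1))

class ConfidenceNet(nn.Module):
    """Hypothetical confidence model: outputs a per-pixel confidence map C1 in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, left, right):
        return self.net(torch.cat([left, right], dim=1))

# Step b: predict depth D1 and confidence C1, then Loss1 = L1(D1, Dgt).
depth_net, conf_net = DepthNet(), ConfidenceNet()
left = torch.rand(1, 3, 256, 512)   # dummy left image
right = torch.rand(1, 3, 256, 512)  # dummy right image
Dgt = torch.rand(1, 1, 256, 512)    # dummy ground-truth depth (e.g. from lidar)

D1 = depth_net(left, right)
C1 = conf_net(left, right)
loss1 = nn.functional.l1_loss(D1, Dgt)
```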

Step c comprises the following: first, the pixels in the left image whose gradient magnitude exceeds a certain threshold are extracted; then candidate pixels are traversed along the epipolar line with the same y value in the right image, and the SAD (sum of absolute differences) over a neighborhood around the left and right pixels is computed to find the corresponding matching points. The pixel disparity D' = x2 - x1 is then obtained from the matching points, where x2 is the x coordinate of the pixel in the left image, x1 is the x coordinate of the corresponding point in the right image, and D' is the disparity value; finally, the depth of each pixel is obtained from its disparity and the camera parameters, yielding the depth map D2.
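For illustration, a simplified Python sketch of this block-matching procedure follows, assuming rectified grayscale images so that matching points share the same y coordinate. The window size, gradient threshold and disparity search range are illustrative choices, and the depth conversion uses the standard pinhole relation depth = fx * baseline / disparity, one common way to apply the "camera parameters" mentioned above.

```python
import numpy as np

def sad_depth(left, right, fx, baseline, max_disp=64, win=3, grad_thresh=10.0):
    """Simplified step c: SAD block matching on rectified grayscale images.
    Only pixels whose horizontal gradient exceeds grad_thresh are matched;
    depth = fx * baseline / disparity (pinhole model)."""
    left = left.astype(np.float32)
    right = right.astype(np.float32)
    h, w = left.shape
    depth = np.zeros((h, w), dtype=np.float32)
    grad = np.abs(np.gradient(left, axis=1))
    for y in range(win, h - win):
        for x2 in range(win + max_disp, w - win):    # x2: x coordinate in the left image
            if grad[y, x2] < grad_thresh:
                continue
            patch_l = left[y - win:y + win + 1, x2 - win:x2 + win + 1]
            best_sad, best_d = np.inf, 1
            # Traverse candidates on the same-y epipolar line in the right image.
            for d in range(1, max_disp):
                x1 = x2 - d                          # x1: candidate x in the right image
                patch_r = right[y - win:y + win + 1, x1 - win:x1 + win + 1]
                sad = np.abs(patch_l - patch_r).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            depth[y, x2] = fx * baseline / best_d    # disparity D' = x2 - x1
    return depth
```

In practice this exhaustive search is vectorized or replaced by library routines (for example OpenCV's StereoSGBM); the loop form is kept here only to mirror the description.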

Step d comprises the following: the depth map D1 obtained in step b, the depth map D2 obtained in step c and the confidence map C1 produced by the confidence network are stacked along the channel dimension and fed into a fusion network to obtain a fused depth map D3, and a supervised loss function is constructed between this depth map and the ground truth: Loss2 = L1(D3, Dgt), where L1() denotes the L1 loss, Dgt denotes the ground-truth depth map corresponding to the frame, and D3 denotes the fused depth map.
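Continuing the PyTorch sketch from step b, steps d and e might look as follows. FusionNet and the loss weights w1 and w2 are assumptions for illustration; the patent specifies only the channel-wise stacking of D1, D2 and C1, the loss Loss2 = L1(D3, Dgt), and that the two losses are weighted before back-propagation.

```python
class FusionNet(nn.Module):
    """Hypothetical fusion network: takes D1, D2 and C1 stacked along the
    channel dimension (3 channels) and outputs the fused depth map D3."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, d1, d2, c1):
        return self.net(torch.cat([d1, d2, c1], dim=1))

fusion_net = FusionNet()
D2 = torch.rand(1, 1, 256, 512)          # stereo-matching depth (pseudo label), step c
D3 = fusion_net(D1, D2, C1)              # step d: fused depth map
loss2 = nn.functional.l1_loss(D3, Dgt)   # Loss2 = L1(D3, Dgt)

# Step e: weight the two losses and back-propagate; w1, w2 are illustrative.
w1, w2 = 0.5, 0.5
total_loss = w1 * loss1 + w2 * loss2
total_loss.backward()

# Step f (inference): with trained weights the final output is simply
# fusion_net(depth_net(l, r), stereo_depth, conf_net(l, r)); no loss is computed.
```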

The binocular depth estimation method based on pseudo label fusion adopted by the invention is described below through a specific embodiment, in which the technical solution provided by the invention is implemented on a particular road section.

Step a: video sequence images containing about 200,000 laser point cloud signals are randomly acquired on the road section by a collection vehicle equipped with a 32-beam lidar and a binocular camera, and serve as supervised training data; since the lidar error is at the centimeter level, far smaller than the visual ranging error (meter level), it can be used as the training ground truth.

Step b: supervised training is performed on a GPU server with large video memory; the model structure and loss design are detailed in the technical solution above. During training the batch size is set to 64, the optimizer is SGD with an initial learning rate of 0.01 that is reduced to 0.001 at epoch 60, and the loss converges after 120 epochs of training, yielding the model.
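Assuming PyTorch, the optimizer setup described here could be sketched as below, reusing the networks and dummy tensors from the earlier sketches; the single-sample train_loader is a stand-in for a real DataLoader over the roughly 200,000 lidar-supervised frames with batch size 64.

```python
# Hyperparameters from this embodiment: batch size 64, SGD, initial lr 0.01,
# lr dropped to 0.001 at epoch 60, 120 epochs in total.
params = (list(depth_net.parameters()) + list(conf_net.parameters())
          + list(fusion_net.parameters()))
optimizer = torch.optim.SGD(params, lr=0.01)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[60], gamma=0.1)

# Stand-in for a real DataLoader; each batch holds left/right images, the
# lidar ground truth, and the precomputed stereo-matching pseudo label D2.
train_loader = [(left, right, Dgt, D2)]

for epoch in range(120):
    for l_img, r_img, gt, d2 in train_loader:
        optimizer.zero_grad()
        d1, c1 = depth_net(l_img, r_img), conf_net(l_img, r_img)
        d3 = fusion_net(d1, d2, c1)
        loss = w1 * nn.functional.l1_loss(d1, gt) + w2 * nn.functional.l1_loss(d3, gt)
        loss.backward()
        optimizer.step()
    scheduler.step()
```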

Step c: the collection vehicle with the 32-beam lidar and binocular camera randomly acquires video sequence images containing about 10,000 laser point cloud signals on the road section; this data serves as test data with ground-truth depth for verifying the effectiveness of the algorithm.

Step d: the model trained in step b is used to run inference on the test data on the server and obtain the depth map of each test image.

Step e: effect display and model performance statistics. Fig. 2 shows an on-ramp scene on the road section, and Fig. 4 shows the depth map result generated by the present method; it can be seen that the depth information of the vehicles and road surface is well recovered. Fig. 3 is the depth map obtained by a conventional stereo matching method, in which the vehicle and road depth information is severely distorted and missing. Finally, according to the comprehensive evaluation on the test data, the mean depth error of the traditional stereo matching method exceeds 15%, the mean error of the deep-learning-based binocular depth map method is greater than 8%, and the mean depth error of the present method is less than 6%.
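The percentages above suggest a mean relative depth error computed over pixels with valid lidar ground truth. The exact metric is not given in the patent, so the following formulation is an assumption:

```python
def mean_rel_depth_error(pred, gt):
    """Mean relative depth error |pred - gt| / gt, averaged over pixels with a
    valid lidar ground-truth depth (gt > 0). The patent does not specify its
    evaluation metric; this is one common formulation."""
    valid = gt > 0
    return (torch.abs(pred[valid] - gt[valid]) / gt[valid]).mean()

# e.g. mean_rel_depth_error(D3, Dgt) < 0.06 would match the <6% reported above.
```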

The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto; any modification or substitution of the technical solutions and inventive concept of the present invention that a person skilled in the art could readily conceive within the technical scope disclosed herein shall fall within the protection scope of the present invention.
