VI-SLAM and depth estimation network-based unmanned aerial vehicle scene dense reconstruction method

Document No.: 925104    Publication date: 2021-03-02

Reader's note: this technology, "VI-SLAM and depth estimation network-based unmanned aerial vehicle scene dense reconstruction method" (基于VI-SLAM和深度估计网络的无人机场景稠密重建方法), was designed and created by 周忠, 吕文官, 温佳伟, 闫飞虎 and 柳晨辉 on 2020-09-29. Its main content is as follows: the invention relates to an unmanned aerial vehicle scene dense reconstruction method based on VI-SLAM and depth estimation, comprising the steps of: (1) fixing an inertial navigation device (IMU) on the unmanned aerial vehicle, and calibrating the intrinsic and extrinsic parameters of the unmanned aerial vehicle's monocular camera and the extrinsic parameters of the IMU; (2) acquiring an image sequence of the unmanned aerial vehicle scene and IMU information with the monocular camera and the IMU; (3) processing the images and IMU information acquired in step (2) with VI-SLAM to obtain camera poses with scale information; (4) inputting the monocular image as the original view into a viewpoint generation network to obtain a right view, then inputting the original view and the right view into a depth estimation network to obtain the depth information of the image; (5) combining the camera poses obtained in step (3) with the depth maps from step (4) to obtain local point clouds; (6) after point cloud optimization and registration, fusing the SLAM tracking trajectory with the local point clouds to obtain a dense point cloud model of the unmanned aerial vehicle scene.

1. An unmanned aerial vehicle scene dense reconstruction method based on VI-SLAM and a depth estimation network, characterized by comprising the following steps:

step 1: fixing an inertial navigation device IMU on the unmanned aerial vehicle, and calibrating internal parameters and external parameters of a monocular camera carried by the unmanned aerial vehicle and external parameters of the inertial navigation device IMU;

step 2: acquiring an image sequence of the unmanned aerial vehicle scene using the unmanned aerial vehicle's monocular camera and the bound IMU device, and acquiring IMU information at a fixed frequency at each time point during the flight;

step 3: processing the image sequence and IMU information from step 2 with a vision and inertial navigation fused simultaneous localization and mapping method, namely the VI-SLAM algorithm; the processing is divided into three threads of tracking, local map optimization and loop detection, to finally obtain a camera pose with scale information, namely absolute displacement information;

step 4: inputting the monocular image as a left view into a viewpoint generation network to obtain a right view, and inputting the left view and the right view into a depth estimation network to obtain the image depth information;

step 5: combining the camera pose obtained in step 3 with the image depth information obtained in step 4 to obtain a local point cloud;

step 6: fusing the local point clouds through point cloud optimization and registration to obtain a dense point cloud model of the unmanned aerial vehicle scene.

2. The VI-SLAM and depth estimation network-based unmanned aerial vehicle scene dense reconstruction method of claim 1, wherein: in the step 3, the implementation process of the VI-SLAM algorithm is as follows:

the VI-SLAM algorithm is divided into three threads, namely a tracking thread belonging to the front end, a local map optimization thread belonging to the back end, and a loop detection thread:

the tracking thread mainly provides initial keyframe values for the optimization thread and maintains a fixed-size window to control the scale of the optimization; the VI-SLAM system is first initialized to obtain the camera's initial pose estimate, scale factor, gravity direction, IMU biases and velocity; the noise covariance propagation of rotation, velocity and position during tracking is then calculated; finally, the camera pose is tracked based on the sliding window: the pose of each new frame is localized, new keyframes are marked, and the fixed-size sliding window is maintained by marginalizing old frames;

the local map optimization thread further optimizes the poses of the keyframe information in the sliding window of the tracking thread; a local map is created using the co-visibility relationships between keyframes, and nonlinear optimization is performed on the residuals generated by observation differences within the local map, yielding more accurate camera pose estimates;

the loop detection thread detects whether a loop is generated during tracking; when a loop is generated, the relative pose between the frames forming the loop is used as a constraint to further correct the poses estimated in local map optimization, ensuring consistency of the pose estimation. Because IMU information is fused during tracking, the scale of the camera pose is observable; correction between the loop frames makes the camera pose more accurate, and the camera pose with scale information, namely absolute displacement information, is finally obtained.

3. The VI-SLAM and depth estimation network-based unmanned aerial vehicle scene dense reconstruction method of claim 2, wherein: the tracking thread is implemented as:

(1) VI-SLAM system initialization: a loosely coupled approach is adopted, in which visual initialization is performed first to acquire the relative poses between frames, and the IMU initialization values are then solved from the correspondence between the acquired poses and the IMU measurements. For visual initialization, the pose of the first frame is taken as the global reference coordinate, and the camera pose is initially estimated via epipolar geometry to obtain the relative poses between cameras; the gravity direction, scale factor and IMU biases are calculated from the relationship between the IMU measurements and the camera-tracked relative poses, and the velocity is computed last;

(2) covariance propagation: the IMU measurements are affected by noise that follows a Gaussian distribution; by establishing noise models for rotation, velocity and position, the covariance information of the optimized variables is obtained and used as weighting factors during optimization;

(3) pose tracking based on a sliding window: a window of fixed size is maintained, consisting of the most recent adjacent frames and the most recent keyframes; when a new frame arrives, it is first marked as a keyframe or not, and an old frame is marginalized so that the new frame enters the window. If the new frame is a keyframe, keyframe marginalization is used, i.e. the oldest keyframe in the sliding window is marginalized; otherwise, the second newest frame is marginalized. The marginalization uses the Schur complement, which guarantees that the prior information is retained in the optimization problem.

4. The VI-SLAM and depth estimation network-based unmanned aerial vehicle scene dense reconstruction method of claim 2, wherein: the local map optimization is realized by:

(1) for the sliding window maintained in the tracking thread, the frame that has newly arrived in the window is optimized together with its preceding adjacent frame by minimizing the reprojection error between the two frames and the difference between the IMU measurement and the visual measurement; because only the pose optimization between two frames is involved, the computation is fast;

(2) if the frame newly entering the window is a keyframe, all keyframes having a co-visibility relationship with the current keyframe are taken as its local map; during optimization, all camera poses in the local map are optimized using the constraints generated by all co-visibility relationships in the local map. The nonlinear optimization minimizes the reprojection errors between co-visible frames, the differences between IMU measurements and visual measurements, and the prior error, so that all keyframes participating in the optimization obtain more accurate camera pose estimates.

5. The VI-SLAM and depth estimation network-based unmanned aerial vehicle scene dense reconstruction method of claim 3, wherein: the loop detection is specifically realized as follows:

(1) for a new keyframe, feature matching is accelerated with the bag-of-words model, and the minimum similarity score between the new frame and the keyframes in its local map is found and used as a threshold; then, among all previous keyframes, those whose similarity score with the new frame exceeds the threshold are found and grouped, each keyframe forming a group with the 10 keyframes closest to it; the total score of each group and the highest-scoring keyframe in each group are computed, 75% of the highest group score is taken as a new threshold, and the highest-scoring frame of every group whose total score exceeds this new threshold is selected as a loop candidate frame; the neighboring frames of the new frame are then matched against the neighboring frames of the candidate frames, and if the check that they share the same neighboring keyframes passes, loop detection is considered successful;

(2) once loop detection succeeds, the camera has returned to a previously visited place, and the accumulated error of the tracking process is eliminated using the pose relationship between the loop frame and the new frame; the poses of the new frame and the loop frame are fixed as global constraints, global nonlinear optimization is performed on this basis by minimizing the reprojection errors between co-visible frames and the errors between IMU measurements and visual measurements, and the poses of the intermediate keyframes are adjusted so that the camera poses obtained from local map optimization become more accurate. Because IMU information is fused during tracking, the scale of the camera pose is observable, and the camera pose with scale information, namely absolute displacement information, is finally obtained; the new constraint between the frame for which loop detection succeeded and the loop frame is added to the global pose optimization, the optimization problem is reconstructed, and a pose estimate with higher accuracy and correct scale is finally obtained by minimizing the reprojection errors between co-visible keyframes and the differences between IMU measurements and visual measurements.

6. The VI-SLAM and depth estimation network-based unmanned aerial vehicle scene dense reconstruction method of claim 1, wherein: in step 4, the process of obtaining the image depth information is as follows:

(1) the depth estimation task is divided into two sub-problems: first, the monocular view is used as the left view and a right view is generated by a viewpoint generation network; second, the left and right views are jointly input into a depth estimation network to obtain the depth estimation result. The first sub-problem adopts the Deep3D network structure, which is divided into a main prediction network and a selection layer: the main prediction network is based on VGG16, a branch is attached after each pooling layer, a deconvolution layer upsamples the input feature map, and the feature maps obtained from the branches are superimposed to obtain a feature representation consistent with the size of the input image; the selection layer then applies a further convolution to these features and performs a softmax transform over the channels at each spatial position to obtain probability distribution maps under different disparities, each pixel of the image is weighted according to the disparity probabilities to compute its offset, and the output weighted result is the finally generated right view. The second sub-problem adopts the FlowNetCorr network structure, which is likewise divided into two parts: a contraction part composed of convolution layers that extracts and fuses the features of the two images, and an expansion part that recovers the depth prediction to high resolution; the contraction part first performs three convolution operations on the left and right views separately to extract the features of the two views and then matches and fuses the feature information, while the expansion part consists of deconvolution and convolution layers, the features acquired at each level are upsampled during deconvolution and concatenated with the corresponding features of the contraction part, and after four deconvolutions a depth estimation image is obtained;

(2) model training: the training process is divided into two stages; in the first stage, the viewpoint generation network and the depth estimation network are trained separately, and in the second stage the two parts are combined into a whole and trained end to end. For the viewpoint generation network, VGG16 is used as the reference network and its parameters trained on ImageNet are used to initialize the network parameters, while the remaining parameters of the network are initialized with the training results of Deep3D; on this basis, 22,600 pairs of binocular images selected from the KITTI data set are used as a training set to further train the viewpoint generation network, and the model is iterated 200,000 times or more at a learning rate of 0.0002. For the binocular depth estimation network, the virtual data set FlyingThings3D is used for training. In the second stage, the two trained networks are combined for end-to-end training; 200 pairs of frames with disparity are selected from the KITTI data set for the fine-tuning and testing process, while ensuring that the fine-tuning set contains none of the frames selected for the test set. In addition, because the resolution of the KITTI data is higher than that of the viewpoint-generated image, the right image obtained from viewpoint generation needs to be upsampled so that the binocular resolutions remain consistent;

(3) depth estimation of the monocular image is performed with the network model obtained after training: the monocular image is used as the left view and input into the network, a right view is obtained through viewpoint generation, and depth estimation is then carried out jointly on the left and right views to obtain the depth information of the image.

7. The VI-SLAM and depth estimation network-based unmanned aerial vehicle scene dense reconstruction method of claim 1, wherein: in the step 5, the local point cloud generation process is as follows:

the local point cloud is composed of a group of discrete points, each corresponding to a pixel of the two-dimensional image and carrying three-dimensional coordinates x, y, z and color information R, G, B; the correspondence between the two-dimensional image and the three-dimensional space point is computed with the pinhole camera model, formula (a):

$d\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x & 0 & c_x & 0\\ 0 & f_y & c_y & 0\\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix}\qquad(a)$

where the model parameters of the pinhole camera are the focal length $f_x$ in the X-axis direction, the focal length $f_y$ in the Y-axis direction, the optical center position $c_x$ in the X-axis direction and the optical center position $c_y$ in the Y-axis direction; $(u, v)$ are the pixel coordinates of the image, $d$ is the depth value of the pixel provided by the image depth information, and the right-hand side of the formula is the homogeneous expression of the three-dimensional coordinates of the space point; the local point cloud is obtained by solving the three-dimensional coordinates of every pixel of a single image;

the coordinates of the local point clouds correspond to the positions under the camera coordinate system, and the estimated camera pose is utilized to transform the space coordinates of each point in the local point clouds to be under the unified global coordinates so as to facilitate the subsequent point cloud fusion.

8. The VI-SLAM and depth estimation network-based unmanned aerial vehicle scene dense reconstruction method of claim 1, wherein: the point cloud optimization, registration and fusion process in the step 6 is as follows:

(1) outlier removal in point cloud optimization: the point cloud recovered from the depth information contains a large number of inaccurately estimated depth points, the most prominent being outliers, i.e. pixels whose distances to the neighboring space points clearly deviate from the point group; the outliers in the point cloud are removed using several constraints, namely the LOF operator, a disparity threshold and the viewpoint co-visibility relationship. To describe the distribution of the point cloud around a space point, the local reachable density $lrd_k(p)$ is defined as the inverse of the average reachability distance from the points in the neighborhood of point $p$ to the target point, $N_k(p)$ is the k-distance neighborhood of point $p$, and $\text{reach-distance}_k(p,o)$ is the k-th reachability distance from point $o$ to point $p$; the local reachable density of point $p$ is computed with formula (b):

$lrd_k(p)=\left(\frac{\sum_{o\in N_k(p)}\text{reach-distance}_k(p,o)}{|N_k(p)|}\right)^{-1}\qquad(b)$

From the obtained reachable densities, the LOF factor of the space point is further computed with formula (c):

$LOF_k(p)=\frac{1}{|N_k(p)|}\sum_{o\in N_k(p)}\frac{lrd_k(o)}{lrd_k(p)}\qquad(c)$

When $LOF_k(p)$ is greater than 1, the current point is distributed more sparsely than the other points and is considered a candidate outlier. During outlier removal, on the one hand, spatial points in the 0-30 m range are extracted according to the disparity threshold, and on the other hand, only the region where depth estimation is accurate, delimited by the co-visible area, is processed for the scene; on this basis the candidate outliers are removed, providing accurate information for point cloud fusion;

(2) trilateral filtering in point cloud optimization: the position, pixel and depth differences are computed simultaneously as filtering weights, and the filter is computed with formula (d):

$\tilde{D}_p=\frac{1}{W_p}\sum_{q\in S}G_{\sigma_s}(\lVert p-q\rVert)\,G_{\sigma_c}(\lvert I_p-I_q\rvert)\,G_{\sigma_d}(\lvert D_p-D_q\rvert)\,D_q\qquad(d)$

where $S$ denotes the range of the filter kernel, $I$ denotes the pixel value of the corresponding point, $D$ denotes the depth value of the corresponding point, $G_{\sigma_s}$ is the Gaussian weight function of the position difference between point $p$ and point $q$, $G_{\sigma_c}$ is the Gaussian weight function of the pixel difference, $G_{\sigma_d}$ is the Gaussian weight function of the depth difference, and $W_p$ is the normalization term formed by the products of the Gaussian weights, computed with formula (e):

$W_p=\sum_{q\in S}G_{\sigma_s}(\lVert p-q\rVert)\,G_{\sigma_c}(\lvert I_p-I_q\rvert)\,G_{\sigma_d}(\lvert D_p-D_q\rvert)\qquad(e)$

At object edges with large color differences, $\lvert I_p-I_q\rvert$ and $\lvert D_p-D_q\rvert$ are large, the corresponding weights approach 0, and the object edge information is effectively retained; within the same object, where the depth and color differences are insignificant, $\lvert I_p-I_q\rvert$ and $\lvert D_p-D_q\rvert$ approach 0, the weights approach 1, and the trilateral filter approaches a Gaussian filter, achieving a good smoothing effect. Through trilateral filtering, noise is reduced while the object edge information is preserved, further improving the accuracy of the point cloud;

(3) point cloud registration and fusion: first, two groups of point clouds with a co-visibility relationship are registered with the ICP (Iterative Closest Point) algorithm, and the relative pose between the point clouds is estimated from the correspondences of the feature points; then scale registration is performed by comparing the point cloud poses with the poses tracked by SLAM, after which the point clouds and the camera poses can be fused. The camera tracking poses are taken as the reference during fusion, and the local point clouds are fused along the path trajectory to obtain a globally consistent dense point cloud model of the unmanned aerial vehicle scene.

Technical Field

The invention relates to an unmanned aerial vehicle scene dense reconstruction method based on VI-SLAM (visual-inertial simultaneous localization and mapping) and a depth estimation network, and belongs to the field of virtual reality.

Background

Three-dimensional reconstruction refers to the establishment of a mathematical model suitable for computer representation and processing of a three-dimensional object, is the basis for processing, operating and analyzing the properties of the three-dimensional object in a computer environment, and is also a key technology for establishing virtual reality expressing an objective world in a computer. With the increasing demand of three-dimensional reconstruction and the continuous popularization of unmanned aerial vehicle aerial photography, point cloud reconstruction based on unmanned aerial vehicle aerial photography images is a research hotspot.

The traditional dense reconstruction approaches that use a depth camera or are based on SfM require complex hardware and huge computing resources, and cannot meet the demands of lightweight equipment and rapid reconstruction in large-scale unmanned aerial vehicle scene reconstruction. KinectFusion gives an impressive result, taking the depth map measured by the Kinect as input while using the Iterative Closest Point (ICP) algorithm to localize the camera and TSDF fusion to fuse all measurements into a global map. Although the RGB-D camera provides depth measurements with few anomalies, it can only maintain accurate measurements within a distance of four to five meters because it is based on a TOF camera, and therefore cannot work in such large outdoor scenes. For the demand of fast scene reconstruction, the application of deep learning to depth estimation provides researchers with a new idea. CNN-SLAM, proposed by Tateno et al. in 2017, combines a CNN with SLAM; its main process is to track the camera and extract keyframes with a direct-method SLAM, and to predict the depth of each extracted keyframe to obtain a depth image. However, single-image depth estimation methods based on deep learning mainly focus on the direct correspondence between the scene information and the depth information of a single view and estimate depth directly from a single image; the geometric constraints contained in the environment are not fully exploited in the model design, and the model accuracy depends on the quality of the training data set, so the network lacks generalization ability.

The scene dense reconstruction methods based on SfM mainly use the feature matching relationships between images to construct geometric constraints between them, and obtain a point cloud reconstruction that satisfies the geometric constraints through global optimization. These methods need a large amount of computing resources to optimize the map point cloud, neglect the efficiency of the reconstruction process, and take a long time to obtain the point cloud model.

To address these limitations of existing work, the invention provides an unmanned aerial vehicle scene reconstruction method based on VI-SLAM and a depth estimation network, which improves the efficiency of scene reconstruction by comprehensively exploiting the scene information of the images and solves the problem that large scenes cannot be reconstructed quickly.

Disclosure of Invention

The technical problem solved by the invention: an unmanned aerial vehicle scene dense reconstruction method based on VI-SLAM and a depth estimation network is provided to address the inability of the prior art to reconstruct large-scale scenes quickly and densely. Scene depth information is estimated by tracking the camera pose in real time and estimating the depth of single images, so higher operating efficiency can be achieved during reconstruction. Experiments show that the algorithm of the invention significantly improves the efficiency of scene reconstruction, with a video-stream processing speed of up to 1.3 frames per second, while ensuring the global consistency of large-scene reconstruction results.

The technical solution adopted by the invention is as follows: an unmanned aerial vehicle scene dense reconstruction method based on VI-SLAM and a depth estimation network, in which aerial images acquired at medium and low altitudes are fused with IMU information to estimate the spatial camera pose and preliminarily acquire the three-dimensional structure of the scene; the depth information of the scene is then estimated from the monocular images with the depth estimation network, providing a basis for dense scene reconstruction; finally, the obtained local point clouds are fused with global consistency using the obtained depth information and camera poses, achieving fast three-dimensional scene reconstruction.

The method specifically comprises the following steps:

step 1: fixing an inertial navigation device IMU on the unmanned aerial vehicle, and calibrating internal parameters and external parameters of a monocular camera carried by the unmanned aerial vehicle and external parameters of the inertial navigation device IMU;

step 2: acquiring an image sequence of the unmanned aerial vehicle scene using the unmanned aerial vehicle's monocular camera and the bound IMU device, and acquiring IMU information at a fixed frequency at each time point during the flight;

step 3: processing the image sequence and IMU information from step 2 with a vision and inertial navigation fused simultaneous localization and mapping method, namely the VI-SLAM algorithm; the processing is divided into three threads of tracking, local map optimization and loop detection, to finally obtain a camera pose with scale information, namely absolute displacement information;

step 4: inputting the monocular image as a left view into a viewpoint generation network to obtain a right view, and inputting the left view and the right view into a depth estimation network to obtain the image depth information;

step 5: combining the camera pose obtained in step 3 with the image depth information obtained in step 4 to obtain a local point cloud;

step 6: fusing the local point clouds through point cloud optimization and registration to obtain a dense point cloud model of the unmanned aerial vehicle scene.

In the step 3, the implementation process of the VI-SLAM algorithm is as follows:

the VI-SLAM algorithm is divided into three threads, namely a tracking thread belonging to the front end, a local map optimization thread belonging to the back end, and a loop detection thread:

the tracking thread mainly provides initial keyframe values for the optimization thread and maintains a fixed-size window to control the scale of the optimization; the VI-SLAM system is first initialized to obtain the camera's initial pose estimate, scale factor, gravity direction, IMU biases and velocity; the noise covariance propagation of rotation, velocity and position during tracking is then calculated; finally, the camera pose is tracked based on the sliding window: the pose of each new frame is localized, new keyframes are marked, and the fixed-size sliding window is maintained by marginalizing old frames;

the local map optimization thread further optimizes the poses of the keyframe information in the sliding window of the tracking thread; a local map is created using the co-visibility relationships between keyframes, and nonlinear optimization is performed on the residuals generated by observation differences within the local map, yielding more accurate camera pose estimates;

the loop detection thread detects whether a loop is generated during tracking; when a loop is generated, the relative pose between the frames forming the loop is used as a constraint to further correct the poses estimated in local map optimization, ensuring consistency of the pose estimation.

The tracking thread is implemented as:

(1) VI-SLAM system initialization: a loosely coupled approach is adopted, in which visual initialization is performed first to acquire the relative poses between frames, and the IMU initialization values are then solved from the correspondence between the acquired poses and the IMU measurements. For visual initialization, the pose of the first frame is taken as the global reference coordinate, and the camera pose is initially estimated via epipolar geometry to obtain the relative poses between cameras; the gravity direction, scale factor and IMU biases are calculated from the relationship between the IMU measurements and the camera-tracked relative poses, and the velocity is computed last;

(2) covariance propagation: the IMU measurements are affected by noise that follows a Gaussian distribution; by establishing noise models for rotation, velocity and position, the covariance information of the optimized variables is obtained and used as weighting factors during optimization;

(3) pose tracking based on a sliding window: a window of fixed size is maintained, consisting of the most recent adjacent frames and the most recent keyframes; when a new frame arrives, it is first marked as a keyframe or not, and an old frame is marginalized so that the new frame enters the window. If the new frame is a keyframe, keyframe marginalization is used, i.e. the oldest keyframe in the sliding window is marginalized; otherwise, the second newest frame is marginalized. The marginalization uses the Schur complement, which guarantees that the prior information is retained in the optimization problem.
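A minimal sketch of this Schur-complement marginalization, assuming the linearized system H·Δx = b of the sliding window has already been assembled and that m_idx and r_idx index the marginalized and retained states; it only illustrates how the prior information is kept, and is not the patent's actual implementation.

```python
import numpy as np

def marginalize(H, b, m_idx, r_idx):
    """Marginalize the states m_idx out of the linearized system H @ dx = b
    via the Schur complement, keeping their information as a prior on the
    retained states r_idx (illustrative sketch only)."""
    H_mm = H[np.ix_(m_idx, m_idx)]
    H_mr = H[np.ix_(m_idx, r_idx)]
    H_rm = H[np.ix_(r_idx, m_idx)]
    H_rr = H[np.ix_(r_idx, r_idx)]
    b_m, b_r = b[m_idx], b[r_idx]

    H_mm_inv = np.linalg.inv(H_mm + 1e-9 * np.eye(len(m_idx)))  # regularized inverse
    H_prior = H_rr - H_rm @ H_mm_inv @ H_mr                     # prior information matrix
    b_prior = b_r - H_rm @ H_mm_inv @ b_m                       # prior information vector
    return H_prior, b_prior
```

The returned (H_prior, b_prior) pair plays the role of the prior error term referred to in the local map optimization below.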

The local map optimization is realized by:

(1) for the sliding window maintained in the tracking thread, the frame that has newly arrived in the window is optimized together with its preceding adjacent frame by minimizing the reprojection error between the two frames and the difference between the IMU measurement and the visual measurement; because only the pose optimization between two frames is involved, the computation is fast;

(2) if the frame newly entering the window is a keyframe, all keyframes having a co-visibility relationship with the current keyframe are taken as its local map; during optimization, all camera poses in the local map are optimized using the constraints generated by all co-visibility relationships in the local map. The nonlinear optimization minimizes the reprojection errors between co-visible frames, the differences between IMU measurements and visual measurements, and the prior error, so that all keyframes participating in the optimization obtain more accurate camera pose estimates.
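The residual terms entering this nonlinear optimization can be illustrated with the toy sketch below; the 4x4 world-to-camera pose, 3x3 intrinsic matrix and per-residual information matrices are assumed conventions, and the helper names are hypothetical rather than taken from the patent.

```python
import numpy as np

def reprojection_residual(T_cw, point_w, uv, K):
    """Visual residual: observed pixel minus the pinhole projection of a
    world point through keyframe pose T_cw (world-to-camera) and intrinsics K."""
    p_c = T_cw[:3, :3] @ point_w + T_cw[:3, 3]      # point in the camera frame
    u = K[0, 0] * p_c[0] / p_c[2] + K[0, 2]
    v = K[1, 1] * p_c[1] / p_c[2] + K[1, 2]
    return uv - np.array([u, v])

def local_map_cost(residuals, infos):
    """Objective minimized by the local map optimization: a weighted sum of
    squared residuals (reprojection, IMU-vs-visual and prior terms alike),
    each contributing r^T * Info * r."""
    return sum(r @ info @ r for r, info in zip(residuals, infos))
```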

The loop detection is specifically realized as follows:

(1) for a new keyframe, feature matching is accelerated with the bag-of-words model. The minimum similarity score between the new frame and the keyframes in its local map is found and used as a threshold; then, among all previous keyframes (excluding the keyframes of the local map), those whose similarity score with the new frame exceeds the threshold are found and grouped, each keyframe forming a group with the 10 keyframes closest to it; the total score of each group and the highest-scoring keyframe in each group are computed, 75% of the highest group score is taken as a new threshold, and the highest-scoring frame of every group whose total score exceeds this new threshold is selected as a loop candidate frame; the neighboring frames of the new frame are then matched against the neighboring frames of the candidate frames, and if the check that they share the same neighboring keyframes passes, loop detection is considered successful;

(2) once loop detection succeeds, the camera has returned to a previously visited place, and the accumulated error of the tracking process is eliminated using the pose relationship between the loop frame and the new frame; the poses of the new frame and the loop frame are fixed as global constraints, global nonlinear optimization is performed on this basis by minimizing the reprojection errors between co-visible frames and the errors between IMU measurements and visual measurements, and the poses of the intermediate keyframes are adjusted so that the camera poses obtained from local map optimization become more accurate. Because IMU information is fused during tracking, the scale of the camera pose is observable, and the camera pose with scale information, namely absolute displacement information, is finally obtained; the new constraint between the frame for which loop detection succeeded and the loop frame is added to the global pose optimization, the optimization problem is reconstructed, and a pose estimate with higher accuracy and correct scale is finally obtained by minimizing the reprojection errors between co-visible keyframes and the differences between IMU measurements and visual measurements.
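The loop-candidate selection of step (1) can be sketched as follows; bow_score(a, b), the covisible map and the temporally ordered keyframe list are assumed interfaces, and index proximity stands in for the covisibility-based grouping used in practice.

```python
def select_loop_candidates(new_kf, keyframes, covisible, bow_score, k_neighbors=10):
    """Pick loop candidates: the minimum BoW similarity to the new keyframe's
    local map sets a threshold, hits are grouped with their nearest keyframes,
    and only the best frame of groups scoring above 75% of the top group
    score survives (sketch of the scheme described above)."""
    local_map = covisible[new_kf]
    min_score = min(bow_score(new_kf, kf) for kf in local_map)

    hits = [kf for kf in keyframes
            if kf is not new_kf and kf not in local_map
            and bow_score(new_kf, kf) > min_score]

    groups = []
    for kf in hits:
        idx = keyframes.index(kf)
        group = keyframes[max(0, idx - k_neighbors):idx + 1]   # kf plus its closest keyframes
        scores = [bow_score(new_kf, g) for g in group]
        groups.append((sum(scores), group[scores.index(max(scores))]))

    if not groups:
        return []
    threshold = 0.75 * max(total for total, _ in groups)
    return [best for total, best in groups if total > threshold]
```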

In step 4, the process of obtaining the image depth information is as follows:

(1) the depth estimation task is divided into two sub-problems: first, the monocular view is used as the left view and a right view is generated by a viewpoint generation network; second, the left and right views are jointly input into a depth estimation network to obtain the depth estimation result. The first sub-problem adopts the Deep3D network structure, which is divided into a main prediction network and a selection layer: the main prediction network is based on VGG16, a branch is attached after each pooling layer, a deconvolution layer upsamples the input feature map, and the feature maps obtained from the branches are superimposed to obtain a feature representation consistent with the size of the input image; the selection layer then applies a further convolution to these features and performs a softmax transform over the channels at each spatial position to obtain probability distribution maps under different disparities, each pixel of the image is weighted according to the disparity probabilities to compute its offset, and the output weighted result is the finally generated right view. The second sub-problem adopts the FlowNetCorr network structure, which is likewise divided into two parts: a contraction part composed of convolution layers that extracts and fuses the features of the two images, and an expansion part that recovers the depth prediction to high resolution; the contraction part first performs three convolution operations on the left and right views separately to extract the features of the two views and then matches and fuses the feature information, while the expansion part consists of deconvolution and convolution layers, the features acquired at each level are upsampled during deconvolution and concatenated with the corresponding features of the contraction part, and after four deconvolutions a depth estimation image is obtained;

(2) model training: the training process is divided into two stages; in the first stage, the viewpoint generation network and the depth estimation network are trained separately, and in the second stage the two parts are combined into a whole and trained end to end. For the viewpoint generation network, VGG16 is used as the reference network and its parameters trained on ImageNet are used to initialize the network parameters, while the remaining parameters of the network are initialized with the training results of Deep3D; on this basis, 22,600 pairs of binocular images selected from the KITTI data set are used as a training set to further train the viewpoint generation network, and the model is iterated 200,000 times or more at a learning rate of 0.0002. For the binocular depth estimation network, the virtual data set FlyingThings3D is used for training. In the second stage, the two trained networks are combined for end-to-end training; 200 pairs of frames with disparity are selected from the KITTI data set for the fine-tuning and testing process, while ensuring that the fine-tuning set contains none of the frames selected for the test set. In addition, because the resolution of the KITTI data is higher than that of the viewpoint-generated image, the right image obtained from viewpoint generation needs to be upsampled so that the binocular resolutions remain consistent;

(3) depth estimation of the monocular image is performed with the network model obtained after training: the monocular image is used as the left view and input into the network, a right view is obtained through viewpoint generation, and depth estimation is then carried out jointly on the left and right views to obtain the depth information of the image; schematic diagrams of the viewpoint generation and depth estimation results are shown in FIG. 3.

In the step 5, the local point cloud generation process is as follows:

the local point cloud is composed of a group of discrete points, each corresponding to a pixel of the two-dimensional image and carrying three-dimensional coordinates x, y, z and color information R, G, B; the correspondence between the two-dimensional image and the three-dimensional space point is computed with the pinhole camera model, formula (a):

$d\begin{bmatrix}u\\ v\\ 1\end{bmatrix}=\begin{bmatrix}f_x & 0 & c_x & 0\\ 0 & f_y & c_y & 0\\ 0 & 0 & 1 & 0\end{bmatrix}\begin{bmatrix}X\\ Y\\ Z\\ 1\end{bmatrix}\qquad(a)$

where the model parameters of the pinhole camera are the focal length $f_x$ in the X-axis direction, the focal length $f_y$ in the Y-axis direction, the optical center position $c_x$ in the X-axis direction and the optical center position $c_y$ in the Y-axis direction; $(u, v)$ are the pixel coordinates of the image, $d$ is the depth value of the pixel provided by the image depth information, and the right-hand side of the formula is the homogeneous expression of the three-dimensional coordinates of the space point; the local point cloud is obtained by solving the three-dimensional coordinates of every pixel of a single image;

the coordinates of the local point clouds correspond to the positions under the camera coordinate system, and the estimated camera pose is utilized to transform the space coordinates of each point in the local point clouds to be under the unified global coordinates so as to facilitate the subsequent point cloud fusion.
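The back-projection of formula (a) and the transform into the unified global frame can be sketched as below, assuming an (H, W) depth map, an (H, W, 3) color image, the intrinsic matrix K and a 4x4 camera-to-world pose T_wc from the VI-SLAM tracking; it is an illustrative sketch rather than the patent's implementation.

```python
import numpy as np

def depth_to_global_points(depth, rgb, K, T_wc):
    """Back-project every pixel with the pinhole model and move the resulting
    colored local point cloud into the global coordinate system (sketch)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    v, u = np.indices(depth.shape)                 # pixel row (v) and column (u)
    z = depth
    x = (u - cx) * z / fx                          # X = (u - c_x) * d / f_x
    y = (v - cy) * z / fy                          # Y = (v - c_y) * d / f_y
    pts_cam = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    valid = pts_cam[:, 2] > 0                      # keep pixels with a valid depth
    pts_cam = pts_cam[valid]
    colors = rgb.reshape(-1, 3)[valid]

    pts_world = (T_wc[:3, :3] @ pts_cam.T).T + T_wc[:3, 3]   # into global coordinates
    return pts_world, colors
```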

The point cloud optimization, registration and fusion process in the step 6 is as follows:

(1) outlier removal in point cloud optimization: the point cloud recovered from the depth information contains a large number of inaccurately estimated depth points, the most prominent being outliers, i.e. pixels whose distances to the neighboring space points clearly deviate from the point group; the outliers in the point cloud are removed using several constraints, namely the LOF operator, a disparity threshold and the viewpoint co-visibility relationship. To describe the distribution of the point cloud around a space point, the local reachable density $lrd_k(p)$ is defined as the inverse of the average reachability distance from the points in the neighborhood of point $p$ to the target point, $N_k(p)$ is the k-distance neighborhood of point $p$, and $\text{reach-distance}_k(p,o)$ is the k-th reachability distance from point $o$ to point $p$; the local reachable density of point $p$ is computed with formula (b):

$lrd_k(p)=\left(\frac{\sum_{o\in N_k(p)}\text{reach-distance}_k(p,o)}{|N_k(p)|}\right)^{-1}\qquad(b)$

From the obtained reachable densities, the LOF factor of the space point is further computed with formula (c):

$LOF_k(p)=\frac{1}{|N_k(p)|}\sum_{o\in N_k(p)}\frac{lrd_k(o)}{lrd_k(p)}\qquad(c)$

When $LOF_k(p)$ is greater than 1, the current point is distributed more sparsely than the other points and is considered a candidate outlier. During outlier removal, on the one hand, spatial points in the 0-30 m range are extracted according to the disparity threshold, and on the other hand, only the region where depth estimation is accurate, delimited by the co-visible area, is processed for the scene; on this basis the candidate outliers are removed, providing accurate information for point cloud fusion;

(2) trilateral filtering in point cloud optimization: the position, pixel and depth differences are computed simultaneously as filtering weights, and the filter is computed with formula (d):

$\tilde{D}_p=\frac{1}{W_p}\sum_{q\in S}G_{\sigma_s}(\lVert p-q\rVert)\,G_{\sigma_c}(\lvert I_p-I_q\rvert)\,G_{\sigma_d}(\lvert D_p-D_q\rvert)\,D_q\qquad(d)$

where $S$ denotes the range of the filter kernel, $I$ denotes the pixel value of the corresponding point, $D$ denotes the depth value of the corresponding point, $G_{\sigma_s}$ is the Gaussian weight function of the position difference between point $p$ and point $q$, $G_{\sigma_c}$ is the Gaussian weight function of the pixel difference, $G_{\sigma_d}$ is the Gaussian weight function of the depth difference, and $W_p$ is the normalization term formed by the products of the Gaussian weights, computed with formula (e):

$W_p=\sum_{q\in S}G_{\sigma_s}(\lVert p-q\rVert)\,G_{\sigma_c}(\lvert I_p-I_q\rvert)\,G_{\sigma_d}(\lvert D_p-D_q\rvert)\qquad(e)$

At object edges with large color differences, $\lvert I_p-I_q\rvert$ and $\lvert D_p-D_q\rvert$ are large, the corresponding weights approach 0, and the object edge information is effectively retained; within the same object, where the depth and color differences are insignificant, $\lvert I_p-I_q\rvert$ and $\lvert D_p-D_q\rvert$ approach 0, the weights approach 1, and the trilateral filter approaches a Gaussian filter, achieving a good smoothing effect. Through trilateral filtering, noise is reduced while the object edge information is preserved, further improving the accuracy of the point cloud;

(3) point cloud registration and fusion: first, two groups of point clouds with a co-visibility relationship are registered with the ICP (Iterative Closest Point) algorithm, and the relative pose between the point clouds is estimated from the correspondences of the feature points; then scale registration is performed by comparing the point cloud poses with the poses tracked by SLAM, after which the point clouds and the camera poses can be fused. The camera tracking poses are taken as the reference during fusion, and the local point clouds are fused along the path trajectory to obtain a globally consistent dense point cloud model of the unmanned aerial vehicle scene.

According to the method, visual information and IMU information are fused during tracking to obtain a more accurate camera pose, and the image depth information is estimated with a deep learning method, realizing the reconstruction of the three-dimensional scene and obtaining a point cloud model of the unmanned aerial vehicle scene.

Compared with the prior art, the invention has the advantages that:

(1) The invention provides a new monocular depth estimation method: a viewpoint generation network first estimates the disparity of each pixel under the corresponding view, describing it as a probability distribution to ensure the accuracy of view generation, and then constructs the image under the new viewpoint; the corresponding depth information is then generated by the binocular system formed by the two images. Early methods that perform monocular depth estimation purely with deep learning have poor accuracy, mainly because the network is trained directly on monocular images and their depth maps and the prediction lacks the constraint of scene geometry; by contrast, deep-learning-based binocular depth estimation outperforms monocular depth estimation in overall performance. The method fully considers the geometric constraints of the scene during depth estimation and is far more accurate than traditional single-image depth estimation. Meanwhile, because the two tasks are relatively independent, the requirements on the data set are relatively low during training and a large amount of dense real-scene depth maps is not required; the network generalizes better and can still be trained with virtual scenes while maintaining high accuracy. Experiments show that the depth estimation accuracy on the KITTI data set is higher than that of traditional methods, with an ARD residual of 0.09 within the 0-50 m range. For the problem that large-scale scenes cannot be reconstructed quickly and densely, the VI-SLAM technique tracks the camera pose in real time, and scene depth information is estimated through depth estimation of single images, so faster operating efficiency can be achieved for dense reconstruction. Experiments show that the algorithm of the invention significantly improves the efficiency of scene reconstruction, with a video-stream processing speed of up to 1.3 frames per second, while ensuring the global consistency of large-scene reconstruction results.

(2) The invention recovers local point clouds from the input camera poses and image depth information, and proposes algorithms such as LOF-operator-based point cloud outlier processing and point cloud trilateral filtering for the problems of excessive outliers and unclear object edges in the local point clouds. Traditional outlier rejection based on a fixed threshold lacks an overall view of the point cloud distribution around a space point, relies entirely on prior information, and cannot adapt well to most scenes. The LOF algorithm considers both the point distances and the surrounding point cloud distribution: it adaptively computes the local distribution when evaluating a point and judges from it whether the point is an outlier, so outliers are removed more effectively than with a fixed-threshold strategy in point clouds of uneven density. At object edges, however, depth estimation still produces overly continuous depth that cannot distinguish different objects; after trilateral filtering, regions with large color and depth differences are effectively separated while regions with small differences are well smoothed, so the point cloud of the same object becomes more compact and the adhesion phenomenon is resolved. Experiments show that the point cloud reconstruction ratio of the method can reach twenty percent, much higher than that of the SLAM approach in the same scene; meanwhile, scene information within 30 m of a single frame is preserved, no faults appear in the point cloud during global reconstruction, and a good visualization effect is achieved.

Drawings

FIG. 1 is a schematic flow chart of the method of the present invention;

FIG. 2 is a schematic diagram of a monocular image depth estimation process in the present invention;

FIG. 3 is a diagram illustrating the monocular image depth estimation result in the present invention;

FIG. 4 is a schematic diagram of the optimization results of the point cloud of the present invention;

FIG. 5 is a schematic illustration of the scale registration of the present invention; wherein (a) is the case before scale registration and (b) is the case after scale registration;

FIG. 6 is a schematic diagram of the three-dimensional scene reconstruction result of the present invention, wherein (a) is the VI-SLAM positioning result, (b) is a top view of the dense scene reconstruction result, and (c) is a full view of the dense scene reconstruction result.

Detailed Description

The invention is described in further detail below with reference to the drawings and embodiments:

the basic workflow of the unmanned aerial vehicle scene reconstruction method is to photograph the three-dimensional environment with an unmanned aerial vehicle equipped with an IMU, transmit the acquired information to the back end for processing, and output a dense reconstruction point cloud of the unmanned aerial vehicle scene.

As shown in fig. 1, the unmanned aerial vehicle scene three-dimensional reconstruction method based on VI-SLAM and a depth estimation network of the present invention includes the following steps:

(1) fixing an inertial navigation device IMU on the unmanned aerial vehicle, and calibrating internal parameters, external parameters and IMU external parameters of a monocular camera of the unmanned aerial vehicle;

(2) acquiring an image sequence and IMU information of an unmanned aerial vehicle scene by using an unmanned aerial vehicle monocular camera and an IMU;

(3) processing the image and IMU information acquired in the step (2) by using a VI-SLAM to obtain a camera pose with scale information;

(4) inputting monocular image information as an original view into a viewpoint generation network to obtain a right view, and inputting the original view and the right view into a depth estimation network to obtain depth information of the image;

(5) combining the camera pose obtained in step (3) with the depth map obtained in step (4) to obtain a local point cloud;

(6) fusing the SLAM tracking trajectory with the local point clouds through point cloud optimization and registration to obtain a dense point cloud model of the unmanned aerial vehicle scene.

The innovations made in each step are explained below with reference to the related drawings, as follows:

(1) monocular image depth estimation

As shown in fig. 2, the present invention divides single-image depth estimation into two subtasks: viewpoint generation and binocular depth estimation. The monocular depth estimation problem is solved through two relatively independent tasks, and scene geometric information is fully considered in both viewpoint generation and binocular depth estimation; in addition, a virtual data set can be used during model training, which effectively alleviates the shortage of real-scene training data. After the whole model is trained, the network estimation results are fine-tuned in an end-to-end training mode. The original monocular image is used as the left view and input into the Deep3D viewpoint generation network, probability distribution maps under different disparities are generated, and the result of weighting the image by the disparity probabilities is taken as the right view. Compared with the conventional calculation of disparity-corresponding pixels, Deep3D is differentiable with respect to the quantities to be optimized, as shown in equation (1):

$\hat{I}^{R}=\sum_{d}D_{d}\,I^{L}_{d}\qquad(1)$

where $I^{L}_{d}$ is the corresponding pixel value of the left view at disparity $d$ and $D_{d}$ is the weight corresponding to that disparity; each new pixel is computed as this weighted sum.
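A minimal sketch of the selection-layer computation in equation (1), assuming a left view of shape (H, W, 3) and a disparity-probability volume of shape (D, H, W) that sums to 1 over the disparity axis; the shift direction and border handling are illustrative choices, not Deep3D's exact implementation.

```python
import numpy as np

def render_right_view(left, disp_prob):
    """Right-view synthesis as a disparity-probability-weighted sum of
    horizontally shifted copies of the left view (differentiable in the
    probabilities, which is the point of equation (1))."""
    D, H, W = disp_prob.shape
    right = np.zeros_like(left, dtype=np.float64)
    for d in range(D):
        shifted = np.roll(left, shift=-d, axis=1)   # assumed shift convention for disparity d
        if d > 0:
            shifted[:, -d:] = left[:, -d:]          # undo the wrap-around at the border
        right += disp_prob[d][..., None] * shifted
    return right
```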

The original view and the generated right view are input into the FlowNetCorr depth estimation network to predict the image depth information. After three separate convolution operations, the acquired feature information is input into the correlation layer (corr) for information fusion; the fused features are further processed by convolution. During deconvolution, the feature information acquired at each level of the convolution stage is fused to finally obtain the target depth image.
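The correlation layer can be illustrated with the simplified sketch below; real FlowNetCorr correlates feature patches over a two-dimensional displacement neighborhood with striding, whereas this assumed one-dimensional horizontal version only conveys the idea of comparing displaced feature vectors.

```python
import numpy as np

def correlation_layer(feat_a, feat_b, max_disp=4):
    """For each location, the normalized dot product between a feature vector
    of the first map and horizontally displaced vectors of the second map;
    output shape is (2*max_disp + 1, H, W)."""
    C, H, W = feat_a.shape
    out = np.zeros((2 * max_disp + 1, H, W))
    for i, d in enumerate(range(-max_disp, max_disp + 1)):
        shifted = np.roll(feat_b, shift=d, axis=2)
        out[i] = (feat_a * shifted).sum(axis=0) / C
    return out
```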

(2) Point cloud processing and scale alignment

The network estimation results are not ideal at object edges, mainly in that the edge estimates are too smooth to distinguish edge information; meanwhile, the depth information contains a large number of abnormal values. For the generated local point cloud, the LOF (Local Outlier Factor) operator is used to reject outliers in the depth information, and the division of point cloud edge regions is enhanced by a filtering scheme that fuses multiple kinds of scene information. The LOF algorithm judges whether a point is an outlier from the distance relationships between space points, and has an obvious rejection effect on outliers in dense point clouds.

The local reachable density $lrd_k(p)$ is defined as the inverse of the average reachability distance from the points in the neighborhood of point $p$ to the target point. The LOF factor of a space point is computed from the reachable density with formula (2):

$LOF_k(p)=\frac{1}{|N_k(p)|}\sum_{o\in N_k(p)}\frac{lrd_k(o)}{lrd_k(p)}\qquad(2)$

When $LOF_k(p)$ is greater than 1, the current point is distributed more sparsely than the other points and may be considered a candidate outlier.

In the process of outlier rejection, spatial points with overlarge depth values are rejected by using parallax threshold value constraint. The scene depth estimation process depends on parallax information in an image, under the condition that the focal length of a camera is fixed, the larger the depth value is, the less obvious the parallax in the image is, and the more serious the depth error is, so that an extraction interval is set for the point cloud depth during local point cloud reconstruction, and according to an experimental result, space points in the range of 0-30m are mainly used in the reconstruction process. Experiments prove that the method can ensure the accuracy of the point cloud and the completeness of scene reconstruction.
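A combined sketch of the LOF rejection of formula (2) and the 0-30 m depth interval described above; the neighborhood size k, the LOF threshold and the scikit-learn neighbor search are illustrative assumptions rather than the patent's parameters.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lof_depth_filter(points, k=20, lof_thresh=1.5, max_depth=30.0):
    """Reject outliers from an (N, 3) camera-frame point cloud (Z is depth)
    using the Local Outlier Factor plus a depth-interval constraint (sketch)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dist, idx = nn.kneighbors(points)       # column 0 is the point itself
    dist, idx = dist[:, 1:], idx[:, 1:]
    k_dist = dist[:, -1]                    # k-distance of each point

    reach = np.maximum(dist, k_dist[idx])   # reach-distance_k(p, o)
    lrd = 1.0 / (reach.mean(axis=1) + 1e-12)        # local reachable density
    lof = lrd[idx].mean(axis=1) / lrd               # LOF_k(p)

    keep = (lof < lof_thresh) & (points[:, 2] > 0.0) & (points[:, 2] < max_depth)
    return points[keep]
```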

To ensure that the boundaries of different objects remain distinguishable, color difference information and depth difference information are blended into traditional filtering to form a multi-edge (trilateral) filtering scheme, computed as follows:

$\tilde{D}_p=\frac{1}{W_p}\sum_{q\in S}G_{\sigma_s}(\lVert p-q\rVert)\,G_{\sigma_c}(\lvert I_p-I_q\rvert)\,G_{\sigma_d}(\lvert D_p-D_q\rvert)\,D_q,\qquad W_p=\sum_{q\in S}G_{\sigma_s}(\lVert p-q\rVert)\,G_{\sigma_c}(\lvert I_p-I_q\rvert)\,G_{\sigma_d}(\lvert D_p-D_q\rvert)$

where $S$ is the filter kernel range, $I$ the pixel value, $D$ the depth value, and $G_{\sigma_s}$, $G_{\sigma_c}$, $G_{\sigma_d}$ the Gaussian weight functions of the position, pixel and depth differences, as in formulas (d) and (e) above.
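A direct, unoptimized sketch of this trilateral filtering follows; the window radius and the three sigma values are illustrative assumptions, and a grayscale conversion stands in for whatever color distance the implementation actually uses.

```python
import numpy as np

def trilateral_filter(depth, rgb, radius=3, sigma_s=2.0, sigma_c=10.0, sigma_d=0.5):
    """Smooth the depth map with weights that couple spatial distance, color
    difference and depth difference, so strong color/depth edges are kept."""
    H, W = depth.shape
    gray = rgb.mean(axis=2) if rgb.ndim == 3 else rgb
    out = depth.copy()
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            acc, wsum = 0.0, 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    qy, qx = y + dy, x + dx
                    ws = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))                   # position
                    wc = np.exp(-((gray[y, x] - gray[qy, qx]) ** 2) / (2 * sigma_c ** 2))    # pixel
                    wd = np.exp(-((depth[y, x] - depth[qy, qx]) ** 2) / (2 * sigma_d ** 2))  # depth
                    w = ws * wc * wd
                    acc += w * depth[qy, qx]
                    wsum += w
            out[y, x] = acc / wsum          # normalization by W_p
    return out
```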

as shown in fig. 4, the single image test result of the point cloud optimization method on the KITTI data set is shown, wherein the region framed before the point cloud optimization represents the region in which the point cloud has problems in the reconstruction process, and the region framed after the point cloud optimization is the improvement of the point cloud optimization on the point cloud result. Before point cloud optimization, point cloud depth information comes from a depth estimation result, point cloud adhesion phenomena exist in object edge areas, point clouds among different objects are not distinguished obviously, point clouds of the same object can be seen to be more compact after diameter optimization, and the adhesion phenomena are solved. According to experimental results, the point cloud reconstruction proportion of the method can reach twenty percent, and is far higher than that of an SLAM mode in the same scene. Meanwhile, scene information within 30m of a single-frame image scene can be guaranteed, point cloud can be guaranteed not to have a fault phenomenon during global reconstruction, and a good visualization effect is achieved.

In the point cloud registration process, the three-dimensional space points corresponding to the SIFT feature points are registered with the Iterative Closest Point (ICP) algorithm. The point cloud pose acquired by registration is compared with the pose tracked by the SLAM process, so that the point cloud and the camera pose can be fused. Because the point cloud consists of space points recovered from depth information acquired by deep learning, its scene accuracy is slightly lower than that of the tracked trajectory, so the camera tracking pose is taken as the reference when computing the correspondence. Considering that scale does not affect rotation, the rotation matrix is ignored during pose comparison and mainly the difference between displacement vectors is considered. Meanwhile, the target scene of the reconstruction algorithm is an urban street with a dominant direction of motion, so the correspondence along the main motion direction is mainly used. Because the geometric information of the scene is used in building the viewpoint generation and binocular depth estimation networks, the depth estimation result implies a specific binocular baseline distance, and the algorithm ensures that the depth estimation result is globally consistent in scale; the SLAM process likewise ensures good scale consistency through loop detection and global pose optimization. When aligning the point cloud with the pose, the invention mainly considers the scale ratio along the main motion direction. FIG. 5 shows the scale registration comparison: (a) is the case before scale registration, where the point cloud is generated by directly fusing depth and pose information and the reconstructions of the same object are not aligned, giving a poor result; (b) is the case after scale registration, where the point clouds of the same object are fused much better. The experimental results show that the scene information registration strategy used by the invention is effective and performs well in scene information alignment.
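The scale-registration step of comparing displacements along the main motion direction can be sketched as follows, assuming matched (N, 3) arrays of SLAM camera positions and registered point-cloud positions; taking the least-squares ratio along the first principal direction is one plausible reading of the description, not the patent's exact procedure.

```python
import numpy as np

def scale_from_main_direction(slam_positions, cloud_positions):
    """Estimate the scale correction for the local point clouds as the ratio
    of displacement along the dominant SLAM motion direction to the
    corresponding point-cloud displacement (illustrative sketch)."""
    slam_disp = slam_positions - slam_positions[0]
    cloud_disp = cloud_positions - cloud_positions[0]

    # dominant motion direction of the SLAM trajectory (first principal component)
    _, _, vt = np.linalg.svd(slam_disp - slam_disp.mean(axis=0))
    main_dir = vt[0]

    s_proj = slam_disp @ main_dir
    c_proj = cloud_disp @ main_dir
    return float(np.dot(s_proj, c_proj) / (np.dot(c_proj, c_proj) + 1e-12))
```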

The street scene reconstruction effect is shown in fig. 6. For the KITTI 06 reconstruction result, the data set contains 1236 frames of images, 402 keyframes are extracted during tracking, and the total path length is 3.5 km; the motion trajectory forms a loop. In the figure, (a) is the VI-SLAM positioning result, (b) is a top view of the dense scene reconstruction result, and (c) is a full view of the reconstruction result. The dense point cloud contains 48.52 million spatial points, and the scene reconstruction takes about 18 minutes. The reconstruction results show that fast scene reconstruction can recover complete street scene information, and the consistency of the point cloud stitching is maintained for trees, cars and other objects on both sides of the street.

Parts of the invention not described in detail are well known to those skilled in the art.

Finally, it should be noted that the above only describes a preferred embodiment of the present invention, and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present invention, and these improvements and modifications should also be construed as the protection scope of the present invention.
