Monocular depth estimation method based on deep learning

Document No.: 1578317    Publication date: 2020-01-31

Note: This technology, "Monocular depth estimation method based on deep learning" (基于深度学习的单目深度估计方法), was designed and created by Lin Lixiong, Huang Guohui, Wang Qing, He Bingwei, Zhang Liwei and Chen Yanjie on 2019-10-10. Its main content is as follows. The invention proposes a monocular depth estimation method based on deep learning, built on an unsupervised convolutional neural network structure for monocular depth estimation comprising an encoder, a multi-scale feature fusion module, a gated adaptive decoder and a refinement unit. The method comprises the following steps. Step S1: preprocess the data set. Step S2: construct the loss function of the convolutional neural network, input the training-set images, calculate the loss value of the loss function with the back-propagation algorithm, and perform parameter learning by iteratively reducing the error so that the predicted values approach the true values, obtaining the optimal weight model of the convolutional neural network. Step S3: load the weight model trained in step S2 and input the test set into the unsupervised convolutional neural network for monocular depth estimation to obtain a depth prediction image. The method solves the problems of the large amount of computation during offline training and the poor recovery of detail in depth reconstruction.

1. A monocular depth estimation method based on deep learning, characterized in that it is based on an unsupervised convolutional neural network structure for monocular depth estimation comprising an encoder, a multi-scale feature fusion module, a gated adaptive decoder and a refinement unit;

the method comprises the following steps:

step S1: preprocessing a data set to generate a training set and a test set of a monocular original image and a real depth image corresponding to the monocular original image, and performing data enhancement on the monocular original image;

step S2: constructing a loss function of the convolutional neural network, inputting a training set image, calculating a loss value of the loss function by using a back propagation algorithm, and performing parameter learning by reducing errors through repeated iteration to enable a predicted value to approach a true value so as to obtain an optimal weight model of the convolutional neural network;

step S3: and loading the weight model trained in the step S2, and inputting the test set into the unsupervised convolutional neural network for monocular depth estimation to obtain a depth prediction image.

2. The method of claim 1, wherein the deep learning-based monocular depth estimation method comprises:

the encoder adopts a ResNet-50 network structure with five layers; each layer sequentially performs convolution, normalization, activation and pooling operations, each layer downsamples the input image once, and the ReLU activation function is used

ReLU(x) = max(0, x)

The multi-scale feature fusion module converts the low-resolution images from the encoder into high-resolution images through sub-pixel convolution, which serve as an input of the refinement unit: sub-pixel convolution is first applied to the outputs of the second to fifth layers of the encoder, with magnification factors of 2, 4, 8 and 16 respectively; the four resulting feature maps are then fused and passed through convolution, normalization and activation, using the ReLU activation function

ReLU(x) = max(0, x)

The gated adaptive decoder has five layers; each layer upsamples the image in turn using sub-pixel convolution with an upsampling factor of 2, and the third, fourth and fifth layers adopt gating units to screen the image features;

the thinning unit fuses outputs from the -controlled adaptive decoder and the multi-scale feature fusion module, splices the outputs according to dimension 1, performs convolution operations of 5 convolution kernels, 2 step lengths and 2 filling values twice, performs reduction and activation on the image after each convolution, and finally sets the number of channels of the output image to 1 by using the convolution to obtain the depth prediction image.

3. The method of claim 2, wherein the sub-pixel convolution specifically comprises the following steps: let the resolution of the input image be H × W × C, where H, W, C denote the height, width and number of channels of the image respectively; for a network composed of L layers, L−1 convolution operations are first performed to generate C·r² feature maps of resolution H × W; a high-resolution image of resolution rH × rW × C is then generated through a pixel rearrangement (pixel shuffle) operation.

4. The method of claim 2, wherein the gated adaptive decoder comprises gating units; a gating unit screens the output features from the encoder and the upper-layer decoder; the convolution kernel size of the first layer of the gating unit is 3 with a stride of 1, and the LeakyReLU activation function is used

LeakyReLU(x) = x for x ≥ 0, and λx for x < 0, where λ is a small positive slope.

5. The method for monocular depth estimation based on deep learning of claim 1, wherein the step S1 specifically comprises the steps of:

step S11: classifying the original data set to generate a training set and a testing set and label files of the training set and the testing set, wherein the training set and the testing set both comprise original images and corresponding real depth images, and the label files comprise serial numbers and file directories of monocular original images and real depth images;

step S12: readjusting the image size of the training set;

step S13: randomly flipping the training set images horizontally;

step S14: carrying out random angle rotation on the training set images;

step S15: respectively adjusting monocular original images and real depth images in the training set to different sizes;

step S16: performing principal component analysis on monocular original images in the training set;

step S17: carrying out image brightness, contrast and saturation transformation on monocular original images in the training set;

step S18: normalizing the monocular original images in the training set, wherein the normalization parameters are the mean and the standard deviation.

6. The method for monocular depth estimation based on deep learning of claim 1, wherein in step S2 the loss function of the convolutional neural network is constructed by taking a monocular original image and its corresponding real depth image as the input of the convolutional neural network, the monocular original image being used to generate a depth prediction image containing depth prediction values and the real depth image being used to calculate the loss function; finally, the depth prediction values and the real image depth values are together used as the input of the loss function.

7. The method of claim 6, wherein the loss function consists of three loss terms, namely: L = L_d + L_grad + L_normal, wherein:

L_d is the depth reconstruction error, which computes the difference between the depth prediction value and the true depth, namely:

L_d = (1/N) Σ_{p∈Ω} ln(e_p + α)

where e_p denotes the L1 norm between the depth prediction value and the real depth value at pixel p, N is the total number of pixels, α is a parameter value and Ω is the region to which the image pixels belong;

L_grad is the image gradient loss function, i.e. the L1 norm of the image gradient g:

L_grad = (1/N) Σ_{p∈Ω} ( ln(|g_x(e_p)| + α) + ln(|g_y(e_p)| + α) )

L_normal is the image surface normal loss function, namely:

L_normal = (1/N) Σ_{p∈Ω} ( 1 − ⟨n_p^pred, n_p^gt⟩ / ( √⟨n_p^pred, n_p^pred⟩ · √⟨n_p^gt, n_p^gt⟩ ) )

wherein the intermediate parameters n_p^pred and n_p^gt are defined as n_p ≡ [−g_x(d_p), −g_y(d_p), 1]ᵀ, computed from the predicted depth and the real depth respectively.

8. The method for monocular depth estimation based on deep learning according to claim 1, wherein in step S3 the depth prediction image is compared with the real depth image, the error and the accuracy are calculated, and the weight model is thereby evaluated.

9. The method of claim 8, wherein the error evaluation indices for evaluating the weight model comprise:

root Mean Square Error (RMSE):

RMSE = √( (1/n) Σ_p (d_p − d̂_p)² )

mean relative error (REL):

REL = (1/n) Σ_p ( |d_p − d̂_p| / d_p )

mean log₁₀ error (Log10):

Log10 = (1/n) Σ_p |log₁₀(d_p) − log₁₀(d̂_p)|

threshold accuracy:

δ = max(d̂_p/d_p, d_p/d̂_p) < thr, i.e. the percentage of pixels whose ratio δ is below a given threshold thr (commonly 1.25, 1.25² and 1.25³)

where n is the number of pixels of all depth maps.

Technical Field

The invention belongs to the field of image recognition and artificial intelligence, and particularly relates to a monocular depth estimation method based on deep learning.

Background

In recent years, with the development of computer technology, deep learning has made a series of breakthrough advances in the field of computer vision, and obtaining the depth of monocular images using deep learning has become a hot research area. Depth images contain the distance information of a scene; depth estimation is a basic task in three-dimensional reconstruction, navigation, target detection and recognition, and semantic segmentation, and is an important basis for environmental perception and scene understanding. At present, lidar and depth sensors are mainly used to obtain object distance information, but these sensors are expensive and place requirements on the surrounding environment: under severe conditions such as heavy rain, smoke or fog, laser attenuation increases sharply, which directly affects the propagation distance and measurement accuracy. Obtaining distance information from images therefore remains a preferred scheme. Compared with other sensors, vision-based schemes are compact, convenient, inexpensive and adaptable, and are widely used in practice. Monocular, binocular and even multi-view cameras are generally used to acquire the original images; stereo cameras require fixed mounting positions and careful calibration, which consumes considerable time, whereas monocular cameras avoid these constraints.

Recent research uses convolutional neural networks to learn the nonlinear mapping relationship between a real scene and its depth image: the network is trained by minimizing errors, after which a depth image can be obtained simply by feeding a real image into the network. These methods have achieved good results; however, the depth images reconstructed by current methods still cannot meet practical requirements, and the accuracy of depth estimation needs to be improved further. To achieve better depth reconstruction, deeper networks built with residual learning can learn more feature information and improve the accuracy of depth estimation; multi-scale connections also improve the effect of depth estimation; and image super-resolution techniques can be used to raise the resolution of the predicted depth map.

Disclosure of Invention

In view of the defects of the prior art, the technical problem to be solved by the present invention is to provide a monocular depth estimation method based on deep learning, so as to solve the problems of the large amount of computation when performing depth estimation with deep learning and the poor recovery of detail in depth reconstruction.

In order to solve the above technical problems, the design comprises the following key points: 1) preprocessing the image files of a data set to generate a training set and a test set and performing data enhancement; 2) designing an unsupervised convolutional neural network structure for monocular depth estimation, the network comprising four units, namely an encoder, a multi-scale feature fusion module, a gated adaptive decoder and a refinement unit, with the training set used to learn the model parameters and realize end-to-end depth estimation of a monocular image; 3) constructing the loss function of the convolutional neural network, training with the loss function and iteratively optimizing the model parameters; and 4) testing the trained convolutional neural network model with the test set.

The following technical scheme is adopted specifically:

A monocular depth estimation method based on deep learning, characterized in that it is based on an unsupervised convolutional neural network structure for monocular depth estimation comprising an encoder, a multi-scale feature fusion module, a gated adaptive decoder and a refinement unit, wherein a monocular image is taken as input and a depth image carrying depth information is output;

the method comprises the following steps:

step S1: preprocessing a data set to generate a training set and a test set of a monocular original image and a real depth image corresponding to the monocular original image, and performing data enhancement on the monocular original image;

step S2: constructing a loss function of the convolutional neural network, inputting a training set image, calculating a loss value of the loss function by using a back propagation algorithm, and performing parameter learning by reducing errors through repeated iteration to enable a predicted value to approach a true value so as to obtain an optimal weight model of the convolutional neural network;

step S3: and loading the weight model trained in the step S2, and inputting the test set into the unsupervised convolutional neural network for monocular depth estimation to obtain a depth prediction image.

Preferably, the encoder adopts a ResNet-50 network structure with five layers; each layer sequentially performs convolution, normalization, activation and pooling, each layer downsamples the input image once, and the ReLU activation function is used

ReLU(x) = max(0, x)

Assuming that the resolution of the first-layer input image is 320 × 256 and the number of channels is 3, after five successive downsamplings the resolution of the image output by the final encoder layer is 10 × 8 and the number of channels is 2048;

the multi-scale feature fusion module converts the low-resolution images from the encoder into high-resolution images through sub-pixel convolution, which serve as an input of the refinement unit: sub-pixel convolution is first applied to the outputs of the second to fifth layers of the encoder, with magnification factors of 2, 4, 8 and 16 respectively, to obtain high-resolution images with a resolution of 160 × 128; the four feature maps are then fused and passed through convolution, normalization and activation, using the ReLU activation function

ReLU(x) = max(0, x)

Finally, outputting a high-resolution image with the resolution of 160 x 128 and the number of channels of 120;

the gated adaptive decoder has five layers; each layer sequentially upsamples the image using sub-pixel convolution with an upsampling factor of 2, the third, fourth and fifth layers adopt gating units to screen the image features, and the image finally output by the decoder has a resolution of 160 × 128 and 4 channels;

the thinning unit fuses the outputs from the control adaptive decoder and the multi-scale feature fusion module, images with the resolution of 160 x 128 output by the control adaptive decoder and the multi-scale feature fusion module are spliced according to dimension 1 (column), the convolution operation with the convolution kernel size of 5, the step size of 2 and the filling value of 2 is performed twice, after each convolution, the images need to be subjected to conversion and activation processing, and finally the number of channels of the output images is set to 1 by using convolution, so that the depth prediction images are obtained.

Preferably, the sub-pixel convolution specifically comprises the following steps: let the resolution of the input image be H × W × C, where H, W, C denote the height, width and number of channels of the image respectively, and the image is to be enlarged by a factor of r, i.e. to rH × rW × C; for a network composed of L layers, L−1 convolution operations are first performed to generate C·r² feature maps of resolution H × W, and a high-resolution image of resolution rH × rW × C is then generated through a pixel rearrangement (pixel shuffle) operation.
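
As an illustration only (not the patent's code), the rearrangement step corresponds to PyTorch's nn.PixelShuffle; a minimal sketch, assuming an enlargement factor r = 2 and C = 64 channels:

```python
import torch
import torch.nn as nn

# Minimal sub-pixel convolution sketch (illustrative hyperparameters):
# a convolution expands the channels to C*r^2 at low resolution, then
# PixelShuffle rearranges them into an r-times larger image.
r, C = 2, 64
subpixel = nn.Sequential(
    nn.Conv2d(C, C * r * r, kernel_size=3, padding=1),  # H x W x C -> H x W x C*r^2
    nn.PixelShuffle(r),                                 # H x W x C*r^2 -> rH x rW x C
)

x = torch.randn(1, C, 128, 160)     # low-resolution input feature map
print(subpixel(x).shape)            # torch.Size([1, 64, 256, 320])
```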

Preferably, in the gated adaptive decoder the gating unit screens the output features from the encoder and the upper-layer decoder; the convolution kernel size of the first layer of the gating unit is 3 with a stride of 1, and the LeakyReLU activation function is used

LeakyReLU(x) = x for x ≥ 0, and λx for x < 0, where λ is a small positive slope.

The convolution kernel size of the second layer is 1, and the stride is 1.

Preferably, the step S1 specifically includes the following steps:

step S11: classifying the original data set to generate a training set and a testing set and label files of the training set and the testing set, wherein the training set and the testing set both comprise original images and corresponding real depth images, and the label files comprise serial numbers and file directories of monocular original images and real depth images;

step S12: readjusting the image size of the training set;

step S13: randomly flipping the training set images horizontally;

step S14: carrying out random angle rotation on the training set images;

step S15: respectively adjusting monocular original images and real depth images in the training set to different sizes;

step S16: performing Principal Component Analysis (PCA) on monocular original images in the training set to reduce the feature number, eliminate noise and redundancy and reduce the possibility of overfitting;

step S17: carrying out image brightness, contrast and saturation transformation on monocular original images in the training set;

step S18: normalizing the monocular original images in the training set, the normalization parameters being the mean and the standard deviation;

in steps S12 and S15, bilinear interpolation is used to scale the image pixels to a specified size.

Preferably, in step S2 the loss function of the convolutional neural network is constructed by taking a monocular original image and its corresponding real depth image as the input of the convolutional neural network, the monocular original image being used to generate a depth prediction image containing depth prediction values and the real depth image being used to calculate the loss function; finally, the depth prediction values and the real image depth values are together used as the input of the loss function.

Preferably, the loss function consists of three loss terms, namely: L = L_d + L_grad + L_normal, wherein:

L_d is the depth reconstruction error, which computes the difference between the depth prediction value and the true depth, namely:

L_d = (1/N) Σ_{p∈Ω} ln(e_p + α)

where p is defined as the coordinates of a pixel in the image, N is the total number of pixels in the image, d_p denotes the real image depth value, d̂_p denotes the depth prediction value, e_p = ‖d_p − d̂_p‖₁ denotes the L1 norm between the depth prediction value and the real image depth value, α is a parameter value, taken as 0.5, and Ω is the region to which the image pixels belong;

L_grad is the image gradient loss function, i.e. the L1 norm of the image gradient g:

L_grad = (1/N) Σ_{p∈Ω} ( ln(|g_x(e_p)| + α) + ln(|g_y(e_p)| + α) )

where g_x and g_y are the derivatives of e_p in the x and y components respectively;

L_normal is the image surface normal loss function, used to measure the accuracy of the surface normal of the depth prediction image relative to the real depth image, namely:

L_normal = (1/N) Σ_{p∈Ω} ( 1 − ⟨n_p^pred, n_p^gt⟩ / ( √⟨n_p^pred, n_p^pred⟩ · √⟨n_p^gt, n_p^gt⟩ ) )

wherein the intermediate parameters are n_p^pred ≡ [−g_x(d̂_p), −g_y(d̂_p), 1]ᵀ and n_p^gt ≡ [−g_x(d_p), −g_y(d_p), 1]ᵀ.

Preferably, in step S3 the depth prediction image is compared with the real depth image, the error and the accuracy are calculated, and the weight model is thereby evaluated.

Preferably, the error evaluation indices for evaluating the weight model include:

root Mean Square Error (RMSE):

RMSE = √( (1/n) Σ_p (d_p − d̂_p)² )

mean relative error (REL):

REL = (1/n) Σ_p ( |d_p − d̂_p| / d_p )

mean log₁₀ error (Log10):

Log10 = (1/n) Σ_p |log₁₀(d_p) − log₁₀(d̂_p)|

threshold accuracy:

δ = max(d̂_p/d_p, d_p/d̂_p) < thr, i.e. the percentage of pixels whose ratio δ is below a given threshold thr (commonly 1.25, 1.25² and 1.25³)

where n is the number of pixels of all depth maps.

In addition, the precision of the depth map is improved by using an image super-resolution technique, which shows a particularly good effect in the depth estimation of distant scenes.

Compared with the prior art, the invention and the preferred scheme thereof have the following outstanding beneficial effects:

1. in the network up-sampling structure, Sub-pixel Convolution (Sub-pixel Convolution) is used for replacing the traditional bilinear interpolation up-sampling mode to carry out super-resolution processing on the image, so that the training speed is improved, and the detail recovery effect of the depth image is better.

2. The multi-scale feature fusion module performs super-resolution processing on each layer of the encoder network output, then fuses the outputs and feeds them to the refinement unit; by learning the image features of different network layers, high-level information under different receptive fields is captured, so that the information in the output image is more complete.

3. The gated adaptive decoder connects the outputs of the encoder network and the decoder network to the gating units; the features of the low-resolution images in the encoder network are fully utilized, achieving a better feature mapping between low and high resolution and improving algorithm accuracy; meanwhile, the gating units adaptively remove redundant information and screen out useful information as the input of the next decoder layer, improving computational efficiency.

4. A refinement unit is added to fuse the outputs of the gated adaptive decoder network and the multi-scale feature fusion module, further refining the image and improving the accuracy of the algorithm.

Drawings

The invention is further described in detail in connection with the following figures and detailed description:

FIG. 1 is a schematic overall flow diagram of an embodiment of the present invention;

FIG. 2 is a schematic diagram of an unsupervised convolutional neural network structure according to an embodiment of the present invention;

FIG. 3 is a schematic diagram of the gating unit according to an embodiment of the present invention;

FIG. 4 is a comparison diagram of the algorithm results according to the embodiment of the present invention.

Detailed Description

In order to make the features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail as follows:

as shown in fig. 1, the scheme flow provided by this embodiment includes the following steps:

1) Preprocess the data set to generate a training set and a test set, and perform data enhancement on the original images acquired by a monocular camera and the corresponding real depth images. The specific steps are as follows:

1-1) classifying an original data set to generate a training set and a test set and label files of the training set and the test set, wherein 50688 pairs of images are taken as the training set, 654 pairs of images are taken as the test set, each pair of images of the training set and the test set comprises an original image and a corresponding real depth image, and the label files comprise serial numbers and file directories of the original image and the real depth image;

1-2) resizing the training set images so that the short edge is 400 pixels;

1-3) randomly flipping the training set images horizontally with probability 0.5;

1-4) randomly rotating the training set images by an angle selected from the range (−5°, 5°);

1-5) respectively adjusting the original image and the real depth image in the training set to different sizes, wherein the resolution of the original image is adjusted to 512 × 384, and the resolution of the corresponding real depth image is 256 × 192;

1-6) performing Principal Component Analysis (PCA) on the training set images to reduce the number of features, suppress noise and redundancy and reduce the likelihood of overfitting; eigval and eigvec refer to the eigenvalues and eigenvectors of the covariance matrix respectively:

eigval=(0.2175,0.0188,0.0045),

eigvec = ( (−0.5675, 0.7192, 0.4009), (−0.5808, −0.0045, −0.8140), (−0.5836, −0.6948, 0.4203) )

1-7) carrying out color transformation on the training set images, including image brightness, contrast and saturation, with values of 0.4, 0.4 and 0.4 respectively;

1-8) normalizing the training set images, the normalization parameters being the mean and the standard deviation; mean and std refer to the mean and the standard deviation respectively:

mean=(0.485,0.456,0.406),std=(0.229,0.224,0.225)
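
For illustration only, preprocessing steps 1-2) to 1-8) map naturally onto a torchvision transform pipeline for the RGB images; a minimal sketch with the stated values (the PCA lighting jitter of step 1-6) has no stock torchvision transform and is omitted here, and the geometric steps would have to be mirrored on the depth maps):

```python
import torchvision.transforms as T

# Illustrative RGB preprocessing pipeline following steps 1-2) to 1-8);
# all numeric values are those stated in the text.
rgb_transform = T.Compose([
    T.Resize(400),                           # 1-2) short edge -> 400 px (bilinear)
    T.RandomHorizontalFlip(p=0.5),           # 1-3) random horizontal flip
    T.RandomRotation(degrees=5),             # 1-4) random rotation in (-5°, 5°)
    T.Resize((384, 512)),                    # 1-5) original image -> 512 x 384 (W x H)
    T.ColorJitter(brightness=0.4,            # 1-7) color transformation
                  contrast=0.4,
                  saturation=0.4),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406),  # 1-8) normalization
                std=(0.229, 0.224, 0.225)),
])
```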

2) As shown in FIG. 2, an unsupervised convolutional neural network structure for monocular depth estimation is designed. The network comprises four units, namely an encoder, a multi-scale feature fusion module, a gated adaptive decoder and a refinement unit; the whole neural network completes the feature extraction, nonlinear mapping and depth image reconstruction of images, and constitutes an end-to-end unsupervised learning process.

ResNet-50 is adopted as the encoder, with five layers; each layer performs convolution, normalization, activation and pooling operations and downsamples the input image once, and the ReLU activation function is used

ReLU(x) = max(0, x)

The first-layer input image resolution is 320 × 256 with 3 channels; after five successive downsamplings, the final encoder output resolution is 10 × 8 and the number of channels is 2048.
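
As an illustration (the grouping below is an assumption consistent with the stated behaviour, one 2× downsampling per stage for a total factor of 32), the five encoder stages can be obtained from a standard torchvision ResNet-50:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Illustrative five-stage ResNet-50 encoder: each stage halves the
# resolution, so a 320 x 256 input ends at 10 x 8 with 2048 channels.
backbone = resnet50()   # randomly initialized; load pretrained weights as desired
stages = [
    nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu),  # stage 1: /2
    nn.Sequential(backbone.maxpool, backbone.layer1),            # stage 2: /4
    backbone.layer2,                                             # stage 3: /8
    backbone.layer3,                                             # stage 4: /16
    backbone.layer4,                                             # stage 5: /32
]

x = torch.randn(1, 3, 256, 320)     # 320 x 256 (W x H) RGB input
features = []                       # kept for the multi-scale fusion module
for stage in stages:
    x = stage(x)
    features.append(x)
print(features[-1].shape)           # torch.Size([1, 2048, 8, 10])
```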

The multi-scale feature fusion module super-resolves the low-resolution images from the encoder into high-resolution images through sub-pixel convolution, which serve as an input of the refinement unit. The specific process: sub-pixel convolution is applied to the outputs of the second to fifth encoder layers, with magnifications of 2, 4, 8 and 16 respectively, yielding four feature maps with a resolution of 160 × 128; the four maps are fused and passed through convolution, normalization and activation, using the ReLU activation function

ReLU(x) = max(0, x)

Finally, a high-resolution image with the resolution of 160 x 128 and the number of channels of 120 is output.
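
A minimal sketch of the fusion module under these shapes; the magnifications (2/4/8/16) and the 120-channel fused output follow the text, while the per-branch channel width (30) and the use of batch normalization are assumptions:

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Illustrative multi-scale feature fusion: sub-pixel convolution
    magnifies the stage-2..5 encoder outputs by 2/4/8/16x to a common
    160 x 128 grid, then fuses them by conv + norm + ReLU."""
    def __init__(self, in_chs=(256, 512, 1024, 2048),
                 factors=(2, 4, 8, 16), width=30):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Conv2d(c, width * r * r, 3, padding=1),
                          nn.PixelShuffle(r))
            for c, r in zip(in_chs, factors))
        self.fuse = nn.Sequential(
            nn.Conv2d(width * len(in_chs), 120, 3, padding=1),
            nn.BatchNorm2d(120),
            nn.ReLU(inplace=True))

    def forward(self, feats):        # feats: encoder stage-2..5 outputs
        ups = [b(f) for b, f in zip(self.branches, feats)]
        return self.fuse(torch.cat(ups, dim=1))  # (N, 120, 128, 160), i.e. 160 x 128
```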

The gated adaptive decoder has five layers, using sub-pixel convolution for upsampling (see Shi W, Caballero J, Huszár F, et al., Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network, 2016: 1874-1883); each layer upsamples the output of the upper layer by a factor of 2, and the third, fourth and fifth layers adopt gating units to screen the image features.

As shown in FIG. 3, the gating unit is mainly composed of two convolutional layers. Its inputs are a low-resolution feature map output by the i-th layer of the encoder and a high-resolution feature map output by the j-th layer of the decoder, and its output is a high-resolution feature map. Through this simple convolution structure, the output features from the encoder and the upper decoder layer are screened: useful information is selected adaptively and redundant information is reduced, improving computational efficiency, while the features of the low-resolution images in the encoder network are fully utilized, realizing a better feature mapping between low and high resolution. The convolution kernel size of the first layer of the gating unit is 3 with a stride of 1, and the LeakyReLU activation function is used

LeakyReLU(x) = x for x ≥ 0, and λx for x < 0, where λ is a small positive slope.

The convolution kernel size of the second layer is 1, and the stride is 1.
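
For illustration, a minimal sketch of such a gating unit; the layer shapes (3 × 3 conv, stride 1, LeakyReLU; then 1 × 1 conv, stride 1) follow the text, while the way the two inputs are combined (bilinear upsampling plus channel concatenation) and the LeakyReLU slope of 0.2 are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatingUnit(nn.Module):
    """Illustrative gating unit: screens the low-resolution encoder
    feature and the high-resolution feature from the upper decoder
    layer, outputting a high-resolution feature map."""
    def __init__(self, enc_ch, dec_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(enc_ch + dec_ch, out_ch, 3, stride=1, padding=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)          # slope assumed
        self.conv2 = nn.Conv2d(out_ch, out_ch, 1, stride=1)

    def forward(self, f_enc, f_dec):
        # bring the encoder feature up to the decoder resolution (assumed)
        f_enc = F.interpolate(f_enc, size=f_dec.shape[2:],
                              mode="bilinear", align_corners=False)
        x = self.act(self.conv1(torch.cat([f_enc, f_dec], dim=1)))
        return self.conv2(x)         # screened high-resolution features
```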

The refinement unit fuses the outputs of the gated adaptive decoder and the multi-scale feature fusion module: specifically, the 160 × 128 images output by the last decoder layer and by the multi-scale feature fusion module are concatenated along dimension 1 (the channel dimension), two convolution operations with kernel size 5, stride 2 and padding 2 are performed, the image is normalized and activated after each convolution, and finally a convolution sets the number of channels of the output image to 1 to obtain the estimated depth image.
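
A minimal sketch of the refinement unit, taking the stated hyperparameters literally (note that a 5 × 5 convolution with stride 2 and padding 2 halves the resolution at each application); the intermediate channel width of 64 is an assumption:

```python
import torch
import torch.nn as nn

class RefinementUnit(nn.Module):
    """Illustrative refinement unit: concatenates the decoder output
    (4 channels) with the fusion output (120 channels) along dim 1,
    applies two 5x5/stride-2/padding-2 convolutions, each followed by
    normalization and activation, and reduces to 1 output channel."""
    def __init__(self, in_ch=4 + 120, mid_ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 5, stride=2, padding=2),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 5, stride=2, padding=2),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, 1, 3, padding=1),   # channel count -> 1
        )

    def forward(self, dec_out, fusion_out):
        return self.body(torch.cat([dec_out, fusion_out], dim=1))
```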

3) Construct the loss function of the convolutional neural network and iteratively compute it with the back-propagation algorithm to obtain the optimal weight model of the convolutional neural network. The training process of the neural network actually constructs an objective function d̂_p = f(I_p), where d̂_p denotes the depth prediction value, I_p is the pixel value of the input image at pixel p, and p is defined as the coordinates of the pixel in the image; the loss function L is computed iteratively by the back-propagation algorithm, and minimizing the loss function solves for the objective function.

The loss function is divided into three terms, namely the depth reconstruction error loss, the gradient smoothing loss and the surface normal error loss. The first loss term, the depth reconstruction error L_d, computes the difference between the depth prediction value and the real depth; for convenience of calculation a log function is adopted, namely:

L_d = (1/N) Σ_{p∈Ω} ln(e_p + α)

where d_p denotes the real image depth value, e_p = ‖d_p − d̂_p‖₁ denotes the L1 norm between the depth prediction value and the real image depth value, α is a parameter value, taken as 0.5, and Ω is the region to which the image pixels belong.

The second loss term L_grad is defined as the L1 norm over the image gradient g, i.e.:

L_grad = (1/N) Σ_{p∈Ω} ( ln(|g_x(e_p)| + α) + ln(|g_y(e_p)| + α) )

where g_x and g_y are the derivatives of e_p in the x and y components respectively.

The third loss term L_normal measures the accuracy of the surface normal of the depth prediction image relative to the true depth map, namely:

L_normal = (1/N) Σ_{p∈Ω} ( 1 − ⟨n_p^pred, n_p^gt⟩ / ( √⟨n_p^pred, n_p^pred⟩ · √⟨n_p^gt, n_p^gt⟩ ) )

wherein the intermediate parameters are n_p^pred ≡ [−g_x(d̂_p), −g_y(d̂_p), 1]ᵀ and n_p^gt ≡ [−g_x(d_p), −g_y(d_p), 1]ᵀ.

The final loss function consists of the above three terms, namely:

L = L_d + L_grad + L_normal
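
A minimal sketch of the three-term loss, assuming depth maps stored as (N, 1, H, W) tensors and forward differences as the discretization of g_x and g_y:

```python
import torch

def depth_loss(pred, gt, alpha=0.5):
    """Illustrative L = L_d + L_grad + L_normal; pred and gt are
    (N, 1, H, W) depth maps, alpha is the stated parameter 0.5."""
    def grads(t):
        gx = t[..., :, 1:] - t[..., :, :-1]   # derivative along x (width)
        gy = t[..., 1:, :] - t[..., :-1, :]   # derivative along y (height)
        return gx, gy

    e = torch.abs(gt - pred)                  # e_p: per-pixel L1 error
    l_d = torch.log(e + alpha).mean()         # depth reconstruction term

    ex, ey = grads(e)                         # gradient term
    l_grad = (torch.log(ex.abs() + alpha).mean()
              + torch.log(ey.abs() + alpha).mean())

    # surface normals n = [-g_x(d), -g_y(d), 1], compared on a common crop
    px, py = grads(pred)
    tx, ty = grads(gt)
    px, tx = px[..., :-1, :], tx[..., :-1, :]   # align x-gradient shapes
    py, ty = py[..., :, :-1], ty[..., :, :-1]   # align y-gradient shapes
    inner = px * tx + py * ty + 1.0
    norm_p = torch.sqrt(px ** 2 + py ** 2 + 1.0)
    norm_t = torch.sqrt(tx ** 2 + ty ** 2 + 1.0)
    l_normal = (1.0 - inner / (norm_p * norm_t)).mean()

    return l_d + l_grad + l_normal
```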

After the design of the convolutional neural network and the construction of the objective function are completed, the preprocessed training set is input, the loss value of the loss function is calculated with the back-propagation algorithm, and parameter learning is performed by iteratively reducing the error to obtain the optimal weight model of the convolutional neural network. In the actual training process, 20 training cycles are performed, the batch size is set to 4, Adam optimization is used with a learning rate of 0.0001, the learning rate is reduced by 10% every 5 cycles, the weight decay coefficient is set to 0.0001, and the hyperparameters β₁ and β₂ take the values 0.9 and 0.999 respectively.
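
The stated schedule corresponds roughly to the following setup; model and train_loader are assumed to be defined elsewhere, and depth_loss is the loss sketch above:

```python
import torch

# Illustrative optimizer and schedule built from the stated values;
# `model` and `train_loader` (batch size 4) are assumed to exist.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5,
                                            gamma=0.9)  # -10% every 5 cycles

for epoch in range(20):                     # 20 training cycles
    for rgb, gt_depth in train_loader:
        optimizer.zero_grad()
        pred = model(rgb)
        loss = depth_loss(pred, gt_depth)   # three-term loss from above
        loss.backward()
        optimizer.step()
    scheduler.step()
```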

4) Load the trained weight model, input the test set into the convolutional neural network to directly obtain depth images, compare the obtained depth images with the real depth images, calculate the error and accuracy, and evaluate the weight model.

The effect of the present invention is further illustrated by the following simulation experiment.

1. Simulation conditions

(1) Test images: 654 pairs of images in the test set are selected, each pair comprising an original image and a real depth image, and the resolution of each image is converted to 320 × 256.

(2) Experimental parameter settings: the mean and standard deviation of the normalization parameters are set as:

mean=(0.485,0.456,0.406),std=(0.229,0.224,0.225)

(3) Experimental environment: the operating system is Ubuntu 16.04, the graphics card is an NVIDIA Tesla M40, and the PyTorch deep learning framework with the Python 2.7 programming language is used.

2. Simulation content and results

Simulation content: test set images with a resolution of 320 × 256 are used as input, depth images with a resolution of 160 × 128 are output, and the error evaluation indices of the system are compared with the results of other algorithms. The error evaluation indices are as follows:

root Mean Square Error (RMSE):

RMSE = √( (1/n) Σ_p (d_p − d̂_p)² )

mean relative error (REL):

REL = (1/n) Σ_p ( |d_p − d̂_p| / d_p )

mean log₁₀ error (Log10):

Log10 = (1/n) Σ_p |log₁₀(d_p) − log₁₀(d̂_p)|

threshold accuracy:

δ = max(d̂_p/d_p, d_p/d̂_p) < thr, i.e. the percentage of pixels whose ratio δ is below a given threshold thr (commonly 1.25, 1.25² and 1.25³)

where n is the number of pixels over all depth maps, d̂_p is the depth prediction value and d_p is the true depth value.

The experimental results are as follows:

The experimental results are shown in Table 1. Compared with the monocular depth estimation algorithm provided by junjie.hu, the error and threshold-accuracy evaluation indices of the proposed algorithm are superior; during offline training, the speed of the method provided by the invention is 3.45 times that of the junjie.hu algorithm; and as shown in FIG. 4, the depth reconstruction quality is clearly superior in detail recovery and in deeper scenes, so the method can better meet practical application requirements.

TABLE 1

(Table 1 is provided as an image in the original document; it compares the error and threshold-accuracy indices of the proposed algorithm with those of the junjie.hu algorithm.)

The present invention is not limited to the above preferred embodiments; based on its teaching, various other forms of monocular depth estimation methods based on deep learning can be derived.
