Multi-view depth prediction method based on asymmetric depth convolution neural network

Document No.: 1568327    Publication date: 2020-01-24

Note: this technology, "Multi-view depth prediction method based on asymmetric depth convolution neural network" (基于非对称深度卷积神经网络的多视角深度预测方法), was designed and created by 裴炤, 田龙伟, 汶得强, 张艳宁, 马苗, 汪西莉, 陈昱莅, 武杰 and 杨红红 on 2019-10-18. Its main content is as follows: The invention discloses a multi-view depth prediction method based on an asymmetric deep convolutional neural network, which comprises the following steps: constructing an asymmetric deep convolutional neural network; constructing adjacent images into a plane scanning volume by combining them with the reference image; pre-training a first neural network on an existing data set; further initializing the asymmetric deep convolutional neural network, i.e. the second neural network, using the model parameters of the first neural network; and finally completing multi-view depth prediction through the second neural network. The method allows the input of any number of images of different viewing angles at any resolution, reduces tedious manual operation and restrictive constraints, and thereby achieves high-precision prediction of the depth of the different-view images produced by a multi-view setup.

1. A multi-view depth prediction method based on an asymmetric depth convolution neural network comprises the following steps:

S100, defining a first image sequence with the following characteristics: the number of images in the first image sequence is not limited, the images in the sequence are not required to have the same resolution, and the sequence contains at least a plurality of images of a certain determined scene taken from different viewing angles;

s200, randomly selecting an image in the first image sequence as a reference image in the determined scene;

S300, calculating the overlap rate between each of the remaining images in the image sequence and the reference image, and selecting the N images with the highest overlap rate as the neighboring images, where N can be as small as 1; then, for each of the N neighboring images, performing a WarpAffine affine transformation of the neighboring image onto the reference image at each disparity level, and storing the affine-transformed neighboring images to construct a plane scanning volume containing the affine-transformed neighboring images;

s400, constructing a first neural network, wherein the first neural network comprises the following components which are connected in sequence from front to back: the device comprises a feature fusion module, a first parallax prediction core module and a feature aggregation module, wherein:

the feature fusion module is used for fusing features of each parallax level of the reference image and the adjacent images after affine transformation in the planar scanning volume and outputting a fused feature map, wherein the feature fusion module comprises four 5-by-5 convolution layers which are sequentially connected from front to back;

the first parallax prediction core module is used for performing feature extraction and prediction on parallax information according to the fused feature map output by the previous module, wherein the first parallax prediction core module comprises two convolution layers which are sequentially connected from front to back, one convolution layer is used for feature extraction, and the other convolution layer is used for predicting the parallax information so as to predict information on each parallax level;

the feature aggregation module is used for aggregating the information predicted by the previous module at each disparity level by means of max pooling to obtain a depth map, wherein the feature aggregation module comprises a pooling layer and two convolution layers connected in sequence; the two convolution layers generate an aggregated feature map, the aggregated feature map is refined by a fully connected conditional random field to obtain a 1-channel disparity map, and the depth map of the reference image in the determined scene is obtained by taking the reciprocal of the disparity map;

S500, setting the learning rate of the first neural network to 10^-5 and limiting the L2 norm to at most 1.0, and: selecting a plurality of images on a first data set as the first image sequence defined in step S100, then obtaining the corresponding reference image and plane scanning volume according to steps S200 and S300 respectively, and using the reference image and the plane scanning volume as input to pre-train the first neural network; after 320000 training iterations, saving the model parameters of the first neural network; this step iteratively trains the first neural network using the adaptive moment estimation method (Adam) and saves the model parameters, and:

the formula of the loss function used to train the first neural network is:

$$\text{loss}(x, y) = -x_{y} + \log\left(\sum_{j=1}^{n} \exp(x_{j})\right)$$

wherein n is the number of disparity levels, y is the disparity level corresponding to the ground-truth value, and x is the set of predicted values, one per disparity level;

s600, constructing a second neural network, wherein the second neural network is the asymmetric deep convolution neural network; and, the second neural network: the feature fusion module and the feature aggregation module in the first neural network are adopted as they are, but the first parallax prediction core module in the first neural network is replaced by the second parallax prediction core module, so as to form a structure that is connected from front to back in sequence: a feature fusion module, a second disparity prediction core module, a feature aggregation module, and:

wherein:

the second disparity prediction core module comprises, connected in sequence from front to back, the 1st to 6th feature extraction convolution blocks and the 1st to 5th disparity prediction convolution blocks; the second disparity prediction core module performs feature extraction through the 1st to 6th feature extraction convolution blocks and performs disparity information prediction through the 1st to 5th disparity prediction convolution blocks;

S700, setting the learning rate of the second neural network to 10^-6 and limiting the L2 norm to at most 0.1, initializing the second neural network with the model parameters of the first neural network saved in step S500, and: selecting a plurality of images on the second data set and the optional third data set as the first image sequence defined in step S100, then obtaining the corresponding reference image and plane scanning volume according to steps S200 and S300 respectively, using the reference image and the plane scanning volume as input to train the second neural network, and saving the model parameters of the second neural network after 320000 training iterations; this step iteratively trains the second neural network using the adaptive moment estimation method (Adam) and saves the model parameters, and:

the formula of the loss function used to train the second neural network is:

$$\text{loss}(x, y) = -x_{y} + \log\left(\sum_{j=1}^{n} \exp(x_{j})\right)$$

wherein n is the number of disparity levels, y is the disparity level corresponding to the ground-truth value, and x is the set of predicted values, one per disparity level;

and S800, taking a plurality of images of different view angles of another certain determined scene to be predicted as a first image sequence, then respectively obtaining a corresponding reference image and a corresponding plane scanning volume according to the steps S200 and S300, taking the reference image and the plane scanning volume as input, and obtaining a depth map of the reference image in the determined scene through the second neural network obtained by training in the step S700.

2. The method according to claim 1, wherein the disparity level in step S200 is preferably determined by:

inputting the first image sequence into the COLMAP three-dimensional reconstruction system, and using the COLMAP system to estimate the camera poses and the distances between the features in the sparse reconstruction model; the largest of these distances is the maximum disparity, the maximum disparity is taken as the highest disparity level, the maximum disparity is divided into equal parts, and each equal part is taken as one disparity level, wherein:

the sparse reconstruction model is a point cloud model reconstructed by the three-dimensional reconstruction COLMAP system according to color and depth information contained in the received image sequence;

each feature is a feature of an object captured in the point cloud model and is characterized by point clouds corresponding to objects at different depths.

3. The method of claim 1, wherein the second neural network in step S600 further comprises:

the 1st to 3rd parallax enhancement convolution blocks;

the second neural network further acts on the 3rd to 5th disparity prediction convolution blocks through the 1st to 3rd parallax enhancement convolution blocks, respectively, so as to double the spatial size of the feature maps and refine the finally output prediction result, namely the predicted information at each disparity level.

4. The method of claim 1, wherein:

for the feature fusion module, the number of channels of the feature map output by the four convolutional layers is respectively: 64, 96, 32, 4;

for the feature aggregation module, the two convolution layers output aggregated feature maps with 400 and 100 channels respectively, so the feature aggregation module finally generates an aggregated feature map with 100 channels.

5. The method of claim 1, wherein:

the first data set, the second data set and the third data set are ImageNet, DeMoN and MVS-SYNTH data sets respectively.

6. The method of claim 1, wherein:

the second data set comprises any one or a combination of the following two data sets: a real data set, a synthetic data set;

the third data set is a synthetic data set that supplements the second data set.

7. The method of claim 1, wherein:

for the second disparity prediction core module, the 1st and 2nd feature extraction convolution blocks each consist of a first 5 x 5 convolution layer and a second 5 x 5 convolution layer connected in sequence from front to back, wherein the first convolution layer of the 2nd feature extraction convolution block has a stride of 2; the 3rd to 6th feature extraction convolution blocks each consist of a first 3 x 3 convolution layer and a second 3 x 3 convolution layer connected in sequence from front to back, wherein the first convolution layer has a stride of 2.

8. The method of claim 1, wherein:

for the second disparity prediction core module, the 1st to 5th disparity prediction convolution blocks each consist of an upsampling layer, a first 3 x 3 convolution layer and a second 3 x 3 convolution layer connected in sequence from front to back.

9. The method of claim 3, wherein:

for the second disparity prediction core module, the 1st to 3rd parallax enhancement convolution blocks each consist of a 3 x 3 convolution layer and an upsampling layer connected in sequence from front to back, and:

the input of the convolution layer in the 1st parallax enhancement convolution block comes from the output of the second 3 x 3 convolution layer in the 2nd disparity prediction convolution block;

the upsampling layer in the 1st parallax enhancement convolution block further outputs to the second 3 x 3 convolution layer in the 3rd disparity prediction convolution block;

the input of the convolution layer in the 2nd parallax enhancement convolution block comes from the output of the second 3 x 3 convolution layer in the 3rd disparity prediction convolution block;

the upsampling layer in the 2nd parallax enhancement convolution block further outputs to the second 3 x 3 convolution layer in the 4th disparity prediction convolution block;

the input of the convolution layer in the 3rd parallax enhancement convolution block comes from the output of the second 3 x 3 convolution layer in the 4th disparity prediction convolution block;

the upsampling layer in the 3rd parallax enhancement convolution block further outputs to the second 3 x 3 convolution layer in the 5th disparity prediction convolution block.

10. The method of claim 3, wherein:

the numbers of channels of the feature maps output by the 1st to 6th feature extraction convolution blocks are respectively: 600, 800, 1000, 1000, 1000, 1000;

the numbers of channels of the feature maps output by the 1st to 5th disparity prediction convolution blocks are respectively: 1000, 1000, 800, 600, 800;

the numbers of channels of the feature maps output by the 1st to 3rd parallax enhancement convolution blocks are respectively: 100, 100, 100;

and,

a skip connection structure is arranged between each feature extraction convolution block and each disparity prediction convolution block that output feature maps of the same size, and it splices the feature extraction result and the disparity prediction result together along the channel dimension, as follows:

a skip connection structure is arranged between the 1st feature extraction convolution block and the 5th disparity prediction convolution block;

a skip connection structure is arranged between the 2nd feature extraction convolution block and the 4th disparity prediction convolution block;

a skip connection structure is arranged between the 3rd feature extraction convolution block and the 3rd disparity prediction convolution block;

a skip connection structure is arranged between the 4th feature extraction convolution block and the 2nd disparity prediction convolution block;

a skip connection structure is arranged between the 5th feature extraction convolution block and the 1st disparity prediction convolution block.

Technical Field

The disclosure belongs to the technical field of computer vision, and particularly relates to a multi-view depth prediction method based on an asymmetric depth convolution neural network.

Background

Mining the depth information contained in images can produce accurate depth maps, and depth prediction research is currently applied in the field of 3D reconstruction with remarkable results. Compared with deep learning methods, traditional image depth prediction methods require large amounts of resources and tedious manual operations, such as stereo matching and manual labeling. At present, image depth prediction methods based on deep learning mainly use a monocular image for prediction. A multi-view depth prediction method based on deep learning can reduce tedious manual operations and restrictive conditions; in particular, for fine-detail scenes with simple structure or no structure, it is more accurate and stable than traditional methods. Convolutional neural networks have been applied to the visual reconstruction problem: early work focused mainly on stereo matching using image similarity, while recent research performs stereo reconstruction with end-to-end learning. However, these methods either restrict the relative camera poses or the number of input images, or produce only coarse volumetric reconstructions.

Disclosure of Invention

In order to solve the technical problem, the present disclosure discloses a multi-view depth prediction method based on an asymmetric depth convolutional neural network, including the following steps:

S100, defining a first image sequence with the following characteristics: the number of images in the first image sequence is not limited, the images in the sequence are not required to have the same resolution, and the sequence contains at least a plurality of images of a certain determined scene taken from different viewing angles;

s200, randomly selecting an image in the first image sequence as a reference image in the determined scene;

S300, calculating the overlap rate between each of the remaining images in the image sequence and the reference image, and selecting the N images with the highest overlap rate as the neighboring images, where N can be as small as 1; then, for each of the N neighboring images, performing a WarpAffine affine transformation of the neighboring image onto the reference image at each disparity level, and storing the affine-transformed neighboring images to construct a plane scanning volume containing the affine-transformed neighboring images;

s400, constructing a first neural network, wherein the first neural network comprises the following components which are connected in sequence from front to back: the device comprises a feature fusion module, a first parallax prediction core module and a feature aggregation module, wherein:

the feature fusion module is used for fusing features of each parallax level of the reference image and the adjacent images after affine transformation in the planar scanning volume and outputting a fused feature map, wherein the feature fusion module comprises four 5-by-5 convolution layers which are sequentially connected from front to back;

the first parallax prediction core module is used for performing feature extraction and prediction on parallax information according to the fused feature map output by the previous module, wherein the first parallax prediction core module comprises two convolution layers which are sequentially connected from front to back, one convolution layer is used for feature extraction, and the other convolution layer is used for predicting the parallax information so as to predict information on each parallax level;

the feature aggregation module is used for aggregating the information predicted by the previous module at each disparity level by means of max pooling to obtain a depth map, wherein the feature aggregation module comprises a pooling layer and two convolution layers connected in sequence; the two convolution layers generate an aggregated feature map, the aggregated feature map is refined by a fully connected conditional random field to obtain a 1-channel disparity map, and the depth map of the reference image in the determined scene is obtained by taking the reciprocal of the disparity map;

S500, setting the learning rate of the first neural network to 10^-5 and limiting the L2 norm to at most 1.0, and: selecting a plurality of images on a first data set as the first image sequence defined in step S100, then obtaining the corresponding reference image and plane scanning volume according to steps S200 and S300 respectively, and using the reference image and the plane scanning volume as input to pre-train the first neural network; after 320000 training iterations, saving the model parameters of the first neural network; this step iteratively trains the first neural network using the adaptive moment estimation method (Adam) and saves the model parameters, and:

the formula of the loss function used to train the first neural network is:

$$\text{loss}(x, y) = -x_{y} + \log\left(\sum_{j=1}^{n} \exp(x_{j})\right)$$

wherein n is the number of disparity levels, y is the disparity level corresponding to the ground-truth value, and x is the set of predicted values, one per disparity level;

s600, constructing a second neural network, wherein the second neural network is the asymmetric deep convolution neural network; and, the second neural network: the feature fusion module and the feature aggregation module in the first neural network are adopted as they are, but the first parallax prediction core module in the first neural network is replaced by the second parallax prediction core module, so as to form a structure that is connected from front to back in sequence: a feature fusion module, a second disparity prediction core module, a feature aggregation module, and:

wherein:

the second disparity prediction core module comprises, connected in sequence from front to back, the 1st to 6th feature extraction convolution blocks and the 1st to 5th disparity prediction convolution blocks; the second disparity prediction core module performs feature extraction through the 1st to 6th feature extraction convolution blocks and performs disparity information prediction through the 1st to 5th disparity prediction convolution blocks;

S700, setting the learning rate of the second neural network to 10^-6 and limiting the L2 norm to at most 0.1, initializing the second neural network with the model parameters of the first neural network saved in step S500, and: selecting a plurality of images on the second data set and the optional third data set as the first image sequence defined in step S100, then obtaining the corresponding reference image and plane scanning volume according to steps S200 and S300 respectively, using the reference image and the plane scanning volume as input to train the second neural network, and saving the model parameters of the second neural network after 320000 training iterations; this step iteratively trains the second neural network using the adaptive moment estimation method (Adam) and saves the model parameters, and:

the formula of the loss function used to train the second neural network is:

$$\text{loss}(x, y) = -x_{y} + \log\left(\sum_{j=1}^{n} \exp(x_{j})\right)$$

wherein n is the number of disparity levels, y is the disparity level corresponding to the ground-truth value, and x is the set of predicted values, one per disparity level;

and S800, taking a plurality of images of different view angles of another certain determined scene to be predicted as a first image sequence, then respectively obtaining a corresponding reference image and a corresponding plane scanning volume according to the steps S200 and S300, taking the reference image and the plane scanning volume as input, and obtaining a depth map of the reference image in the determined scene through the second neural network obtained by training in the step S700.

Therefore, the multi-view depth prediction method does not limit the number of images or their resolution. With the disclosed method and device, multi-view depth prediction of a scene can be achieved with a single deep neural network model, the accuracy and robustness of depth prediction are improved, and a clear depth map is obtained. Plain RGB images are sufficient as input: the disclosure can be used to fit the relationship between the RGB images and the disparity map, and the fitted disparity map is then converted into a depth map.

In addition, each convolutional layer in the disclosed asymmetric convolutional neural network may preferably use a nonlinear activation function. The nonlinear activation function introduces nonlinearity into the network and thus gives it the capacity to fit nonlinear mappings, for example the relationship between the RGB images and the disparity map, after which the fitted disparity map is converted into a depth map.

Drawings

Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:

FIG. 1 is a process flow diagram;

FIG. 2 is a diagram of a deep convolutional neural network architecture for use with the present invention;

fig. 3a and 3b are graphs of test results and effects, where fig. 3a is an original graph and fig. 3b is a depth graph.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described in detail and completely with reference to fig. 1 to 3a and 3b of the embodiments of the present disclosure, and it is obvious that the described embodiments are some embodiments of the present disclosure, but not all embodiments. It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.

It should be noted that the terms "first", "second", etc. in the description and claims of the present disclosure and the accompanying drawings are only used for distinguishing some objects and are not used for describing a specific order or sequence. It is to be understood that the terms so used are interchangeable under appropriate circumstances for describing the embodiments of the disclosure herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.

It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.

Further, the term "forward and backward" as used in this disclosure follows the forward propagation characteristics of the art.

In one embodiment, the present disclosure discloses a multi-view depth prediction method based on an asymmetric depth convolutional neural network, including the following steps:

S100, defining a first image sequence with the following characteristics: the number of images in the first image sequence is not limited, the images in the sequence are not required to have the same resolution, and the sequence contains at least a plurality of images of a certain determined scene taken from different viewing angles;

s200, randomly selecting an image in the first image sequence as a reference image in the determined scene;

S300, calculating the overlap rate between each of the remaining images in the image sequence and the reference image, and selecting the N images with the highest overlap rate as the neighboring images, where N can be as small as 1; then, for each of the N neighboring images, performing a WarpAffine affine transformation of the neighboring image onto the reference image at each disparity level, and storing the affine-transformed neighboring images to construct a plane scanning volume containing the affine-transformed neighboring images;
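
To make step S300 concrete, the following is a minimal sketch, not the patented implementation, of warping one neighboring image onto the reference view at every disparity level with OpenCV's warpAffine and stacking the warped copies into a plane scanning volume; the per-level 2 x 3 affine matrices (dummy identities here) are assumed to come from the camera poses estimated in the COLMAP step.

```python
import cv2
import numpy as np

def build_plane_sweep_volume(neighbor_img, affines):
    """Warp one neighboring image onto the reference view at every disparity
    level (cv2.warpAffine) and stack the warped copies into a plane scanning
    volume of shape (num_levels, H, W, 3)."""
    h, w = neighbor_img.shape[:2]
    warped = [cv2.warpAffine(neighbor_img, np.asarray(M, dtype=np.float32), (w, h))
              for M in affines]
    return np.stack(warped, axis=0)

# Usage with 100 disparity levels; identity matrices stand in for the real
# per-level 2x3 affine warps derived from the estimated camera poses.
img = np.zeros((480, 640, 3), dtype=np.uint8)
psv = build_plane_sweep_volume(img, [np.eye(2, 3, dtype=np.float32)] * 100)
print(psv.shape)  # (100, 480, 640, 3)
```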

s400, constructing a first neural network, wherein the first neural network comprises the following components which are connected in sequence from front to back: the device comprises a feature fusion module, a first parallax prediction core module and a feature aggregation module, wherein:

the feature fusion module is used for fusing features of each parallax level of the reference image and the adjacent images after affine transformation in the planar scanning volume and outputting a fused feature map, wherein the feature fusion module comprises four 5-by-5 convolution layers which are sequentially connected from front to back;

the first parallax prediction core module is used for performing feature extraction and prediction on parallax information according to the fused feature map output by the previous module, wherein the first parallax prediction core module comprises two convolution layers which are sequentially connected from front to back, one convolution layer is used for feature extraction, and the other convolution layer is used for predicting the parallax information so as to predict information on each parallax level;

the feature aggregation module is used for aggregating the information predicted by the previous module at each disparity level by means of max pooling to obtain a depth map, wherein the feature aggregation module comprises a pooling layer and two convolution layers connected in sequence; the two convolution layers generate an aggregated feature map, the aggregated feature map is refined by a fully connected conditional random field to obtain a 1-channel disparity map, and the depth map of the reference image in the determined scene is obtained by taking the reciprocal of the disparity map;
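
The following PyTorch sketch shows one possible reading of the first neural network of S400: four 5 x 5 convolutions for feature fusion (64, 96, 32, 4 channels, as quoted later), a two-convolution disparity prediction core, and max-pool aggregation over the neighboring images followed by two convolutions (400 and 100 channels). The 32-channel width of the core, the SELU activations, and the pairing of the reference image with each warped neighbor are assumptions; the fully connected CRF refinement and the reciprocal conversion to depth are omitted.

```python
import torch
import torch.nn as nn

class FirstNetwork(nn.Module):
    """Simplified sketch of the first network: feature fusion (four 5x5 convs,
    64/96/32/4 channels), a two-convolution disparity prediction core, and
    max-pool aggregation over the neighbors followed by two convolutions
    (400/100 channels).  CRF refinement and depth conversion are omitted."""

    def __init__(self, num_levels=100):
        super().__init__()
        self.fusion = nn.Sequential(               # reference (3ch) + warped neighbor (3ch)
            nn.Conv2d(6, 64, 5, padding=2), nn.SELU(),
            nn.Conv2d(64, 96, 5, padding=2), nn.SELU(),
            nn.Conv2d(96, 32, 5, padding=2), nn.SELU(),
            nn.Conv2d(32, 4, 5, padding=2), nn.SELU(),
        )
        self.core = nn.Sequential(                 # feature extraction + per-level score
            nn.Conv2d(4, 32, 3, padding=1), nn.SELU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        self.aggregate = nn.Sequential(            # two convolutions after max pooling
            nn.Conv2d(num_levels, 400, 3, padding=1), nn.SELU(),
            nn.Conv2d(400, num_levels, 3, padding=1),
        )

    def forward(self, ref, psv):
        # ref: (B, 3, H, W); psv: (B, N, D, 3, H, W) warped neighboring images.
        b, n, d, c, h, w = psv.shape
        ref_rep = ref[:, None, None].expand(b, n, d, 3, h, w)
        x = torch.cat([ref_rep, psv], dim=3).reshape(b * n * d, 6, h, w)
        scores = self.core(self.fusion(x)).reshape(b, n, d, h, w)
        scores = scores.max(dim=1).values          # max-pool over the N neighbors
        return self.aggregate(scores)              # (B, D, H, W) per-level scores

net = FirstNetwork()
out = net(torch.rand(1, 3, 32, 40), torch.rand(1, 2, 100, 3, 32, 40))
print(out.shape)  # torch.Size([1, 100, 32, 40])
```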

S500, setting the learning rate of the first neural network to 10^-5 and limiting the L2 norm to at most 1.0, and: selecting a plurality of images on a first data set as the first image sequence defined in step S100, then obtaining the corresponding reference image and plane scanning volume according to steps S200 and S300 respectively, and using the reference image and the plane scanning volume as input to pre-train the first neural network; after 320000 training iterations, saving the model parameters of the first neural network; this step iteratively trains the first neural network using the adaptive moment estimation method (Adam) and saves the model parameters, and:

the formula of the loss function used to train the first neural network is:

$$\text{loss}(x, y) = -x_{y} + \log\left(\sum_{j=1}^{n} \exp(x_{j})\right)$$

wherein n is the number of disparity levels, y is the disparity level corresponding to the ground-truth value, and x is the set of predicted values, one per disparity level;
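
Continuing the sketch above, one training iteration of S500 might look as follows, assuming the loss is the softmax cross-entropy over disparity levels implied by the variable definitions (n levels, true level y, predicted scores x) and reading the L2 constraint as gradient-norm clipping; both readings are assumptions rather than statements about the patented implementation, and FirstNetwork is reused from the previous sketch.

```python
import torch
import torch.nn as nn

def train_step(net, optimizer, ref, psv, gt_levels):
    """One training iteration.  gt_levels: (B, H, W) tensor of ground-truth
    disparity level indices per pixel."""
    # Softmax cross-entropy over the n disparity levels:
    # loss(x, y) = -x_y + log(sum_j exp(x_j)), averaged over pixels.
    criterion = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    scores = net(ref, psv)                      # (B, n_levels, H, W)
    loss = criterion(scores, gt_levels)
    loss.backward()
    # Read "limit the L2 norm to at most 1.0" as clipping the gradient norm.
    torch.nn.utils.clip_grad_norm_(net.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()

# Adam with learning rate 10^-5 as stated in S500; net = FirstNetwork() from the sketch above.
net = FirstNetwork()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-5)
loss = train_step(net, optimizer,
                  torch.rand(1, 3, 32, 40),
                  torch.rand(1, 2, 100, 3, 32, 40),
                  torch.randint(0, 100, (1, 32, 40)))
```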

s600, constructing a second neural network, wherein the second neural network is the asymmetric deep convolution neural network; and, the second neural network: the feature fusion module and the feature aggregation module in the first neural network are adopted as they are, but the first parallax prediction core module in the first neural network is replaced by the second parallax prediction core module, so as to form a structure that is connected from front to back in sequence: a feature fusion module, a second disparity prediction core module, a feature aggregation module, and:

wherein:

the second disparity prediction core module comprises, connected in sequence from front to back, the 1st to 6th feature extraction convolution blocks and the 1st to 5th disparity prediction convolution blocks; the second disparity prediction core module performs feature extraction through the 1st to 6th feature extraction convolution blocks and performs disparity information prediction through the 1st to 5th disparity prediction convolution blocks;

S700, setting the learning rate of the second neural network to 10^-6 and limiting the L2 norm to at most 0.1, initializing the second neural network with the model parameters of the first neural network saved in step S500, and: selecting a plurality of images on the second data set and the optional third data set as the first image sequence defined in step S100, then obtaining the corresponding reference image and plane scanning volume according to steps S200 and S300 respectively, using the reference image and the plane scanning volume as input to train the second neural network, and saving the model parameters of the second neural network after 320000 training iterations; this step iteratively trains the second neural network using the adaptive moment estimation method (Adam) and saves the model parameters, and:

the formula of the loss function used to train the second neural network is:

$$\text{loss}(x, y) = -x_{y} + \log\left(\sum_{j=1}^{n} \exp(x_{j})\right)$$

wherein n is the number of disparity levels, y is the disparity level corresponding to the ground-truth value, and x is the set of predicted values, one per disparity level;
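
One plausible way to carry out the initialization in S700 is sketched below: since the second network reuses the feature fusion and feature aggregation modules and only swaps the disparity prediction core, loading the saved state dict with strict=False copies the shared weights and skips the keys of the replaced core. SecondNetwork is a hypothetical class mirroring the structure of S600, and the file name is illustrative.

```python
import torch

# S500: save the pre-trained first network after the 320000 iterations.
torch.save(net.state_dict(), "first_net.pth")

# S700: build the asymmetric second network (hypothetical class) and copy
# the shared fusion/aggregation parameters; strict=False skips the keys
# that belong to the replaced disparity prediction core module.
second_net = SecondNetwork()
missing, unexpected = second_net.load_state_dict(torch.load("first_net.pth"), strict=False)
print("not initialized from the first network:", missing)

# Adam with learning rate 10^-6, as stated in S700.
optimizer = torch.optim.Adam(second_net.parameters(), lr=1e-6)
```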

and S800, taking a plurality of images of different view angles of another certain determined scene to be predicted as a first image sequence, then respectively obtaining a corresponding reference image and a corresponding plane scanning volume according to the steps S200 and S300, taking the reference image and the plane scanning volume as input, and obtaining a depth map of the reference image in the determined scene through the second neural network obtained by training in the step S700.

By now it can be appreciated that, once the second neural network, the key of the method, has been trained, it can naturally be used to solve the depth prediction problem for another actual scene to be predicted. It should be noted that, by using the adaptive moment estimation method (Adam), each parameter in the training and optimization of the neural network model obtains its own adaptive learning rate, which improves both the quality and the speed of optimization. The loss function is then used to optimize the depth map output by the model and relates to the probability assigned to the true disparity level.

In another embodiment, the parallax level in step S200 is determined by:

inputting the first image sequence into the COLMAP three-dimensional reconstruction system, and using the COLMAP system to estimate the camera poses and the distances between the features in the sparse reconstruction model; the largest of these distances is the maximum disparity, the maximum disparity is taken as the highest disparity level, the maximum disparity is divided into equal parts, and each equal part is taken as one disparity level, wherein:

the sparse reconstruction model is a point cloud model reconstructed by the three-dimensional reconstruction COLMAP system according to color and depth information contained in the received image sequence;

each feature is a feature of an object captured in the point cloud model and is characterized by point clouds corresponding to objects at different depths.

For the purposes of this embodiment, it gives a way of determining the level of disparity, which makes use of the three-dimensional reconstruction COLMAP system. It will be appreciated that the disparity level may also be determined in other suitable ways.
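
A small sketch of the quantization just described, under the assumption that the range from zero to the maximum disparity estimated by COLMAP is split into equal parts and the centre of each part is used as the disparity level; the input value is illustrative.

```python
import numpy as np

def disparity_levels(max_disparity, num_levels=100):
    """Divide [0, max_disparity] into num_levels equal parts and return the
    centre of each part as one disparity level (an assumed convention)."""
    step = max_disparity / num_levels
    return step / 2.0 + step * np.arange(num_levels)

levels = disparity_levels(max_disparity=0.8, num_levels=100)
print(levels[0], levels[-1])  # 0.004 0.796
```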

In another embodiment, wherein the second neural network in step S600 further comprises:

the 1st to 3rd parallax enhancement convolution blocks;

the second neural network further acts on the 3rd to 5th disparity prediction convolution blocks through the 1st to 3rd parallax enhancement convolution blocks, respectively, so as to double the spatial size of the feature maps and refine the finally output prediction result, namely the predicted information at each disparity level.

For this embodiment, the parallax enhancement convolution blocks double the spatial size of the feature maps and thus refine the prediction result.

In another embodiment, wherein:

for the feature fusion module, the number of channels of the feature map output by the four convolutional layers is respectively: 64, 96, 32, 4;

for the feature aggregation module, the two convolution layers output aggregated feature maps with 400 and 100 channels respectively, so the feature aggregation module finally generates an aggregated feature map with 100 channels.

It is to be understood that these are specific choices of channel numbers, and the disclosure obviously does not exclude other reasonable and practical channel configurations.

In another embodiment, wherein:

the first data set, the second data set and the third data set are ImageNet, DeMoN and MVS-SYNTH data sets respectively.

It should be noted that the neural network is trained using the public data sets DeMoN (which comprises the real-scene data sets SUN3D, RGB-D SLAM, CITYWALL and achtech-TURM and the synthetic data set Scenes11) and MVS-SYNTH, for the following reasons: the DeMoN data set contains tens of thousands of real indoor and outdoor scenes, including corridors, offices, study rooms, libraries, warehouses, buildings, parks and the like, and each scene contains a varying number of images with different resolutions; the MVS-SYNTH data set was captured from a game scene and contains 120 scenes, each with 100 images at a resolution of 1920 x 1080. When used in this disclosure, data augmentation is performed by additionally resizing the frames to 1280 x 720 and 960 x 540 (see the sketch below), which expands the synthetic scene data threefold and also yields images of different resolutions. It will be appreciated that, preferably, the images of different viewing angles in each scene constitute one image sequence.
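
The resolution-based augmentation mentioned above can be sketched as follows: besides keeping the original 1920 x 1080 frame, each MVS-SYNTH image is resized to 1280 x 720 and 960 x 540, tripling the synthetic data. This is an illustrative sketch, not the authors' preprocessing code.

```python
import cv2

TARGET_SIZES = [(1920, 1080), (1280, 720), (960, 540)]  # (width, height)

def augment_by_resolution(frame):
    """Return the original-resolution frame plus its two downscaled copies."""
    return [cv2.resize(frame, size, interpolation=cv2.INTER_AREA)
            for size in TARGET_SIZES]
```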

In another embodiment, wherein:

the second data set comprises any one or a combination of the following two data sets: a real data set, a synthetic data set;

the third data set is a synthetic data set that supplements the second data set.

For this embodiment, real-scene data sets contain measurement errors, whereas synthetic data sets have a less realistic appearance and cannot reproduce certain properties of real images, such as illumination and depth of field. The synthetic data set therefore serves as a supplement to the real data set. For example, as described later, the test set consists of the ETH3D data set and 10 self-captured sets of outdoor scene data; ETH3D contains 13 sets of real indoor and outdoor scenes together with image depth maps obtained with a high-precision laser scanner.

In another embodiment, wherein:

for the second disparity prediction core module, the 1st and 2nd feature extraction convolution blocks each consist of a first 5 x 5 convolution layer and a second 5 x 5 convolution layer connected in sequence from front to back, wherein the first convolution layer of the 2nd feature extraction convolution block has a stride of 2; the 3rd to 6th feature extraction convolution blocks each consist of a first 3 x 3 convolution layer and a second 3 x 3 convolution layer connected in sequence from front to back, wherein the first convolution layer has a stride of 2.

In another embodiment, wherein:

for the second disparity prediction core module, the 1st to 5th disparity prediction convolution blocks each consist of an upsampling layer, a first 3 x 3 convolution layer and a second 3 x 3 convolution layer connected in sequence from front to back.

In another embodiment, wherein:

for the second disparity prediction core module, the 1st to 3rd parallax enhancement convolution blocks each consist of a 3 x 3 convolution layer and an upsampling layer connected in sequence from front to back, and:

the input of the convolution layer in the 1st parallax enhancement convolution block comes from the output of the second 3 x 3 convolution layer in the 2nd disparity prediction convolution block;

the upsampling layer in the 1st parallax enhancement convolution block further outputs to the second 3 x 3 convolution layer in the 3rd disparity prediction convolution block;

the input of the convolution layer in the 2nd parallax enhancement convolution block comes from the output of the second 3 x 3 convolution layer in the 3rd disparity prediction convolution block;

the upsampling layer in the 2nd parallax enhancement convolution block further outputs to the second 3 x 3 convolution layer in the 4th disparity prediction convolution block;

the input of the convolution layer in the 3rd parallax enhancement convolution block comes from the output of the second 3 x 3 convolution layer in the 4th disparity prediction convolution block;

the upsampling layer in the 3rd parallax enhancement convolution block further outputs to the second 3 x 3 convolution layer in the 5th disparity prediction convolution block.

In another embodiment, wherein:

the numbers of channels of the feature maps output by the 1st to 6th feature extraction convolution blocks are respectively: 600, 800, 1000, 1000, 1000, 1000;

the numbers of channels of the feature maps output by the 1st to 5th disparity prediction convolution blocks are respectively: 1000, 1000, 800, 600, 800;

the numbers of channels of the feature maps output by the 1st to 3rd parallax enhancement convolution blocks are respectively: 100, 100, 100;

and,

a skip connection structure is arranged between each feature extraction convolution block and each disparity prediction convolution block that output feature maps of the same size, and it splices the feature extraction result and the disparity prediction result together along the channel dimension, as follows:

a skip connection structure is arranged between the 1st feature extraction convolution block and the 5th disparity prediction convolution block;

a skip connection structure is arranged between the 2nd feature extraction convolution block and the 4th disparity prediction convolution block;

a skip connection structure is arranged between the 3rd feature extraction convolution block and the 3rd disparity prediction convolution block;

a skip connection structure is arranged between the 4th feature extraction convolution block and the 2nd disparity prediction convolution block;

a skip connection structure is arranged between the 5th feature extraction convolution block and the 1st disparity prediction convolution block.

It should be noted that the skip connection structures make full use of spatial features at different scales to improve the prediction result.
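
Putting claims 7 to 10 together, the sketch below wires up one plausible reading of the second disparity prediction core module: six feature extraction convolution blocks (the encoder; 5 x 5 convolutions in blocks 1-2, 3 x 3 afterwards, stride 2 from block 2 on), five disparity prediction convolution blocks (the decoder; upsampling plus two 3 x 3 convolutions), three parallax enhancement convolution blocks feeding the second convolution of decoder blocks 3-5, and skip connections FE1-DP5 through FE5-DP1 spliced along the channel dimension. Channel widths are scaled down tenfold from the 600-1000 values quoted above so the example runs quickly, the 4-channel input is assumed to be the fused feature map, and merging skip and enhancement tensors by concatenation is an assumption, not a statement of the patented design.

```python
import torch
import torch.nn as nn

def conv(cin, cout, k, stride=1):
    """Convolution followed by the SELU activation preferred in the disclosure."""
    return nn.Sequential(nn.Conv2d(cin, cout, k, stride=stride, padding=k // 2), nn.SELU())

class SecondCoreSketch(nn.Module):
    def __init__(self, cin=4):
        super().__init__()
        fe = [60, 80, 100, 100, 100, 100]   # encoder widths (document: 600..1000)
        dp = [100, 100, 80, 60, 80]         # decoder widths (document: 1000..800)
        de = 10                             # enhancement width (document: 100)
        # 1st-6th feature extraction convolution blocks.
        self.fe = nn.ModuleList()
        prev = cin
        for i, c in enumerate(fe):
            k = 5 if i < 2 else 3           # blocks 1-2: 5x5 convs, blocks 3-6: 3x3 convs
            s = 1 if i == 0 else 2          # blocks 2-6 downsample with stride 2
            self.fe.append(nn.Sequential(conv(prev, c, k, s), conv(c, c, k)))
            prev = c
        # 1st-5th disparity prediction convolution blocks (upsample + two convs).
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        skips = fe[4::-1]                   # FE5, FE4, FE3, FE2, FE1 channel widths
        self.dp_first, self.dp_second = nn.ModuleList(), nn.ModuleList()
        prev = fe[-1]
        for i, c in enumerate(dp):
            self.dp_first.append(conv(prev + skips[i], c, 3))
            self.dp_second.append(conv(c + (de if i >= 2 else 0), c, 3))
            prev = c
        # 1st-3rd parallax enhancement convolution blocks (conv + upsample),
        # fed by the outputs of decoder blocks 2-4.
        self.de = nn.ModuleList([conv(dp[i + 1], de, 3) for i in range(3)])

    def forward(self, x):
        feats = []
        for block in self.fe:
            x = block(x)
            feats.append(x)
        skips = feats[4::-1]                # FE5 ... FE1 outputs
        y, enhance = feats[-1], None
        for i in range(5):
            # Skip connection: concatenate the matching encoder feature map.
            y = self.dp_first[i](torch.cat([self.up(y), skips[i]], dim=1))
            if i >= 2:                      # decoder blocks 3-5 also receive the enhanced features
                y = torch.cat([y, enhance], dim=1)
            y = self.dp_second[i](y)
            if 1 <= i <= 3:                 # outputs of decoder blocks 2-4 feed DE1-DE3
                enhance = self.up(self.de[i - 1](y))
        return y

core = SecondCoreSketch()
print(core(torch.rand(1, 4, 64, 96)).shape)  # torch.Size([1, 80, 64, 96])
```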

In the present disclosure, the nonlinear activation functions of all convolutional layers preferably employ the scaled exponential linear unit (SELU) activation function. For the multi-view depth prediction problem addressed by the present disclosure, the inventors found that the scaled exponential linear unit helps prevent overfitting and, compared with other activation functions, avoids the vanishing-gradient problem caused by entering a nonlinear saturation region.

The inventors tested embodiments of the present disclosure as follows:

the invention takes the Invitta GPU as a computing platform and uses a PyTorch deep learning frame as an asymmetric deep convolution neural network frame. Due to the GPU memory constraints, we set the disparity level to 100 and the number of neighboring images to 4, and compute a planar scan volume of 4 x 100.

Following the flow shown in fig. 1 and using the network structure shown in fig. 2, the results are shown in fig. 3a and fig. 3b. The experimental environment is: graphics card: NVIDIA TITAN XP; PyTorch version 0.3.1. Testing uses the ETH3D data set and compares against traditional algorithms and algorithms based on deep networks. The results are evaluated with the following metrics; the smaller the three values, the higher the prediction accuracy of the network and the stronger its prediction ability:

Figure BDA0002239546110000111

Figure BDA0002239546110000112

Figure BDA0002239546110000113

wherein d isiFor the depth value to be predicted,

Figure BDA0002239546110000114

a true depth value is represented which is,

l1-inv denotes the L1 distance between the predicted and true values. L1-rel represents the relative error between the predicted and true values. SC-inv denotes the scale-invariant error of the predicted and true values.
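
The three evaluation metrics follow directly from the formulas above; a small NumPy sketch with illustrative values is:

```python
import numpy as np

def l1_inv(d_pred, d_true):
    """Mean absolute difference between inverse depths."""
    return np.mean(np.abs(1.0 / d_pred - 1.0 / d_true))

def l1_rel(d_pred, d_true):
    """Mean absolute error relative to the true depth."""
    return np.mean(np.abs(d_pred - d_true) / d_true)

def sc_inv(d_pred, d_true):
    """Scale-invariant error computed on log depths."""
    z = np.log(d_pred) - np.log(d_true)
    return np.sqrt(np.mean(z ** 2) - np.mean(z) ** 2)

pred = np.array([1.0, 2.0, 4.0])
true = np.array([1.2, 2.0, 3.5])
print(l1_inv(pred, true), l1_rel(pred, true), sc_inv(pred, true))
```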

Comparison of results: the quantitative comparison table (reporting L1-inv, L1-rel and sc-inv for the compared methods) is provided as a figure in the original document and is not reproduced here.

the method disclosed by the invention has higher accuracy and robustness on objects such as sky, branches, glass and the like, enhances the expandability, obviously improves the depth prediction performance and obtains a good technical effect by combining the attached drawings.

The above is merely a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, which may be variously modified and varied by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
