Disparity map acquisition method, device, terminal and storage medium

Document No.: 551858 | Publication date: 2021-05-14

Reading note: this technique, "Disparity map acquisition method, device, terminal and storage medium", was designed and created by Xu Bin, Yang Xiaoli, Yu Yushan, Huang Yuanhao and Xiao Zhenzhong on 2021-01-19. The application is applicable to the field of computer vision, and provides a disparity map acquisition method, a disparity map acquisition device, a terminal and a storage medium. The disparity map acquisition method comprises the following steps: performing a dimensionality reduction operation on a first cost space obtained by performing cost calculation on the basis of a left viewpoint image and a right viewpoint image, performing cost aggregation and other operations on the basis of the dimension-reduced first cost space, then performing upsampling on the basis of a bilateral grid, and calculating a first disparity map between the left viewpoint image and the right viewpoint image. The method provided by the embodiments of the application can guarantee the accuracy of the disparity map while improving the real-time performance of disparity map acquisition.

1. A disparity map acquisition method is characterized by comprising the following steps:

acquiring a left viewpoint image and a right viewpoint image, inputting the left viewpoint image and the right viewpoint image into a feature extraction network, and acquiring a left feature image corresponding to the left viewpoint image and a right feature image corresponding to the right viewpoint image which are output by the feature extraction network;

performing cost calculation based on the left feature image and the right feature image to obtain a first cost space of the left feature image and the right feature image;

reducing the dimension of the first cost space, and performing cost aggregation on the dimension-reduced first cost space to obtain a second cost space;

establishing a bilateral grid corresponding to the second cost space, and performing an interpolation operation on the bilateral grid to obtain a third cost space;

and calculating a first disparity map corresponding to the left viewpoint image and the right viewpoint image by using the third cost space.

2. The method for acquiring a disparity map according to claim 1, wherein the establishing a bilateral grid corresponding to the second cost space and performing an interpolation operation on the bilateral grid to obtain a third cost space comprises:

performing a convolution operation on the second cost space to obtain a bilateral grid corresponding to the second cost space;

and acquiring a guide map, and performing an interpolation operation on the bilateral grid by using the guide map to obtain the third cost space.

3. The method according to claim 1, wherein the left feature image includes a left feature sub-image output by each convolution layer of the feature extraction network, the right feature image includes a right feature sub-image output by each convolution layer of the feature extraction network, and the performing cost calculation based on the left feature image and the right feature image to obtain a first cost space of the left feature image and the right feature image includes:

respectively stitching the left feature sub-images and the right feature sub-images to obtain a left unary feature map corresponding to the left feature image and a right unary feature map corresponding to the right feature image;

cascading the left unary features in the left unary feature map and the right unary features in the right unary feature map at each disparity level to obtain feature pairs having a cascading relationship;

and grouping the feature pairs to obtain a plurality of feature groups, and performing cost calculation on the left unary feature and the right unary feature in each feature pair in each feature group to obtain the first cost space.

4. The method according to claim 3, wherein the performing the cost calculation on the left unary feature and the right unary feature in each feature pair in each feature group comprises:

using a cost calculation formula

$$C_{gwc}(d, x, y, g) = \frac{1}{N_g}\left\langle f_l^g(x, y),\ f_r^g(x - d, y)\right\rangle$$

to perform cost calculation on the left unary feature and the right unary feature in each feature pair in each feature group to obtain each cost value C_gwc in the first cost space;

wherein d represents the disparity, f_l^g represents the left unary feature after grouping, (x, y) represents a position of the left unary feature in the left feature image, f_r^g represents the right unary feature associated with the left unary feature after grouping, N_C represents the number of the left unary features in the left feature image, N_g represents the number of the left unary features within one of the feature groups, and the number of the feature groups is N_C/N_g.

5. The disparity map acquisition method according to claim 1, wherein the inputting the left viewpoint image and the right viewpoint image into a feature extraction network, comprises:

inputting the left viewpoint image into a first feature extraction network, and inputting the right viewpoint image into a second feature extraction network, wherein weights of the first feature extraction network and the second feature extraction network are shared.

6. The method for acquiring a disparity map according to claim 1, wherein the calculating a first disparity map corresponding to the left viewpoint image and the right viewpoint image using the third cost space comprises:

performing disparity regression processing on the third cost space by using a differentiable soft argmin function to obtain a second disparity map;

and refining the second disparity map, and updating the disparity values in the second disparity map to obtain the first disparity map.

7. A disparity map acquisition apparatus, comprising:

the feature extraction module is used for acquiring a left viewpoint image and a right viewpoint image, inputting the left viewpoint image and the right viewpoint image into a feature extraction network of the disparity map acquisition model, and outputting, by the feature extraction network, a left feature image corresponding to the left viewpoint image and a right feature image corresponding to the right viewpoint image;

the cost calculation module is used for performing, through the disparity map acquisition model, cost calculation based on the left feature image and the right feature image to obtain a first cost space of the left feature image and the right feature image;

the cost aggregation module is used for reducing, through the disparity map acquisition model, the dimension of the first cost space, and performing cost aggregation on the dimension-reduced first cost space to obtain a second cost space;

the bilateral grid module is used for establishing, through the disparity map acquisition model, a bilateral grid corresponding to the second cost space, and performing an interpolation operation on the bilateral grid to obtain a third cost space;

and the disparity map acquisition module is used for calculating, through the disparity map acquisition model, a first disparity map corresponding to the left viewpoint image and the right viewpoint image by using the third cost space.

8. The apparatus according to claim 7, wherein the apparatus further comprises a training module, configured to input a sample image into a to-be-trained disparity map acquisition model to obtain a sample disparity map, input the sample disparity map and a pre-stored real disparity map into a loss function, update parameters of the feature extraction module, the cost calculation module, the cost aggregation module, the bilateral grid module, and the disparity map acquisition module in the to-be-trained disparity map acquisition model in a manner of minimizing a loss value of the loss function by using a gradient descent method until a change rate of a loss value of the loss function is smaller than a preset threshold, obtain a trained disparity map acquisition model, and use the trained disparity map acquisition model as the apparatus for acquiring disparity maps.

9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.

10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.

Technical Field

The present application belongs to the field of computer vision, and in particular, to a disparity map acquisition method, apparatus, terminal, and storage medium.

Background

Parallax (disparity) refers to the difference in apparent direction that results from viewing the same object from two points separated by some distance. A disparity map is an image, referenced to either the left viewpoint image or the right viewpoint image, whose element values are disparity values. Since disparity values encode the distance information of a scene, acquiring a disparity map and calculating a depth map based on the disparity map are among the most active fields in binocular vision research.

The precision of the disparity map influences the subsequent three-dimensional reconstruction effect, while the speed of disparity map acquisition determines whether a binocular vision system can process the acquired information in real time. However, in current disparity map acquisition methods, precision and speed constrain each other: high-precision disparity map algorithms mostly use global methods such as graph cuts or belief propagation, so they run slowly, processing a single frame can take several minutes, and the real-time requirement cannot be met.

Disclosure of Invention

The embodiments of the application provide a disparity map acquisition method, device, terminal and storage medium, which can improve the real-time performance of disparity map acquisition while ensuring the precision of the disparity map.

A first aspect of an embodiment of the present application provides a disparity map obtaining method, including:

acquiring a left viewpoint image and a right viewpoint image, inputting the left viewpoint image and the right viewpoint image into a feature extraction network, and acquiring a left feature image corresponding to the left viewpoint image and a right feature image corresponding to the right viewpoint image which are output by the feature extraction network;

performing cost calculation based on the left feature image and the right feature image to obtain a first cost space of the left feature image and the right feature image;

reducing the dimension of the first cost space, and performing cost aggregation on the dimension-reduced first cost space to obtain a second cost space;

establishing a bilateral grid corresponding to the second cost space, and performing an interpolation operation on the bilateral grid to obtain a third cost space;

and calculating a first disparity map corresponding to the left viewpoint image and the right viewpoint image by using the third cost space.

A second aspect of the embodiments of the present application provides a disparity map obtaining apparatus, including:

and the feature extraction module is used for acquiring the left viewpoint image and the right viewpoint image, inputting the left viewpoint image and the right viewpoint image into a feature extraction network of the disparity map acquisition model, and outputting a left feature image corresponding to the left viewpoint image and a right feature image corresponding to the right viewpoint image by the feature extraction network.

The cost calculation module is used for performing cost calculation based on the left feature image and the right feature image through the disparity map acquisition model to obtain a first cost space of the left feature image and the right feature image.

The cost aggregation module is used for reducing the dimension of the first cost space through the disparity map acquisition model, and performing cost aggregation on the dimension-reduced first cost space to obtain a second cost space.

The bilateral grid module is used for establishing a bilateral grid corresponding to the second cost space through the disparity map acquisition model, and performing an interpolation operation on the bilateral grid to obtain a third cost space.

The disparity map acquisition module is used for calculating a first disparity map corresponding to the left viewpoint image and the right viewpoint image by using the third cost space through the disparity map acquisition model.

A third aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.

A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the above method.

A fifth aspect of embodiments of the present application provides a computer program product, which when run on a terminal, causes the terminal to perform the steps of the method.

It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.

In the embodiments of the application, a dimensionality reduction operation is performed on the first cost space obtained by performing cost calculation on the basis of the left viewpoint image and the right viewpoint image, cost aggregation and other operations are performed on the basis of the dimension-reduced first cost space, upsampling is then performed on the basis of a bilateral grid, and the first disparity map between the left viewpoint image and the right viewpoint image is calculated. Because these operations can be carried out in the low-resolution cost space obtained after dimensionality reduction, the rate of obtaining the disparity map is improved; meanwhile, because the upsampling uses a bilateral grid, the precision of the disparity map is guaranteed. Therefore, the method provided by the application can ensure the accuracy of the disparity map while improving the real-time performance of disparity map acquisition.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort.

Fig. 1 is a schematic flow chart illustrating an implementation of a disparity map obtaining method according to an embodiment of the present application;

fig. 2 is a schematic flowchart of a specific implementation of step S104 according to an embodiment of the present application;

fig. 3 is a schematic flowchart of a specific implementation of step S105 provided in an embodiment of the present application;

fig. 4 is a schematic flowchart of a specific implementation of step S102 according to an embodiment of the present application;

fig. 5 is a schematic flow chart illustrating an implementation of obtaining a disparity map based on a disparity map obtaining model according to an embodiment of the present application;

fig. 6 is a schematic structural diagram of a disparity map obtaining apparatus according to an embodiment of the present application;

fig. 7 is a schematic structural diagram of a terminal according to an embodiment of the present application.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.


In order to explain the technical means of the present application, the following description will be given by way of specific examples.

Fig. 1 shows a schematic implementation flow diagram of a disparity map acquisition method provided in an embodiment of the present application, where the method can be applied to a terminal and is applicable to a situation where the real-time property of acquiring a disparity map needs to be improved while the accuracy of the disparity map is guaranteed.

The terminal can be a computer, a smartphone, or another similar device.

Specifically, the above-described disparity map acquisition method may include the following steps S101 to S105.

Step S101, a left viewpoint image and a right viewpoint image are obtained, the left viewpoint image and the right viewpoint image are input into a feature extraction network, and a left feature image corresponding to the left viewpoint image and a right feature image corresponding to the right viewpoint image output by the feature extraction network are obtained.

In the embodiment of the present application, in order to calculate the parallax between the left viewpoint image and the right viewpoint image, it is necessary to perform feature extraction on the left viewpoint image and the right viewpoint image. Specifically, the left viewpoint image and the right viewpoint image are input into the feature extraction network, so that a left feature image corresponding to the left viewpoint image and a right feature image corresponding to the right viewpoint image output by the feature extraction network are obtained.

In order to improve the real-time performance of disparity map acquisition, in some embodiments of the present application, the inputting the left viewpoint image and the right viewpoint image into the feature extraction network may include: the left viewpoint image is input to a first feature extraction network, and the right viewpoint image is input to a second feature extraction network. Wherein the weights of the first feature extraction network and the second feature extraction network are shared.

That is, in some embodiments of the present application, the left viewpoint image and the right viewpoint image may be input into two weight-shared feature extraction networks respectively, so as to obtain the corresponding left feature image and right feature image.

In addition, in some embodiments of the present application, before the left viewpoint image and the right viewpoint image are input into the feature extraction network, an alignment (rectification) operation may be performed on them. The aligned left viewpoint image and right viewpoint image have the same image resolution, so that a correspondence exists between the pixel points of the two images.

For example, the aligned left and right viewpoint images may be input into a weight-sharing residual network model, and left and right feature maps at 1/8 of the target resolution may be extracted using the residual network model. The target resolution refers to the resolution of the finally output disparity map, and the number of channels of the obtained feature maps may be 32. Preferably, the first three layers of the residual network model may be three 3 × 3 convolutional layers with strides of 2, 1 and 1, respectively, followed by four 3 × 3 residual layers with strides of 1, 2, 2 and 1, respectively.
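As an illustration, the following is a minimal PyTorch sketch of such a backbone. It is a sketch under assumptions, not the patent's prescribed implementation: the residual block structure, BatchNorm placement, and all channel widths other than the final 32 are illustrative.

```python
import torch.nn as nn

class BasicResBlock(nn.Module):
    """3x3 residual block; downsamples spatially when stride > 1."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.skip = None
        if stride != 1 or in_ch != out_ch:
            self.skip = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                nn.BatchNorm2d(out_ch))

    def forward(self, x):
        identity = x if self.skip is None else self.skip(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)

class FeatureExtractor(nn.Module):
    """Three 3x3 convs (strides 2, 1, 1) followed by four 3x3 residual
    layers (strides 1, 2, 2, 1): overall 1/8 resolution, 32 channels."""
    def __init__(self, out_ch=32):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, 1, 1), nn.ReLU(inplace=True))
        self.res = nn.Sequential(
            BasicResBlock(32, 32, 1),
            BasicResBlock(32, 32, 2),
            BasicResBlock(32, 32, 2),
            BasicResBlock(32, out_ch, 1))

    def forward(self, x):
        return self.res(self.stem(x))

# Weight sharing: apply the SAME module instance to both views.
# extractor = FeatureExtractor()
# left_feat, right_feat = extractor(left_img), extractor(right_img)
```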

In the above embodiment, the feature extraction network is exemplified by a residual network model; in practical applications, however, the feature extraction network may also be a common convolutional network model such as PVANET, which the present application does not limit. Moreover, the feature extraction network receiving the left viewpoint image and the feature extraction network receiving the right viewpoint image may be network models of different types but with shared weights.

In order to improve the reliability of the disparity map, in some embodiments of the present application, the feature extraction network may further include a pooling layer for expanding the receptive field, so that global context information is utilized to enrich the image features in the process of obtaining the disparity map, which facilitates constructing the cost space.

Specifically, the pooling layer may be a Spatial Pyramid Pooling (SPP) module, and the spatial pyramid pooling module is connected to the last convolutional layer in the feature extraction network.
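A compact sketch of such an SPP module in PyTorch follows; the pooling scales, branch channel widths and fusion convolution are illustrative assumptions, as the patent does not fix them:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPPModule(nn.Module):
    """Pools the last conv features at several scales, then fuses them,
    enlarging the receptive field with global context."""
    def __init__(self, in_ch=32, out_ch=32, pool_sizes=(2, 4, 8, 16)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.AvgPool2d(p, stride=p),
                          nn.Conv2d(in_ch, in_ch // 4, 1, bias=False),
                          nn.ReLU(inplace=True))
            for p in pool_sizes])
        self.fuse = nn.Conv2d(in_ch + len(pool_sizes) * (in_ch // 4),
                              out_ch, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        # Upsample each pooled branch back to the input size and concatenate.
        outs = [x] + [F.interpolate(b(x), size=(h, w), mode='bilinear',
                                    align_corners=False)
                      for b in self.branches]
        return self.fuse(torch.cat(outs, dim=1))
```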

Step S102, performing cost calculation based on the left feature image and the right feature image to obtain a first cost space of the left feature image and the right feature image.

In the embodiment of the application, cost calculation refers to calculating, for each pixel point on the left feature image, the cost value of matching it with the corresponding point on the right feature image under every possible disparity, or calculating, for each pixel point on the right feature image, the cost value of matching it with the corresponding point on the left feature image under every possible disparity. The calculated cost values may be stored in a first cost space (cost volume).

Specifically, in the embodiment of the present application, cost calculation may be performed based on the left feature image and the right feature image by using a Sum of Absolute Differences (SAD), a Sum of Squared Differences (SSD), normalized cross-correlation (NCC), or other cost calculation algorithms, so as to obtain a first cost space of the left feature image and the right feature image.
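As a concrete illustration of one of the classical costs named above, a minimal SAD cost-volume sketch in PyTorch follows; the tensor layout and the default cost for unmatched border columns are assumptions, and the detailed embodiments below use a grouped correlation instead:

```python
import torch

def sad_cost_volume(left_feat, right_feat, max_disp):
    """Sum-of-absolute-differences (SAD) cost volume.
    left_feat, right_feat: (N, C, H, W) feature maps.
    Returns costs of shape (N, max_disp, H, W); lower is better."""
    n, c, h, w = left_feat.shape
    # Large default cost for border columns that have no valid match.
    cost = left_feat.new_full((n, max_disp, h, w), 1e4)
    for d in range(max_disp):
        if d == 0:
            cost[:, d] = (left_feat - right_feat).abs().sum(dim=1)
        else:
            # Pixel (x, y) in the left view matches (x - d, y) in the right view.
            cost[:, d, :, d:] = (left_feat[..., d:] -
                                 right_feat[..., :w - d]).abs().sum(dim=1)
    return cost
```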

Step S103, reducing the dimension of the first cost space, and performing cost aggregation on the dimension-reduced first cost space to obtain a second cost space.

In some embodiments of the present application, a convolutional layer may be utilized to perform a convolution operation on the first cost space, so as to achieve effective dimensionality reduction of the first cost space, increase the width of the network, and improve the adaptability of the network to multiple scales without increasing the network parameters.

Specifically, the first cost space may be reduced in dimension by a cost aggregation module, and the cost aggregation module may include two convolutional layers for performing convolution operations on the first cost space, thereby reducing the number of cost channels. For example, the number of cost channels may be reduced from 44 to 16.
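A minimal sketch of such a two-layer channel reduction follows, assuming the first cost space is stored as a 5-D tensor (batch, channels, disparity, height, width) and that the convolutions are three-dimensional; the kernel sizes and the intermediate width of 32 are assumptions:

```python
import torch.nn as nn

# Two 3-D convolutions over the (disparity, height, width) axes shrink the
# cost channels (e.g. from 44 to 16) without touching spatial resolution.
reduce_cost = nn.Sequential(
    nn.Conv3d(44, 32, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm3d(32),
    nn.ReLU(inplace=True),
    nn.Conv3d(32, 16, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm3d(16),
    nn.ReLU(inplace=True),
)
# cost: (N, 44, D, H, W)  ->  reduce_cost(cost): (N, 16, D, H, W)
```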

In the embodiment of the present application, cost aggregation refers to aggregating the cost values in the first cost space by summing, averaging or other methods, so as to obtain the accumulated cost of a pixel point on the image at disparity d. Through cost aggregation, the influence of abnormal points can be reduced and the Signal-to-Noise Ratio (SNR) improved, thereby improving the matching precision.

Specifically, in some embodiments of the present application, the dimension-reduced first cost space may be cost-aggregated through a U-Net network, and the aggregated second cost space is obtained through element-wise summation (Element-wise Summation).

It should be noted that the cost aggregation may also be performed by other convolution networks similar to U-Net to reduce the amount of computation, and the specific manner of cost aggregation is not limited herein.

Step S104, establishing a bilateral grid corresponding to the second cost space, and performing an interpolation operation on the bilateral grid to obtain a third cost space.

The bilateral grid (Bilateral Grid, BG) is a data structure. Based on the second cost space, a bilateral grid corresponding to the second cost space may be established, and a value may be filled in each position of the bilateral grid. In the embodiment of the application, a bilateral grid corresponding to the second cost space may be established, and a slicing operation, that is, a linear interpolation operation, may be performed on the bilateral grid to obtain the third cost space. The third cost space has a higher resolution than the second cost space.

That is, in the embodiment of the present application, the bilateral grid may be used to upsample the low-resolution second cost space to obtain a third cost space with high three-dimensional resolution.

Specifically, as shown in fig. 2, the establishing of the bilateral grid corresponding to the second cost space and performing an interpolation operation on the bilateral grid to obtain the third cost space may include steps S201 to S202.

Step S201, performing a convolution operation on the second cost space to obtain a bilateral grid corresponding to the second cost space.

Specifically, the second cost space may be converted into a bilateral grid β using a three-dimensional convolution in a convolutional network.

Step S202, acquiring a guide map, and performing an interpolation operation on the bilateral grid by using the guide map to obtain a third cost space.

Specifically, each cost value in the third cost space is

$$C_H(x, y, d) = \beta\big(s_x x,\ s_y y,\ s_d d,\ s_G\, G(x, y)\big);$$

wherein C_H(x, y, d) represents each cost value in the third cost space, d represents the disparity, x and y represent the abscissa and ordinate in the third cost space, respectively, s_x and s_y represent the width ratio and height ratio of the bilateral grid to the guide map G, respectively, s_d represents the corresponding ratio in the disparity dimension, and s_G is the ratio of the gray levels of the guide map to those of the bilateral grid.

In some embodiments of the present application, the guide map G may be obtained, by using two 3 × 3 convolutional layers, from the left feature image or the right feature image output by the feature extraction network in step S101. Note that the guide information of a pixel with coordinates (x, y) is the same across the disparity dimension.
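As an illustration, the slicing can be implemented with a single trilinear sampling call. The following PyTorch sketch assumes the bilateral grid β is stored as a 5-D tensor (batch, disparity, gray, height, width) and that the guide map is normalized to [0, 1]; the tensor layout and normalization are assumptions, not the patent's prescribed implementation. For 5-D inputs, PyTorch's grid_sample with mode='bilinear' performs trilinear interpolation.

```python
import torch
import torch.nn.functional as F

def slice_bilateral_grid(grid, guide):
    """Slicing: trilinear interpolation of a low-resolution bilateral grid
    at full resolution, steered by a guide map.
    grid:  (N, D, G, Hg, Wg) -- D disparity channels, G gray levels, on a
           coarse Hg x Wg raster (the ratios s_x, s_y, s_G of the text are
           implied by Hg/H, Wg/W and the gray range of the guide).
    guide: (N, H, W) with values in [0, 1].
    Returns a high-resolution cost volume of shape (N, D, H, W)."""
    n = grid.shape[0]
    h, w = guide.shape[-2:]
    # Pixel coordinates normalized to [-1, 1] for grid_sample.
    ys = torch.linspace(-1.0, 1.0, h, device=guide.device)
    xs = torch.linspace(-1.0, 1.0, w, device=guide.device)
    yy, xx = torch.meshgrid(ys, xs, indexing='ij')
    gg = guide * 2.0 - 1.0                      # gray level -> [-1, 1]
    coords = torch.stack((xx.expand(n, h, w),
                          yy.expand(n, h, w), gg), dim=-1)
    coords = coords.unsqueeze(1)                # (N, 1, H, W, 3)
    sliced = F.grid_sample(grid, coords, mode='bilinear',
                           align_corners=True)  # (N, D, 1, H, W)
    return sliced.squeeze(2)
```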

Step S105, calculating a first disparity map corresponding to the left viewpoint image and the right viewpoint image by using the third cost space.

In some embodiments of the present application, using the third cost space, the point with the best accumulated cost within the disparity search range may be selected as the corresponding matching point, and the disparity corresponding to that point is the required disparity, so that the first disparity map is obtained.

Specifically, in some embodiments of the present application, as shown in fig. 3, the calculating of the first disparity map corresponding to the left viewpoint image and the right viewpoint image by using the third cost space may include the following steps S301 to S302.

Step S301, performing disparity regression processing on the third cost space by using a differentiable soft argmin function to obtain a second disparity map.

Specifically, the differentiable soft argmin operation is defined as:

$$\hat{d}(x, y) = \sum_{d=0}^{D_{max}-1} d \times \operatorname{softmax}\big({-C_H(x, y, d)}\big);$$

wherein D_max denotes the maximum disparity, d denotes the disparity, C_H(x, y, d) represents a voxel (stereoscopic pixel point) in the third cost space, and the softmax is taken over the disparity dimension.
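A direct PyTorch transcription of this disparity regression might look as follows; the cost volume layout (N, D_max, H, W) is an assumption:

```python
import torch
import torch.nn.functional as F

def soft_argmin(cost_volume):
    """Differentiable disparity regression (soft argmin).
    cost_volume: (N, D_max, H, W). Softmax over the negated costs gives a
    probability per disparity level; the expectation over disparity levels
    yields a sub-pixel disparity map of shape (N, H, W)."""
    n, d_max, h, w = cost_volume.shape
    prob = F.softmax(-cost_volume, dim=1)
    disparities = torch.arange(d_max, device=cost_volume.device,
                               dtype=cost_volume.dtype).view(1, d_max, 1, 1)
    return (prob * disparities).sum(dim=1)
```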

Step S302, refining the second disparity map, and updating the disparity values in the second disparity map to obtain the first disparity map.

In some embodiments of the present application, the second disparity map may be fine-tuned by a residual network model, and the residual network model includes four convolutional layers with different dilation rates (dilations), so as to utilize the semantic information of the color image.

Specifically, the left feature image or the right feature image extracted in step S101 and the second disparity map obtained in step S301 may be input into the residual network model; the input passes through 4 residual convolutional layers with dilation rates of 1, 2, 4 and 8 and with 16 channels, and a residual map is output. The residual map is added to the second disparity map for refinement, the sum of the residual map and the second disparity map is calculated, and the disparity values are updated with the obtained sum, so as to obtain the first disparity map.

Note that each of the residual convolutional layers (except the last layer) is followed by a BN (Batch Normalization) layer and a ReLU layer; the last layer contains neither a BN layer nor a ReLU layer, the aim being to allow the learned residual to be either positive or negative. The sum of the second disparity map and the residual then passes through a ReLU layer to ensure that the output disparity is not negative. In some embodiments of the present application, the dimension of the first disparity map is H × W × 1, where H and W denote the height and width of the first disparity map; the dimension of the second disparity map may be H/2 × W/2 × 1.
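The following sketch assembles the refinement head as just described. Feeding the concatenation of the feature image and the coarse disparity into plain dilated convolutions (rather than full residual blocks), and producing a single-channel residual in the last layer, are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class DisparityRefinement(nn.Module):
    """Four 3x3 conv layers with dilation rates 1, 2, 4, 8 and 16 channels
    produce a signed residual map that is added to the coarse disparity;
    the final ReLU keeps the output non-negative."""
    def __init__(self, feat_ch=32):
        super().__init__()
        layers, in_ch = [], feat_ch + 1
        for rate in (1, 2, 4):
            layers += [nn.Conv2d(in_ch, 16, 3, padding=rate,
                                 dilation=rate, bias=False),
                       nn.BatchNorm2d(16), nn.ReLU(inplace=True)]
            in_ch = 16
        # Last layer: no BN/ReLU, so the residual can be positive or negative.
        layers += [nn.Conv2d(16, 1, 3, padding=8, dilation=8)]
        self.net = nn.Sequential(*layers)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, feat, coarse_disp):
        # feat: (N, feat_ch, H, W); coarse_disp: (N, H, W)
        x = torch.cat([feat, coarse_disp.unsqueeze(1)], dim=1)
        residual = self.net(x).squeeze(1)
        return self.relu(coarse_disp + residual)
```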

In order to reduce the amount of calculation, in some embodiments of the present application, the second disparity map may also be used as the first disparity map.

In the embodiment of the application, the dimensionality reduction operation is performed on the first cost space obtained by performing cost calculation on the basis of the left viewpoint image and the right viewpoint image, cost aggregation and other operations are performed on the basis of the dimension-reduced first cost space, upsampling is then performed on the basis of the bilateral grid, and the first disparity map between the left viewpoint image and the right viewpoint image is calculated. In this way, the operations can be carried out in the low-resolution cost space obtained after dimensionality reduction, which improves the rate of obtaining the disparity map; meanwhile, the upsampling using the bilateral grid ensures the precision of the disparity map.

In practical applications, the whole disparity map computation runs fast and with a high frame throughput; for example, 44 frames of images can be processed at a time, which is a great improvement in running speed compared with the prior art.

In order to further improve the accuracy of the disparity map of the present application and the real-time performance of disparity map acquisition, the left feature image may include the left feature sub-images output by the respective convolutional layers of the feature extraction network; similarly, the right feature image may include the right feature sub-images output by the respective convolutional layers of the feature extraction network.

In this case, as shown in fig. 4, the performing cost calculation based on the left feature image and the right feature image to obtain the first cost space of the left feature image and the right feature image may include: step S401 to step S403.

Step S401, respectively performing stitching processing on the left feature sub-image and the right feature sub-image to obtain a left unary feature map corresponding to the left feature image and a right unary feature map corresponding to the right feature image.

That is, after the left viewpoint image is input into the feature extraction network, each convolutional layer of the feature extraction network outputs a left feature sub-image; the features in the left feature sub-images can then be stitched to obtain the left unary feature map. Similarly, after the right viewpoint image is input into the feature extraction network, each convolutional layer of the feature extraction network outputs a right feature sub-image, and the features in the right feature sub-images can be stitched to obtain the right unary feature map.

For example, based on the feature extraction network in step S101, five left feature sub-images output by the convolutional layers of the feature extraction network are obtained, where the numbers of feature channels corresponding to the left feature sub-images are 128, 128, 32, 32 and 32, respectively; stitching the 5 left feature sub-images then forms a left unary feature map with 352 channels to be matched.

Step S402, cascading the left unary features in the left unary feature map with the corresponding right unary features in the right unary feature map at each disparity level to obtain feature pairs having a cascading relationship.

Each disparity level refers to one disparity within the disparity range, and the disparities corresponding to the respective disparity levels are different.

That is, at each disparity level, a left unary feature may be cascaded (concatenated) with a right unary feature to form a feature pair having a cascading relationship.

Specifically, a point with abscissa x in the left unary feature map may be cascaded with the point with abscissa x − d in the right unary feature map. At different disparity levels, that is, for different disparities d, the right unary feature to which the same left unary feature is cascaded will be different.

Step S403, grouping the feature pairs to obtain a plurality of feature groups, and performing cost calculation on the left unary feature and the right unary feature in each feature pair in each feature group to obtain a first cost space.

Specifically, the manner of grouping the features may be selected according to the actual situation. In some embodiments of the present application, an average grouping manner may be directly adopted, in which the features are evenly divided into feature groups according to a preset number of feature groups, so as to obtain a plurality of feature groups.

In this case, the performing of cost calculation on the left unary feature and the right unary feature in each feature pair in each feature group to obtain the first cost space may include: using the cost calculation formula

$$C_{gwc}(d, x, y, g) = \frac{1}{N_g}\left\langle f_l^g(x, y),\ f_r^g(x - d, y)\right\rangle$$

to perform cost calculation on the left unary feature and the right unary feature in each feature pair in each feature group, obtaining each cost value C_gwc in the first cost space;

wherein d represents the disparity, f_l^g represents the grouped left unary feature, (x, y) represents the position of the left unary feature in the left feature image, f_r^g represents the right unary feature associated with the left unary feature after grouping, (x − d, y) represents the position, in the right feature image, of the right unary feature associated with the left unary feature after grouping, N_C represents the number of left unary features in the left feature image, N_g represents the number of left unary features within one feature group, the number of feature groups is N_C/N_g, and ⟨·,·⟩ denotes the inner product over the features of one group.
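A sketch of this group-wise cost calculation in PyTorch follows; the mean over each group's channels implements the 1/N_g normalization of the formula above, and the tensor layout is an assumption:

```python
import torch

def group_wise_cost_volume(left_feat, right_feat, max_disp, num_groups):
    """Group-wise correlation cost volume per the formula above.
    left_feat, right_feat: (N, C, H, W) unary feature maps whose C = N_C
    channels are split into num_groups groups of N_g = C // num_groups.
    Returns costs of shape (N, num_groups, max_disp, H, W)."""
    n, c, h, w = left_feat.shape
    assert c % num_groups == 0
    ch_per_group = c // num_groups          # N_g features per group
    lf = left_feat.view(n, num_groups, ch_per_group, h, w)
    rf = right_feat.view(n, num_groups, ch_per_group, h, w)
    cost = left_feat.new_zeros((n, num_groups, max_disp, h, w))
    for d in range(max_disp):
        if d == 0:
            cost[:, :, d] = (lf * rf).mean(dim=2)
        else:
            # A left feature at column x pairs with the right feature at x - d.
            cost[:, :, d, :, d:] = (lf[..., d:] * rf[..., :w - d]).mean(dim=2)
    return cost
```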

In the embodiment of the application, the left feature sub-images and the right feature sub-images are respectively stitched to obtain a left unary feature map corresponding to the left feature image and a right unary feature map corresponding to the right feature image; the left unary features in the left unary feature map and the right unary features in the right unary feature map are then cascaded at each disparity level to obtain feature pairs having a cascading relationship; and the feature pairs are grouped to obtain a plurality of feature groups, with cost calculation performed on the left unary feature and the right unary feature in each feature group to obtain the first cost space, thereby realizing group-wise correlation of the features.

In the prior art, the left and right unary features are either fully correlated or concatenated at the different disparity levels. However, full correlation loses too much information, because it outputs only a single-channel correlation map at each disparity level; the concatenated volume, in turn, contains no information about feature similarity, so more parameters are needed in the subsequent cost aggregation to learn a similarity measure from scratch. Therefore, the present application divides the features into a plurality of groups and then calculates the first cost space group by group, that is, each feature group corresponds to one correlation map, and the group-wise correlation volume needs only fewer parameters to obtain good results. In this way, excessive information is not lost, which guarantees the accuracy of the disparity map, while the difficulty of the subsequent cost aggregation is reduced, which guarantees the real-time performance of disparity map acquisition.

In some embodiments of the present application, the method for acquiring a disparity map may be implemented based on a disparity map acquisition model, specifically, as shown in fig. 5, the method for acquiring a disparity map may include steps S501 to S505, which are detailed as follows:

step S501, a left viewpoint image and a right viewpoint image are obtained, the left viewpoint image and the right viewpoint image are input into a feature extraction network of a disparity map obtaining model, and a left feature image corresponding to the left viewpoint image and a right feature image corresponding to the right viewpoint image are output by the feature extraction network.

Step S502, performing, through the disparity map acquisition model, cost calculation based on the left feature image and the right feature image to obtain a first cost space of the left feature image and the right feature image.

Step S503, reducing, through the disparity map acquisition model, the dimension of the first cost space, and performing cost aggregation on the dimension-reduced first cost space to obtain a second cost space.

Step S504, establishing, through the disparity map acquisition model, a bilateral grid corresponding to the second cost space, and performing an interpolation operation on the bilateral grid to obtain a third cost space.

Step S505, calculating, through the disparity map acquisition model, a first disparity map corresponding to the left viewpoint image and the right viewpoint image by using the third cost space.

The specific working manner of the disparity map obtaining model in the above steps S501 to S505 can refer to the description of the foregoing steps S101 to S105. Moreover, the disparity map acquisition model may include more modules to implement the methods described in fig. 2 to fig. 4.

In order to achieve the acquisition of the disparity map based on the disparity map acquisition model, in some embodiments of the present application, before performing disparity map acquisition, a disparity map acquisition model to be trained may be trained, so as to obtain the disparity map acquisition model.

Specifically, the training of the disparity map acquisition model to be trained may include: inputting a sample image into a to-be-trained disparity map acquisition model to obtain a sample disparity map, inputting the sample disparity map and a pre-stored real disparity map into a loss function, updating parameters of a feature extraction module, a cost calculation module, a cost aggregation module, a bilateral grid module and a disparity map acquisition module in the to-be-trained disparity map acquisition model in a mode of minimizing a loss value of the loss function by using a gradient descent method until a change rate of the loss value of the loss function is smaller than a preset threshold value, and obtaining a trained disparity map acquisition model.

In some embodiments of the present application, a smooth L1 (SmoothL1) loss function may be used to calculate the loss value at a certain pixel point p between the disparity maps output by the to-be-trained disparity map acquisition model according to the above disparity map acquisition method (the second disparity map, denoted \hat{D}_0 below, and the refined first disparity map, denoted \hat{D}_1) and the real disparity map D_gt.

In particular, the loss function is

$$L = \sum_{p}\Big[\lambda_0 \cdot \operatorname{smooth}_{L1}\big(\hat{D}_0(p) - D_{gt}(p)\big) + \lambda_1 \cdot \operatorname{smooth}_{L1}\big(\hat{D}_1(p) - D_{gt}(p)\big)\Big], \qquad \operatorname{smooth}_{L1}(x) = \begin{cases}0.5\,x^2, & |x| < 1\\ |x| - 0.5, & \text{otherwise}\end{cases}$$

wherein x represents the error between the disparity calculated at a certain pixel point of a predicted disparity map and the real disparity at the corresponding pixel point of the real disparity map D_gt, and λ_0 and λ_1 respectively represent the weights of the loss terms.

In some embodiments of the application, after the sample disparity map and the pre-stored real disparity map are input into the loss function, back propagation is performed using a batch gradient descent method, and the learning parameters of the model, such as the weights and biases, are updated; through continuous iterative training until the change rate of the loss value of the loss function is smaller than the preset threshold, a trained disparity map acquisition model suitable for general scenes is obtained, which facilitates the subsequent use of the trained disparity map acquisition model for binocular stereo matching.
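A minimal sketch of this supervision in PyTorch follows; it assumes both predicted disparity maps have been brought to the ground-truth resolution, and the λ values shown are placeholders, not values prescribed by the application:

```python
import torch
import torch.nn.functional as F

def disparity_loss(pred_coarse, pred_refined, gt, lambda0=0.5, lambda1=1.0):
    """Weighted smooth-L1 supervision of the two predicted disparity maps
    against the ground truth (averaged over pixels).
    pred_coarse, pred_refined, gt: (N, H, W) disparity maps."""
    return (lambda0 * F.smooth_l1_loss(pred_coarse, gt) +
            lambda1 * F.smooth_l1_loss(pred_refined, gt))

# One batch-gradient-descent step, sketched:
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss = disparity_loss(coarse_disp, refined_disp, gt_disp)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```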

It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts, as some steps may, in accordance with the present application, occur in other orders.

Fig. 6 is a schematic structural diagram of a disparity map obtaining apparatus 600 according to an embodiment of the present disclosure, where the disparity map obtaining apparatus 600 is configured on a terminal. The disparity map obtaining apparatus 600 may be a network model for outputting disparity maps.

Wherein, the disparity map obtaining apparatus 600 may include:

the feature extraction module 601 is configured to acquire a left viewpoint image and a right viewpoint image, input the left viewpoint image and the right viewpoint image into a feature extraction network of the disparity map acquisition model, and output a left feature image corresponding to the left viewpoint image and a right feature image corresponding to the right viewpoint image by the feature extraction network.

The cost calculation module 602 is configured to perform cost calculation based on the left feature image and the right feature image through the disparity map acquisition model to obtain a first cost space of the left feature image and the right feature image.

The cost aggregation module 603 is configured to reduce, through the disparity map acquisition model, the dimension of the first cost space, and perform cost aggregation on the dimension-reduced first cost space to obtain a second cost space.

The bilateral grid module 604 is configured to establish, through the disparity map acquisition model, a bilateral grid corresponding to the second cost space, and perform an interpolation operation on the bilateral grid to obtain a third cost space.

the disparity map obtaining module 605 is configured to calculate, by using the disparity map obtaining model, a first disparity map corresponding to the left viewpoint image and the right viewpoint image by using the third price space.

In some embodiments of the present application, the bilateral grid module 604 may further be configured to: perform a convolution operation on the second cost space to obtain a bilateral grid corresponding to the second cost space; and acquire a guide map, and perform an interpolation operation on the bilateral grid by using the guide map to obtain the third cost space.

In some embodiments of the application, the left feature image includes the left feature sub-images output by the respective convolutional layers of the feature extraction network, and the right feature image includes the right feature sub-images output by the respective convolutional layers of the feature extraction network; the cost calculation module 602 may further be configured to: respectively stitch the left feature sub-images and the right feature sub-images to obtain a left unary feature map corresponding to the left feature image and a right unary feature map corresponding to the right feature image; cascade the left unary features in the left unary feature map with the right unary features in the right unary feature map at each disparity level to obtain feature pairs having a cascading relationship; and group the feature pairs to obtain a plurality of feature groups, and perform cost calculation on the left unary feature and the right unary feature in each feature pair in each feature group to obtain the first cost space.

In some embodiments of the present application, the cost calculation module 602 may further be configured to: use the cost calculation formula

$$C_{gwc}(d, x, y, g) = \frac{1}{N_g}\left\langle f_l^g(x, y),\ f_r^g(x - d, y)\right\rangle$$

to perform cost calculation on the left unary feature and the right unary feature in each feature pair in each feature group to obtain each cost value C_gwc in the first cost space; wherein d represents the disparity, f_l^g represents the grouped left unary feature, (x, y) represents the position of the left unary feature in the left feature image, f_r^g represents the right unary feature associated with the left unary feature after grouping, N_C represents the number of left unary features in the left feature image, N_g represents the number of the left unary features within one of the feature groups, and the number of feature groups is N_C/N_g.

In some embodiments of the present application, the feature extraction module 601 may further be configured to: input the left viewpoint image into a first feature extraction network, and input the right viewpoint image into a second feature extraction network, wherein the weights of the first feature extraction network and the second feature extraction network are shared.

In some embodiments of the present application, the disparity map obtaining module 605 may further be configured to: perform disparity regression processing on the third cost space by using a differentiable soft argmin function to obtain a second disparity map; and refine the second disparity map, and update the disparity values in the second disparity map to obtain the first disparity map.

In some embodiments of the present application, the disparity map obtaining apparatus 600 further includes a training module, configured to input a sample image into a disparity map obtaining model to be trained, to obtain a sample disparity map, input the sample disparity map and a pre-stored real disparity map into a loss function, update parameters of a feature extraction module, a cost calculation module, a cost aggregation module, a bilateral grid module, and a disparity map obtaining module in the disparity map obtaining model to be trained in a manner of minimizing a loss value of the loss function by using a gradient descent method until a change rate of the loss value of the loss function is smaller than a preset threshold, to obtain a trained disparity map obtaining model, and use the trained disparity map obtaining model as the disparity map obtaining apparatus.

In some embodiments of the present application, the training module may be further configured to: calculate, by using the smooth L1 (SmoothL1) loss function, the loss value at a certain pixel point p between the disparity maps output by the to-be-trained disparity map acquisition model according to the above disparity map acquisition method and the real disparity map D_gt;

the loss function being

$$L = \sum_{p}\Big[\lambda_0 \cdot \operatorname{smooth}_{L1}\big(\hat{D}_0(p) - D_{gt}(p)\big) + \lambda_1 \cdot \operatorname{smooth}_{L1}\big(\hat{D}_1(p) - D_{gt}(p)\big)\Big];$$

wherein x = \hat{D}(p) − D_gt(p) represents the error between the disparity calculated at a certain pixel point of a predicted disparity map and the real disparity calculated at the corresponding pixel point of the real disparity map D_gt, and λ_0 and λ_1 respectively represent the weights of the loss terms.

It should be noted that, for convenience and simplicity of description, the specific working process of the disparity map obtaining apparatus 600 may refer to the corresponding process of the method described in fig. 1 to fig. 4, and is not described herein again.

Fig. 7 is a schematic diagram of a terminal according to an embodiment of the present application. The terminal 7 may include: a processor 70, a memory 71 and a computer program 72, such as a disparity map acquisition program, stored in said memory 71 and executable on said processor 70. The processor 70, when executing the computer program 72, implements the steps in the above-described embodiments of the disparity map acquisition method, such as the steps S101 to S105 shown in fig. 1. Alternatively, the processor 70, when executing the computer program 72, implements the functions of each module/unit in the above-mentioned apparatus embodiments, such as the functions of the feature extraction module, the cost calculation module, the cost aggregation module, the bilateral grid module, and the disparity map acquisition module shown in fig. 6.

The computer program may be divided into one or more modules/units, which are stored in the memory 71 and executed by the processor 70 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer program in the terminal.

For example, the computer program may be divided into: the device comprises a feature extraction module, a cost calculation module, a cost aggregation module, a bilateral grid module and a disparity map acquisition module. The specific functions of each unit are as follows:

The feature extraction module is used for acquiring the left viewpoint image and the right viewpoint image, inputting the left viewpoint image and the right viewpoint image into a feature extraction network of the disparity map acquisition model, and outputting, by the feature extraction network, a left feature image corresponding to the left viewpoint image and a right feature image corresponding to the right viewpoint image.

The cost calculation module is used for performing cost calculation based on the left feature image and the right feature image through the disparity map acquisition model to obtain a first cost space of the left feature image and the right feature image.

The cost aggregation module is used for reducing the dimension of the first cost space through the disparity map acquisition model, and performing cost aggregation on the dimension-reduced first cost space to obtain a second cost space.

The bilateral grid module is used for establishing a bilateral grid corresponding to the second cost space through the disparity map acquisition model, and performing an interpolation operation on the bilateral grid to obtain a third cost space.

The disparity map acquisition module is used for calculating a first disparity map corresponding to the left viewpoint image and the right viewpoint image by using the third cost space through the disparity map acquisition model.

The terminal may include, but is not limited to, a processor 70, a memory 71. It will be appreciated by those skilled in the art that fig. 7 is only an example of a terminal and is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or different components, for example, the terminal may also include input output devices, network access devices, buses, etc.

The Processor 70 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.

The memory 71 may be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. The memory 71 may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like provided on the terminal. Further, the memory 71 may also include both an internal storage unit and an external storage device of the terminal. The memory 71 is used for storing the computer program and other programs and data required by the terminal. The memory 71 may also be used to temporarily store data that has been output or is to be output.

It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.

Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.

In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.

The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.

In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.

The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, can realize the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the relevant jurisdictions; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.

The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
