Image depth information fusion method and system, electronic equipment and storage medium

Document No.: 1954767    Publication date: 2021-12-10

Reading note: This technology, "Image depth information fusion method and system, electronic equipment and storage medium" (一种图像深度信息融合方法、系统、电子设备及存储介质), was created by 王小亮, 尹玉成, 辛梓, 贾腾龙 and 刘奋 on 2021-07-27. Its main content is as follows: The invention provides an image depth information fusion method, system, electronic device and storage medium. The method includes: acquiring consecutive multi-frame RGB images and, based on prior information of previous images, calculating first depth information and a first covariance matrix of a common viewpoint in a target image by multi-view geometric triangulation; calculating a depth map corresponding to the target image with a trained deep learning network, and extracting second depth information and a corresponding second covariance matrix at the common viewpoint in the depth map; calculating the fused depth value of the common viewpoint from the first depth information, the first covariance matrix, the second depth information and the second covariance matrix; and calculating the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of pixel depths in the depth map. The method improves image depth accuracy while greatly increasing the spatial coverage of the depth, and can obtain depth information for all targets in the field of view regardless of the scene.

1. An image depth information fusion method is characterized by comprising the following steps:

acquiring continuous multi-frame RGB images, and calculating first depth information and a first covariance matrix of a common viewpoint in a target image by a multi-view geometric triangulation method based on prior information of a previous image;

calculating a depth map corresponding to the target image based on the trained deep learning network, and extracting second depth information and a corresponding second covariance matrix at a common viewpoint in the depth map;

calculating the depth value of the fused common viewpoint based on the first depth information, the first covariance matrix, the second depth information and the second covariance matrix;

and calculating the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of the pixel depths in the depth map.

2. The method of claim 1, wherein the prior information comprises at least position information and pose information of an image frame and the pixel positions of common viewpoints shared by two image frames.

3. The method of claim 1, wherein calculating the depth value of the fused common viewpoint based on the first depth information, the first covariance matrix, the second depth information, and the second covariance matrix comprises:

calculating the depth value of the fused common viewpoint according to formula (1):

D_a = D_b + W·(D_o - D_b);    (1)

wherein D_a denotes the fused common-viewpoint depth value, D_o denotes the first depth value, D_b denotes the second depth value, W is an intermediate variable, Ω_o denotes the first covariance matrix, and Ω_b denotes the second covariance matrix.

4. The method of claim 1, wherein the common-viewpoint neighborhood is a region of a certain pixel radius centered on the pixel corresponding to the common-viewpoint position.

5. The method of claim 1, wherein calculating the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of the pixel depths in the depth map comprises:

calculating the depth values of the pixels in the common-viewpoint neighborhood according to formula (2):

D_(m,n) = f({D_w : |P_w - P_(m,n)| < δ});    (2)

wherein D_(m,n) denotes the depth value of the pixel at (m, n), f(·) denotes a non-linear mapping, D_w denotes the depth information in the neighborhood of the current pixel, δ denotes the neighborhood radius, and P_w and P_(m,n) denote the current pixel and the pixel at (m, n) in the neighborhood, respectively, where m and n are the pixel coordinates within the neighborhood.

6. An image depth information fusion system, comprising:

the first depth information acquisition module is used for acquiring continuous multi-frame RGB images, and calculating first depth information and a first covariance matrix of a common viewpoint in a target image through a multi-view geometric triangulation method based on prior information of a previous image;

the second depth information acquisition module is used for calculating a depth map corresponding to the target image based on the trained deep learning network, and extracting second depth information and a corresponding second covariance matrix at the common viewpoint in the depth map;

the first fusion calculation module is used for calculating the depth value of the fused common viewpoint based on the first depth information, the first covariance matrix, the second depth information and the second covariance matrix;

and the second fusion calculation module is used for calculating the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of the pixel depths in the depth map.

7. The system of claim 6, wherein calculating the depth value of the fused common viewpoint based on the first depth information, the first covariance matrix, the second depth information, and the second covariance matrix comprises:

calculating the depth value of the fused common viewpoint according to formula (1):

D_a = D_b + W·(D_o - D_b);    (1)

wherein D_a denotes the fused common-viewpoint depth value, D_o denotes the first depth value, D_b denotes the second depth value, W is an intermediate variable, Ω_o denotes the first covariance matrix, and Ω_b denotes the second covariance matrix.

8. The system of claim 6, wherein calculating the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of the pixel depths in the depth map comprises:

calculating the depth values of the pixels in the common-viewpoint neighborhood according to formula (2):

D_(m,n) = f({D_w : |P_w - P_(m,n)| < δ});    (2)

wherein D_(m,n) denotes the depth value of the pixel at (m, n), f(·) denotes a non-linear mapping, D_w denotes the depth information in the neighborhood of the current pixel, δ denotes the neighborhood radius, and P_w and P_(m,n) denote the current pixel and the pixel at (m, n) in the neighborhood, respectively, where m and n are the pixel coordinates within the neighborhood.

9. A terminal device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements the steps of an image depth information fusion method according to any one of claims 1 to 5 when executing said computer program.

10. A computer-readable storage medium storing a computer program, wherein the computer program is configured to implement the steps of an image depth information fusion method according to any one of claims 1 to 5 when executed.

Technical Field

The invention belongs to the field of computer vision three-dimensional reconstruction, and particularly relates to an image depth information fusion method and system, electronic equipment and a storage medium.

Background

Image depth information refers, in computer vision, to the distance of each point in space relative to the camera; based on this distance information, the mutual distances between points in the actual scene can be conveniently calculated. However, estimating the depth information of a spatial visual image faces the problem that spatial coverage and accuracy are difficult to achieve at the same time. In general, depth information can be obtained by sensor measurement (for example with a lidar or an optical camera) combined with algorithmic computation, or the depth information of an image can be estimated with a deep learning model.

At present, image depth information obtained with devices such as lidar and depth cameras requires expensive equipment; image depth obtained with an ordinary optical camera is easily affected by the scene (number of feature points, texture information, and the like), so the targets that can be acquired are limited; and depth information obtained with a deep learning network is limited in extraction accuracy.

Disclosure of Invention

In view of this, embodiments of the present invention provide an image depth information fusion method, system, electronic device, and storage medium, to solve the problems that existing image depth information calculation methods require expensive acquisition devices, are easily affected by the scene, or are limited in extraction accuracy.

In a first aspect of the embodiments of the present invention, there is provided an image depth information fusion method, including:

acquiring continuous multi-frame RGB images, and calculating first depth information and a first covariance matrix of a common viewpoint in a target image by a multi-view geometric triangulation method based on prior information of a previous image;

calculating a depth map corresponding to the target image based on the trained deep learning network, and extracting second depth information and a corresponding second covariance matrix at a common viewpoint in the depth map;

calculating the depth value of the fused common viewpoint based on the first depth information, the first covariance matrix, the second depth information and the second covariance matrix;

and calculating the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of the pixel depths in the depth map.

In a second aspect of the embodiments of the present invention, there is provided an image depth information fusion system, including:

the first depth information acquisition module is used for acquiring continuous multi-frame RGB images, and calculating first depth information and a first covariance matrix of a common viewpoint in a target image through a multi-view geometric triangulation method based on prior information of a previous image;

the second depth information acquisition module is used for calculating a depth map corresponding to the target image based on the trained deep learning network, and extracting second depth information at a common view point in the depth map and a corresponding second covariance matrix;

the first fusion calculation module is used for calculating the depth value of the fused common viewpoint based on the first depth information, the first covariance matrix, the second depth information and the second covariance matrix;

and the second fusion calculation module is used for calculating the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of the pixel depths in the depth map.

In a third aspect of the embodiments of the present invention, there is provided an electronic device, including a memory, a processor, and a computer program stored in the memory and executable by the processor, where the processor, when executing the computer program, implements the steps of the method according to the first aspect of the embodiments of the present invention.

In a fourth aspect of the embodiments of the present invention, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor implements the steps of the method provided by the first aspect of the embodiments of the present invention.

In the embodiments of the invention, depth information is extracted from the collected optical images both by a multi-view intersection (triangulation) algorithm and by deep learning, and the two results are fused. Depth information with both high accuracy and wide spatial coverage can thus be obtained, overcoming the problems of traditional approaches such as low extraction accuracy and susceptibility to scene and equipment limitations. Image depth accuracy is improved, the spatial coverage of the depth is greatly increased, a dense and highly reliable depth map is obtained, and the integrity of the three-dimensional reconstruction scene is guaranteed.

Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.

Fig. 1 is a schematic flowchart of an image depth information fusion method according to an embodiment of the present invention;

fig. 2 is another schematic flow chart of an image depth information fusion method according to an embodiment of the present invention;

fig. 3 is a schematic structural diagram of an image depth information fusion system according to an embodiment of the present invention.

Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.

Detailed Description

In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

It should be understood that the terms "comprises" and "comprising," when used in this specification or claims and in the accompanying drawings, are intended to cover a non-exclusive inclusion, such that a process, method or system, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements. In addition, "first" and "second" are used to distinguish different objects, and are not used to describe a specific order.

Referring to fig. 1, which is a schematic flowchart of an image depth information fusion method according to an embodiment of the present invention, the method includes:

s101, acquiring continuous multi-frame RGB images, and calculating first depth information and a first covariance matrix of a common viewpoint in a target image through a multi-view geometric triangulation method based on prior information of a previous image;

the multi-frame RGB image can be an image continuously acquired by a common optical camera, generally at least two frames or more than two frames, and the cost of the acquisition equipment can be reduced based on the image acquired by the common optical camera. The previous image refers to a frame or a plurality of consecutive frames of images before the current image. The prior information at least comprises position information and attitude information of an image frame and position information of a common viewpoint pixel of two frames of images.

Multi-view geometric triangulation uses two images of the same spatial point, combined with the camera parameters and the camera model, to determine the coordinates of that spatial point, from which the depth information of the common viewpoint (spatial point) is obtained, as sketched below.
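As an illustration of this step only, the following is a minimal sketch of two-view triangulation, assuming known camera intrinsics K, the relative pose (R, t) from the previous frame to the target frame (the prior position and pose information), and matched common-viewpoint pixel positions. The use of OpenCV's cv2.triangulatePoints and all function names are illustrative assumptions rather than the patent's prescribed implementation, and the propagation of pixel/pose uncertainty into the first covariance matrix is omitted.

```python
# A minimal two-view triangulation sketch (illustrative; not the patent's exact algorithm).
# Assumes known intrinsics K, the relative pose (R, t) between the previous and target
# frames, and matched common-viewpoint pixel positions in both images.
import numpy as np
import cv2

def triangulate_common_viewpoints(K, R, t, pts_prev, pts_curr):
    """Return the depths of the common viewpoints in the target (current) camera frame.

    K        : (3, 3) camera intrinsic matrix
    R, t     : rotation (3, 3) and translation (3,) from the previous to the target frame
    pts_prev : (N, 2) pixel coordinates of the common viewpoints in the previous image
    pts_curr : (N, 2) pixel coordinates of the same points in the target image
    """
    t = np.asarray(t, dtype=np.float64).reshape(3, 1)
    P_prev = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # previous camera as reference
    P_curr = K @ np.hstack([R, t])                         # target camera projection matrix

    # cv2.triangulatePoints takes 2xN arrays and returns 4xN homogeneous points
    X_h = cv2.triangulatePoints(P_prev, P_curr,
                                pts_prev.T.astype(np.float64),
                                pts_curr.T.astype(np.float64))
    X = (X_h[:3] / X_h[3]).T                               # Nx3 points in the previous frame

    # First depth information: z coordinate of each point in the target camera frame
    X_curr = (R @ X.T + t).T
    return X_curr[:, 2]
```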

The first depth information is a depth value of the common viewpoint calculated by multi-view geometric triangulation, and the first covariance matrix is a covariance matrix of the common viewpoint calculated by multi-view geometric triangulation.

S102, calculating a depth map corresponding to the target image based on the trained deep learning network, and extracting second depth information and a corresponding second covariance matrix at a common view point in the depth map;

and collecting RGB images collected by the optical camera, marking the depth of the images to be used as samples, and training and testing the deep learning network. And extracting the depth map corresponding to the image to be recognized through the trained deep learning network. Depth information at the common view point and a corresponding covariance matrix can be directly obtained based on the depth map.

The second depth information is the depth value at the common viewpoint extracted from the depth map, and the second covariance matrix is the covariance at the common viewpoint calculated based on the depth map.

S103, calculating the depth value of the fused common viewpoint based on the first depth information, the first covariance matrix, the second depth information and the second covariance matrix;

Specifically, the depth value of the fused common viewpoint is calculated according to formula (1), combining the first depth information, the first covariance matrix, the second depth information and the second covariance matrix:

D_a = D_b + W·(D_o - D_b);    (1)

wherein D_a denotes the fused common-viewpoint depth value, D_o denotes the first depth value, D_b denotes the second depth value, W is an intermediate variable, Ω_o denotes the first covariance matrix, and Ω_b denotes the second covariance matrix.
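The text states formula (1) but defines W only as an intermediate variable derived from the two covariance matrices. The sketch below assumes a covariance-weighted (Kalman-style) gain, W = Ω_b(Ω_b + Ω_o)^(-1), written here for scalar per-point variances; this particular form of W is an assumption for illustration, not necessarily the patent's exact definition.

```python
# Fusion at a common viewpoint following D_a = D_b + W * (D_o - D_b).
# The gain below, W = omega_b / (omega_b + omega_o), is an assumed covariance-weighted
# choice for scalar variances; for full covariance matrices the same formula applies
# with a matrix inverse in place of the division.
def fuse_common_viewpoint(d_o, omega_o, d_b, omega_b):
    """d_o, omega_o: triangulated depth and its variance (first estimate);
       d_b, omega_b: learned depth and its variance (second estimate)."""
    W = omega_b / (omega_b + omega_o)  # lower variance => higher weight for that source
    return d_b + W * (d_o - d_b)       # fused depth D_a at the common viewpoint
```

For example, with d_o = 10.0, omega_o = 0.1, d_b = 12.0 and omega_b = 0.9, the fused depth is 10.2, pulled strongly toward the triangulated value because its variance is much smaller.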

S104, calculating the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of the pixel depths in the depth map.

The common-viewpoint neighborhood is a region of a certain range centered on the pixel corresponding to the common-viewpoint position, and the depth values within this neighborhood are then calculated.

Specifically, based on the relative relationship of the pixel depths in the depth map obtained by deep learning, the pixel depth values of the common-viewpoint neighborhood in the fused image are calculated according to formula (2):

D_(m,n) = f({D_w : |P_w - P_(m,n)| < δ});    (2)

wherein D_(m,n) denotes the depth value of the pixel at (m, n), f(·) denotes a non-linear mapping, D_w denotes the depth information in the neighborhood of the current pixel, δ denotes the neighborhood radius, and P_w and P_(m,n) denote the current pixel and the pixel at (m, n) in the neighborhood, respectively, where m and n are the pixel coordinates within the neighborhood.

It should be noted that, based on the depth value at the common viewpoint obtained in S103, a neighborhood of a certain range around the common-viewpoint pixel, for example a neighborhood of radius δ, can be taken and its depth values computed, while the regions outside the common viewpoints and their neighborhoods are represented by the depth map obtained by deep learning. Computing the neighborhood depth values improves both the accuracy and the spatial coverage around the common viewpoints and guarantees the integrity and consistency of the fused depth map; a sketch of one possible neighborhood fill is given below.
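The non-linear mapping f(·) in formula (2) is not specified in the text. The sketch below assumes one simple choice that preserves the relative pixel-depth relationship of the learned depth map: within the δ-neighborhood the learned depth is rescaled so that the common-viewpoint pixel takes the fused value D_a, while pixels outside the neighborhood keep the learned depth. Both this choice of f and the function names are illustrative assumptions.

```python
# Illustrative neighborhood fill for D_(m,n) = f({D_w : |P_w - P_(m,n)| < delta}).
# The mapping f used here (ratio-preserving rescaling anchored at the fused common
# viewpoint) is an assumed choice, since the text does not fix f.
import numpy as np

def fill_neighborhood(depth_map, fused_depth, center, delta):
    """depth_map   : (H, W) learned depth map (source of the relative depth relationship)
       fused_depth : fused depth value D_a at the common viewpoint
       center      : (row, col) pixel of the common viewpoint
       delta       : neighborhood radius in pixels"""
    fused = depth_map.copy()                             # outside the neighborhood: learned depth
    r0, c0 = center
    scale = fused_depth / max(depth_map[r0, c0], 1e-6)   # anchor ratio at the common viewpoint

    rows, cols = np.indices(depth_map.shape)
    in_neighborhood = np.hypot(rows - r0, cols - c0) < delta
    fused[in_neighborhood] = depth_map[in_neighborhood] * scale
    return fused
```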

In this embodiment, the image depth accuracy is effectively improved while the spatial coverage of the depth is greatly increased. The depth sparsity faced by traditional multi-view geometric methods and the limited accuracy of image depth information extracted by deep learning are both addressed: by fusing the two kinds of depth information, accuracy and spatial coverage can be achieved at the same time.

In another embodiment, as shown in fig. 2, for the acquired RGB image frames the corresponding depth maps and covariances are calculated by the multi-view geometry method and by deep learning respectively, and a dense depth map is obtained by the fusion algorithm of S201. In step S201, the fused depth value of the common viewpoint and the depth values of the pixels around the corresponding common-viewpoint position are calculated respectively. In this way, the depth information and scale information of the common viewpoints can be obtained.

It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.

Fig. 3 is a schematic structural diagram of an image depth information fusion system according to an embodiment of the present invention, where the system includes:

the first depth information acquisition module 310 is configured to acquire continuous multi-frame RGB images, and calculate, based on prior information of previous images, first depth information and a first covariance matrix of a common viewpoint in a target image by a multi-view geometric triangulation method;

the prior information at least comprises position information and attitude information of an image frame and position information of a common viewpoint pixel of two frames of images.

A second depth information obtaining module 320, configured to calculate a depth map corresponding to the target image based on the trained deep learning network, and extract second depth information and a corresponding second covariance matrix at the common viewpoint in the depth map;

a first fusion calculation module 330, configured to calculate depth values of the fused common view point based on the first depth information, the first covariance matrix, the second depth information, and the second covariance matrix;

specifically, according to formula (1), the depth value of the merged common viewpoint is calculated:

Da=Db+W·(Do-Db);

wherein D isaRepresenting co-view depth values, DoRepresenting a first depth value, Db representing a second depth value, W being an intermediate variable,Ωodenotes a first covariance matrix, ΩbRepresenting a second covariance matrix.

And the second fusion calculation module 340 is configured to calculate the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of the pixel depths in the depth map.

Specifically, the depth values of the pixels in the common-viewpoint neighborhood are calculated according to formula (2):

D_(m,n) = f({D_w : |P_w - P_(m,n)| < δ});    (2)

wherein D_(m,n) denotes the depth value of the pixel at (m, n), f(·) denotes a non-linear mapping, D_w denotes the depth information in the neighborhood of the current pixel, δ denotes the neighborhood radius, and P_w and P_(m,n) denote the current pixel and the pixel at (m, n) in the neighborhood, respectively, where m and n are the pixel coordinates within the neighborhood.

It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the apparatus and the modules described above may refer to corresponding processes in the foregoing method embodiments, and are not described herein again.

Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device is used for image depth information fusion and three-dimensional reconstruction, and is usually a computer. As shown in fig. 4, the electronic device 4 of this embodiment includes a memory 410, a processor 420, and a system bus 430, the memory 410 containing an executable program 4101 stored thereon. Those skilled in the art will understand that the configuration shown in fig. 4 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine some components, or use a different arrangement of components.

The following describes each component of the electronic device in detail with reference to fig. 4:

the memory 410 may be used to store software programs and modules, and the processor 420 executes various functional applications and data processing of the electronic device by operating the software programs and modules stored in the memory 410. The memory 410 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as cache data) created according to the use of the electronic device, and the like. Further, the memory 410 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.

The memory 410 contains the executable program 4101 of the image depth information fusion method. The executable program 4101 may be divided into one or more modules/units, which are stored in the memory 410 and executed by the processor 420 to implement the image depth information fusion described above. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 4101 in the electronic device 4. For example, the computer program 4101 may be divided into a depth information acquisition module and a fusion calculation module.

The processor 420 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 410 and calling data stored in the memory 410, thereby performing overall status monitoring of the electronic device. Alternatively, processor 420 may include one or more processing units; preferably, the processor 420 may integrate an application processor, which mainly handles operating systems, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 420.

The system bus 430 is used to connect functional units inside the computer, and can transmit data information, address information, and control information, and may be, for example, a PCI bus, an ISA bus, a VESA bus, etc. The instructions of the processor 420 are transmitted to the memory 410 through the bus, the memory 410 feeds data back to the processor 420, and the system bus 430 is responsible for data and instruction interaction between the processor 420 and the memory 410. Of course, the system bus 430 may also access other devices such as network interfaces, display devices, and the like.

In this embodiment of the present invention, the executable program executed by the processor 420 of the electronic device includes:

acquiring continuous multi-frame RGB images, and calculating first depth information and a first covariance matrix of a common viewpoint in a target image by a multi-view geometric triangulation method based on prior information of a previous image;

calculating a depth map corresponding to the target image based on the trained deep learning network, and extracting second depth information and a corresponding second covariance matrix at a common viewpoint in the depth map;

calculating the depth value of the fused common viewpoint based on the first depth information, the first covariance matrix, the second depth information and the second covariance matrix;

and calculating the depth values of the common-viewpoint neighborhood in the fused image according to the relative relationship of the pixel depths in the depth map.

It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.

In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.

The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
