Poor texture tunnel modeling method and system based on vision-laser radar coupling

Document No.: 1939538 · Published: 2021-12-07

Note: This technique, "Poor texture tunnel modeling method and system based on vision-laser radar coupling", was designed and created by 何斌, 朱琪琪, 李刚, 沈润杰, 程斌, 王志鹏, 陆萍, 朱忠攀 and 周艳敏 on 2021-08-17. The invention relates to a poor texture tunnel modeling method and system based on vision-laser radar coupling, which comprises the following steps: acquiring point cloud information collected by a depth camera, laser information collected by a laser radar and motion information of an unmanned aerial vehicle; generating a grid map based on the laser information, and obtaining pose information of the unmanned aerial vehicle based on the motion information; fusing the point cloud information, the grid map and the pose information by a Bayesian fusion method to obtain a map model; and correcting the latest map model through feature matching based on the map model at the previous moment. Compared with the prior art, the invention fuses the depth camera and the laser radar for SLAM mapping, makes full use of the wide-range information of the laser radar and the rich local information of the depth camera so that the two complement each other, improving the accuracy of the information and bringing the map model closer to the real tunnel environment.

1. A poor texture tunnel modeling method based on vision-laser radar coupling, characterized in that modeling is performed by an unmanned aerial vehicle carrying a depth camera and a laser radar, the method comprising the following steps:

S1, acquiring point cloud information collected by the depth camera, laser information collected by the laser radar and motion information of the unmanned aerial vehicle;

S2, filtering the laser information to generate a grid map, and obtaining pose information of the unmanned aerial vehicle based on the motion information;

S3, fusing the point cloud information, the grid map and the pose information by adopting a Bayesian fusion method to obtain a map model;

S4, repeating steps S1 to S3 to obtain a new map model, correcting the latest map model through feature matching based on the map model at the previous moment, and repeating step S4 until the construction of the map model is completed.

2. The vision-lidar coupling-based poor texture tunnel modeling method of claim 1, wherein step S1 is preceded by: determining a relative transformation relation between the point cloud information and the laser information according to the positional relation of the depth camera and the laser radar on the unmanned aerial vehicle.

3. The vision-lidar coupling-based poor texture tunnel modeling method according to claim 2, wherein the transformation relation from a point cloud under the lidar coordinate system to the depth camera coordinate system is as follows:

(X_c, Y_c, Z_c)^T = r·(X, Y, Z)^T + t,    Z_c·(u, v, 1)^T = K·(X_c, Y_c, Z_c)^T

wherein (X, Y, Z)^T represents the coordinates in the lidar coordinate system, (X_c, Y_c, Z_c)^T represents the coordinates in the depth camera coordinate system, (u, v, 1)^T represents the pixel coordinates on the imaging plane of the depth camera, r is a rotation matrix determined based on the positions of the depth camera and the lidar on the drone, t is a translation matrix determined based on the positions of the depth camera and the lidar on the drone, and K represents the intrinsic parameter matrix of the depth camera.

4. The vision-lidar coupling-based poor texture tunnel modeling method of claim 1, wherein the motion information of the drone is measured by an inertial measurement unit (IMU) and an odometer and comprises speed, acceleration and distance; and in step S2, the pose information of the drone is obtained by fusing the motion information through Kalman filtering.

5. The vision-lidar coupling-based poor texture tunnel modeling method according to claim 1, wherein in step S4, correcting the latest map model through feature matching based on the map model at the previous moment specifically comprises:

s41, obtaining a map model of the previous moment as a reference frame; obtaining a latest map model, and finding a region corresponding to the map model at the last moment from the latest map model as a current frame;

s42, feature point in reference frame uses PiDenotes that the feature point in the current frame uses QiIndicates that the number of feature points in the current frame and the reference frame is the same;

s43, establishing an interframe change model:

{Q_i} = R·{P_i} + T

wherein R represents a rotation parameter and T represents a translation parameter;

s44, substituting the characteristic points in the reference frame and the characteristic points in the current frame, and iteratively solving the rotation parameters and the translation parameters;

S45, obtaining the matching relation between the map model at the previous moment and the latest map model based on the rotation parameter and the translation parameter, and correcting the latest map model.

6. The vision-lidar coupling-based poor texture tunnel modeling method according to claim 5, wherein in step S44, the iterative solution of the rotation parameter and the translation parameter is specifically:

substituting the feature points in the reference frame and the feature points in the current frame into the interframe change model, establishing an objective function based on the interframe change model, and taking the rotation parameter and the translation parameter that minimize the value of the objective function as the finally obtained rotation parameter and translation parameter, wherein the objective function is:

L = (1/N) * Σ_{i=1}^{N} ||q_i - (R·p_i + T)||^2

wherein L represents the value of the objective function, p_i represents a feature point in the reference frame, q_i represents the corresponding feature point in the current frame, and N represents the number of feature points.

7. A poor texture tunnel modeling system based on vision-lidar coupling, characterized in that it adopts the poor texture tunnel modeling method based on vision-lidar coupling according to any one of claims 1 to 6; the system comprises an unmanned aerial vehicle body on which a depth camera, a lidar, a computing unit and a controller are mounted, the controller being in communication connection with the depth camera, the lidar and the computing unit; and the following steps are executed during the flight of the unmanned aerial vehicle:

T1, the controller acquires the point cloud information collected by the depth camera, the laser information collected by the laser radar and the motion information of the unmanned aerial vehicle, and sends the point cloud information, the laser information and the motion information to the computing unit;

T2, the computing unit filters the laser information to generate a grid map and obtains the pose information of the unmanned aerial vehicle based on the motion information;

T3, the point cloud information, the grid map and the pose information are fused by adopting a Bayesian fusion method to obtain a map model;

T4, steps T1 to T3 are repeated to obtain a new map model, the latest map model is corrected through feature matching based on the map model at the previous moment, and step T4 is repeated until the construction of the map model is completed.

8. The vision-lidar coupling-based poor texture tunnel modeling system according to claim 7, wherein in step T4, correcting the latest map model through feature matching based on the map model at the previous moment specifically comprises:

t41, obtaining a map model of the previous moment as a reference frame; obtaining a latest map model, and finding a region corresponding to the map model at the last moment from the latest map model as a current frame;

T42, the feature points in the reference frame are denoted by {P_i} and the feature points in the current frame are denoted by {Q_i}, the number of feature points in the current frame being the same as that in the reference frame;

t43, establishing an interframe change model:

{Q_i} = R·{P_i} + T

wherein R represents a rotation parameter and T represents a translation parameter;

T44, substituting the feature points in the reference frame and the feature points in the current frame, and iteratively solving for the rotation parameter and the translation parameter;

T45, obtaining the matching relation between the map model at the previous moment and the latest map model based on the rotation parameter and the translation parameter, and correcting the latest map model.

9. The vision-lidar coupling-based poor texture tunnel modeling system of claim 8, wherein in step T44, the iterative solution of the rotation parameter and the translation parameter is specifically:

substituting the feature points in the reference frame and the feature points in the current frame into the interframe change model, establishing an objective function based on the interframe change model, and taking the rotation parameter and the translation parameter that minimize the value of the objective function as the finally obtained rotation parameter and translation parameter, wherein the objective function is:

L = (1/N) * Σ_{i=1}^{N} ||q_i - (R·p_i + T)||^2

wherein L represents the value of the objective function, p_i represents a feature point in the reference frame, q_i represents the corresponding feature point in the current frame, and N represents the number of feature points.

10. The vision-lidar coupling-based poor texture tunnel modeling system as claimed in claim 7, wherein a storage unit is further mounted on the drone body, and the storage unit is connected with the controller and the computing unit and is used for storing the constructed map model.

Technical Field

The invention relates to the technical field of unmanned aerial vehicle tunnel modeling, in particular to a poor texture tunnel modeling method and system based on vision-laser radar coupling.

Background

Tunnel maintenance and modeling are an indispensable part of geological engineering safety and bear on the normal operation of the whole project. Traditional tunnel modeling generally first acquires data on site inside the tunnel, using an unmanned aerial vehicle in narrow areas that people cannot enter, and then converts the data into a 3D model with 3D modeling software. However, when the tunnel lacks texture, acquiring this information becomes very difficult, and in narrow, dark areas the difficulty of acquisition and modeling increases further. Because the quality of the 3D model depends entirely on the information acquired in situ, such a method is likely to produce a large difference between the model and the actual scene, so the modeling accuracy needs to be improved.

SLAM (Simultaneous Localization and Mapping) can be understood as a subject carrying specific sensors, such as a robot or an unmanned aerial vehicle, estimating its own pose in an unknown environment while building a map of its surroundings. SLAM mainly solves the problems of localization, navigation and map construction when a mobile robot operates in an unknown environment; here the SLAM algorithm is used for modeling inside the tunnel with the unmanned aerial vehicle. Common SLAM algorithms include visual SLAM and laser SLAM; their basic principle is to estimate pose and build the map by matching feature points between adjacent frames, so in a tunnel environment with missing texture and highly repetitive images the modeling accuracy needs to be improved.

Disclosure of Invention

The invention aims to overcome the defects in the prior art and provide a poor texture tunnel modeling method and system based on vision-laser radar coupling.

The purpose of the invention can be realized by the following technical scheme:

a lean texture tunnel modeling method based on vision-laser radar coupling is characterized in that modeling is carried out through an unmanned aerial vehicle carrying a depth camera and a laser radar, and the method comprises the following steps:

S1, acquiring point cloud information collected by the depth camera, laser information collected by the laser radar and motion information of the unmanned aerial vehicle;

S2, filtering the laser information to generate a grid map, and obtaining pose information of the unmanned aerial vehicle based on the motion information;

S3, fusing the point cloud information, the grid map and the pose information by adopting a Bayesian fusion method to obtain a map model;

S4, repeating steps S1 to S3 to obtain a new map model, correcting the latest map model through feature matching based on the map model at the previous moment, and repeating step S4 until the construction of the map model is completed.

Further, step S1 is preceded by: determining a relative transformation relation between the point cloud information and the laser information according to the positional relation of the depth camera and the laser radar on the unmanned aerial vehicle.

Further, the transformation relation from the point cloud under the laser radar coordinate system to the depth camera coordinate system is as follows:

(X_c, Y_c, Z_c)^T = r·(X, Y, Z)^T + t,    Z_c·(u, v, 1)^T = K·(X_c, Y_c, Z_c)^T

wherein (X, Y, Z)^T represents the coordinates in the laser radar coordinate system, (X_c, Y_c, Z_c)^T represents the coordinates in the depth camera coordinate system, (u, v, 1)^T represents the pixel coordinates on the imaging plane of the depth camera, r is a rotation matrix determined based on the positions of the depth camera and the lidar on the drone, t is a translation matrix determined based on the positions of the depth camera and the lidar on the drone, and K represents the intrinsic parameter matrix of the depth camera.

Further, the motion information of the unmanned aerial vehicle is measured by an inertial measurement unit (IMU) and an odometer and comprises speed, acceleration and distance; because the GPS signal in the tunnel is weak or even lost, the pose estimate of the unmanned aerial vehicle is obtained from the IMU and odometer measurements by fusing the motion information through Kalman filtering.

Further, in step S4, the step of correcting the latest map model by feature matching based on the map model at the previous time is specifically as follows:

s41, obtaining a map model of the previous moment as a reference frame; obtaining a latest map model, and finding a region corresponding to the map model at the last moment from the latest map model as a current frame;

s42, feature point in reference frame uses PiDenotes that the feature point in the current frame uses QiIndicates that the number of feature points in the current frame and the reference frame is the same;

s43, establishing an interframe change model:

{Q_i} = R·{P_i} + T

wherein R represents a rotation parameter and T represents a translation parameter;

s44, substituting the characteristic points in the reference frame and the characteristic points in the current frame, and iteratively solving the rotation parameters and the translation parameters;

S45, obtaining the matching relation between the map model at the previous moment and the latest map model based on the rotation parameter and the translation parameter, and correcting the latest map model.

Furthermore, because the map model is built during the flight of the unmanned aerial vehicle, errors can exist between the map models built at the two moments. In theory, if the rotation parameter and the translation parameter are accurate, every feature point in the reference frame coincides with a feature point in the current frame; however, perfect coincidence cannot be achieved because of noise and error, so an objective function is defined. In step S44, the iterative solution of the rotation parameter and the translation parameter is specifically:

substituting the feature points in the reference frame and the feature points in the current frame into the interframe change model, establishing an objective function based on the interframe change model, and taking the rotation parameter and the translation parameter that minimize the value of the objective function as the finally obtained rotation parameter and translation parameter, wherein the objective function is:

L = (1/N) * Σ_{i=1}^{N} ||q_i - (R·p_i + T)||^2

wherein L represents the value of the objective function, p_i represents a feature point in the reference frame, q_i represents the corresponding feature point in the current frame, and N represents the number of feature points.

The poor texture tunnel modeling system based on vision-lidar coupling comprises an unmanned aerial vehicle body on which a depth camera, a laser radar, a computing unit and a controller are mounted; the controller is in communication connection with the depth camera, the laser radar and the computing unit, and the following steps are executed during the flight of the unmanned aerial vehicle:

T1, the controller acquires the point cloud information collected by the depth camera, the laser information collected by the laser radar and the motion information of the unmanned aerial vehicle, and sends the point cloud information, the laser information and the motion information to the computing unit;

T2, the computing unit filters the laser information to generate a grid map and obtains the pose information of the unmanned aerial vehicle based on the motion information;

T3, the point cloud information, the grid map and the pose information are fused by adopting a Bayesian fusion method to obtain a map model;

T4, steps T1 to T3 are repeated to obtain a new map model, the latest map model is corrected through feature matching based on the map model at the previous moment, and step T4 is repeated until the construction of the map model is completed.

Further, in step T4, based on the map model at the previous time, the step of correcting the latest map model by feature matching specifically includes:

t41, obtaining a map model of the previous moment as a reference frame; obtaining a latest map model, and finding a region corresponding to the map model at the last moment from the latest map model as a current frame;

T42, the feature points in the reference frame are denoted by {P_i} and the feature points in the current frame are denoted by {Q_i}, the number of feature points in the current frame being the same as that in the reference frame;

t43, establishing an interframe change model:

{Q_i} = R·{P_i} + T

wherein R represents a rotation parameter and T represents a translation parameter;

T44, substituting the feature points in the reference frame and the feature points in the current frame, and iteratively solving for the rotation parameter and the translation parameter;

T45, obtaining the matching relation between the map model at the previous moment and the latest map model based on the rotation parameter and the translation parameter, and correcting the latest map model.

Further, in step T44, the iterative solution of the rotation parameter and the translation parameter specifically includes:

substituting the feature points in the reference frame and the feature points in the current frame into the interframe change model, establishing an objective function based on the interframe change model, and taking the rotation parameter and the translation parameter that minimize the value of the objective function as the finally obtained rotation parameter and translation parameter, wherein the objective function is:

L = (1/N) * Σ_{i=1}^{N} ||q_i - (R·p_i + T)||^2

wherein L represents the value of the objective function, p_i represents a feature point in the reference frame, q_i represents the corresponding feature point in the current frame, and N represents the number of feature points.

Further, a storage unit is also mounted on the unmanned aerial vehicle body; the storage unit is connected with the controller and the computing unit and is used for storing the constructed map model.

Compared with the prior art, the invention has the following beneficial effects:

(1) The depth camera and the laser radar are fused for SLAM mapping, making full use of the wide-range information of the laser radar and the rich local information of the depth camera; the two complement each other, improving the accuracy of the information and bringing the map model closer to the real tunnel environment.

(2) The point cloud information, the grid map and the pose information are fused by a Bayesian fusion method to obtain the map model; the Bayesian method is suited to uncertain information with additive Gaussian noise, so the resulting map model has a smaller error.

(3) The rotation parameter and the translation parameter between the latest map model and the map model at the previous moment are calculated by feature point matching, so the map model is corrected and the accuracy is further improved.

Drawings

FIG. 1 is a flow chart of the poor texture tunnel modeling method based on vision-lidar coupling;

FIG. 2 is a schematic diagram of the SLAM framework.

Detailed Description

The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.

In the drawings, structurally identical elements are denoted by the same reference numerals, and structurally or functionally similar elements are denoted by similar reference numerals throughout the several views. The size and thickness of each component shown in the drawings are drawn arbitrarily, and the present invention is not limited to the size and thickness of any component. Where appropriate, parts are exaggerated in the drawings for clarity of illustration.

Example 1:

A poor texture tunnel modeling system based on vision-laser radar coupling comprises an unmanned aerial vehicle body on which a depth camera, a laser radar, a computing unit and a controller are mounted; the controller is in communication connection with the depth camera, the laser radar and the computing unit. The modeling system adopts the poor texture tunnel modeling method based on vision-laser radar coupling and carries out modeling through an unmanned aerial vehicle carrying a depth camera and a laser radar; the basic flow is shown in FIG. 1 and comprises the following steps:

S1, acquiring point cloud information collected by the depth camera, laser information collected by the laser radar and motion information of the unmanned aerial vehicle;

S2, filtering the laser information to generate a grid map, and obtaining pose information of the unmanned aerial vehicle based on the motion information;

S3, fusing the point cloud information, the grid map and the pose information by adopting a Bayesian fusion method to obtain a map model;

S4, repeating steps S1 to S3 to obtain a new map model, correcting the latest map model through feature matching based on the map model at the previous moment, and repeating step S4 until the construction of the map model is completed.

A storage unit is also mounted on the unmanned aerial vehicle body; the storage unit is connected with the controller and the computing unit and is used for storing the constructed map model.

The laser radar offers high precision, good stability and a large acquisition range, but the data it collects are not rich enough. The vision sensor is inexpensive, lightweight, collects rich environmental information and lends itself to data association, but its depth estimation is poor; because it is sensitive to illumination changes and low-texture environments, visual SLAM performs poorly, or even fails, in environments lacking illumination and texture features. Since laser SLAM and visual SLAM each have limitations when used alone, the present application fuses them for mapping: the depth camera and the laser radar are combined so that their respective shortcomings are compensated, which ultimately improves the accuracy and robustness of SLAM mapping as well as its speed.

Taking visual SLAM as an example, the SLAM framework is basically as shown in FIG. 2: sensor data -> front-end visual odometry -> back-end nonlinear optimization -> loop detection -> mapping. The sensor data are the received measurements, which for visual SLAM mainly consist of image information. The task of the front-end visual odometry is to compute the camera motion from images captured at adjacent times and to construct a local map; the back-end nonlinear optimization mainly reduces the error of the map constructed by the visual odometry; and loop detection determines whether the current position has been reached before, primarily to correct the drift of the position estimate over time.
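For readability only, the following Python sketch mirrors the pipeline order described above with placeholder functions; the function names and bodies are assumptions made for this illustration, not the implementation of the present application.

```python
# Minimal, runnable sketch of the SLAM pipeline order described above.
# All function bodies are placeholders (assumptions), kept only to show the
# data flow: sensor data -> front end -> back end -> loop detection -> mapping.

def front_end_visual_odometry(frame, prev_frame):
    """Estimate relative camera motion between adjacent frames (placeholder)."""
    return {"relative_pose": (0.0, 0.0, 0.0)}

def back_end_optimization(trajectory, odometry):
    """Reduce the accumulated error of the front-end estimates (placeholder)."""
    return trajectory + [odometry["relative_pose"]]

def loop_detected(trajectory):
    """Decide whether the current place has been visited before (placeholder)."""
    return False

def update_map(global_map, trajectory):
    """Extend the global map with the latest optimised pose (placeholder)."""
    return global_map + [trajectory[-1]]

trajectory, global_map, prev = [], [], None
for frame in ["frame_0", "frame_1", "frame_2"]:   # stand-ins for sensor data
    odom = front_end_visual_odometry(frame, prev)
    trajectory = back_end_optimization(trajectory, odom)
    if loop_detected(trajectory):
        pass                                      # a loop closure would correct drift here
    global_map = update_map(global_map, trajectory)
    prev = frame
```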

Because the depth camera and the laser radar are mounted at different positions, the point cloud information and the laser information they collect lie in different coordinate systems, so a coordinate transformation is needed to unify them. The relative transformation between the two coordinate systems can be determined by calibration before the unmanned aerial vehicle flies; that is, the relative transformation relation between the point cloud information and the laser information is determined according to the positional relation of the depth camera and the laser radar on the unmanned aerial vehicle.

The transformation relation from the point cloud under the laser radar coordinate system to the depth camera coordinate system is as follows:

(X_c, Y_c, Z_c)^T = r·(X, Y, Z)^T + t,    Z_c·(u, v, 1)^T = K·(X_c, Y_c, Z_c)^T

wherein (X, Y, Z)^T represents the coordinates in the laser radar coordinate system, (X_c, Y_c, Z_c)^T represents the coordinates in the depth camera coordinate system, (u, v, 1)^T represents the pixel coordinates on the imaging plane of the depth camera, r is a rotation matrix determined based on the positions of the depth camera and the laser radar on the unmanned aerial vehicle, t is a translation matrix determined based on the positions of the depth camera and the laser radar on the unmanned aerial vehicle, and K represents the intrinsic parameter matrix of the depth camera and is a fixed value.
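As a concrete illustration of this transformation, the short Python sketch below maps a lidar point into the depth camera's pixel coordinates; the values of r, t and K are placeholders standing in for calibration results, not parameters from this application.

```python
import numpy as np

def lidar_to_pixel(p_lidar, r, t, K):
    """Project a 3D point from the lidar frame to pixel coordinates (u, v)."""
    p_cam = r @ p_lidar + t          # (X, Y, Z) -> (Xc, Yc, Zc)
    uvw = K @ p_cam                  # apply the camera intrinsic matrix
    return uvw[:2] / p_cam[2]        # perspective division by the depth Zc

# Placeholder extrinsics/intrinsics; in practice these come from calibration.
r = np.eye(3)                                    # rotation, lidar -> camera
t = np.array([0.05, 0.0, -0.02])                 # translation in metres
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])            # pinhole intrinsic matrix

u, v = lidar_to_pixel(np.array([1.0, 0.2, 3.0]), r, t, K)
```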

The motion information of the unmanned aerial vehicle is measured by an inertial measurement unit (IMU) and an odometer and comprises speed, acceleration and distance; because GPS signals in the tunnel are weak or even lost, the pose estimate of the unmanned aerial vehicle is obtained from the IMU and odometer measurements by fusing the motion information through Kalman filtering.
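A minimal one-dimensional sketch of such a Kalman fusion step is given below: the IMU acceleration drives the prediction and the odometer distance serves as the measurement. The sample period and noise covariances are illustrative assumptions, not values from this application.

```python
import numpy as np

dt = 0.02                                   # assumed sample period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])       # transition for the state [position, velocity]
B = np.array([0.5 * dt**2, dt])             # control model for the IMU acceleration
H = np.array([[1.0, 0.0]])                  # the odometer observes position only
Q = np.diag([1e-4, 1e-3])                   # process noise (tuning assumption)
R = np.array([[1e-2]])                      # odometer measurement noise (assumption)

def kalman_step(x, P, accel_imu, dist_odom):
    """One predict/correct cycle fusing IMU acceleration and odometer distance."""
    # Predict with the IMU acceleration as the control input
    x = F @ x + B * accel_imu
    P = F @ P @ F.T + Q
    # Correct with the odometer distance measurement
    y = np.array([dist_odom]) - H @ x       # innovation
    S = H @ P @ H.T + R
    K_gain = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K_gain @ y
    P = (np.eye(2) - K_gain @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)               # initial state estimate and covariance
x, P = kalman_step(x, P, accel_imu=0.1, dist_odom=0.002)
```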

Bayesian estimation is a statistical data-fusion algorithm based on Bayes' theorem and conditional/posterior probability; it is suited to uncertain information with additive Gaussian noise and can estimate an n-dimensional state vector in an unknown state from known measurement vectors. In the present application, the point cloud information collected by the depth camera, the grid map constructed from the laser radar and the pose information obtained from the motion information are fused by Bayesian fusion to obtain the map model; this takes the respective strengths and weaknesses of the depth camera and the laser radar into account, incorporates the motion of the unmanned aerial vehicle, and further reduces the error.
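As an illustration of the fusion idea (an assumption about one common formulation, not necessarily the exact one used here), the sketch below updates a single occupancy-grid cell in log-odds form from two independent observations, one from the lidar grid map and one from the depth-camera point cloud.

```python
import numpy as np

def logit(p):
    """Convert an occupancy probability to log-odds."""
    return np.log(p / (1.0 - p))

def fuse_cell(prior_logodds, p_lidar, p_depth):
    """Bayesian update of one grid cell from two independent sensor observations."""
    return prior_logodds + logit(p_lidar) + logit(p_depth)

def to_probability(logodds):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 / (1.0 + np.exp(-logodds))

# Example: the lidar sees the cell as occupied with p = 0.7, the depth camera with p = 0.9;
# starting from an uninformative prior (log-odds 0), the fused probability is about 0.95.
fused = to_probability(fuse_cell(prior_logodds=0.0, p_lidar=0.7, p_depth=0.9))
```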

In step S4, the specific step of correcting the latest map model by feature matching based on the map model at the previous time is:

s41, obtaining a map model of the previous moment as a reference frame; obtaining a latest map model, and finding a region corresponding to the map model at the last moment from the latest map model as a current frame;

s42, feature point in reference frame uses PiDenotes that the feature point in the current frame uses QiIndicates that the number of feature points in the current frame and the reference frame is the same;

s43, establishing an interframe change model:

{Q_i} = R·{P_i} + T

wherein R represents a rotation parameter and T represents a translation parameter;

s44, substituting the characteristic points in the reference frame and the characteristic points in the current frame, and iteratively solving the rotation parameters and the translation parameters;

S45, obtaining the matching relation between the map model at the previous moment and the latest map model based on the rotation parameter and the translation parameter, and correcting the latest map model.

Because the map model is constructed during the flight of the unmanned aerial vehicle, errors can exist between the map models constructed at the two moments. In theory, if the rotation parameter and the translation parameter are accurate, every feature point in the reference frame coincides with a feature point in the current frame; however, perfect coincidence cannot be achieved because of noise and error, so an objective function is defined. In step S44, the iterative solution of the rotation parameter and the translation parameter is specifically:

substituting the feature points in the reference frame and the feature points in the current frame into the interframe change model, establishing an objective function based on the interframe change model, and taking the rotation parameter and the translation parameter that minimize the value of the objective function as the finally obtained rotation parameter and translation parameter, wherein the objective function is:

L = (1/N) * Σ_{i=1}^{N} ||q_i - (R·p_i + T)||^2

wherein L represents the value of the objective function, p_i represents a feature point in the reference frame, q_i represents the corresponding feature point in the current frame, and N represents the number of feature points.
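One standard way to minimize this objective for matched point sets is a closed-form Kabsch/ICP-style step based on the singular value decomposition; the Python sketch below is offered as an illustrative assumption about the solver, since the text above only specifies the objective and that it is solved iteratively.

```python
import numpy as np

def solve_rt(P, Q):
    """Given matched points P (reference) and Q (current), each of shape (N, 3),
    return R and T minimizing (1/N) * sum ||q_i - (R p_i + T)||^2."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - p_mean, Q - q_mean              # centre both point sets
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)          # SVD of the cross-covariance
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = q_mean - R @ p_mean
    return R, T

def objective(P, Q, R, T):
    """L = (1/N) * sum ||q_i - (R p_i + T)||^2, the residual being minimized."""
    return np.mean(np.sum((Q - (P @ R.T + T)) ** 2, axis=1))

# Example: recover a known rigid motion from noiseless correspondences.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
R_true = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
Q = P @ R_true.T + np.array([0.3, -0.1, 0.5])
R_est, T_est = solve_rt(P, Q)
print(objective(P, Q, R_est, T_est))             # approximately zero for an exact match
```

In ICP proper, a closed-form step of this kind alternates with re-matching of nearest feature points, which is one way to understand the iteration referred to in step S44.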

The foregoing detailed description of the preferred embodiments of the invention has been presented. It should be understood that numerous modifications and variations could be devised by those skilled in the art in light of the present teachings without departing from the inventive concepts. Therefore, the technical solutions available to those skilled in the art through logic analysis, reasoning and limited experiments based on the prior art according to the concept of the present invention should be within the scope of protection defined by the claims.
