Map fusion method based on laser SLAM and visual SLAM

Document No.: 1566029; published 2020-01-24.

Reading note: This technology, "A map fusion method based on laser SLAM and visual SLAM," was designed and created by 李郑慧, 王煜聪, 孙玲玲, 胡莉英, and 刘新宇 on 2019-09-09. Its main content is as follows: The invention provides a map fusion method based on laser SLAM and visual SLAM. Data are collected with multiple sensors and the raw data are preprocessed; the data are then fed into the corresponding SLAM algorithms, and the two SLAM algorithms separately compute poses and build maps. The map built by the visual SLAM is converted into a 2D probability grid map, and finally the two 2D grid maps are fused under a grid occupancy rule. The invention overcomes the defects of any single SLAM, so the fused map is more accurate and better suited for navigation.

1. A map fusion method based on laser SLAM and visual SLAM is characterized by comprising the following steps:

step one, data acquisition and reading

Acquiring the acceleration and angular velocity of the robot through the IMU; acquiring left and right images with the robot's binocular camera; acquiring distance values over 360 degrees with a two-dimensional laser radar; acquiring the robot's speed with encoders; and reading the sensor data in real time with an industrial personal computer;

step two, data processing

Converting the pulse count acquired by each encoder into the speed of the corresponding wheel, calculating the overall moving speed of the robot through a motion-resolving algorithm, and calculating the displacement; integrating the acceleration acquired by the IMU to obtain speed and displacement; and fusing the speed and displacement produced by the two sensors with an extended Kalman filter to generate a pose;

step three, establishing a graph by using laser SLAM and visual SLAM

Transmitting the two-dimensional laser radar data and the robot pose information obtained in step two into the laser SLAM and building a 2D grid map; transmitting the image information acquired by the binocular camera and the acceleration and angular velocity acquired by the IMU (inertial measurement unit) into the visual SLAM to generate a three-dimensional sparse point cloud map;

step four, visual SLAM map conversion

Converting the pose generated by the visual SLAM and the three-dimensional sparse point cloud into a three-dimensional map represented by a skip-list tree, and projecting the three-dimensional map onto a plane to form a 2D grid map;

step five, map fusion

Fusing in real time, with a map fusion algorithm, the local grid map generated by the laser SLAM and the converted local grid map of the visual SLAM to generate a local fused map, storing the local fused map in real time, and repeating this cycle continuously to form a global fused map; the map fusion algorithm is specifically as follows: each cell of the local grid map generated by the laser SLAM and of the local grid map converted from the visual SLAM is expressed as an occupancy probability between 0 and 1; a threshold T1 is set for the laser SLAM grid map and a threshold T2 for the grid map converted from the visual SLAM; each cell's occupancy probability is then compared with its preset threshold: if the probability is greater than the threshold the cell is occupied, and if it is less than the threshold the cell is free; for storage and display, 1 denotes occupied, 0 denotes free, and -1 denotes uncertain; the two cell states are then combined according to the grid occupancy rule: if both are free the fused cell is judged free, if both are uncertain it is judged uncertain, and in all other cases it is judged occupied, thereby generating the local fused map.

Technical Field

The invention belongs to the field of image fusion, and particularly relates to a map fusion method based on laser SLAM and visual SLAM.

Background

Simultaneous localization and mapping (SLAM) is the basis of and key to solving problems such as exploration, surveying, and navigation for a mobile robot in an unknown environment. The robot creates an environment map from the observations of its on-board sensors, and estimates its own pose from the part of the map already created.

Multi-sensor data fusion has been applied to many aspects of robotics, such as map creation, localization, path planning, and exploration. Once the environment map has been created, a mobile robot with a layered architecture can complete the corresponding functions by applying modules such as localization, path planning, obstacle avoidance, and control, allowing the robot to move fully autonomously.

Because environments are complex, maps built by SLAM with a single sensor each have their own defects. Pure visual SLAM (VSLAM) suffers from low mapping accuracy and from the sensor's sensitivity to lighting, although its semantic information is rich; pure laser SLAM suffers from a lack of semantic information, the high price of multi-line laser radar, and similar problems.

Disclosure of Invention

To address the defects of any single SLAM, the invention provides a method that fuses the maps built by the two SLAMs under a grid occupancy rule, making the map more accurate and allowing the robot to localize and navigate more precisely.

The technical scheme of the invention is realized as follows:

step one, data acquisition and reading

Acquiring the acceleration and angular velocity of the robot through the IMU; acquiring left and right images with the robot's binocular camera; acquiring distance values over 360 degrees with a two-dimensional laser radar; acquiring the robot's speed with encoders; and reading the sensor data in real time with an industrial personal computer.

step two, data processing

Converting the pulse count acquired by each encoder into the speed of the corresponding wheel, calculating the overall moving speed of the robot through a motion-resolving algorithm, and calculating the displacement; integrating the acceleration acquired by the IMU to obtain speed and displacement; and fusing the speed and displacement produced by the two sensors with an extended Kalman filter to generate a pose.

Step three, establishing a graph by using laser SLAM and visual SLAM

Transmitting the two-dimensional laser radar data and the robot pose information obtained in step two into the laser SLAM and building a 2D grid map; transmitting the image information acquired by the binocular camera and the acceleration and angular velocity acquired by the IMU into the visual SLAM to generate a three-dimensional sparse point cloud map.

Step four, visual SLAM map conversion

Converting the pose generated by the visual SLAM and the three-dimensional sparse point cloud into a three-dimensional map represented by a skip-list tree, and projecting the three-dimensional map onto a plane to form a 2D grid map;

step five, map fusion

Fusing in real time, with a map fusion algorithm, the local grid map generated by the laser SLAM and the converted local grid map of the visual SLAM to generate a local fused map, storing the local fused map in real time, and repeating this cycle continuously to form a global fused map; the map fusion algorithm is specifically as follows: each cell of the local grid map generated by the laser SLAM and of the local grid map converted from the visual SLAM is expressed as an occupancy probability between 0 and 1; a threshold T1 is set for the laser SLAM grid map and a threshold T2 for the grid map converted from the visual SLAM; each cell's occupancy probability is then compared with its preset threshold: if the probability is greater than the threshold the cell is occupied, and if it is less than the threshold the cell is free; for storage and display, 1 denotes occupied, 0 denotes free, and -1 denotes uncertain; the two cell states are then combined according to the grid occupancy rule: if both are free the fused cell is judged free, if both are uncertain it is judged uncertain, and in all other cases it is judged occupied, thereby generating the local fused map.

The invention has the following advantages:

1) The method solves the problems that pure visual SLAM has low mapping accuracy and a light-sensitive sensor, and that pure 2D laser SLAM lacks semantic information. Experiments show that the observed area, robustness, and fault tolerance of map creation are effectively improved.

2) The invention combines the maps generated by the two SLAMs, so that the robot localizes more accurately and avoids obstacles better during navigation.

3) Exploiting the fact that both SLAMs can produce 2D grid maps, the invention fuses the two probability grid maps under the grid occupancy rule, fully combining the information of both maps.

Drawings

FIG. 1 is an overall flow chart of the present invention;

FIG. 2 is an overall block diagram of the visual SLAM of the present invention;

FIG. 3 is an overall block diagram of a laser SLAM of the present invention;

FIG. 4(A) is a 2D grid map of the laser SLAM;

FIG. 4(B) is a three-dimensional sparse point cloud map of a visual SLAM;

FIG. 4(C) is the fused map of the present invention.

Detailed Description

As shown in FIG. 1, the invention uses a 2D laser SLAM and a visual SLAM to compute the robot's real-time pose and build maps separately, and then fuses the two maps under the grid occupancy rule.

Step one, data acquisition and reading

Through the USB interface, the industrial personal computer reads in real time the acceleration and angular velocity acquired by the IMU, the left and right images acquired by the binocular camera, the 360-degree distance values acquired by the two-dimensional laser radar, and the robot's moving speed acquired by the encoders.

Step two, data processing

Convert the data acquired by all the sensors into the same coordinate system and attach timestamps. Convert the pulse count acquired by each encoder into the speed of the corresponding wheel using a linear velocity model, calculate the overall moving speed of the robot with a motion-resolving algorithm, and compute the displacement; integrate the acceleration acquired by the IMU to obtain speed and displacement, and fuse the speed and displacement produced by the two sensors with an extended Kalman filter to generate a pose.
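For illustration only (the following sketch is not part of the patent text): the Python code below shows one way the encoder-to-velocity conversion and a reduced velocity-level Kalman fusion could look. The robot parameters (TICKS_PER_REV, WHEEL_RADIUS, WHEEL_BASE) and the noise variances are hypothetical, and the patent's filter fuses both speed and displacement, whereas this sketch fuses velocity only and integrates displacement afterwards.

```python
import numpy as np

# Hypothetical parameters; the patent does not give the robot's geometry.
TICKS_PER_REV = 2048    # encoder pulses per wheel revolution
WHEEL_RADIUS  = 0.05    # m
WHEEL_BASE    = 0.30    # m, lateral distance between the drive wheels

def wheel_speed(pulses, dt):
    """Linear velocity model: pulses counted during dt -> wheel speed (m/s)."""
    return 2.0 * np.pi * WHEEL_RADIUS * (pulses / TICKS_PER_REV) / dt

def body_velocity(v_left, v_right):
    """Differential-drive kinematics: wheel speeds -> body (v, omega)."""
    return 0.5 * (v_left + v_right), (v_right - v_left) / WHEEL_BASE

class VelocityEKF:
    """Toy 1-D Kalman filter: IMU acceleration drives the prediction,
    the encoder velocity is the measurement, and displacement is
    integrated from the fused velocity. A deliberately reduced form
    of the extended Kalman fusion described in step two."""

    def __init__(self, q=0.05, r=0.02):
        self.v, self.p = 0.0, 1.0   # velocity estimate and its variance
        self.q, self.r = q, r       # process / measurement noise variances
        self.x = 0.0                # accumulated displacement

    def predict(self, accel, dt):
        self.v += accel * dt        # integrate IMU acceleration
        self.p += self.q

    def update(self, v_enc, dt):
        k = self.p / (self.p + self.r)    # Kalman gain
        self.v += k * (v_enc - self.v)    # correct with encoder velocity
        self.p *= 1.0 - k
        self.x += self.v * dt             # integrate fused velocity
        return self.v, self.x
```

In use, each encoder/IMU sample pair would call `predict(accel, dt)` followed by `update(body_velocity(vl, vr)[0], dt)` to obtain the fused speed and displacement.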

Step three, establishing a graph by using laser SLAM and visual SLAM

Transmit the laser radar data and the processed IMU and odometry data into the laser SLAM, establish a motion model, and then estimate the pose with a particle filter and build a 2D grid map. Transmit the images at 200 Hz and the IMU data at 20 Hz into the visual SLAM: the pose is computed by the pose estimation of the VIO (visual-inertial odometry) front end; the back end (bundle adjustment) performs nonlinear optimization on the pose computed by the front end; loop closure detection identifies loops; finally, a three-dimensional sparse point cloud map is built from the feature points, as shown in FIG. 2 and FIG. 3.
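As an illustrative aside (not specified in the patent, which delegates this to the particle-filter laser SLAM): the 2D grid map maintained here is typically a log-odds occupancy grid. The sketch below shows such a per-scan update under assumed constants (RES, L_OCC, L_FREE), a dense-sampling ray trace in place of Bresenham, and the assumption that the robot stays in the grid's positive quadrant.

```python
import numpy as np

RES = 0.05                    # metres per grid cell (assumed)
L_OCC, L_FREE = 0.85, -0.4    # log-odds increments for hit / pass-through (assumed)

def update_grid(logodds, pose, ranges, angles):
    """Update a log-odds occupancy grid with one 360-degree lidar scan.

    pose   -- (x, y, theta) of the robot in the map frame
    ranges -- measured distances; angles -- corresponding beam angles
    """
    x, y, th = pose
    for r, a in zip(ranges, angles):
        ex = x + r * np.cos(th + a)    # beam endpoint in world coordinates
        ey = y + r * np.sin(th + a)
        n = max(int(r / RES), 1)
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            fi = int((x + t * (ex - x)) / RES)   # cell the beam passes through
            fj = int((y + t * (ey - y)) / RES)
            logodds[fi, fj] += L_FREE            # beam passed: more likely free
        logodds[int(ex / RES), int(ey / RES)] += L_OCC   # endpoint: occupied
    return logodds

def to_probability(logodds):
    """Log-odds -> the 0-1 occupancy probabilities used in step five."""
    return 1.0 / (1.0 + np.exp(-logodds))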

step four, visual SLAM map conversion

Receive the keyframe poses and map points of the sparse point cloud map. Using the image data, perform color-based visual segmentation of the scene into ground and non-ground regions, and optimize the mapping of point cloud data to free cells in the grid map through local and global counters. Obstacles are identified with a slope-threshold algorithm, and false obstacles at edges are removed with the Canny edge detection algorithm, forming a 2D grid map that is convenient to fuse with the map built by the laser SLAM.
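A minimal sketch of the projection step, assuming the map points arrive as (x, y, z) tuples already expressed in the map frame; the color-based segmentation, the global counters, and the Canny post-processing are omitted, and the slope test is approximated by the per-cell height span. RES and SLOPE_T are illustrative values not given in the patent.

```python
import numpy as np

RES     = 0.05   # metres per grid cell (assumed)
SLOPE_T = 0.30   # height-span threshold separating ground from obstacles (assumed)

def project_points(points, grid_shape):
    """Project 3D map points onto a 2D grid.

    Per-cell counters collect the points falling into each cell; a large
    height span within a cell indicates a vertical structure, i.e. an obstacle.
    """
    hits  = np.zeros(grid_shape, dtype=int)    # local counters
    z_min = np.full(grid_shape, np.inf)
    z_max = np.full(grid_shape, -np.inf)
    for x, y, z in points:
        i, j = int(x / RES), int(y / RES)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            hits[i, j] += 1
            z_min[i, j] = min(z_min[i, j], z)
            z_max[i, j] = max(z_max[i, j], z)
    grid = np.full(grid_shape, -1, dtype=int)  # -1 = uncertain (no points)
    seen = hits > 0
    span = z_max - z_min                       # -inf where no points landed
    grid[seen] = np.where(span[seen] > SLOPE_T, 1, 0)   # 1 occupied, 0 free
    return grid
```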

Step five, map fusion

The two probability grid maps are fused under the grid occupancy rule. Because the laser SLAM is more accurate, its threshold is set to 0.5; the visual SLAM is less accurate, so its threshold is set to 0.6. The probability maps generated by the two SLAMs in the same coordinate system are received, and each cell's occupancy probability is compared with its preset threshold: above the threshold the cell is occupied, below it the cell is free. The two cell states are then combined according to the grid occupancy rule shown in Table 1. The resulting local fused map is stored in real time, and the process is repeated continuously to form the global fused map. The final results are shown in FIG. 4(A) to 4(C).

Table 1. Grid occupancy rule (the original table image is unavailable; contents reconstructed from the rule stated in step five):

    Laser \ Visual   Free        Uncertain   Occupied
    Free             Free        Occupied    Occupied
    Uncertain        Occupied    Uncertain   Occupied
    Occupied         Occupied    Occupied    Occupied
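A sketch of the fusion rule of step five, using the thresholds T1 = 0.5 (laser) and T2 = 0.6 (visual) stated above. The assumption that unobserved cells are stored as NaN in the probability maps is ours; the patent only states that -1 denotes "uncertain" in the discretized maps.

```python
import numpy as np

T1, T2 = 0.5, 0.6   # thresholds for the laser and visual grids (from step five)

def discretize(prob, threshold):
    """0-1 occupancy probability -> 1 occupied / 0 free / -1 uncertain.
    NaN (assumed marker for unobserved cells) fails both comparisons
    and therefore stays uncertain (-1)."""
    state = np.full(prob.shape, -1, dtype=int)
    state[prob > threshold] = 1
    state[prob < threshold] = 0
    return state

def fuse(laser, visual):
    """Grid occupancy rule (Table 1): both free -> free; both uncertain
    -> uncertain; every other combination -> occupied."""
    fused = np.full(laser.shape, 1, dtype=int)        # default: occupied
    fused[(laser == 0) & (visual == 0)] = 0           # both free
    fused[(laser == -1) & (visual == -1)] = -1        # both uncertain
    return fused

# Usage: fused = fuse(discretize(p_laser, T1), discretize(p_visual, T2))
```

Defaulting to "occupied" and carving out only the two agreed cases keeps the fused map conservative, which favors safe navigation.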
