Multi-robot relative positioning method based on multi-sensor fusion

Document No.: 1798030    Publication date: 2021-11-05

Note: this technique, "Multi-robot relative positioning method based on multi-sensor fusion" (一种基于多传感器融合的多机器人相对定位方法), was created by Huang Panfeng (黄攀峰), Yang Li (杨立), Zhang Fan (张帆) and Zhang Yizhai (张夷斋) on 2021-07-19. Its main content is as follows: the invention relates to a multi-robot relative positioning method based on multi-sensor fusion, aiming to realize configuration sensing during cooperative transportation by multiple unmanned aerial vehicles and to provide pose information to the control module for formation keeping. The adopted technical scheme comprises the following steps: each single machine estimates the visual pose of the other unmanned aerial vehicles within its visual range; the IMU information is fused with the visual estimates and UWB ranging information; and the positioning results among the multiple machines are fused. Because a cooperation mark of known size is adopted, the robustness, accuracy and computation speed of the visual positioning are all improved and the observation update frequency is increased. Because a multi-sensor combination of IMU prediction, UWB ranging and visual image observation is used, the relative positioning accuracy among the robots is further improved, and the relative configuration of the system can be maintained without relying on GPS information. Because a ring network topology is adopted for formation estimation, the system keeps operating even when some observations fail, giving it a degree of redundancy.

1. A multi-robot relative positioning method based on multi-sensor fusion is characterized by comprising the following steps:

step 1, each unit robot estimates the visual poses of the other robots within its visual range:

each robot is identified by an AprilTag used as its cooperation mark;

each unit robot in the system detects and tracks only a specific target;

each unit robot reads an image from its onboard camera, converts the pixel information into a cv::Mat matrix using OpenCV (Open Source Computer Vision Library), and converts the image into a grayscale image;

pose initialization: a full-image search is carried out with the AprilTag algorithm to find the position p_uv of the target cooperation mark on the image plane, and the homogeneous transformation matrix from the camera to the cooperation mark is solved using the PnP method;

clipping of subsequent input images: according to the position of the cooperation mark in the image at the previous time, the input image at the next time is cropped and only the image region near the cooperation mark is kept; the cooperation mark is searched for in this region, its position p_uv on the image plane and the homogeneous matrix are calculated, and the vision calculation result is broadcast on the ROS communication network;

if the cooperation mark is not found in the image region, the clipping region is enlarged in the next frame until the cooperation mark can be found;

step 2, the IMU information is fused with the visual estimates and UWB ranging information:

each unit robot resolves its IMU information to obtain the pose increments ΔR_ij, Δv_ij, Δp_ij and the corresponding covariance terms δΦ_ij, δv_ij, δp_ij, which predict the pose and covariance of the robot at the next moment, i.e. the inertial navigation prediction result; a UWB (ultra-wideband) tag performs DS-TWR (double-sided two-way ranging) to acquire the distance information between units;

each unit robot broadcasts its calculated inertial navigation data and UWB ranging information on the ROS communication network;

each unit takes the vision calculation result of step 1 and the UWB ranging result as observations, performs fusion filtering with the inertial navigation prediction, and broadcasts the fused result on the ROS communication network; if the visual observation fails for a long time or the result covariance exceeds a threshold, an Exception label is attached to the filtering result and it is marked as untrusted data;

step 3, the positioning results among the multiple robots are fused:

the pilot (leader) unit robot calculates the current relative configuration of the system from the fused relative pose data, and the ring network topology tolerates at most one member positioning failure;

if two or more members suffer positioning failures or communication errors, the system suspends the current task, holds a hovering state and reports the error to the ground station.

2. The multi-robot relative positioning method based on multi-sensor fusion as claimed in claim 1, wherein: the ground station has the highest priority at all times and can override the current mission state.

3. The multi-robot relative positioning method based on multi-sensor fusion according to claim 1 or 2, characterized in that: the cooperation mark is printed in black and white, and the size of its outer frame is 0.5 m.

4. The multi-robot relative positioning method based on multi-sensor fusion according to claim 1 or 2, characterized in that: the resolution of the read-in image is greater than 1920 x 1080.

Technical Field

The invention belongs to the field of unmanned aerial vehicle sensing and positioning, relates to a multi-robot relative positioning method based on multi-sensor fusion, and in particular to a mutual positioning method based on multi-sensor fusion for a multi-unmanned-aerial-vehicle cooperative transportation system.

Background

The multi-robot cooperative system has the characteristics of convenient combination, flexibility and redundancy, and has developed rapidly in recent years. Compared with transportation by a single large aircraft, a multi-unmanned-aerial-vehicle system offers low single-vehicle cost and easy maintenance for aerial carrying operations, and its load capacity can be increased by adding units according to the weight of the load. The multi-unmanned-aerial-vehicle cooperative transportation system consists of the unmanned aerial vehicle units, flexible tethers, a load attachment device and a ground station. Each unmanned aerial vehicle unit carries a camera, an inertial navigation sensor, an ultra-wideband (UWB) beacon, a computing unit and a control system, and real-time communication can be carried out between units. During operation, the distance between unmanned aerial vehicle units is 10-20 m. The ground station is mainly used to monitor the state of the unmanned aerial vehicles and to switch to manual control in time when a safety hazard appears.

There are various ways for the units of a multi-unmanned-aerial-vehicle cooperative transportation system to locate one another. The current mainstream method is to obtain absolute positions from satellite positioning systems such as the Global Positioning System (GPS) or the BeiDou Navigation Satellite System (BDS) and then to derive relative positions, and this approach has already been developed and applied in industry. However, the civil-grade accuracy of satellite positioning systems such as GPS is usually at the metre level, which only meets the positioning requirements of single-vehicle operation. GPS-RTK (Real-Time Kinematic) is a positioning method based on carrier-phase dynamic real-time differencing whose accuracy can reach the centimetre level, but the equipment is expensive and requires a ground base station, so it cannot be widely used. In addition, compared with unmanned aerial vehicle formation flight, multi-vehicle cooperative transportation is a strongly coupled system: the tension distribution on the tethers directly affects the endurance of the whole system, so the system is more sensitive to relative configuration and height differences and places higher requirements on relative pose estimation. To address these problems, a multi-sensor fusion positioning scheme based on inertial navigation, ultra-wideband (UWB) ranging and vision is proposed, and the relative positioning error is minimized through a redundant solution. Furthermore, because these sensors do not depend on an external base station, the system can work normally even when GPS is jammed.

In the field of robot positioning, multi-sensor fusion schemes have been widely adopted by industry. For example, Chinese patent application No. CN202011388909.0 proposes a robot positioning method based on IMU data, wheel-speed odometry data, image data, laser data and UWB data, obtaining robust pose data by pre-integrating the IMU data, aligning it with image observations and fusing it with sensor data such as lidar. Chinese patent application No. CN202011071053.4 proposes a fusion positioning method based on images, depth maps, IMU data and 2D lidar. It can be seen that high-precision, high-robustness positioning of a single vehicle is relatively mature, but a large gap remains in high-precision relative positioning among multiple vehicles. Unmanned aerial vehicles move freely in three-dimensional Euclidean space and the inter-vehicle distances are large (beyond the working range of common lidar), which places more restrictions on sensor selection and on the fusion scheme.

Disclosure of Invention

Technical problem to be solved

In order to overcome the defects of the prior art, the invention provides a multi-robot relative positioning method based on multi-sensor fusion. The aim is to design a mutual positioning method that realizes configuration perception during cooperative transportation by multiple unmanned aerial vehicles and provides pose information to the control module for formation keeping.

Technical scheme

A multi-robot relative positioning method based on multi-sensor fusion is characterized by comprising the following steps:

step 1, each unit robot estimates the visual poses of the other robots within its visual range:

each robot is identified by an AprilTag used as its cooperation mark;

each unit robot in the system detects and tracks only a specific target;

each unit robot reads an image from its onboard camera, converts the pixel information into a cv::Mat matrix using OpenCV (Open Source Computer Vision Library), and converts the image into a grayscale image;

pose initialization: a full-image search is carried out with the AprilTag algorithm to find the position p_uv of the target cooperation mark on the image plane, and the homogeneous transformation matrix from the camera to the cooperation mark is solved using the PnP method;

clipping of subsequent input images: according to the position of the cooperation mark in the image at the previous time, the input image at the next time is cropped and only the image region near the cooperation mark is kept; the cooperation mark is searched for in this region, its position p_uv on the image plane and the homogeneous matrix are calculated, and the vision calculation result is broadcast on the ROS communication network;

if the cooperation mark is not found in the image region, the clipping region is enlarged in the next frame until the cooperation mark can be found;

step 2, the IMU information is fused with the visual estimates and UWB ranging information:

each unit robot resolves its IMU information to obtain the pose increments ΔR_ij, Δv_ij, Δp_ij and the corresponding covariance terms δΦ_ij, δv_ij, δp_ij, which predict the pose and covariance of the robot at the next moment, i.e. the inertial navigation prediction result; a UWB (ultra-wideband) tag performs DS-TWR (double-sided two-way ranging) to acquire the distance information between units;

each unit robot broadcasts its calculated inertial navigation data and UWB ranging information on the ROS communication network;

each unit takes the vision calculation result of step 1 and the UWB ranging result as observations, performs fusion filtering with the inertial navigation prediction, and broadcasts the fused result on the ROS communication network; if the visual observation fails for a long time or the result covariance exceeds a threshold, an Exception label is attached to the filtering result and it is marked as untrusted data;

step 3, the positioning results among the multiple robots are fused:

the pilot (leader) unit robot calculates the current relative configuration of the system from the fused relative pose data, and the ring network topology tolerates at most one member positioning failure;

if two or more members suffer positioning failures or communication errors, the system suspends the current task, holds a hovering state and reports the error to the ground station.

The ground station has the highest priority at all times and can override the current mission state.

The cooperation mark is printed in black and white, and the size of its outer frame is 0.5 m.

The resolution of the read-in image is greater than 1920 x 1080.

Advantageous effects

The invention provides a multi-robot relative positioning method based on multi-sensor fusion, which aims to realize configuration sensing during cooperative transportation by multiple unmanned aerial vehicles and to provide pose information to the control module for formation keeping. The adopted technical scheme comprises the following steps: each single machine estimates the visual pose of the other unmanned aerial vehicles within its visual range; the IMU information is fused with the visual estimates and UWB ranging information; and the positioning results among the multiple machines are fused.

Compared with the prior art, the invention has the following advantages:

(1) Because a cooperation mark of known size is adopted, the robustness, accuracy and computation speed of the visual positioning are all improved, and the observation update frequency is increased;

(2) Because a multi-sensor combination of IMU prediction, UWB ranging and visual image observation is adopted, the relative positioning accuracy among the robots is further improved, and the relative configuration of the system can be maintained without relying on GPS information;

(3) Because a ring network topology is adopted for formation estimation, the system keeps operating even when some observations fail, giving it a degree of redundancy.

Drawings

FIG. 1: cooperative identification accelerated computation flow chart

FIG. 2: multi-sensor fusion pose estimation schematic diagram

FIG. 3: system topology schematic

FIG. 4: flow chart of the system

Detailed Description

The invention will now be further described with reference to the following examples and drawings:

The invention aims to design a mutual positioning method based on multi-sensor fusion that realizes configuration perception during cooperative transportation by multiple unmanned aerial vehicles and provides pose information to the control module for formation keeping.

In order to achieve the purpose, the technical scheme adopted by the invention comprises the following steps:

1) each single machine estimates the visual pose of the other unmanned aerial vehicles within its visual range;

2) the IMU information is fused with the visual estimates and UWB ranging information;

3) the positioning results among the multiple machines are fused.

Step 1): visual pose estimation of the other unmanned aerial vehicles by the single machine:

1.1) Setting of the cooperation mark: an AprilTag is used as the cooperation mark, printed in black and white, with an outer frame size of 0.5 m.

1.2) Determination of the system topology: limited by the performance of the onboard computing unit, each unit drone detects and tracks only a specific target. If the cluster size is n, the drone labelled i is tracked only by the drone labelled i+1, and the drone labelled n-1 is tracked by the drone labelled 0, forming a ring network (a minimal sketch follows).
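The ring assignment can be captured by a one-line helper. The following is a minimal sketch (Python is used only for illustration; the patent does not prescribe a language), and the function name is hypothetical.

```python
def tracked_target_id(my_id: int, n: int) -> int:
    """Ring topology: drone i is tracked by drone i+1 and drone n-1 by drone 0,
    so each drone tracks its predecessor modulo n."""
    return (my_id - 1) % n

# Example for a 4-drone cluster: drone 0 tracks drone 3, drone 1 tracks drone 0.
assert tracked_target_id(0, 4) == 3
assert tracked_target_id(1, 4) == 0
```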

1.3) Image preprocessing: a higher-resolution image is read from the onboard camera, the pixel information is converted into cv::Mat matrix form using OpenCV, and the image is converted into a grayscale image whose size is denoted gray = (cols, rows)^T.

1.4) Pose initialization: a full-image search is carried out with the AprilTag algorithm to find the position p_uv of the target cooperation mark on the image plane, and the homogeneous transformation matrix from the camera to the cooperation mark is solved with the PnP method; the clipping offset bias is updated, and the clipping-region size is set to k_w × gray, where k_w < 1 is a manually chosen scaling factor. A minimal sketch of the detection and PnP step is given below.
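The sketch below illustrates the full-image search and PnP solution, assuming the pupil_apriltags Python detector and an OpenCV pinhole camera model; the intrinsic matrix, the zero-distortion assumption and the corner ordering are illustrative, not values taken from the patent.

```python
import cv2
import numpy as np
from pupil_apriltags import Detector

TAG_SIZE = 0.5   # side length (m) of the square whose corners the detector reports;
                 # the patent specifies a 0.5 m outer frame for the cooperation mark
K = np.array([[800.0,   0.0, 960.0],   # assumed pinhole intrinsics for a 1920x1080 image
              [  0.0, 800.0, 540.0],
              [  0.0,   0.0,   1.0]])
DIST = np.zeros(5)                     # assumed negligible lens distortion

detector = Detector(families="tag36h11")

def initialize_pose(gray, target_id):
    """Full-image AprilTag search, then PnP from the camera to the cooperation mark."""
    hits = [d for d in detector.detect(gray) if d.tag_id == target_id]
    if not hits:
        return None, None
    p_uv = hits[0].center                          # mark position on the image plane
    corners = hits[0].corners.astype(np.float64)   # 4x2 pixel corners
    half = TAG_SIZE / 2.0
    # Marker-frame corner coordinates; the ordering must match the detector output.
    obj = np.array([[-half,  half, 0.0], [ half,  half, 0.0],
                    [ half, -half, 0.0], [-half, -half, 0.0]])
    ok, rvec, tvec = cv2.solvePnP(obj, corners, K, DIST)
    if not ok:
        return p_uv, None
    R, _ = cv2.Rodrigues(rvec)
    T = np.eye(4)                                  # homogeneous camera-to-marker matrix
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return p_uv, T
```

Using a cooperation mark of fixed, known physical size is what makes the distance recoverable from a single view, which underlies the accuracy and robustness gains described above.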

1.5) Clipping of the input image: since the target cannot maneuver rapidly within a short time, the clipping offset bias is updated according to the position of the cooperation mark in the image at time k, and the image at time k+1 is cropped so that only the image region near the cooperation mark is kept (see the sketch below).
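A minimal cropping sketch, under the assumption that bias is the pixel position of the mark (used as the window centre, consistent with the restoration formula in 1.6) and size is the side length of a square window.

```python
import numpy as np

def crop_around_mark(gray: np.ndarray, bias, size: int) -> np.ndarray:
    """Keep only a square region of side `size` centred on the last mark position `bias`."""
    rows, cols = gray.shape
    half = size // 2
    x0, x1 = max(0, int(bias[0]) - half), min(cols, int(bias[0]) + half)
    y0, y1 = max(0, int(bias[1]) - half), min(rows, int(bias[1]) + half)
    return gray[y0:y1, x0:x1]
```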

1.6) Cyclic update of the visual observation: as shown in FIG. 1, the cooperation mark is searched for in the cropped input image, its original pixel coordinates are restored using the clipping information, and its position p_uv on the image plane and the homogeneous matrix are calculated. The restoration formula is:

p_origin.uv = p_detect.uv + bias - size/2

If the target cooperation mark is not detected in the image region and the time elapsed since the mark was last detected is below the threshold, i.e. t_now - t_lastframe < t_threshold, the window size is updated as size ← 2 × size; if the threshold is exceeded, the window size is reset to gray (the full image) and the procedure returns to 1.3) for re-initialization. A sketch of this restore-and-resize logic follows.
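A sketch of the coordinate restoration and window update; treating size as a scalar side length and the 1.0 s timeout are illustrative assumptions.

```python
import numpy as np

T_THRESHOLD = 1.0  # assumed timeout (seconds) before falling back to a full re-initialization

def restore_pixel(p_detect_uv, bias, size):
    """Map a detection inside the crop back to full-image pixel coordinates:
    p_origin.uv = p_detect.uv + bias - size/2 (bias = window centre)."""
    return np.asarray(p_detect_uv, dtype=float) + np.asarray(bias, dtype=float) - size / 2.0

def update_window(size, t_now, t_lastframe, gray_shape):
    """On a missed detection: double the window while the last hit is recent,
    otherwise reset to the full image and request re-initialization (back to 1.3)."""
    full = max(gray_shape)
    if t_now - t_lastframe < T_THRESHOLD:
        return min(2 * size, full), False   # keep tracking with a larger window
    return full, True                       # timed out: re-initialize
```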

Step 2): fusion of the IMU information with the visual estimates and UWB ranging information:

2.1) The IMU information is resolved to obtain the pose increments ΔR_ij, Δv_ij, Δp_ij and the corresponding covariance terms δΦ_ij, δv_ij, δp_ij (a minimal sketch is given below).
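A minimal pre-integration sketch in the standard on-manifold form; gyro/accelerometer bias correction and the propagation of the covariance terms δΦ_ij, δv_ij, δp_ij are omitted for brevity, so this only illustrates how ΔR_ij, Δv_ij, Δp_ij accumulate.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def preintegrate(gyro, accel, dt):
    """Accumulate the increments ΔR, Δv, Δp from body-frame IMU samples taken
    between two filter updates (biases and gravity handling omitted)."""
    dR = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * dR @ np.asarray(a) * dt**2
        dv = dv + dR @ np.asarray(a) * dt
        dR = dR @ Rotation.from_rotvec(np.asarray(w) * dt).as_matrix()
    return dR, dv, dp
```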

2.2) The UWB tags perform DS-TWR (double-sided two-way ranging) to obtain the distance information between the vehicles (a sketch of the DS-TWR solution follows).
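A sketch of the asymmetric double-sided two-way ranging solution; the four interval names follow the usual poll/response/final exchange and are assumptions, not the patent's notation.

```python
C = 299_792_458.0  # speed of light in m/s

def ds_twr_distance(t_round1, t_reply1, t_round2, t_reply2):
    """Asymmetric DS-TWR: two measured round-trip intervals and two reply delays
    give the one-way time of flight, largely cancelling clock offsets."""
    tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2)
    return C * tof
```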

2.3) Each unit broadcasts its calculated inertial navigation data and UWB ranging information on the ROS communication network.

2.4) Each unit performs integral prediction from its own inertial navigation data and that of the tracked target.

2.5) As shown in FIG. 2, each unit takes the vision calculation result of step 1) and the UWB ranging result as observations and performs fusion filtering with the inertial navigation prediction. The vision result contains both bearing and distance information about the target and is treated as the master node, i.e. the main observation; the UWB beacon provides only distance information, acts as a slave node attached to the master node, and applies a 'spring' constraint that bounds changes in the measured distance. The IMU data are corrected according to the observations so that the relative pose error is minimized. A simplified filtering sketch is given below.
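The sketch below is a deliberately simplified version of this fusion: an EKF over the relative position only, using the pre-integrated displacement as the prediction, the vision result as a full position observation (master node) and the UWB range as a scalar constraint (slave node). The orientation part of the patent's filter is omitted, and all noise values are illustrative assumptions.

```python
import numpy as np

class RelativePositionEKF:
    """Toy EKF over the relative position between two units."""

    def __init__(self, p0):
        self.p = np.asarray(p0, dtype=float)     # relative position estimate (3,)
        self.P = np.eye(3) * 0.1                 # covariance

    def predict(self, dp, Q=None):
        """Propagate with the pre-integrated inertial displacement increment Δp."""
        self.p = self.p + np.asarray(dp, dtype=float)
        self.P = self.P + (np.eye(3) * 0.01 if Q is None else Q)

    def update_vision(self, p_meas, R=None):
        """Vision observes the full relative position (H = I): the master observation."""
        R = np.eye(3) * 0.05 if R is None else R
        K = self.P @ np.linalg.inv(self.P + R)
        self.p = self.p + K @ (np.asarray(p_meas, dtype=float) - self.p)
        self.P = (np.eye(3) - K) @ self.P

    def update_uwb(self, d_meas, r=0.1):
        """UWB observes only the range ||p||: the slave 'spring' constraint."""
        d_pred = max(np.linalg.norm(self.p), 1e-6)
        H = (self.p / d_pred).reshape(1, 3)      # Jacobian of h(p) = ||p||
        S = float(H @ self.P @ H.T) + r
        K = (self.P @ H.T) / S                   # 3x1 Kalman gain
        self.p = self.p + (K * (d_meas - d_pred)).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P
```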

If the visual observation fails for a long time or the result covariance exceeds the threshold, an Exception label is attached to the filtering result and it is marked as untrusted data.

Step 3): fusion of the positioning results among the multiple machines:

and 3.1) estimating the relative configuration of the current system by a pilot according to the fused relative pose data. As shown in fig. 3, the relative pose matrix is transmitted through a ring network, and the network can tolerate the positioning failure of at most one member, and at the moment, the integrity of the configuration estimation can be still ensured.

3.2) If two or more members suffer positioning failures or communication errors, the system suspends the current task, holds a hovering state and reports the error to the ground station.

3.3) The ground station has the highest priority at all times and can override the current mission state. The overall flow chart of the system is shown in FIG. 4.
