EKF-based loosely-coupled multi-sensor fusion positioning method and system

Document No.: 1873877    Publication date: 2021-11-23

Description: This technique, an EKF-based loosely-coupled multi-sensor fusion positioning method and system, was designed and created by 梁婉琳, 关伟鹏 and 伍文飞 on 2021-08-23. The invention discloses an EKF-based loosely-coupled multi-sensor fusion positioning method and system, comprising: obtaining the SLO-VLP pose of the mobile terminal based on SLO-VLP, taking the SLO-VLP pose as the initialization pose of the particles of an adaptive Monte Carlo localization (AMCL) algorithm, and obtaining the AMCL pose based on this initialization pose; then taking the SLO-VLP pose and the AMCL pose as input values of an EKF algorithm to obtain the pose value of the mobile terminal. The system includes an RSE camera, a laser radar and an odometer. Through state prediction from the odometer and measurement updates from the laser radar scanner and the VLP, the invention relaxes the required number of observable LEDs down to zero while achieving high-precision positioning. The invention enables the mobile terminal to maximize its perceptual awareness of the environment and obtain sufficient measurements, providing a technical reference for indoor positioning and navigation.

1. An EKF-based loosely-coupled multi-sensor fusion positioning method is characterized by comprising the following steps:

the SLO-VLP pose of the mobile terminal is obtained based on SLO-VLP, the SLO-VLP pose is taken as the initialization pose of the particles of an adaptive Monte Carlo localization (AMCL) algorithm, and an AMCL pose is obtained based on the initialization pose;

and taking the SLO-VLP pose and the AMCL pose as input values of an EKF algorithm to obtain a pose value of the mobile terminal.

2. The EKF-based loosely-coupled multi-sensor fusion localization method of claim 1, wherein,

obtaining the SLO-VLP pose of the mobile terminal based on the SLO-VLP comprises:

obtaining the positions of an LED lamp body and the camera image center of the mobile terminal in a world coordinate system;

obtaining the attitude of the mobile terminal according to a first transformation relation between a camera coordinate system of the mobile terminal and a base_link coordinate system of the mobile terminal;

and acquiring a yaw angle through an odometer sensor of the mobile terminal, acquiring a second transformation relation of the attitude through a laser radar matcher, and correcting the yaw angle according to the second transformation relation to obtain the SLO-VLP pose of the mobile terminal.

3. The EKF-based loosely-coupled multi-sensor fusion localization method of claim 2, wherein,

the obtaining of the positions of the LED lamp body and the camera image center of the mobile terminal in the world coordinate system comprises:

acquiring a photosensitive area of the LED lamp body, and performing identity recognition through the photosensitive area to construct an LED landmark map, wherein the position of the LED lamp body in the LED landmark map is the position of the LED lamp body in the world coordinate system;

and obtaining the position of the LED lamp body in the camera coordinate system based on its position in the LED landmark map, from which the position of the camera image center of the mobile terminal in the world coordinate system is obtained.

4. The EKF-based loosely-coupled multi-sensor fusion localization method of claim 3, wherein,

obtaining the position of the LED lamp body in the camera coordinate system based on its position in the LED landmark map, specifically by calculating the height difference between the mobile terminal and the LED lamp body along the z axis;

the height difference is obtained through the proportional relation between the diameter of the LED on the image plane and the actual diameter of the LED lamp body, using similar triangles and the imaging principle.

5. The EKF-based loosely-coupled multi-sensor fusion localization method of claim 1, wherein,

obtaining the AMCL pose comprises:

and taking the SLO-VLP pose as the initialization pose of the AMCL filter particles, inputting the pose information acquired by the odometer sensor and the laser radar data into the AMCL filter, and outputting the AMCL pose.

6. The EKF-based loosely-coupled multi-sensor fusion localization method of claim 1, wherein,

obtaining the pose value of the mobile terminal comprises:

acquiring a pose estimation value through an odometer sensor of the mobile terminal;

and acquiring a pose observation value based on the SLO-VLP pose and the AMCL pose, and correcting the pose estimation value through a Kalman filter to acquire the pose value of the mobile terminal.

7. The EKF-based loosely-coupled multi-sensor fusion localization method of claim 6, wherein,

correcting the pose estimation value through the Kalman filter, and obtaining the pose value of the mobile terminal further comprises:

and calculating Kalman gain, and updating the pose vector and the covariance matrix according to the Kalman gain.

8. An EKF-based loosely-coupled multi-sensor fusion positioning system is characterized by comprising,

an RSE camera, a laser radar and an odometer;

the RSE camera is used for obtaining an SLO-VLP pose of the mobile terminal based on the SLO-VLP;

the laser radar is used for taking the SLO-VLP pose as the initialization pose of the particles of the adaptive Monte Carlo localization algorithm to obtain an AMCL pose;

and the odometer is used for taking the SLO-VLP pose and the AMCL pose as input values of an EKF algorithm to obtain a pose value of the mobile terminal.

9. The EKF-based loosely coupled multi-sensor fusion localization system of claim 8, comprising,

the odometer is also used for acquiring a yaw angle;

the laser radar is further used for obtaining a second transformation relation of the attitude of the mobile terminal and correcting the yaw angle according to the second transformation relation.

Technical Field

The invention belongs to the field of intelligent equipment positioning, and particularly relates to an EKF-based loosely-coupled multi-sensor fusion positioning method and system.

Background

Visible light communication (VLC) technology largely meets the needs of commercial indoor positioning with its advantages of high precision, low cost and easy implementation. At the same time, VLC generates no radio-frequency interference, so visible light positioning algorithms perform well in environments where radio-frequency radiation is strictly restricted (such as hospitals and nuclear power stations). Visible light positioning (VLP) technology modulates LEDs so that each LED flickers at a different frequency and transmits its position information through the air. The receiving device of the mobile terminal captures the VLP information, processes the captured picture with image processing techniques, demodulates the position information of the LED, and calculates the position of the mobile terminal using geometric principles and the like.

Camera-based VLP systems employ modulated LED lights mounted at known locations (e.g., on the ceiling) as artificial landmarks, associating each LED with an ID and coordinates through RSE-based OCC measurements. To reduce the number of LEDs required by a VLP system, the captured LED is no longer treated as a point; instead, its geometric image features are used to determine the orientation and position of the receiver. However, this approach requires an additional marker to be placed on the LED.

SLAM localization techniques build and update maps of unknown environments while the mobile terminal keeps track of its own location. SLAM simultaneously constructs an environment model (map) and estimates the pose of the terminal moving within it. SLAM methods can be classified into two categories according to the sensor used: vision-based and lidar-based. Compared with other distance-measuring devices, a lidar sensor measures its surroundings over a larger scanning angle with high angular resolution; it is also invariant to illumination. Their high reliability and accuracy make lidar sensors a popular choice for pose estimation.

Disclosure of Invention

To address the problems that existing lidar-based SLAM positioning technology cannot navigate and localize accurately without a reasonably accurate manually provided initial pose, and that the features obtainable from a laser radar are very limited, the invention provides the following scheme: an EKF-based loosely-coupled multi-sensor fusion positioning method comprising the following steps:

the SLO-VLP pose of the mobile terminal is obtained based on SLO-VLP, the SLO-VLP pose is taken as the initialization pose of the particles of an adaptive Monte Carlo localization (AMCL) algorithm, and an AMCL pose is obtained based on the initialization pose;

and taking the SLO-VLP pose and the AMCL pose as input values of an EKF algorithm to obtain a pose value of the mobile terminal.

Preferably, obtaining the SLO-VLP pose of the mobile terminal based on SLO-VLP comprises:

obtaining the positions of an LED lamp body and the camera image center of the mobile terminal in a world coordinate system;

obtaining the attitude of the mobile terminal according to a first transformation relation between a camera coordinate system of the mobile terminal and a base_link coordinate system of the mobile terminal;

and acquiring a yaw angle through an odometer sensor of the mobile terminal, acquiring a second transformation relation of the attitude through a laser radar matcher, and correcting the yaw angle according to the second transformation relation to obtain the SLO-VLP pose of the mobile terminal.

Preferably, the obtaining of the positions of the LED lamp body and the camera image center of the mobile terminal in the world coordinate system includes:

acquiring a photosensitive area of the LED lamp body, and performing identity recognition through the photosensitive area to construct an LED landmark map, wherein the position of the LED lamp body in the LED landmark map is the position of the LED lamp body in the world coordinate system;

and obtaining the position of the LED lamp body in the camera coordinate system based on its position in the LED landmark map, from which the position of the camera image center of the mobile terminal in the world coordinate system is obtained.

Preferably, the position of the LED lamp body in the camera coordinate system is obtained from its position in the LED landmark map, specifically by calculating the height difference between the mobile terminal and the LED lamp body along the z axis;

the height difference is obtained through the proportional relation between the diameter of the LED on the image plane and the actual diameter of the LED lamp body, using similar triangles and the imaging principle.

Preferably, obtaining the AMCL pose comprises:

and taking the SLO-VLP pose as the initialization pose of the AMCL filter particles, inputting the pose information acquired by the odometer sensor and the laser radar data into the AMCL filter, and outputting the AMCL pose.

Preferably, obtaining the pose value of the mobile terminal comprises:

acquiring a pose estimation value through an odometer sensor of the mobile terminal;

and acquiring a pose observation value based on the SLO-VLP pose and the AMCL pose, and correcting the pose estimation value through a Kalman filter to acquire the pose value of the mobile terminal.

Preferably, correcting the pose estimation value through the Kalman filter and obtaining the pose value of the mobile terminal further includes:

and calculating Kalman gain, and updating the pose vector and the covariance matrix according to the Kalman gain.

An EKF-based loosely-coupled multi-sensor fusion positioning system comprises,

an RSE camera, a laser radar and an odometer;

the RSE camera is used for obtaining an SLO-VLP pose of the mobile terminal based on the SLO-VLP;

the laser radar is used for taking the SLO-VLP pose as the initialization pose of the particles of the adaptive Monte Carlo localization algorithm to obtain an AMCL pose;

and the odometer is used for taking the SLO-VLP pose and the AMCL pose as input values of an EKF algorithm to obtain a pose value of the mobile terminal.

Preferably, the odometer is further configured to obtain a yaw angle;

the laser radar is further used for obtaining a second transformation relation of the attitude of the mobile terminal and correcting the yaw angle according to the second transformation relation.

The invention discloses the following technical effects:

the invention can simultaneously relax the requirement of the number of observable LEDs to zero by predicting the state of the odometer and updating the measurement of the laser radar scanner and the VLP, thereby realizing high-precision positioning. The invention enables the mobile terminal to improve the perception consciousness of the mobile terminal to the environment to the maximum extent, obtains enough measured values and provides technical reference for indoor positioning and navigation.

The invention provides a reliable attitude observation for laser SLAM, so that the accumulated errors of sensors such as the odometer and the laser radar can be corrected, yielding a more accurate pose and enhancing the robustness of the algorithm.

Through multi-sensor fusion, the invention compensates for the deficiencies of individual sensors and provides more reliable attitude estimation. The method gives the terminal accurate and reliable positioning when LEDs are scarce or faulty, and the robustness of the algorithm means the mobile terminal is not only suitable for venues with complete lighting infrastructure but can also provide high-precision positioning when information sources are scarce, laying a solid foundation for path planning and autonomous navigation.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.

FIG. 1 is a flow chart of a method of an embodiment of the present invention;

FIG. 2 is a schematic diagram of the transformation between world, camera and image coordinate systems according to an embodiment of the present invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.

As shown in fig. 1, the present invention provides a method and a system for EKF-based loosely coupled multi-sensor fusion localization, comprising:

step S1: obtaining a mobile terminal observation pose (X) calculated from the SLO-VLPt,Yt,Zt,θt) And its direction angle thetatCorrected by a laser radar matcher (AMCL).

Step S11: obtain the position (X_i, Y_i, Z_i) of a given LED lamp body in the world coordinate system and calculate the position P_s(X_s, Y_s, Z_s) of the center of the terminal camera image in the world coordinate system. The method comprises the following steps:

step S111: obtaining a photosensitive region ROI of the LED lamp body, identifying the identity (LED-ID) of the photosensitive region ROI, and retrieving a 3D position P of the LED from a registered light database (a prefabricated LED landmark map)LED(Xi,Yi,Zi)。

Step S112: from P_LED(X_i, Y_i, Z_i) acquired in step S111, calculate the position (x_i, y_i) of the LED lamp body in the camera coordinate system. The method comprises the following steps:

λ is the height difference between the mobile terminal and the LED lamp body along the z axis. It is obtained from the proportional relation between the diameter of the LED on the image plane and the actual diameter of the LED lamp body, together with similar triangles and the imaging principle, namely

λ = f·D / (D_pixel·P_d)    (1)

where D is the actual diameter of the LED, D_pixel is the diameter of the LED on the image plane in pixels, P_d is the conversion factor from pixel distance to physical distance, and f is the focal length of the camera.
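As a small illustration of equation (1) (a sketch only; the numeric values in the comment are made up), the height difference can be computed directly from the quantities defined above:

```python
def height_difference(D, D_pixel, P_d, f):
    """Height difference lambda between the mobile terminal and the LED along z,
    from the similar-triangle relation lambda / f = D / (D_pixel * P_d).

    D       : actual diameter of the LED (m)
    D_pixel : diameter of the LED on the image plane (pixels)
    P_d     : conversion factor from pixel distance to physical distance (m/pixel)
    f       : focal length of the camera (m)
    """
    return f * D / (D_pixel * P_d)

# Example (illustrative numbers): a 0.15 m LED imaged over 60 px with 3 um
# pixels and f = 4 mm gives lambda = 0.004 * 0.15 / (60 * 3e-6) ≈ 3.33 m.
```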

The focal length f, the pixel dimensions d_x and d_y, and the image-center pixel coordinates (u_0, v_0) constitute the camera intrinsic matrix K;

t is the translation vector, which corresponds to the position of the mobile terminal in the three-dimensional world coordinate system;

R is the rotation matrix from the world coordinate system to the camera coordinate system.

When only the two-dimensional plane is considered, the rotation angles α and β are treated as constants, while γ is acquired by the odometer sensor.

Step S113: from P_LED(X_i, Y_i, Z_i) obtained in step S111, it is possible to calculate:

Z_s = Z_i - λ    (2)

step S114: the method for calculating the pose of the camera center in the world coordinate system comprises the following steps:

obtaining Ps [ Xs, Ys, Zs, gamma ].

Step S115: the pose S_t of the mobile terminal in the world coordinate system can be obtained by transforming P_s with the rotation coefficients r_x, r_y, r_z and the translation coefficients t_x, t_y, t_z of the coordinate transformation from the camera coordinate system to the base_link coordinate system of the mobile terminal.
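The equations for P_s and S_t are not fully reproduced above, so the following sketch shows one consistent way to carry out steps S113-S115 under stated assumptions: an upward-facing camera observing a ceiling-mounted LED (so the pinhole back-projection is an assumed form), and a planar rigid-body composition for the camera-to-base_link transform. All function and variable names are illustrative.

```python
import math

def camera_center_world_pose(P_led, lam, x_i, y_i, f, gamma):
    """Steps S113-S114 (sketch): world pose P_s = [X_s, Y_s, Z_s, gamma] of the
    camera image centre, given the LED world position P_led = (X_i, Y_i, Z_i),
    the height difference lam from equation (1), the LED position (x_i, y_i) on
    the image plane (physical units), the focal length f, and the yaw gamma."""
    X_i, Y_i, Z_i = P_led
    Z_s = Z_i - lam                                   # equation (2)
    # Assumed pinhole back-projection for an upward-facing camera: the LED's
    # horizontal offset from the camera is lam*(x_i/f, y_i/f) in the camera
    # frame; rotate it by gamma to express it in the world frame.
    dx_c, dy_c = lam * x_i / f, lam * y_i / f
    dx_w = math.cos(gamma) * dx_c - math.sin(gamma) * dy_c
    dy_w = math.sin(gamma) * dx_c + math.cos(gamma) * dy_c
    return [X_i - dx_w, Y_i - dy_w, Z_s, gamma]

def terminal_pose(P_s, t_x, t_y, r_z):
    """Step S115 (sketch): planar rigid-body composition of P_s with the fixed
    camera -> base_link extrinsics (translation t_x, t_y and yaw offset r_z),
    giving the mobile terminal pose S_t = [X_t, Y_t, Z_t, theta_t]."""
    X_s, Y_s, Z_s, gamma = P_s
    X_t = X_s + math.cos(gamma) * t_x - math.sin(gamma) * t_y
    Y_t = Y_s + math.sin(gamma) * t_x + math.cos(gamma) * t_y
    return [X_t, Y_t, Z_s, gamma + r_z]
```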

Step S12: calculate the yaw angle θ input by the SLO-VLP and the odometer to obtain γ, and correct the TF coordinate relation through the laser radar matcher (AMCL). The method comprises the following steps:

The laser radar obtains the relative pose transformation between two scans by matching the current scan data with the previous scan, and the direction γ of the mobile terminal obtained from the odometer sensor is corrected with this coordinate transformation relation, specifically:

θ_t = γ = γ_odom + γ_LIDAR→Map
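A one-line sketch of this correction, assuming the scan-matcher rotation is composed additively with the odometer yaw and the result is normalised to [-π, π]:

```python
import math

def corrected_yaw(gamma_odom, gamma_lidar_to_map):
    """Step S12 (sketch): theta_t = gamma_odom + gamma_lidar_to_map, where
    gamma_lidar_to_map is the heading correction from the lidar scan matcher."""
    theta_t = gamma_odom + gamma_lidar_to_map
    return math.atan2(math.sin(theta_t), math.cos(theta_t))  # normalise to [-pi, pi]
```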

step S2: using the SLO-VLP pose obtained in step S1 as an initialization pose of an adaptive Monte Carlo positioning Algorithm (AMCL) particle to obtain a position (X) estimated by the AMCLi,Yi). The method comprises the following steps:

step S21: the SLO-VLP posture S obtained in the step S1tAs the initialized position of the AMCL filter particles;

step S22: inputting pose information and laser radar data acquired by the odometer sensor into the AMCL positioner, and outputting the pose of the terminal on a map;

step S3: and taking the observation pose obtained in the steps S1 and S2 as an input value of an EKF algorithm to correct the estimated value of the pose of the mobile terminal calculated by the odometer sensor. The method comprises the following steps:

step S31: an attitude estimate based on odometer sensors is calculated, this information being measured by odometers inside the robot. More specifically, at time intervals (t-1, t)]Inner, robot slave pose xt-1Move to pose xtThe odometer feeds back from xt-1=(x,y,θ)TTo xt=(x’,y’,θ’)TRelative progression of (a).

The state prediction is

x̂_{t|t-1} = F x_{t-1} + u_t    (6)

where F is the transformation matrix, x̂_{t|t-1} is the estimate of the true value x_t of the current state based on x_{t-1}, and u_t is the motion control information.

Estimate the error covariance matrix of the pose:

P_{t|t-1} = F P_{t-1} F^T + Q_t    (7)

where Q_t is the input noise covariance matrix, and the input noise is assumed to follow a normal distribution.
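The prediction step can be sketched as follows; the additive motion model is an assumed linearisation consistent with equations (6) and (7) above, and the variable names are illustrative.

```python
import numpy as np

def ekf_predict(x_prev, u_t, P_prev, F, Q_t):
    """Step S31 (sketch): propagate the pose with the odometry increment u_t
    (assumed additive form of equation (6)) and propagate the covariance with
    equation (7), P_{t|t-1} = F P_{t-1} F^T + Q_t."""
    x_pred = F @ x_prev + u_t
    P_pred = F @ P_prev @ F.T + Q_t
    return x_pred, P_pred
```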

Step S32: the observation poses obtained in steps S1 and S2 are used as input values to the EKF algorithm to correct the estimate calculated in step S31. The method comprises the following steps:

step S221: calculating kalman gain K ═ Pt|t-1Ht T(HtPt|t-1Ht T+Rt)-1(8) In which H istIs an observation matrix, RtIs the error covariance matrix of the observed noise and assumes that the observed noise follows a normal distribution.

Step S322: update the pose vector and the covariance matrix using the Kalman gain K obtained in step S321, specifically:

x̂_t = x̂_{t|t-1} + K (z_t - H_t x̂_{t|t-1})    (9)

P_t = P_{t|t-1} - K H_t P_{t|t-1}    (10)

In the above, x̂_t is the estimate of the true value x_t of the current state; x̂_{t|t-1} is the prior estimate of the state at time t based on time t-1; z_t is the observation pose obtained from the SLO-VLP and the AMCL. Both the input noise and the observation noise are assumed to follow normal distributions, with Q the covariance of the input noise and R the covariance of the observation noise.
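A corresponding sketch of the update step, using equations (8)-(10) with a standard Kalman state update assumed for equation (9); z_t is the observation pose from SLO-VLP and AMCL, and all names are illustrative.

```python
import numpy as np

def ekf_update(x_pred, P_pred, z_t, H_t, R_t):
    """Steps S321-S322 (sketch): fuse the observation pose z_t with the
    predicted pose using the Kalman gain of equation (8), the assumed state
    update of equation (9), and the covariance update of equation (10)."""
    S = H_t @ P_pred @ H_t.T + R_t
    K = P_pred @ H_t.T @ np.linalg.inv(S)        # equation (8)
    x_upd = x_pred + K @ (z_t - H_t @ x_pred)    # equation (9), assumed form
    P_upd = P_pred - K @ H_t @ P_pred            # equation (10)
    return x_upd, P_upd
```

One full filter cycle under these assumptions is then ekf_update(*ekf_predict(x_prev, u_t, P_prev, F, Q_t), z_t, H_t, R_t).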

In this way, the pose estimated by the state prediction equation (6) and the multi-sensor fusion measurements are combined by the Kalman filter algorithm to update the predicted pose and correct the accumulated error of the odometer.

The above-described embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solutions of the present invention are within the scope of the present invention defined by the claims.
