Laser fusion positioning method in high-precision digital twin scene

Document No.: 934025    Publication date: 2021-03-05

Reading note: this technique, 高精度的数字孪生场景中的激光融合定位方法 (Laser fusion positioning method in a high-precision digital twin scene), was designed and created by 张松鹏, 苏劲 and 王旭阳 on 2020-11-06. Its main content is as follows: The invention discloses a laser fusion positioning method in a high-precision digital twin scene, which comprises: performing matching calculation between sensor data for absolute positioning and a prestored digital twin scene simulation map to obtain the observed absolute pose and its covariance at the current moment; acquiring the actual absolute pose and its covariance at the previous moment; calculating the relative pose of the current moment relative to the previous moment, and its covariance, from the sensor data for relative positioning; calculating the predicted absolute pose and its covariance at the current moment based on the actual absolute pose and its covariance at the previous moment combined with the relative pose and its covariance; and fusing the observed absolute pose and its covariance with the predicted absolute pose and its covariance at the current moment to obtain the actual absolute pose and its covariance at the current moment. The invention gives the positioning method high precision, high robustness, high extensibility and high timeliness.

1. A laser fusion positioning method in a high-precision digital twin scene, characterized by comprising the following steps:

acquiring sensor data for absolute positioning, and performing matching calculation by using the sensor data for absolute positioning and a prestored digital twin scene simulation map to obtain the observation absolute pose and covariance thereof at the current moment;

acquiring the actual absolute pose and covariance thereof at the previous moment;

acquiring sensor data for relative positioning, and obtaining the relative pose of the current moment relative to the previous moment, and the covariance thereof, through pre-integration of the sensor data for relative positioning;

based on the actual absolute pose and the covariance thereof at the previous moment, and in combination with the relative pose and the covariance thereof of the current moment relative to the previous moment, obtaining the predicted absolute pose and the covariance thereof at the current moment through pre-integration;

fusing the observed absolute pose and the covariance thereof and the predicted absolute pose and the covariance thereof at the current moment in a nonlinear optimization mode to obtain the actual absolute pose and the covariance thereof at the current moment;

and comparing the covariance of the actual absolute pose at the current moment with a first preset threshold, and judging whether to trust the actual absolute pose at the current moment according to a first preset condition.

2. The laser fusion positioning method in the high-precision digital twin scene as claimed in claim 1, wherein there are at least two sensors for relative positioning; the predicted relative pose of the current moment relative to the previous moment and its covariance are calculated separately from the data of each sensor for relative positioning, and the predicted relative poses of the current moment relative to the previous moment, and their covariances, obtained from all the sensor data for relative positioning are then fused by a graph optimization method.

3. The laser fusion positioning method in the high-precision digital twin scene as claimed in claim 1, wherein before the observed absolute pose and its covariance at the current moment and the predicted absolute pose and its covariance are fused in a nonlinear optimization manner, the covariance of the predicted absolute pose at the current moment is compared with a second preset threshold, and whether to trust the predicted absolute pose at the current moment is judged according to a second preset condition, and if the predicted absolute pose at the current moment is not trusted, the observed absolute pose and its covariance at the current moment are directly used as the actual absolute pose and its covariance at the current moment.

4. The laser fusion positioning method in the high-precision digital twin scene as claimed in claim 2, wherein the sensors for relative positioning comprise an odometer and a wheel speed meter.

5. The laser fusion positioning method in the high-precision digital twin scene as claimed in claim 1, wherein before the relative pose of the current moment with respect to the previous moment and its covariance are calculated from the sensor data for relative positioning, unreliable sensor data for relative positioning are excluded by the Mahalanobis distance test.

6. The laser fusion positioning method in the high-precision digital twin scene as claimed in claim 1, wherein before the observed absolute pose and its covariance at the current moment are calculated from the sensor data for absolute positioning, unreliable sensor data for absolute positioning are excluded by the Mahalanobis distance test.

7. The laser fusion positioning method in high precision digital twin scenarios according to claim 1, wherein the sensors for absolute positioning comprise lidar and GNSS sensors.

8. The laser fusion positioning method in the high-precision digital twin scene as claimed in claim 7, wherein the method for obtaining the observation absolute pose and the covariance thereof at the current moment by using the sensor data of absolute positioning to perform matching calculation with the pre-stored digital twin scene simulation map comprises:

acquiring the position information of the current moment through a GNSS sensor, and acquiring the image information of the digital twin scene simulation map under each attitude information through traversing the attitude information;

acquiring image information of a real scene through a laser radar;

calculating the matching score of the image information of the digital twin scene simulation map and the image information of the real scene under each attitude information;

and when the matching score of the image information of the digital twin scene simulation map and the image information of the real scene under certain attitude information accords with a third preset condition, taking the attitude information and the position information of the current moment as the observation absolute pose of the current moment, and calculating the covariance of the observation absolute pose of the current moment according to the matching score under the attitude information.

9. The laser fusion positioning method in the high-precision digital twin scene as claimed in claim 8, wherein when the digital twin scene simulation map is a point cloud map, an algorithm for calculating a matching score between the image information of the point cloud map and the image information of the real scene under each attitude information is an ICP algorithm or an NDT algorithm;

when the digital twin scene simulation map is a grid map, the algorithm for calculating the matching score between the image information of the grid map and the image information of the real scene under each piece of attitude information is the Gauss-Newton algorithm or the LM algorithm.

Technical Field

The invention relates to the technical field of mobile robot positioning. More particularly, the invention relates to a laser fusion positioning method in a high-precision digital twin scene.

Background

With the development of computer technology and artificial intelligence, intelligent autonomous mobile robots have become an important research direction and research hotspot in the field of robotics. Positioning is the "eyes" of an autonomous mobile robot: it is a hot research problem in the field and one of the key problems determining whether a robot can move autonomously. During navigation, an autonomous mobile robot needs to answer three questions: "Where am I?", "Where am I going?" and "How do I get there?". Mobile robot positioning technology solves the first question. Specifically, the purpose of positioning a mobile robot is to determine the robot's coordinates in a world map of its motion environment, that is, the process by which the robot acquires environment information through sensing and determines its own position through related information processing.

Existing robot positioning can be divided into relative positioning and absolute positioning. Relative positioning, also called pose tracking, assumes an initial pose of the robot and tracks and estimates its position using sensor information at adjacent moments; however, errors accumulate during the relative positioning calculation, so the positioning accuracy degrades. Absolute positioning, also called global positioning, requires an environment model to be determined in advance; external sensors directly provide external position information to the robot, from which its position in a global coordinate system is calculated. Because absolute positioning relies on sensors collecting external information, the process is easily disturbed, for example by external light, which can lead to inaccurate positioning.

Disclosure of Invention

An object of the present invention is to solve at least the above problems and to provide at least the advantages described later.

Another object of the invention is to provide a laser fusion positioning method in a high-precision digital twin scene that achieves high precision, high robustness, high extensibility and high timeliness by fusing relative positioning and absolute positioning technologies and adopting a loosely coupled, nonlinear-optimization approach.

To achieve these objects and other advantages in accordance with the present invention, there is provided a laser fusion localization method in a high precision digital twin scene, comprising:

acquiring sensor data for absolute positioning, and performing matching calculation by using the sensor data for absolute positioning and a prestored digital twin scene simulation map to obtain the observation absolute pose and covariance thereof at the current moment;

acquiring the actual absolute pose and covariance thereof at the previous moment;

acquiring sensor data for relative positioning, and obtaining the relative pose of the current moment relative to the previous moment, and the covariance thereof, through pre-integration of the sensor data for relative positioning;

based on the actual absolute pose and the covariance thereof at the previous moment, and in combination with the relative pose and the covariance thereof of the current moment relative to the previous moment, obtaining the predicted absolute pose and the covariance thereof at the current moment through pre-integration;

fusing the observed absolute pose and the covariance thereof and the predicted absolute pose and the covariance thereof at the current moment in a nonlinear optimization mode to obtain the actual absolute pose and the covariance thereof at the current moment;

and comparing the covariance of the actual absolute pose at the current moment with a first preset threshold, and judging whether to trust the actual absolute pose at the current moment according to a first preset condition.

Preferably, there are at least two sensors for relative positioning; the predicted relative pose of the current moment relative to the previous moment and its covariance are calculated separately from the data of each sensor for relative positioning, and the predicted relative poses of the current moment relative to the previous moment, and their covariances, obtained from all the sensor data for relative positioning are then fused by a graph optimization method.

Preferably, before the observed absolute pose and the covariance thereof at the current moment and the predicted absolute pose and the covariance thereof are fused in a nonlinear optimization mode, the covariance of the predicted absolute pose at the current moment is compared with a second preset threshold, whether the predicted absolute pose at the current moment is trusted is judged according to a second preset condition, and if the predicted absolute pose at the current moment is not trusted, the observed absolute pose and the covariance thereof at the current moment are directly used as the actual absolute pose and the covariance thereof at the current moment.

Preferably, the sensors for relative positioning include odometers and wheel speed meters.

Preferably, before the relative pose of the current moment with respect to the previous moment and its covariance are calculated from the sensor data for relative positioning, unreliable sensor data for relative positioning are excluded by the Mahalanobis distance test.

Preferably, before the observed absolute pose and its covariance at the current moment are calculated from the sensor data for absolute positioning, unreliable sensor data for absolute positioning are excluded by the Mahalanobis distance test.

Preferably, the sensors for absolute positioning include lidar and GNSS sensors.

Preferably, the method for obtaining the observation absolute pose and the covariance thereof at the current moment by performing matching calculation on the sensor data of the absolute positioning and a pre-stored digital twin scene simulation map comprises the following steps:

acquiring the position information of the current moment through a GNSS sensor, and acquiring the image information of the digital twin scene simulation map under each attitude information through traversing the attitude information;

acquiring image information of a real scene through a laser radar;

calculating the matching score of the image information of the digital twin scene simulation map and the image information of the real scene under each attitude information;

and when the matching score of the image information of the digital twin scene simulation map and the image information of the real scene under certain attitude information accords with a third preset condition, taking the attitude information and the position information of the current moment as the observation absolute pose of the current moment, and calculating the covariance of the observation absolute pose of the current moment according to the matching score under the attitude information.

Preferably, when the digital twin scene simulation map is a point cloud map, an algorithm for calculating a matching score between the image information of the point cloud map and the image information of the real scene under each attitude information is an ICP algorithm or an NDT algorithm;

when the digital twin scene simulation map is a grid map, the algorithm for calculating the matching score between the image information of the grid map and the image information of the real scene under each piece of attitude information is the Gauss-Newton algorithm or the LM algorithm.

The invention has at least the following beneficial effects: multiple different sensors can be configured and fused according to the user's actual situation, so the method is highly extensible; because different sensors have different reliabilities, corresponding measurement covariances can be configured to meet different measurement requirements; sensor outliers can be detected and removed at every stage, which guarantees the high robustness of the positioning; and fusing relative positioning with absolute positioning eliminates the accumulated error brought by relative positioning, so that high-precision positioning data are obtained by fusing different sensors.

Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention.

Drawings

Fig. 1 is a flowchart of a laser fusion positioning method according to the present invention.

Detailed Description

The present invention is further described in detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description text.

It is to be noted that, unless otherwise specified, the experimental methods described in the following embodiments are all conventional methods, and the reagents and materials, unless otherwise specified, are all commercially available. In the description of the present invention, the terms "lateral", "longitudinal", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience in describing the present invention and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present invention.

As shown in fig. 1, the present invention provides a high-precision laser fusion positioning method in a digital twin scene, which includes:

s101, acquiring sensor data for absolute positioning, and performing matching calculation by using the sensor data for absolute positioning and a pre-stored digital twin scene simulation map to obtain an observation absolute pose and covariance thereof at the current moment;

the sensors used for absolute positioning can be lidar and GNSS sensors.

Position information at the current moment is acquired through the GNSS sensor, and image information of the digital twin scene simulation map under each piece of attitude information is acquired by traversing the attitude information. Here, the position information is the position coordinates of the mobile robot and the attitude information is the attitude angle of the mobile robot; since the attitude angle lies within a 360-degree range, a series of candidate attitude angles can be generated as an arithmetic progression and tried one by one, e.g. 30 degrees, 60 degrees, 90 degrees, ..., 360 degrees;
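As an illustration, the traversal can be sketched as follows; `render_map_scan` (which renders the simulation map image at a candidate pose) and `match_score` (the registration score described below) are hypothetical helper names, not functions defined by this method:

```python
import numpy as np

def search_best_yaw(map_data, position, real_scan, render_map_scan, match_score,
                    step_deg=30.0):
    """Try candidate yaw angles in an arithmetic progression (30, 60, ..., 360 degrees)
    and keep the one whose rendered map image best matches the real lidar scan."""
    best_yaw, best_score = None, -np.inf
    for yaw_deg in np.arange(step_deg, 360.0 + step_deg, step_deg):
        sim_scan = render_map_scan(map_data, position, np.deg2rad(yaw_deg))  # map image at this candidate pose
        score = match_score(sim_scan, real_scan)                             # e.g. an ICP or NDT score
        if score > best_score:
            best_yaw, best_score = np.deg2rad(yaw_deg), score
    return best_yaw, best_score
```

A finer angular step, or a coarse-to-fine search, can be used when 30-degree increments are too coarse.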

the position information of the current time can be acquired in other manners, such as a manner of manually inputting the position coordinates or a manner of directly loading the position coordinates stored by the mobile robot for the last time.

Acquiring image information of a real scene through a laser radar;

unreliable GNSS sensor data and unreliable image information of the real scene are excluded by the Mahalanobis distance test;

calculating the matching score of the image information of the digital twin scene simulation map and the image information of the real scene under each attitude information;

when the digital twin scene simulation map is a point cloud map, the algorithm for calculating the matching score between the image information of the point cloud map and the image information of the real scene under each piece of attitude information is the ICP (Iterative Closest Point) algorithm or the NDT (Normal Distributions Transform) algorithm;

the ICP algorithm can merge point cloud data in different coordinates into the same coordinate system, and first finds a usable transformation, and the registration operation is actually to find a rigid transformation from coordinate system 1 to coordinate system 2. The ICP algorithm is essentially an optimal registration method based on the least squares method. The algorithm repeatedly selects the corresponding relation point pairs and calculates the optimal rigid body transformation until the convergence precision requirement of correct registration is met. The purpose of the ICP algorithm is to find rotation parameters and translation parameters between point cloud data to be registered and reference cloud data, so that the optimal matching between the two points of data meets a certain measurement criterion. Assuming that for two three-dimensional point sets X1 and X2, the registration steps of the ICP method are as follows: the first step, calculating the corresponding near point of each point in X2 in the X1 point set; secondly, obtaining rigid body transformation which enables the average distance of the corresponding points to be minimum, and obtaining translation parameters and rotation parameters; thirdly, obtaining a new transformation point set by using the translation and rotation parameters obtained in the previous step for X2; and fourthly, stopping iterative computation if the average distance between the new transformation point set and the reference point set is smaller than a given threshold, or taking the new transformation point set as a new X2 to continue iteration until the requirement of the objective function is met.

The NDT algorithm is a very practical point cloud registration method: it requires only one initialization step, does not spend a large amount of computation searching for nearest-neighbour matching points, and the probability density functions can be computed offline between two scan acquisitions. It still has a number of problems, however, including a narrow convergence basin, discontinuity of the NDT cost function, and unreliable pose estimation in sparse outdoor environments. The NDT algorithm proceeds as follows. (1) Subdivide the space occupied by the reference scan into a grid of cells (squares in 2D or cubes in 3D), and compute a probability distribution function for each cell from the distribution of the points inside it; this distribution can be interpreted as the generative process of the surface points within the cell. The points of the cloud are assigned to the cells, and the probability distribution function of each cell serves as an approximate representation of the local surface: the eigenvectors and eigenvalues of the covariance matrix express the surface information (orientation and flatness). With fewer than 3 points in a cell the covariance matrix often has no inverse, so only cells containing more than 5 points are computed, which also acts as a form of down-sampling. The normal distributions give a piecewise-smooth representation of the point cloud with continuous derivatives; each probability distribution function can be viewed as an approximation of a local surface, describing its position, orientation and smoothness. In the multi-dimensional case the distribution is described by the mean vector μ and the covariance matrix Σ, whose diagonal elements are the variances of the individual variables and whose off-diagonal elements are the covariances between variables. (2) Transform each point of the second (current) scan according to the current transformation estimate. (3) Determine which cell of the reference point cloud each transformed point falls into, and evaluate the corresponding probability distribution function. (4) When using NDT for scan registration, the goal is to find the pose of the current scan, i.e. the rotation and translation parameters to be optimized, that maximizes the likelihood that the points of the current scan lie on the reference scan surface; finally, the optimum is found over all points.
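The cell construction and scoring of steps (1)-(3) can be sketched as follows (the outer pose optimization of step (4) is omitted); the cell size and minimum point count are illustrative values, not values fixed by the method:

```python
import numpy as np

def build_ndt_cells(ref_points, cell_size=1.0, min_pts=6):
    """Step (1): divide the reference scan into cells and fit a Gaussian (mean,
    inverse covariance) to each cell; cells with 5 or fewer points are skipped
    so that the covariance matrix stays invertible."""
    cells = {}
    keys = np.floor(ref_points / cell_size).astype(int)
    for key in set(map(tuple, keys)):
        pts = ref_points[np.all(keys == key, axis=1)]
        if len(pts) < min_pts:
            continue
        cov = np.cov(pts.T) + 1e-3 * np.eye(ref_points.shape[1])  # small regularization
        cells[key] = (pts.mean(axis=0), np.linalg.inv(cov))
    return cells

def ndt_score(points, cells, cell_size=1.0):
    """Steps (2)-(3): for an already transformed scan, each point contributes the
    Gaussian density of the cell it falls into; a higher total score is a better match."""
    score = 0.0
    for p in points:
        key = tuple(np.floor(p / cell_size).astype(int))
        if key in cells:
            mu, cov_inv = cells[key]
            d = p - mu
            score += np.exp(-0.5 * d @ cov_inv @ d)
    return score
```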

When the digital twin scene simulation map is a grid map, the algorithm for calculating the matching score between the image information of the grid map and the image information of the real scene under each piece of attitude information is the Gauss-Newton algorithm or the LM algorithm.

The Gauss-Newton iteration method is an iterative method for least-squares estimation of the regression parameters of a nonlinear regression model: a Taylor series expansion is used to approximate the nonlinear model, and the regression coefficients are then modified over multiple iterations so that they continually approach the optimal regression coefficients of the nonlinear model, finally minimizing the residual sum of squares of the original model.
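A generic sketch of such an iteration for a nonlinear least-squares problem is shown below; `residual` and `jacobian` are caller-supplied callables, so no particular regression model is assumed:

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, max_iter=20, tol=1e-8):
    """Gauss-Newton: linearize the residual around the current estimate and solve
    the normal equations J^T J dx = -J^T r at each iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)   # Gauss-Newton step
        x = x + dx
        if np.linalg.norm(dx) < tol:              # stop once the update is negligible
            break
    return x
```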

The LM (Levenberg-Marquardt) algorithm is a nonlinear optimization method between the Newton method and gradient descent. It is insensitive to over-parameterization, handles redundant-parameter problems effectively, and greatly reduces the chance of the cost function becoming trapped in a local minimum; because of these characteristics it is widely used in computer vision and related fields.
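Under the same assumptions as the Gauss-Newton sketch above, the LM variant only adds a damping term that blends the Gauss-Newton step with gradient descent:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, max_iter=50):
    """LM: solve (J^T J + lam*I) dx = -J^T r; shrink lam when a step lowers the
    cost (trusting the linearization more) and grow it when the step is rejected."""
    x = np.asarray(x0, dtype=float)
    cost = 0.5 * residual(x) @ residual(x)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        new_cost = 0.5 * residual(x + dx) @ residual(x + dx)
        if new_cost < cost:
            x, cost, lam = x + dx, new_cost, lam * 0.5   # accept the step
        else:
            lam *= 10.0                                   # reject and increase damping
    return x
```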

And when the matching score of the image information of the digital twin scene simulation map and the image information of the real scene under certain attitude information accords with a third preset condition, taking the attitude information and the position information of the current moment as the observation absolute pose of the current moment, and calculating the covariance of the observation absolute pose of the current moment according to the matching score under the attitude information.

Here, the third preset condition is user-defined, and the ways of computing the matching score and its covariance for each registration algorithm are disclosed in the prior art, so they are not described again.

S102, acquiring the actual absolute pose and covariance of the actual absolute pose at the previous moment;

the actual absolute pose and the covariance thereof are stored in the storage device every time the mobile robot updates the actual absolute pose and the covariance thereof, so that the actual absolute pose and the covariance thereof at the previous moment can be directly obtained.

S103, acquiring sensor data for relative positioning, and obtaining the relative pose of the current moment relative to the previous moment, and its covariance, through pre-integration of the sensor data for relative positioning;

the sensors used for relative positioning here may be odometers and wheel speed meters.

Unreliable odometer sensor data and wheel speed meter sensor data are excluded by the Mahalanobis distance test.
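A minimal sketch of such a gate is shown below; the chi-square threshold (11.34, a 99% gate for 3 degrees of freedom such as x, y, theta) is an illustrative choice rather than a value fixed by the method:

```python
import numpy as np

def mahalanobis_gate(measurement, predicted, cov, threshold=11.34):
    """Return True if the reading should be kept, False if it should be discarded
    as an outlier, based on the squared Mahalanobis distance d^T C^-1 d."""
    d = np.asarray(measurement, dtype=float) - np.asarray(predicted, dtype=float)
    m2 = d @ np.linalg.solve(cov, d)
    return m2 <= threshold
```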

From the odometer sensor data, the relative pose of the current moment relative to the previous moment is calculated with the following formula:

${}^{k-1}P_k = ({}^{o}P_{k-1})^{-1} \cdot {}^{o}P_k$

where ${}^{k-1}P_k$ is the relative pose of the current moment relative to the last moment, ${}^{o}P_{k-1}$ is the pose of the odometer at the last moment relative to the initial moment, and ${}^{o}P_k$ is the pose of the odometer at the current moment relative to the initial moment. The covariance is obtained by the following iteration:

first, initialize $(R, p) = (I, 0)$, and then iterate over the odometer readings $m$ between the last moment and the current moment:

$(dR, dp) = ({}^{o}P_{k-1,m-1})^{-1} \cdot {}^{o}P_{k-1,m}$

$C_m = F \cdot C_{m-1} \cdot F^{T} + G \cdot Q \cdot G^{T}$

$(R, p) = (R \cdot dR,\; p + R \cdot dp)$

where $R$ is the relative rotation, $p$ is the relative translation, $C$ is the covariance of the relative pose, and $Q$ is the covariance of the odometer sensor data.
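A planar (SE(2)) sketch of this accumulation is given below; the patent does not fix the state representation or the Jacobians F and G, so the 2D parameterization and the simplified Jacobians here are assumptions made for illustration:

```python
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def odom_relative_pose(poses, Q):
    """Accumulate consecutive odometer readings between the last and the current moment.
    poses: list of (theta, p) odometer poses in the odometer frame, p a length-2 array;
    Q: 3x3 covariance of one odometer increment (x, y, theta).
    Follows (dR, dp) from consecutive poses, C <- F C F^T + G Q G^T, (R, p) <- (R dR, p + R dp)."""
    R, p = np.eye(2), np.zeros(2)
    C = np.zeros((3, 3))
    J = np.array([[0.0, -1.0], [1.0, 0.0]])                # d(R(theta) v)/d(theta) = J R v
    for (th0, p0), (th1, p1) in zip(poses[:-1], poses[1:]):
        dR = rot2d(th0).T @ rot2d(th1)                     # increment between consecutive readings
        dp = rot2d(th0).T @ (np.asarray(p1) - np.asarray(p0))
        F = np.eye(3); F[:2, 2] = J @ R @ dp               # simplified Jacobians (assumption)
        G = np.zeros((3, 3)); G[:2, :2] = R; G[2, 2] = 1.0
        C = F @ C @ F.T + G @ Q @ G.T
        R, p = R @ dR, p + R @ dp
    return R, p, C
```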

From the wheel speed meter sensor data, the relative pose and its covariance are obtained through pre-integration, with the following iteration:

$\omega = 0.5 \cdot (\omega_{m-1} + \omega_m)$

$dR = \exp(\omega \cdot dt)$

$v = 0.5 \cdot (v_{m-1} + dR \cdot v_m)$

$dp = v \cdot dt$

$R_n = R \cdot dR$

$p_n = p + R \cdot dp$

$C_m = F \cdot C_{m-1} \cdot F^{T} + G \cdot Q \cdot G^{T} + H \cdot Q \cdot H^{T}$

$(R, p) = (R_n, p_n)$

where $R$ is the relative rotation, $p$ is the relative translation, $C$ is the covariance of the relative pose, $\omega$ is the angular velocity, $v$ is the linear velocity, $dt$ is the sampling interval, and $Q$ is the covariance of the wheel speed meter sensor data.
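A matching planar sketch of this wheel-speed pre-integration is shown below, under the same assumptions as the odometer sketch (2D state, simplified Jacobians F, G and H, which the patent leaves unspecified):

```python
import numpy as np

def rot2d(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def preintegrate_wheel_speed(omegas, velocities, dt, Q):
    """Midpoint pre-integration of wheel-speed samples between two moments.
    omegas: list of angular velocities; velocities: list of length-2 linear velocities;
    dt: sampling interval; Q: 3x3 covariance of one velocity sample (vx, vy, omega)."""
    theta, p = 0.0, np.zeros(2)
    C = np.zeros((3, 3))
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    for m in range(1, len(omegas)):
        omega = 0.5 * (omegas[m - 1] + omegas[m])          # omega = 0.5 (w_{m-1} + w_m)
        dR = rot2d(omega * dt)                             # dR = exp(omega dt), planar case
        v = 0.5 * (np.asarray(velocities[m - 1]) + dR @ np.asarray(velocities[m]))
        dp = v * dt
        R = rot2d(theta)
        F = np.eye(3); F[:2, 2] = J @ R @ dp               # simplified Jacobians (assumption)
        G = np.zeros((3, 3)); G[:2, :2] = 0.5 * R * dt; G[2, 2] = 0.5 * dt
        H = np.zeros((3, 3)); H[:2, :2] = 0.5 * (R @ dR) * dt; H[2, 2] = 0.5 * dt
        C = F @ C @ F.T + G @ Q @ G.T + H @ Q @ H.T        # noise from samples m-1 (G) and m (H)
        theta, p = theta + omega * dt, p + R @ dp
    return rot2d(theta), p, C
```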

The predicted relative pose of the current moment relative to the previous moment obtained from the odometer sensor data and the one obtained from the wheel speed meter sensor data are fused by a graph optimization method, and the corresponding covariances of the two predicted relative poses are fused in the same way.
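For two estimates of the same relative pose, a graph optimization with two unary factors reduces, after linearization, to information-weighted averaging; the sketch below uses that closed form and ignores angle wrap-around for brevity:

```python
import numpy as np

def fuse_relative_estimates(x_odom, C_odom, x_wheel, C_wheel):
    """Fuse two (x, y, theta) estimates of the relative pose with their covariances."""
    I1, I2 = np.linalg.inv(C_odom), np.linalg.inv(C_wheel)
    C = np.linalg.inv(I1 + I2)                 # fused covariance
    x = C @ (I1 @ x_odom + I2 @ x_wheel)       # information-weighted mean
    return x, C
```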

S104, based on the actual absolute pose and its covariance at the previous moment, combined with the relative pose of the current moment relative to the previous moment and its covariance, the predicted absolute pose and its covariance at the current moment are obtained through pre-integration, with the following formulas:

${}^{w}R_k = {}^{w}R_{k-1} \cdot {}^{k-1}R_k$

$C_k = F \cdot C_{k-1} \cdot F^{T} + G \cdot C_{rel} \cdot G^{T}$

$M_k = C_k^{-1}$

where ${}^{w}R_k$ is the predicted absolute pose at the current moment, ${}^{w}R_{k-1}$ is the actual absolute pose at the previous moment, ${}^{k-1}R_k$ is the relative pose of the current moment with respect to the previous moment, $C_k$ is the covariance of the predicted absolute pose at the current moment, $C_{k-1}$ is the covariance of the actual absolute pose at the previous moment, $C_{rel}$ is the covariance of the relative pose of the current moment with respect to the previous moment, and $M_k$ is the information matrix at the current moment.
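A planar sketch of this prediction step is shown below; as before, the SE(2) parameterization and the Jacobians F and G are assumptions made for illustration, since the patent does not specify them:

```python
import numpy as np

def predict_absolute_pose(R_prev, p_prev, C_prev, R_rel, p_rel, C_rel):
    """Compose the previous actual absolute pose with the relative pose
    (wR_k = wR_{k-1} * {k-1}R_k) and propagate C_k = F C_{k-1} F^T + G C_rel G^T;
    the information matrix is M_k = C_k^{-1}."""
    R_pred = R_prev @ R_rel
    p_pred = p_prev + R_prev @ p_rel
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    F = np.eye(3); F[:2, 2] = J @ R_prev @ p_rel     # simplified SE(2) Jacobians (assumption)
    G = np.eye(3); G[:2, :2] = R_prev
    C_pred = F @ C_prev @ F.T + G @ C_rel @ G.T
    M = np.linalg.inv(C_pred)                        # information matrix of the current moment
    return R_pred, p_pred, C_pred, M
```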

S105, fusing the observed absolute pose and the covariance thereof and the predicted absolute pose and the covariance thereof at the current moment in a nonlinear optimization mode to obtain the actual absolute pose and the covariance thereof at the current moment;

It should be noted here that, since the GNSS sensor cannot acquire attitude information, its corresponding information weight should be 0.
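For a single pose variable with two (already linearized) constraints, the nonlinear optimization reduces to one information-fusion step, sketched below; zeroing the attitude rows and columns of the observation's information matrix is one way of expressing the GNSS attitude weight of 0 in this form:

```python
import numpy as np

def fuse_observed_predicted(x_obs, M_obs, x_pred, M_pred):
    """Fuse the observed and predicted absolute poses (x, y, theta) of the current
    moment using their information matrices (inverse covariances)."""
    M = M_obs + M_pred
    x = np.linalg.solve(M, M_obs @ x_obs + M_pred @ x_pred)   # minimizer of the two Mahalanobis terms
    C = np.linalg.inv(M)                                      # covariance of the fused (actual) pose
    return x, C
```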

S106, comparing the covariance of the actual absolute pose at the current moment with a first preset threshold, and judging whether to trust the actual absolute pose at the current moment according to a first preset condition.

The first preset threshold and the first preset condition can be user-defined: if the covariance of the actual absolute pose at the current moment is smaller than the first preset threshold, the actual absolute pose at the current moment is trusted; if it is larger than the first preset threshold, the actual absolute pose at the current moment is not trusted.

When the actual absolute pose at the current moment is not trusted, data are acquired again through the sensors for relative positioning and the sensors for absolute positioning, the laser fusion positioning method is repeated, and the actual absolute pose is recalculated.

In another embodiment, before the observed absolute pose and the covariance thereof at the current moment are fused with the predicted absolute pose and the covariance thereof in a nonlinear optimization mode, the covariance of the predicted absolute pose at the current moment is compared with a second preset threshold, whether the predicted absolute pose at the current moment is trusted is judged according to a second preset condition, and if the predicted absolute pose at the current moment is not trusted, the observed absolute pose and the covariance thereof at the current moment are directly used as the actual absolute pose and the covariance thereof at the current moment.

The second preset threshold and the second preset condition can be user-defined: if the covariance of the predicted absolute pose at the current moment is smaller than the second preset threshold, the predicted absolute pose at the current moment is trusted; if it is larger than the second preset threshold, the predicted absolute pose at the current moment is not trusted.

If the predicted absolute pose at the current moment is trusted, it continues to be fused with the observed absolute pose at the current moment; otherwise, the observed absolute pose and its covariance are directly taken as the actual absolute pose and its covariance at the current moment.

While embodiments of the invention have been described above, it is not limited to the applications set forth in the description and the embodiments, which are fully applicable in various fields of endeavor to which the invention pertains, and further modifications may readily be made by those skilled in the art, it being understood that the invention is not limited to the details shown and described herein without departing from the general concept defined by the appended claims and their equivalents.
