Laser radar high-precision positioning method for regional scene

Document No.: 1860164    Publication date: 2021-11-19

Note: This invention, "Laser radar high-precision positioning method for regional scene" (一种面向区域场景的激光雷达高精度定位方法), was created by 季博文, 吕品, 赖际舟, 李志敏, 袁诚, 方玮, 郑国庆 and 温烨贝 on 2021-07-30. The invention discloses a high-precision laser radar positioning method for regional scenes, which comprises the following steps: step (1) placing reflectors around an area, and constructing a reflector point cloud map F_M in the area by using a laser radar; step (2) periodically collecting laser radar point cloud data at the k-th moment by using an unmanned vehicle, and preprocessing the laser radar point cloud to obtain a preprocessed point cloud set S(k); step (3) decoupling and optimizing the pose of the unmanned vehicle through point cloud matching; step (4) repeating step (2) and step (3) until the navigation of the unmanned vehicle is finished. The invention can realize high-precision positioning based on a three-dimensional laser radar within a certain area.

1. A laser radar high-precision positioning method facing to regional scenes is characterized by comprising the following steps:

step (1): placing reflectors around the area, and constructing a reflector point cloud map F_M in the area by using a laser radar;

step (2): periodically collecting laser radar point cloud data at the k-th moment by using an unmanned vehicle, and preprocessing the laser radar point cloud to obtain a preprocessed point cloud set S(k);

step (3): decoupling and optimizing the pose of the unmanned vehicle through point cloud matching;

step (4): repeating step (2) and step (3) until the navigation of the unmanned vehicle is finished.

2. The area-scene-oriented laser radar high-precision positioning method according to claim 1, wherein the step (1) specifically comprises:

step 11) placing one reflector at the rear, the left and the right of the area respectively, wherein each reflector is required to be planar and at least 1 m by 1 m in size, and the center of the reflector is kept at the same height as the laser radar on the unmanned vehicle;

step 12) parking the unmanned vehicle at the center of the area, and collecting a laser radar point cloud;

step 13) screening out a point cloud set Q(k) whose intensity values are higher than 100 according to the intensity information of the laser radar point cloud;

step 14) extracting the reflector point cloud set {F_M} by a RANSAC algorithm, and storing the normal vector of the corresponding plane.

3. The area-scene-oriented laser radar high-precision positioning method according to claim 2, wherein the step 14) specifically comprises the following steps:

141) from the point cloud set Q(k) screened in step 13), selecting at least 3 points, obtaining plane equation parameters by least-squares fitting, and calculating the distance d_i from every point in Q(k) to the plane; points with d_i smaller than 0.05 m are regarded as interior points, and the number of interior points is recorded;

142) repeating step 141), and recording the plane parameters with the maximum number of interior points together with the corresponding interior points as the optimal model parameters; after 30 iterations the process is considered finished, the optimal model parameters are the fitted plane equation parameters, and the corresponding interior points form the extracted reflector point cloud F_1;

143) removing F_1 from the point cloud set Q(k), and repeating steps 141) and 142) to obtain F_2;

144) removing F_1 and F_2 from the point cloud set Q(k), and repeating steps 141) and 142) to obtain F_3; thereby obtaining the reflector point cloud map {F_M} = {F_1, F_2, F_3}.

4. The method for high-precision positioning of lidar for regional scenes according to claim 1, wherein in the step (2), the laser radar point cloud is preprocessed, comprising the following steps:

step 21) screening out a point cloud set I(k) whose intensity values are higher than 100 according to the intensity information of the laser radar point cloud;

step 22) extracting the reflector point cloud set S(k) by a RANSAC algorithm, and storing the normal vector of the corresponding plane.

5. The area-scene-oriented laser radar high-precision positioning method according to claim 4, wherein the step 22) specifically comprises the following steps:

221) from the point cloud set I(k) screened in step 21), selecting at least 3 points, obtaining plane equation parameters by least-squares fitting, and calculating the distance d_k from every point in I(k) to the plane; points with d_k smaller than 0.05 m are regarded as interior points, and the number of interior points is recorded;

222) repeating step 221), and recording the plane parameters with the maximum number of interior points together with the corresponding interior points as the optimal model parameters; after 30 iterations the process is considered finished, the optimal model parameters are the fitted plane equation parameters, and the corresponding interior points form the extracted reflector point cloud S_1;

223) removing S_1 from the point cloud set I(k), and repeating steps 221) and 222) to obtain S_2;

224) removing S_1 and S_2 from the point cloud set I(k), and repeating steps 221) and 222) to obtain S_3; thereby obtaining the preprocessed point cloud set {S(k)} = {S_1, S_2, S_3}.

6. The area-scene-oriented laser radar high-precision positioning method according to claim 1, wherein in the step (3), the unmanned vehicle pose is decoupled and optimized, and the method comprises the following steps:

step 31) taking the pose T_{k-1} of the unmanned vehicle at moment k-1 as the initial value, performing coordinate transformation on the preprocessed point cloud set S(k) to obtain S'(k), wherein the transformation is:

$$\begin{bmatrix} s'_i \\ 1 \end{bmatrix} = T_{k-1} \begin{bmatrix} s_i \\ 1 \end{bmatrix}$$

wherein s_i denotes a point in S(k) and s'_i denotes the corresponding point in S'(k), s_i ∈ S(k), s'_i ∈ S'(k); s_i = [s_ix, s_iy, s_iz]^T and s'_i = [s'_ix, s'_iy, s'_iz]^T, where s_ix, s_iy, s_iz are the three-dimensional coordinates of a point in S(k) and s'_ix, s'_iy, s'_iz are the three-dimensional coordinates of the corresponding point in S'(k);

step 32) for each point s'_i in the point cloud set S'(k) preprocessed at the current moment, finding the closest point m_i in the point cloud map F_M as its corresponding point, with n_i the normal vector of the plane to which m_i belongs; the nonlinear least-squares problem is constructed as follows:

$$\min_{\Delta M_{4\times4}} \sum_i \left( n_i^{T} \left( \Delta M_{4\times4}\, s'_i - m_i \right) \right)^2$$

where ΔM_{4×4} s'_i denotes applying ΔM_{4×4} to s'_i in homogeneous coordinates, and ΔM_{4×4} represents the transformation pose matrix:

$$\Delta M_{4\times4} = \begin{bmatrix} R(\alpha,\beta,\gamma) & t \\ 0\;\;0\;\;0 & 1 \end{bmatrix}, \quad t = [t_x, t_y, t_z]^T$$

wherein α, β and γ represent the three-axis attitude angles of the unmanned vehicle, and t_x, t_y, t_z represent the three-axis position of the unmanned vehicle; assuming the attitude angle of each optimization step is a small quantity, ΔM_{4×4} is approximated as:

$$\Delta M_{4\times4} \approx \begin{bmatrix} 1 & -\gamma & \beta & t_x \\ \gamma & 1 & -\alpha & t_y \\ -\beta & \alpha & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

disregarding the height optimization, a new linear least-squares problem is obtained:

$$\min_{x} \lVert A x - b \rVert^2, \quad x = [\alpha, \beta, \gamma, t_x, t_y]^T$$

wherein A and b are assembled from the normal vectors n_i, the map points m_i and the transformed points s'_i; the normal equation A^T A x = A^T b is constructed and solved, so as to obtain a new iteration initial value of the pose;

step 33) repeating step 31) and step 32) until 30 iterations are reached or the norm of the update x is smaller than a set threshold; the iteration is then considered finished and the final optimization result T' is obtained, which is the pose T_k solved by the unmanned vehicle at the k-th moment.

Technical Field

The invention belongs to the field of autonomous navigation and guidance, and relates to a high-precision positioning method of a laser radar facing to an area scene.

Background

With the improvement of the degree of intelligence, unmanned vehicles are applied more and more widely, in fields such as logistics, security and inspection. Accurate estimation of its own pose is the premise for an unmanned vehicle to complete its tasks; common navigation modes include satellite/inertial integrated navigation, visual navigation, laser radar navigation and the like. To complete some special tasks, such as autonomous charging, the unmanned vehicle is required to estimate its own pose with high precision within a certain area. Satellite/inertial integrated navigation is often used in open scenes such as outdoors, and performs poorly when the satellite signal is blocked. The conventional laser radar SLAM method and the conventional visual SLAM method cannot meet the positioning requirement.

In the prior art, the mainstream methods include magnetic stripe guidance, visual two-dimensional code positioning and laser radar positioning. Magnetic stripe guidance makes the unmanned vehicle run along a preset track by means of a magnetic tape, but the laying cost is high, the track is difficult to change and the route cannot be flexibly adjusted. The vision-based method obtains position information by identifying a two-dimensional code, thereby guiding the unmanned vehicle to the designated area; however, the visual sensor is greatly affected by the illumination intensity and does not work well in weak-light environments. Laser-based positioning methods mostly adopt a two-dimensional laser radar and use cylindrical reflective markers for auxiliary positioning; however, the two-dimensional laser radar has a limited sensing range, is easily occluded and places high requirements on the environment.

Disclosure of Invention

The technical problem to be solved by the invention is as follows: a laser radar high-precision positioning method for regional scenes is provided, so as to solve the problem of accurate positioning of unmanned vehicles in special scenes.

The invention adopts the following technical scheme for solving the technical problems:

A laser radar high-precision positioning method facing to regional scenes comprises the following steps:

step (1): placing reflectors around the area, and constructing a reflector point cloud map F_M in the area by using a laser radar;

step (2): periodically collecting laser radar point cloud data at the k-th moment by using an unmanned vehicle, and preprocessing the laser radar point cloud to obtain a preprocessed point cloud set S(k);

step (3): decoupling and optimizing the pose of the unmanned vehicle through point cloud matching;

step (4): repeating step (2) and step (3) until the navigation of the unmanned vehicle is finished.

Further, the step (1) is specifically as follows:

step 11) placing one reflector at the rear, the left and the right of the area respectively, wherein each reflector is required to be planar and at least 1 m by 1 m in size, and the center of the reflector is kept at the same height as the laser radar on the unmanned vehicle;

step 12) parking the unmanned vehicle at the center of the area, and collecting a laser radar point cloud;

step 13) screening out a point cloud set Q(k) whose intensity values are higher than 100 according to the intensity information of the laser radar point cloud;

step 14) extracting the reflector point cloud set {F_M} by a RANSAC algorithm, and storing the normal vector of the corresponding plane.

Further, the step 14) specifically comprises the following steps:

141) from the point cloud set Q(k) screened in step 13), selecting at least 3 points, obtaining plane equation parameters by least-squares fitting, and calculating the distance d_i from every point in Q(k) to the plane; points with d_i smaller than 0.05 m are regarded as interior points, and the number of interior points is recorded;

142) repeating step 141), and recording the plane parameters with the maximum number of interior points together with the corresponding interior points as the optimal model parameters; after 30 iterations the process is considered finished, the optimal model parameters are the fitted plane equation parameters, and the corresponding interior points form the extracted reflector point cloud F_1;

143) removing F_1 from the point cloud set Q(k), and repeating steps 141) and 142) to obtain F_2;

144) removing F_1 and F_2 from the point cloud set Q(k), and repeating steps 141) and 142) to obtain F_3; thereby obtaining the reflector point cloud map {F_M} = {F_1, F_2, F_3}.

Further, in the step (2), the laser radar point cloud is preprocessed, which comprises the following steps:

step 21) screening out a point cloud set I(k) whose intensity values are higher than 100 according to the intensity information of the laser radar point cloud;

step 22) extracting the reflector point cloud set S(k) by a RANSAC algorithm, and storing the normal vector of the corresponding plane.

Further, the step 22) specifically comprises the following steps:

221) from the point cloud set I(k) screened in step 21), selecting at least 3 points, obtaining plane equation parameters by least-squares fitting, and calculating the distance d_k from every point in I(k) to the plane; points with d_k smaller than 0.05 m are regarded as interior points, and the number of interior points is recorded;

222) repeating step 221), and recording the plane parameters with the maximum number of interior points together with the corresponding interior points as the optimal model parameters; after 30 iterations the process is considered finished, the optimal model parameters are the fitted plane equation parameters, and the corresponding interior points form the extracted reflector point cloud S_1;

223) removing S_1 from the point cloud set I(k), and repeating steps 221) and 222) to obtain S_2;

224) removing S_1 and S_2 from the point cloud set I(k), and repeating steps 221) and 222) to obtain S_3; thereby obtaining the preprocessed point cloud set {S(k)} = {S_1, S_2, S_3}.

Further, in the step (3), decoupling and optimizing the pose of the unmanned vehicle comprises the following steps:

step 31) taking the pose T_{k-1} of the unmanned vehicle at moment k-1 as the initial value, performing coordinate transformation on the preprocessed point cloud set S(k) to obtain S'(k), wherein the transformation is:

$$\begin{bmatrix} s'_i \\ 1 \end{bmatrix} = T_{k-1} \begin{bmatrix} s_i \\ 1 \end{bmatrix}$$

wherein s_i denotes a point in S(k) and s'_i denotes the corresponding point in S'(k), s_i ∈ S(k), s'_i ∈ S'(k); s_i = [s_ix, s_iy, s_iz]^T and s'_i = [s'_ix, s'_iy, s'_iz]^T, where s_ix, s_iy, s_iz are the three-dimensional coordinates of a point in S(k) and s'_ix, s'_iy, s'_iz are the three-dimensional coordinates of the corresponding point in S'(k);

step 32) for each point s'_i in the point cloud set S'(k) preprocessed at the current moment, finding the closest point m_i in the point cloud map F_M as its corresponding point, with n_i the normal vector of the plane to which m_i belongs; the nonlinear least-squares problem is constructed as follows:

$$\min_{\Delta M_{4\times4}} \sum_i \left( n_i^{T} \left( \Delta M_{4\times4}\, s'_i - m_i \right) \right)^2$$

where ΔM_{4×4} s'_i denotes applying ΔM_{4×4} to s'_i in homogeneous coordinates, and ΔM_{4×4} represents the transformation pose matrix:

$$\Delta M_{4\times4} = \begin{bmatrix} R(\alpha,\beta,\gamma) & t \\ 0\;\;0\;\;0 & 1 \end{bmatrix}, \quad t = [t_x, t_y, t_z]^T$$

wherein α, β and γ represent the three-axis attitude angles of the unmanned vehicle, and t_x, t_y, t_z represent the three-axis position of the unmanned vehicle; assuming the attitude angle of each optimization step is a small quantity, ΔM_{4×4} is approximated as:

$$\Delta M_{4\times4} \approx \begin{bmatrix} 1 & -\gamma & \beta & t_x \\ \gamma & 1 & -\alpha & t_y \\ -\beta & \alpha & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

disregarding the height optimization, a new linear least-squares problem is obtained:

$$\min_{x} \lVert A x - b \rVert^2, \quad x = [\alpha, \beta, \gamma, t_x, t_y]^T$$

wherein A and b are assembled from the normal vectors n_i, the map points m_i and the transformed points s'_i; the normal equation A^T A x = A^T b is constructed and solved, so as to obtain a new iteration initial value of the pose;

step 33) repeating step 31) and step 32) until 30 iterations are reached or the norm of the update x is smaller than a set threshold; the iteration is then considered finished and the final optimization result T' is obtained, which is the pose T_k solved by the unmanned vehicle at the k-th moment.
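As an illustration of how A and b can be assembled, expanding the small-angle approximation for a single correspondence (s'_i, m_i, n_i) gives one row of the linear system; the ordering of the unknowns in x is an assumption made here for concreteness:

$$\begin{bmatrix} (s'_i \times n_i)^{T} & n_{ix} & n_{iy} \end{bmatrix} \begin{bmatrix} \alpha \\ \beta \\ \gamma \\ t_x \\ t_y \end{bmatrix} = n_i^{T}\,(m_i - s'_i)$$

Stacking one such row per correspondence yields A x = b, whose normal equation A^T A x = A^T b is solved in step 32).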

The method has the beneficial effects that: compared with the prior art, the invention can realize high-precision positioning based on the three-dimensional laser radar in a certain area.

Drawings

FIG. 1 is a flow chart of the present invention.

Detailed Description

In order to facilitate understanding of those skilled in the art, the present invention will be further described with reference to the following examples and drawings, which are not intended to limit the present invention.

The embodiment of the invention provides a high-precision positioning method of a laser radar facing to an area scene, the flow of which is shown in FIG. 1; the method comprises the following steps:

Step (1): placing reflectors around the area, and constructing a reflector point cloud map F_M in the area by using a laser radar.

Step 11) placing one reflector at the rear, the left and the right of the area respectively, wherein each reflector is required to be planar and at least 1 m by 1 m in size, and the center of the reflector is kept at the same height as the laser radar on the unmanned vehicle;

Step 12) parking the unmanned vehicle at the center of the area, and collecting a laser radar point cloud;

Step 13) screening out a point cloud set Q(k) whose intensity values are higher than 100 according to the intensity information of the laser radar point cloud;

Step 14) extracting the reflector point cloud set {F_M} by a RANSAC algorithm, and storing the normal vector of the corresponding plane.

The step 14) specifically comprises the following steps:

141) from the point cloud set Q(k) screened in step 13), selecting at least 3 points, obtaining plane equation parameters by least-squares fitting, and calculating the distance d_i from every point in Q(k) to the plane; points with d_i smaller than 0.05 m are regarded as interior points, and the number of interior points is recorded;

142) repeating step 141), and recording the plane parameters with the maximum number of interior points together with the corresponding interior points as the optimal model parameters; after 30 iterations the process is considered finished, the optimal model parameters are the fitted plane equation parameters, and the corresponding interior points form the extracted reflector point cloud F_1;

143) removing F_1 from the point cloud set Q(k), and repeating steps 141) and 142) to obtain F_2;

144) removing F_1 and F_2 from the point cloud set Q(k), and repeating steps 141) and 142) to obtain F_3; thereby obtaining the reflector point cloud map {F_M} = {F_1, F_2, F_3}.
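The extraction of steps 13)-14) and 141)-144) can be sketched in Python with NumPy as follows. The thresholds (intensity above 100, inlier distance 0.05 m, 30 RANSAC iterations, three reflectors) come from the text; the N×4 scan layout [x, y, z, intensity] and the helper names are illustrative assumptions rather than part of the patent.

```python
import numpy as np

def fit_plane_lsq(pts):
    """Least-squares plane through >= 3 points; returns unit normal n and offset d with n.p + d = 0."""
    centroid = pts.mean(axis=0)
    # The smallest singular vector of the centered points is the plane normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    return n, -n.dot(centroid)

def ransac_plane(points, dist_thresh=0.05, iters=30, rng=None):
    """RANSAC plane fit: keep the plane parameters with the most inliers after `iters` trials."""
    rng = rng or np.random.default_rng()
    best_inliers, best_plane = np.zeros(len(points), dtype=bool), None
    for _ in range(iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        n, d = fit_plane_lsq(sample)
        inliers = np.abs(points @ n + d) < dist_thresh   # point-to-plane distance test
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

def extract_reflectors(scan, intensity_thresh=100.0, num_planes=3):
    """Steps 13)-14): intensity gate, then sequentially extract `num_planes` reflector planes."""
    q = scan[scan[:, 3] > intensity_thresh, :3]          # point cloud set Q(k)
    clouds, normals = [], []
    for _ in range(num_planes):
        (n, d), inliers = ransac_plane(q)
        clouds.append(q[inliers])                        # F_1, F_2, F_3
        normals.append(n)                                # normal vector of each reflector plane
        q = q[~inliers]                                  # cull the found reflector and repeat
    return clouds, normals
```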

Step (2): periodically collecting laser radar point cloud data at the k-th moment by using an unmanned vehicle, and preprocessing the laser radar point cloud to obtain a preprocessed point cloud set S(k).

Step 21) screening out a point cloud set I(k) whose intensity values are higher than 100 according to the intensity information of the laser radar point cloud;

Step 22) extracting the reflector point cloud set S(k) by a RANSAC algorithm, and storing the normal vector of the corresponding plane.

The step 22) specifically comprises the following steps:

221) from the point cloud set I(k) screened in step 21), selecting at least 3 points, obtaining plane equation parameters by least-squares fitting, and calculating the distance d_k from every point in I(k) to the plane; points with d_k smaller than 0.05 m are regarded as interior points, and the number of interior points is recorded;

222) repeating step 221), and recording the plane parameters with the maximum number of interior points together with the corresponding interior points as the optimal model parameters; after 30 iterations the process is considered finished, the optimal model parameters are the fitted plane equation parameters, and the corresponding interior points form the extracted reflector point cloud S_1;

223) removing S_1 from the point cloud set I(k), and repeating steps 221) and 222) to obtain S_2;

224) removing S_1 and S_2 from the point cloud set I(k), and repeating steps 221) and 222) to obtain S_3; thereby obtaining the preprocessed point cloud set {S(k)} = {S_1, S_2, S_3}.
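Step (2) reuses the same intensity-gate-plus-RANSAC routine on every new scan. A short usage sketch, reusing the hypothetical `extract_reflectors` helper from the sketch above (the array names are illustrative):

```python
# map_scan: stationary N x 4 scan [x, y, z, intensity] from the center of the area (step (1)).
F_M, F_normals = extract_reflectors(map_scan, intensity_thresh=100.0, num_planes=3)  # reflector map {F_M}

# scan_k: N x 4 scan captured at moment k; preprocess it the same way (step (2)).
S_k, _ = extract_reflectors(scan_k, intensity_thresh=100.0, num_planes=3)
# S_k = [S_1, S_2, S_3] is fed, together with F_M and F_normals, to the pose optimization of step (3).
```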

Step (3): decoupling and optimizing the pose of the unmanned vehicle through point cloud matching.

Step 31) taking the pose T_{k-1} of the unmanned vehicle at moment k-1 as the initial value, performing coordinate transformation on the preprocessed point cloud set S(k) to obtain S'(k), wherein the transformation is:

$$\begin{bmatrix} s'_i \\ 1 \end{bmatrix} = T_{k-1} \begin{bmatrix} s_i \\ 1 \end{bmatrix}$$

wherein s_i denotes a point in S(k) and s'_i denotes the corresponding point in S'(k), s_i ∈ S(k), s'_i ∈ S'(k); s_i = [s_ix, s_iy, s_iz]^T and s'_i = [s'_ix, s'_iy, s'_iz]^T, where s_ix, s_iy, s_iz are the three-dimensional coordinates of a point in S(k) and s'_ix, s'_iy, s'_iz are the three-dimensional coordinates of the corresponding point in S'(k);

Step 32) for each point s'_i in the point cloud set S'(k) preprocessed at the current moment, finding the closest point m_i in the point cloud map F_M as its corresponding point, with n_i the normal vector of the plane to which m_i belongs; the nonlinear least-squares problem is constructed as follows:

$$\min_{\Delta M_{4\times4}} \sum_i \left( n_i^{T} \left( \Delta M_{4\times4}\, s'_i - m_i \right) \right)^2$$

where ΔM_{4×4} s'_i denotes applying ΔM_{4×4} to s'_i in homogeneous coordinates, and ΔM_{4×4} represents the transformation pose matrix:

$$\Delta M_{4\times4} = \begin{bmatrix} R(\alpha,\beta,\gamma) & t \\ 0\;\;0\;\;0 & 1 \end{bmatrix}, \quad t = [t_x, t_y, t_z]^T$$

wherein α, β and γ represent the three-axis attitude angles of the unmanned vehicle, and t_x, t_y, t_z represent the three-axis position of the unmanned vehicle; assuming the attitude angle of each optimization step is a small quantity, ΔM_{4×4} is approximated as:

$$\Delta M_{4\times4} \approx \begin{bmatrix} 1 & -\gamma & \beta & t_x \\ \gamma & 1 & -\alpha & t_y \\ -\beta & \alpha & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

disregarding the height optimization, a new linear least-squares problem is obtained:

$$\min_{x} \lVert A x - b \rVert^2, \quad x = [\alpha, \beta, \gamma, t_x, t_y]^T$$

wherein A and b are assembled from the normal vectors n_i, the map points m_i and the transformed points s'_i; the normal equation A^T A x = A^T b is constructed and solved, so as to obtain a new iteration initial value of the pose;

Step 33) repeating step 31) and step 32) until 30 iterations are reached or the norm of the update x is smaller than a set threshold; the iteration is then considered finished and the final optimization result T' is obtained, which is the pose T_k solved by the unmanned vehicle at the k-th moment.
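A minimal Python sketch of steps 31)-33), assuming the pose is a 4×4 homogeneous matrix, correspondences are nearest neighbours found with a SciPy KD-tree, and the height-decoupled small-angle update solves a linear least-squares problem in [α, β, γ, t_x, t_y] through the normal equations. The state ordering, the convergence tolerance value and the per-point normal layout are assumptions made for illustration rather than the patent's exact formulation.

```python
import numpy as np
from scipy.spatial import cKDTree

def small_angle_delta(x):
    """Build dM_{4x4} from x = [alpha, beta, gamma, tx, ty]; the height tz is not optimized (kept at 0)."""
    a, b, g, tx, ty = x
    return np.array([[1.0,  -g,   b,  tx],
                     [  g, 1.0,  -a,  ty],
                     [ -b,   a, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def decoupled_pose_update(S_k, map_pts, map_normals, T_prev, iters=30, eps=1e-4):
    """Point-to-plane alignment of the reflector points S(k) against the map, seeded with T_{k-1}."""
    tree = cKDTree(map_pts)
    T = T_prev.copy()
    src = np.vstack(S_k)                                  # all points of S_1, S_2, S_3
    for _ in range(iters):
        # Step 31): transform S(k) with the current pose estimate -> S'(k).
        s_p = (T[:3, :3] @ src.T).T + T[:3, 3]
        # Step 32): closest map point m_i and the normal n_i of its plane for each s'_i.
        _, idx = tree.query(s_p)
        m, n = map_pts[idx], map_normals[idx]
        # Linearized point-to-plane residual r_i = n_i . (dM s'_i - m_i) ~ A_i x - b_i.
        A = np.column_stack([
            n[:, 2] * s_p[:, 1] - n[:, 1] * s_p[:, 2],    # d r / d alpha
            n[:, 0] * s_p[:, 2] - n[:, 2] * s_p[:, 0],    # d r / d beta
            n[:, 1] * s_p[:, 0] - n[:, 0] * s_p[:, 1],    # d r / d gamma
            n[:, 0],                                      # d r / d tx
            n[:, 1],                                      # d r / d ty
        ])
        b = np.einsum('ij,ij->i', n, m - s_p)
        x = np.linalg.solve(A.T @ A, A.T @ b)             # normal equations A^T A x = A^T b
        T = small_angle_delta(x) @ T                      # new iteration initial value
        if np.linalg.norm(x) < eps:                       # Step 33): stop on a small update
            break
    return T                                              # pose T_k at moment k
```

With the extraction sketch above, `map_pts` and `map_normals` could be assembled as `np.vstack(F_M)` and `np.vstack([np.tile(n, (len(f), 1)) for f, n in zip(F_M, F_normals)])`, with `T_prev` initialized to the identity for the first frame.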

Step (4): repeating step (2) and step (3) until the navigation of the unmanned vehicle is finished.
