Position calibration method between laser radar and camera

Document No.: 1542850    Publication date: 2020-01-17

Note: this technology, "Position calibration method between a laser radar and a camera" (激光雷达与摄像头之间的位置标定方法), was created by 姜光, 白子轩, 李嘉琪 and 马全盟 on 2019-10-15. The invention discloses a position calibration method between a laser radar and a camera, which mainly addresses two shortcomings of existing calibration methods: the large amount of calibration data required, and the poor information-fusion results caused by manually selecting corresponding points. The scheme is implemented as follows: 1) fix the laser radar and the camera on a mobile platform; 2) erect a vertical long rod with an L-shaped cross-section in an open environment to build a calibration scene; 3) change the pose of the rod several times and acquire multiple groups of calibration data with the radar and the camera; 4) solve for the rotation matrix R from the correspondences between the groups of corresponding points obtained from the calibration data; 5) solve for the translation vector t from the constraint conditions; 6) rotate and translate the coordinate system of the laser radar by the rotation matrix R and the translation vector t so that it coincides with the coordinate system of the camera, completing the position calibration. The method yields accurate calibration results with little computation and can be used to calibrate the position between a laser radar and a camera.

1. A position calibration method between a laser radar and a camera, characterized in that it comprises the following steps:

(1) mounting a laser radar and a camera, whose images are undistorted and whose intrinsic matrix K is known, on a mobile platform, with the relative position between the camera and the laser radar fixed;

(2) erecting a vertical long rod with an L-shaped cross-section in an open environment, painting the two faces of the L-shaped cross-section in different colors, and orienting both faces toward the mobile platform, so as to form a calibration scene;

(3) capturing the long rod a first time with the laser radar and the camera to obtain a radar point cloud and an image, then translating the mobile platform and capturing the long rod a second time, so as to obtain one group of calibration data about the long rod;

(4) changing the pose of the long rod relative to the ground at least four times, and repeating step (3) after each change, to obtain a plurality of groups of calibration data about the long rod;

(5) obtaining, in the radar point cloud, the point X∞ at which the intersection lines of the two rod faces before and after the translation in (3) meet on the plane at infinity:

(5a) in one capture of the long rod by the laser radar, each laser beam that sweeps across the rod leaves at least 2 points on each of the two faces of the L-shaped cross-section; determining the equations of the two laser-trace lines on the two faces and computing their intersection point p from these equations;

(5b) repeating step (5a) for each laser beam to obtain the intersection point pi corresponding to each beam; the line through all these intersection points is the rod-face intersection line L1, where i = 0, 1, ..., n and n is the number of laser beams of the laser radar;

(5c) repeating (5a) and (5b) for the second capture after translation to obtain the rod-face intersection line L2; since the intersection lines L1 and L2 from the two captures are parallel, solving for their common point on the plane at infinity:

X∞ = (X1, Y1, Z1, 0)^T,

where X1, Y1, Z1, 0 are the three-dimensional homogeneous coordinate components of X∞;

(6) obtaining, on the image plane, the vanishing point o∞ of the intersection lines of the two rod faces before and after the translation in (3):

(6a) determining the equation of the rod-face intersection line l1 in the first camera image using a line-detection algorithm;

(6b) repeating (6a) for the second capture after translation to obtain the equation of the rod-face intersection line l2;

(6c) from the equations of l1 and l2, solving for the vanishing point of the rod-face intersection lines on the image plane, o∞ = l1 × l2 = (a1, b1, c1)^T, where a1, b1, c1 are the two-dimensional homogeneous coordinate components of o∞;

(7) solving for a 3 × 3 rotation matrix R from the plurality of groups of calibration data, using the correspondences between the points at infinity and the vanishing points:

(7a) repeating steps (5) and (6) for each group of data to obtain a plurality of groups of corresponding points, each consisting of a point at infinity and a vanishing point;

(7b) from one pair of corresponding points o∞ and X∞, obtaining one equation o∞ = H∞ (X1, Y1, Z1)^T (up to scale), where H∞ = KR is the projective transformation matrix from the plane at infinity to the image plane;

(7c) repeating (7b) for each pair of corresponding points to obtain one equation per pair, solving all the equations jointly for the projective transformation matrix H∞ by singular value decomposition (SVD), and then recovering the rotation matrix R by decomposing H∞ with the known intrinsic matrix K;

(8) solving for the 3 × 1 translation vector t using a plurality of points on the rod-face intersection line:

(8a) in one capture, taking a point Q = (D, E, F, 1)^T on the rod-face intersection line L1 in the radar point cloud, where D, E, F, 1 are the three-dimensional homogeneous coordinate components of Q; since the image of Q must fall on the rod-face intersection line l1 in the image plane, this constraint yields one equation:

l1^T K (R (D, E, F)^T + t) = 0;

(8b) taking at least three points on the rod-face intersection line L1, constructing at least three equations of the form shown in (8a), and solving for the translation vector t by singular value decomposition (SVD);

(9) rotating and translating the coordinate system of the laser radar according to the rotation matrix R and the translation vector t so that it coincides with the coordinate system of the camera, thereby completing the position calibration.

2. The method of claim 1, wherein the open environment in (2) is an environment in which no object other than the long rod interferes with the radar.

3. The method of claim 1, wherein in (5a) the equations of the two laser-trace lines on the two faces of the L-shaped cross-section are obtained by the two-point form of a line, and their intersection point p is obtained by solving the two line equations simultaneously.

4. The method of claim 1, wherein in (6a) the equation of the rod-face intersection line l1 in the camera image is determined by extracting, from the different colors of the two faces of the L-shaped cross-section, a plurality of image points falling on the rod-face intersection line; the line through all these points is the intersection line l1.

5. The method of claim 1, wherein in (6b) the equation of the rod-face intersection line l2 in the second image after translation is determined by extracting, from the different colors of the two faces of the L-shaped cross-section, a plurality of image points falling on the rod-face intersection line; the line through all these points is the intersection line l2.

Technical Field

The invention belongs to the technical field of image processing and computer vision, and particularly relates to a joint position calibration method that can be used for information fusion between a laser radar and a camera.

Background

Equipping a vehicle or a mobile robotic arm with both a laser radar and a camera exploits the advantages of the two devices simultaneously, provides better sensing capability, and covers a wider range of applications.

Laser radar technology combines laser, radar, control and computer technologies. A laser radar scans the surrounding environment with multiple laser beams and collects the reflected beams to form a point cloud, producing an accurate three-dimensional representation of the environment.

The positional relationship between the laser radar and the camera is determined from correspondences between radar data and image points. Once the position calibration has been carried out, the strengths of the laser radar and the camera can be exploited together: their information is fused to obtain three-dimensional data with color information.

A patent application by Zhejiang University of Technology, "Multiline laser radar and camera joint calibration method based on refined radar scanning edge points" (filing date: 2018-08-17, application number: 201810939185.0, publication number: CN109300162A), discloses a joint calibration method based on a calibration target with hollowed-out circles. Its disadvantages are the large amount of data required and the complicated computation.

Another method uses the software ROS to calibrate the position between the laser radar and the camera. The surrounding environment is first captured with the laser radar and the camera; the radar data and the image are then displayed with the visualization tool rviz and the image_view2 package, and corresponding points are selected pair by pair in the point cloud and the image by visual inspection and mouse clicks. Once multiple pairs of corresponding points have been selected, the extrinsic matrix from the camera to the radar coordinate system is computed. The disadvantages of this method are that the point cloud and the image must be compared manually to pick corresponding points, the resulting extrinsic matrix has low accuracy, the determined positional relationship is therefore imprecise, and the information fusion suffers.

Disclosure of Invention

In view of the shortcomings of the prior art, the invention aims to provide a method for calibrating the position between a laser radar and a camera that improves the calibration accuracy and enables better fusion of the information from the laser radar and the camera.

The technical idea of the invention is as follows: solve for the transformation matrix from the plane at infinity to the image plane using the correspondence between points at infinity in space and vanishing points in the image plane, and recover the rotation matrix R between the laser radar and the camera from this matrix; then solve for the translation vector t between the laser radar and the camera from the constraint that the image of a point must lie on a known line, thereby obtaining the complete positional relationship.
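For clarity (this is standard projective geometry restated in the notation used below, not an additional feature of the invention): a point X in the radar coordinate system projects to the image as x ≅ K [R | t] X; for an ideal point, whose last homogeneous coordinate is 0, the translation drops out, so

o∞ ≅ K [R | t] (X1, Y1, Z1, 0)^T = K R (X1, Y1, Z1)^T = H∞ (X1, Y1, Z1)^T,  with H∞ = KR.

This is why the correspondences between points at infinity and vanishing points determine the rotation R alone, and why the translation t has to be recovered afterwards from the point-on-line constraint.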

Based on this idea, the implementation steps of the invention are as follows:

(1) mounting a laser radar and a camera, whose images are undistorted and whose intrinsic matrix K is known, on a mobile platform, with the relative position between the camera and the laser radar fixed;

(2) erecting a vertical long rod with an L-shaped cross-section in an open environment, painting the two faces of the L-shaped cross-section in different colors, and orienting both faces toward the mobile platform, so as to form a calibration scene;

(3) capturing the long rod a first time with the laser radar and the camera to obtain a radar point cloud and an image, then translating the mobile platform and capturing the long rod a second time, so as to obtain one group of calibration data about the long rod;

(4) changing the pose of the long rod relative to the ground at least four times, and repeating step (3) after each change, to obtain a plurality of groups of calibration data about the long rod;

(5) obtaining, in the radar point cloud, the point X∞ at which the intersection lines of the two rod faces before and after the translation in (3) meet on the plane at infinity:

(5a) in one capture of the long rod by the laser radar, each laser beam that sweeps across the rod leaves at least 2 points on each of the two faces of the L-shaped cross-section; determining the equations of the two laser-trace lines on the two faces and computing their intersection point p from these equations;

(5b) repeating step (5a) for each laser beam to obtain the intersection point pi corresponding to each beam; the line through all these intersection points is the rod-face intersection line L1, where i = 0, 1, ..., n and n is the number of laser beams of the laser radar;

(5c) repeating (5a) and (5b) for the second capture after translation to obtain the rod-face intersection line L2; since the intersection lines L1 and L2 from the two captures are parallel, solving for their common point on the plane at infinity:

X∞ = (X1, Y1, Z1, 0)^T,

where X1, Y1, Z1, 0 are the three-dimensional homogeneous coordinate components of X∞;

(6) obtaining, on the image plane, the vanishing point o∞ of the intersection lines of the two rod faces before and after the translation in (3):

(6a) determining the equation of the rod-face intersection line l1 in the first camera image using a line-detection algorithm;

(6b) repeating (6a) for the second capture after translation to obtain the equation of the rod-face intersection line l2;

(6c) from the equations of l1 and l2, solving for the vanishing point of the rod-face intersection lines on the image plane:

o∞ = l1 × l2 = (a1, b1, c1)^T,

where a1, b1, c1 are the two-dimensional homogeneous coordinate components of o∞;

(7) solving for a 3 × 3 rotation matrix R from the plurality of groups of calibration data, using the correspondences between the points at infinity and the vanishing points:

(7a) repeating steps (5) and (6) for each group of data to obtain a plurality of groups of corresponding points, each consisting of a point at infinity and a vanishing point;

(7b) from one pair of corresponding points o∞ and X∞, obtaining one equation

o∞ = H∞ (X1, Y1, Z1)^T  (up to scale),

where H∞ = KR is the projective transformation matrix from the plane at infinity to the image plane;

(7c) repeating (7b) for each pair of corresponding points to obtain one equation per pair, solving all the equations jointly for the projective transformation matrix H∞ by singular value decomposition (SVD), and then recovering the rotation matrix R by decomposing H∞ with the known intrinsic matrix K;

(8) solving for the 3 × 1 translation vector t using a plurality of points on the rod-face intersection line:

(8a) in one capture, taking a point Q on the rod-face intersection line L1 in the radar point cloud:

Q = (D, E, F, 1)^T,

where D, E, F, 1 are the three-dimensional homogeneous coordinate components of Q; since the image of Q must fall on the rod-face intersection line l1 in the image plane, this constraint yields one equation:

l1^T K (R (D, E, F)^T + t) = 0;

(8b) taking at least three points on the rod-face intersection line L1, constructing at least three equations of the form shown in (8a), and solving for the translation vector t by singular value decomposition (SVD);

(9) rotating and translating the coordinate system of the laser radar according to the rotation matrix R and the translation vector t so that it coincides with the coordinate system of the camera, thereby completing the position calibration.

Compared with the prior art, the invention has the following advantages:

First, the invention uses points at infinity and vanishing points as corresponding points, which provides a stronger geometric constraint and requires less calibration data.

Second, apart from preparing the calibration data, the method requires no manual intervention and has a high degree of automation.

Drawings

FIG. 1 is a flow chart of an implementation of the present invention.

Detailed Description

The present invention will be described in further detail with reference to the accompanying drawings.

Referring to FIG. 1, the method comprises the following steps:

Step 1. Fix the laser radar and the camera on a mobile platform.

A laser radar is a radar system that emits laser beams to measure quantities such as target position and velocity. The laser radar used in this embodiment is a multi-line laser radar, i.e., a rotating laser ranging radar that emits and receives several laser beams simultaneously. The camera used in this embodiment has its distortion removed and a known intrinsic matrix K; a fisheye lens or a standard lens can be chosen according to the required field of view.

the mobile platform comprises an automobile and a mechanical arm and can carry a laser radar and a camera and move; the fixing adopts screw fixation and welding fixation, the firmness degree is not influenced by the movement of the platform, and the relative position information of the distance between the laser radar and the camera and the pose is not changed any more.

Step 2. Construct the calibration scene.

A vertical long rod with an L-shaped cross-section is erected in an open environment, i.e., an environment in which no other object interferes with the radar imaging.

The two faces of the L-shaped cross-section of the rod are painted in different colors, and both faces are oriented toward the mobile platform, forming the calibration scene.

Step 3. Change the pose of the long rod several times and acquire multiple groups of calibration data with the radar and the camera.

3.1) Capture the long rod a first time with the laser radar and the camera to obtain a radar point cloud and an image; then translate the platform and capture the rod a second time. This gives one group of calibration data about the rod, consisting of the radar point clouds and images captured by the laser radar and the camera in the two shots.

Alternatively, the second capture can be performed by keeping the mobile platform fixed and translating only the rod.

3.2) Change the pose of the rod relative to the ground at least four times, repeating 3.1) each time, to obtain multiple groups of calibration data about the rod. In this embodiment the pose of the rod relative to the ground is changed ten times.

Step 4. Solve for the rotation matrix R from the correspondences between the groups of points at infinity and vanishing points obtained from the calibration data.

4.1) Obtain, in the radar point cloud, the point X∞ at which the intersection lines of the two rod faces before and after the translation in 3.1) meet on the plane at infinity.

4.1.1) In one capture of the rod by the laser radar, each laser beam that sweeps across the rod leaves at least 2 points on each of the two faces of the L-shaped cross-section. The equations of the two laser-trace lines on the two faces are determined by the two-point form of a line, and their intersection point p is found by solving the two line equations simultaneously.

4.1.2) Repeat 4.1.1) for each laser beam to obtain the intersection point pi corresponding to each beam; the line through all these intersection points is the rod-face intersection line L1, where i = 0, 1, ..., n and n is the number of laser beams of the laser radar. In this example a 16-beam laser radar is used, i.e., n = 16.

4.1.3) Repeat 4.1.1) and 4.1.2) for the second capture after translation to obtain the rod-face intersection line L2. Since the intersection lines L1 and L2 from the two captures are parallel, their common point on the plane at infinity is solved for:

X∞ = (X1, Y1, Z1, 0)^T,

where X1, Y1, Z1, 0 are the three-dimensional homogeneous coordinate components of X∞.
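To make 4.1) concrete, the following Python sketch (using numpy) shows one possible implementation; the function names and the assumed per-beam data layout (a list of point sets for the two faces) are illustrative only and not prescribed by the method:

```python
import numpy as np

def fit_line_3d(points):
    """Fit a 3D line to points by PCA; return (centroid, unit direction)."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)   # principal direction = first right singular vector
    return c, vt[0]

def line_intersection(c1, d1, c2, d2):
    """Midpoint of the shortest segment between two (nearly intersecting) 3D lines."""
    A = np.array([[d1 @ d1, -d1 @ d2],
                  [d1 @ d2, -d2 @ d2]])
    b = np.array([d1 @ (c2 - c1), d2 @ (c2 - c1)])
    s, u = np.linalg.solve(A, b)
    return 0.5 * ((c1 + s * d1) + (c2 + u * d2))

def rod_edge_line(beams):
    """beams: list of (face_a_points, face_b_points) per laser beam (hypothetical layout).
    Returns (point_on_edge, unit_direction) of the rod-face intersection line L."""
    edge_pts = []
    for face_a, face_b in beams:
        ca, da = fit_line_3d(face_a)                       # trace line on the first face
        cb, db = fit_line_3d(face_b)                       # trace line on the second face
        edge_pts.append(line_intersection(ca, da, cb, db)) # intersection point p_i
    return fit_line_3d(edge_pts)                           # line through all p_i

# The unit direction d of L1 (equal to that of L2, since the rod is only translated
# between the two captures) gives X_inf = (d[0], d[1], d[2], 0)^T.
```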

4.2) Obtain, on the image plane, the vanishing point o∞ of the intersection lines of the two rod faces before and after the translation in 3.1).

4.2.1) From the first camera image, extract a plurality of points falling on the rod-face intersection line, using the different colors of the two faces of the L-shaped cross-section; the line through all these points is the rod-face intersection line l1.

4.2.2) Repeat 4.2.1) for the second image after translation: using the different colors of the two faces of the L-shaped cross-section, extract a plurality of points falling on the rod-face intersection line; the line through all these points is the rod-face intersection line l2.

4.2.3) From the intersection of the two image lines l1 and l2 on the image plane, obtain the vanishing point:

o∞ = l1 × l2 = (a1, b1, c1)^T,

where a1, b1, c1 are the two-dimensional homogeneous coordinate components of o∞.
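A corresponding sketch for 4.2), again with illustrative names and an unspecified color-segmentation step: each image line is fitted in homogeneous form from the edge pixels, and the vanishing point is their cross product.

```python
import numpy as np

def fit_image_line(pixels):
    """Fit a homogeneous image line l = (a, b, c), with a*x + b*y + c = 0,
    to edge pixels (N x 2 array) in the algebraic least-squares sense via SVD."""
    pts = np.asarray(pixels, dtype=float)
    A = np.hstack([pts, np.ones((len(pts), 1))])
    _, _, vt = np.linalg.svd(A)
    return vt[-1]                                  # approximate null vector of A

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines: o_inf = l1 x l2 = (a1, b1, c1)^T."""
    o = np.cross(l1, l2)
    return o / np.linalg.norm(o)

# edge_pixels_1 / edge_pixels_2 would be the color-transition pixels on the rod
# edge in the two images (how they are extracted is left open here):
# o_inf = vanishing_point(fit_image_line(edge_pixels_1), fit_image_line(edge_pixels_2))
```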

4.3) Solve for the 3 × 3 rotation matrix R from the multiple groups of calibration data, using the correspondences between the points at infinity and the vanishing points:

4.3.1) Repeat 4.1) and 4.2) for each group of data to obtain multiple pairs of points at infinity and vanishing points.

4.3.2) For a pair of corresponding points o∞ and X∞ there is the projective relationship

o∞ ≅ K [R | t] X∞;

since the last homogeneous coordinate component of X∞ is 0, this reduces to the equation o∞ = H∞ (X1, Y1, Z1)^T (up to scale), where H∞ = KR is the projective transformation matrix from the plane at infinity to the image plane.

4.3.3) Repeat 4.3.2) for each pair of corresponding points to obtain one equation per pair, solve all the equations jointly for the projective transformation matrix H∞ by singular value decomposition (SVD), and then recover the rotation matrix R by decomposing H∞ with the known intrinsic matrix K.
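One possible way to carry out 4.3), shown as a non-authoritative sketch: the equations o∞ = H∞ (X1, Y1, Z1)^T are stacked in the standard DLT form and solved by SVD, after which R is recovered from K⁻¹H∞ and projected onto the nearest rotation matrix. The overall sign of H∞, which the SVD leaves undetermined, would in practice be fixed by requiring the rod to lie in front of the camera.

```python
import numpy as np

def rotation_from_correspondences(K, X_dirs, o_vps):
    """Estimate R from >= 4 pairs (X_inf direction, vanishing point) using
    o_inf ~ K R (X1, Y1, Z1)^T.  Function and argument names are illustrative."""
    A = []
    for x, o in zip(X_dirs, o_vps):
        x = np.asarray(x, float)
        o = np.asarray(o, float)
        # DLT rows from the cross product [o]_x (H x) = 0 (two independent rows)
        A.append(np.concatenate([np.zeros(3), -o[2] * x,  o[1] * x]))
        A.append(np.concatenate([ o[2] * x, np.zeros(3), -o[0] * x]))
    _, _, vt = np.linalg.svd(np.asarray(A))
    H = vt[-1].reshape(3, 3)                 # H_inf = K R, up to scale (and sign)
    R0 = np.linalg.inv(K) @ H
    U, _, Vt = np.linalg.svd(R0)             # project onto the nearest rotation matrix
    R = U @ Vt
    if np.linalg.det(R) < 0:
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R
```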

Step 5. Solve for the translation vector t from the constraint conditions.

5.1) In one capture, take a point Q on the rod-face intersection line L1 in the radar point cloud:

Q = (D, E, F, 1)^T,

where D, E, F, 1 are the three-dimensional homogeneous coordinate components of Q. Since the image of Q must fall on the rod-face intersection line l1 in the image plane, this constraint yields the equation:

l1^T K (R (D, E, F)^T + t) = 0.

5.2) Take at least three points on the rod-face intersection line L1, construct at least three equations of the form shown in 5.1), and then solve for the translation vector t by singular value decomposition (SVD).
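A sketch of one way to assemble and solve the equations of Step 5; the interface is illustrative, and it assumes the (image line, radar point) pairs are gathered from the intersection lines of several captures so that the stacked linear system in t is well conditioned:

```python
import numpy as np

def translation_from_line_constraints(K, R, line_point_pairs):
    """line_point_pairs: iterable of (l, Q) with l = (a, b, c) an image line and
    Q = (D, E, F) a radar point on the corresponding 3D edge line (illustrative
    interface).  Each pair gives one linear equation l^T K (R Q + t) = 0 in t."""
    A, b = [], []
    for l, q in line_point_pairs:
        l = np.asarray(l, float)
        q = np.asarray(q, float)
        lK = l @ K                           # row vector l^T K
        A.append(lK)                         # coefficient of t
        b.append(-lK @ (R @ q))              # constant term -l^T K R Q
    # Least-squares solution; numpy solves this via SVD internally.
    t, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return t
```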

Step 6. Rotate and translate the coordinate system of the laser radar according to the rotation matrix R and the translation vector t so that it coincides with the coordinate system of the camera, completing the position calibration.
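With R and t known, the calibration is used by expressing radar points in the camera coordinate system and projecting them with K, for example (illustrative sketch):

```python
import numpy as np

def lidar_to_camera(points, R, t):
    """Express radar points (N x 3) in the camera frame: X_cam = R X_lidar + t."""
    return points @ R.T + t

def project_to_pixels(points_cam, K):
    """Project camera-frame points with the intrinsic matrix K
    (points must have positive depth, i.e. lie in front of the camera)."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]
```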

The foregoing description is only an example of the present invention and is not intended to limit the invention, so that it will be apparent to those skilled in the art that various changes and modifications in form and detail may be made therein without departing from the spirit and scope of the invention.
