Camera and laser radar pose calibration method and device based on point cloud registration

Document No.: 1814762 | Publication date: 2021-11-09

Description: This technique, "Camera and laser radar pose calibration method and device based on point cloud registration", was created by Shen Shuhan and Wang Baoyu on 2021-06-15. Its main content is as follows: The invention provides a camera and laser radar pose calibration method and device based on point cloud registration, wherein the method comprises: respectively acquiring an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar, the plurality of cameras being rigidly fixed relative to one another; performing sparse reconstruction based on the image sequence to obtain sparse three-dimensional point cloud data of the preset scene; and registering the laser point cloud data with the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera among the plurality of cameras, and determining the pose calibration information between the laser radar and the other cameras based on the relative pose information between the plurality of cameras. The method can obtain higher-precision camera-lidar relative pose calibration results and camera intrinsic calibration results, does not depend on a calibration object, and has higher robustness.

1. A camera and laser radar pose calibration method based on point cloud registration is characterized by comprising the following steps:

respectively acquiring an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar, wherein the plurality of cameras are rigidly fixed relative to one another;

performing sparse reconstruction based on the image sequence to obtain sparse three-dimensional point cloud data of the preset scene;

registering the laser point cloud data and the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera in the plurality of cameras, and determining the pose calibration information between the laser radar and other cameras based on the relative pose information between the plurality of cameras.

2. The method for calibrating the pose of a camera and a lidar based on point cloud registration according to claim 1, wherein the acquiring of an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar specifically comprises:

driving a vehicle around the preset scene along a figure-8 path, stopping at intervals during the circuit, and performing data acquisition with the plurality of cameras and the laser radar at each parking moment to obtain the image sequence and the laser point cloud data;

wherein the plurality of cameras and the lidar are fixed to the vehicle.

3. The method for calibrating the pose of a camera and a lidar based on point cloud registration according to claim 2, wherein registering the laser point cloud data with the sparse three-dimensional point cloud data to obtain pose calibration information between the lidar and a reference camera of the plurality of cameras comprises:

determining pose calibration information between the laser radar and the reference camera based on single pose calibration information of the laser radar and the reference camera at each parking moment;

and the single pose calibration information of the laser radar and the reference camera at any parking moment is obtained by registering laser point cloud data acquired by the laser radar at any parking moment with the sparse three-dimensional point cloud data.

4. The method for calibrating the pose of a camera and a lidar based on point cloud registration according to claim 3, wherein the determining the pose calibration information between the lidar and the reference camera based on the single pose calibration information of the lidar and the reference camera at each parking moment comprises:

determining the single pose calibration information of the laser radar and the reference camera at each parking moment in the current round by taking the pose calibration average value calculated in the previous round as the initial value, and calculating the pose calibration average value of the current round, until the variance of the currently determined single pose calibration information over the parking moments is not greater than a preset threshold or a maximum number of iterations is reached;

the pose calibration average value is the average of the single pose calibration information at the parking moments of the corresponding round.

5. The point cloud registration-based camera and lidar pose calibration method of claim 3, wherein the single pose calibration information for the lidar and the reference camera at any parking moment is determined based on the following steps:

for the laser three-dimensional points in the laser point cloud data acquired at any parking moment, determining the image three-dimensional points in the sparse three-dimensional point cloud data that match those laser three-dimensional points;

and estimating the relative pose based on each laser three-dimensional point and its matched image three-dimensional point, to obtain the single pose calibration information at that parking moment.

6. The method for calibrating pose of camera and lidar based on point cloud registration according to claim 1, wherein the sparse reconstruction based on the image sequence to obtain sparse three-dimensional point cloud data of the preset scene specifically comprises:

respectively extracting the features of each image in the image sequence, and performing feature matching on each two images to obtain a matching point pair between each two images;

and constructing matching point tracks based on the matching point pairs between every two images, triangulating the matching point tracks, performing bundle adjustment, and reconstructing the sparse three-dimensional point cloud data.

7. The method for calibrating the pose of a camera and a lidar based on point cloud registration of claim 6, wherein the reprojection error function employed in the bundle adjustment process is:

E_1 = E_{1,1} + E_{1,2}

wherein E_{1,1} is the reprojection error function of the reference camera C_{i1}, and E_{1,2} is the reprojection error function of the other cameras C_{ij}, j = 2, 3, 4;

wherein k denotes the serial number of an image three-dimensional point, j denotes the serial number of an image, i denotes the serial number of an auxiliary camera, x_{j,k} is an image feature point, K_j is the camera intrinsic matrix, R_j and t_j are the rotation and translation transforming an image three-dimensional point in space into the reference camera frame, \hat{R}_i and \hat{t}_i are the rotation and translation from the reference camera to the other cameras, X_k is an image three-dimensional point, and r denotes the projection process.

8. A camera and laser radar pose calibration device based on point cloud registration, characterized by comprising:

the data acquisition unit is used for respectively acquiring an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar, wherein the plurality of cameras are rigidly fixed relative to one another;

the sparse reconstruction unit is used for performing sparse reconstruction on the basis of the image sequence to obtain sparse three-dimensional point cloud data of the preset scene;

and the point cloud registration unit is used for registering the laser point cloud data and the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera in the plurality of cameras, and determining the pose calibration information between the laser radar and other cameras based on the relative pose information between the plurality of cameras.

9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the point cloud registration-based camera and lidar pose calibration method according to any one of claims 1 to 7.

10. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the steps of the method for camera and lidar pose calibration based on point cloud registration of any of claims 1 to 7.

Technical Field

The invention relates to the technical field of sensor calibration, in particular to a camera and laser radar pose calibration method and device based on point cloud registration.

Background

In recent years, unmanned driving technology has received increasing attention. Sensors play an important role in it; those most widely used at present are cameras, laser radars, GPS, IMUs and the like. Multi-sensor fusion can exploit the advantages of each sensor, and two important sensors, the camera and the laser radar, have strongly complementary advantages.

Pose calibration is usually required when performing multi-sensor fusion. However, current camera-lidar calibration methods require a calibration object to be placed in a specific environment in advance, and the placement angle and distance of the calibration object affect the calibration result. Moreover, the multiple sensors trigger with a time offset, during which the relative pose between the sensors can change, so calibration accuracy is poor; the multi-sensor calibration error also contains coupled errors of intrinsic and extrinsic parameters.

Disclosure of Invention

The invention provides a camera and laser radar pose calibration method and device based on point cloud registration, which are used for solving the defect of poor calibration accuracy in the prior art.

The invention provides a camera and laser radar pose calibration method based on point cloud registration, which comprises the following steps:

respectively acquiring an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar, wherein the plurality of cameras are rigidly fixed relative to one another;

performing sparse reconstruction based on the image sequence to obtain sparse three-dimensional point cloud data of the preset scene;

registering the laser point cloud data and the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera in the plurality of cameras, and determining the pose calibration information between the laser radar and other cameras based on the relative pose information between the plurality of cameras.

According to the camera and laser radar pose calibration method based on point cloud registration provided by the invention, the acquiring of an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar specifically comprises:

driving a vehicle around the preset scene along a figure-8 path, stopping at intervals during the circuit, and performing data acquisition with the plurality of cameras and the laser radar at each parking moment to obtain the image sequence and the laser point cloud data;

wherein the plurality of cameras and the lidar are fixed to the vehicle.

According to the camera and laser radar pose calibration method based on point cloud registration provided by the invention, the laser point cloud data and the sparse three-dimensional point cloud data are registered to obtain pose calibration information between the laser radar and a reference camera in a plurality of cameras, and the method specifically comprises the following steps:

determining pose calibration information between the laser radar and the reference camera based on single pose calibration information of the laser radar and the reference camera at each parking moment;

and the single pose calibration information of the laser radar and the reference camera at any parking moment is obtained by registering laser point cloud data acquired by the laser radar at any parking moment with the sparse three-dimensional point cloud data.

According to the camera and laser radar pose calibration method based on point cloud registration provided by the invention, the pose calibration information between the laser radar and the reference camera is determined based on the single pose calibration information of the laser radar and the reference camera at each parking moment, and the method specifically comprises the following steps:

determining the single pose calibration information of the laser radar and the reference camera at each parking moment in the current round by taking the pose calibration average value calculated in the previous round as the initial value, and calculating the pose calibration average value of the current round, until the variance of the currently determined single pose calibration information over the parking moments is not greater than a preset threshold or a maximum number of iterations is reached;

the pose calibration average value is the average of the single pose calibration information at the parking moments of the corresponding round.

According to the camera and laser radar pose calibration method based on point cloud registration provided by the invention, single pose calibration information of the laser radar and the reference camera at any parking moment is determined based on the following steps:

for the laser three-dimensional points in the laser point cloud data acquired at any parking moment, determining the image three-dimensional points in the sparse three-dimensional point cloud data that match those laser three-dimensional points;

and estimating the relative pose based on each laser three-dimensional point and its matched image three-dimensional point, to obtain the single pose calibration information at that parking moment.

According to the camera and laser radar pose calibration method based on point cloud registration, sparse reconstruction is carried out based on the image sequence to obtain sparse three-dimensional point cloud data of the preset scene, and the method specifically comprises the following steps:

respectively extracting the features of each image in the image sequence, and performing feature matching on each two images to obtain a matching point pair between each two images;

and constructing matching point tracks based on the matching point pairs between every two images, triangulating the matching point tracks, performing bundle adjustment, and reconstructing the sparse three-dimensional point cloud data.

According to the camera and laser radar pose calibration method based on point cloud registration provided by the invention, the reprojection error function adopted in the bundle adjustment process is:

E_1 = E_{1,1} + E_{1,2}

wherein E_{1,1} is the reprojection error function of the reference camera C_{i1}, and E_{1,2} is the reprojection error function of the other cameras C_{ij}, j = 2, 3, 4;

wherein k denotes the serial number of an image three-dimensional point, j denotes the serial number of an image, i denotes the serial number of an auxiliary camera, x_{j,k} is an image feature point, K_j is the camera intrinsic matrix, R_j and t_j are the rotation and translation transforming an image three-dimensional point in space into the reference camera frame, \hat{R}_i and \hat{t}_i are the rotation and translation from the reference camera to the other cameras, X_k is an image three-dimensional point, and r denotes the projection process.

The invention also provides a camera and laser radar pose calibration device based on point cloud registration, which comprises:

the data acquisition unit is used for respectively acquiring an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar, wherein the plurality of cameras are rigidly fixed relative to one another;

the sparse reconstruction unit is used for performing sparse reconstruction on the basis of the image sequence to obtain sparse three-dimensional point cloud data of the preset scene;

and the point cloud registration unit is used for registering the laser point cloud data and the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera in the plurality of cameras, and determining the pose calibration information between the laser radar and other cameras based on the relative pose information between the plurality of cameras.

The invention also provides electronic equipment which comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor executes the program to realize the steps of the point cloud registration-based camera and laser radar pose calibration method.

The invention also provides a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method for calibrating the pose of a camera and a lidar based on point cloud registration as described in any one of the above.

According to the camera and laser radar pose calibration method and device based on point cloud registration, sparse reconstruction is performed on the image sequences acquired by the plurality of cameras to obtain sparse three-dimensional point cloud data, and point cloud registration between the laser point cloud data and the sparse three-dimensional point cloud data yields the pose calibration information between the laser radar and the cameras. Higher-precision camera-lidar relative pose calibration results and camera intrinsic calibration results can thus be obtained, without depending on a calibration object, giving higher robustness.

Drawings

In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.

FIG. 1 is a schematic flow chart of a camera and laser radar pose calibration method based on point cloud registration provided by the invention;

FIG. 2 is a schematic diagram of a generalized camera model provided by the present invention;

FIG. 3 is a schematic diagram of the figure-8 circling motion provided by the present invention;

FIG. 4 is a schematic diagram of a camera and a laser radar pose calibration method provided by the invention;

FIG. 5 is a schematic structural diagram of a camera and lidar pose calibration device based on point cloud registration provided by the invention;

fig. 6 is a schematic structural diagram of an electronic device provided in the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Fig. 1 is a schematic flow chart of a camera and lidar pose calibration method based on point cloud registration provided in an embodiment of the present invention, as shown in fig. 1, the method includes:

step 110, respectively acquiring an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar; wherein the plurality of cameras are rigidly fixed;

Step 120, performing sparse reconstruction based on the image sequence to obtain sparse three-dimensional point cloud data of a preset scene;

and step 130, registering the laser point cloud data and the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera in the plurality of cameras, and determining pose calibration information between the laser radar and other cameras based on the relative pose information between the plurality of cameras.

Specifically, data acquisition is performed with a plurality of cameras and a laser radar: the cameras acquire an image sequence of a preset scene, and the laser radar acquires laser point cloud data of the same scene. To improve the accuracy of the calibration method, a scene rich in texture and geometric structure, for example an intersection, may be selected for data acquisition, as it provides abundant scene structure and ample texture information for image feature extraction.

Note that the plurality of cameras are rigidly fixed to one another. Embodiments of the present invention use a generalized camera model, in which the cameras are rigidly fixed so that the relative poses between them do not change. Fig. 2 is a schematic diagram of a generalized camera model according to an embodiment of the present invention. As shown in fig. 2, one camera is selected from the plurality of cameras as the reference camera C_{i1}, and the other cameras are C_{ij}, j = 2, 3, 4, where j denotes the camera number. The error function uses the reprojection error function, and the pose of camera C_{ij} can be obtained from the pose of the reference camera C_{i1} and the relative pose parameters between the two. This optimization model ensures that the relative poses between the cameras remain unchanged during the subsequent sparse reconstruction optimization, which matches the physical setup.
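To make the rigidity constraint concrete, the sketch below (hypothetical values and names; the patent provides no code) composes an auxiliary camera's projection from the reference pose and a fixed rig transform, so that in optimization only the reference pose and the rig transforms are free variables:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack rotation R (3x3) and translation t (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def project(K, T_world_to_cam, X):
    """Pinhole projection of a world point X (3,) through a 4x4 camera pose."""
    Xc = T_world_to_cam[:3, :3] @ X + T_world_to_cam[:3, 3]
    x = K @ Xc
    return x[:2] / x[2]

# Illustrative intrinsics and poses (placeholders, not calibrated values).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
T_world_to_ref = to_homogeneous(np.eye(3), np.zeros(3))            # world -> C_i1
T_ref_to_cam2 = to_homogeneous(np.eye(3), np.array([0.5, 0, 0]))   # C_i1 -> C_i2 (rig)

X = np.array([1.0, 0.5, 5.0])           # a scene point
x_ref = project(K, T_world_to_ref, X)   # observation in the reference camera
# Rigidity constraint: C_i2's pose is always this composition, never a free variable.
x_cam2 = project(K, T_ref_to_cam2 @ T_world_to_ref, X)
print(x_ref, x_cam2)
```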

After data acquisition is finished, sparse reconstruction can be performed on the image sequences acquired by the multiple cameras to obtain the sparse three-dimensional point cloud data of the preset scene. The reconstruction may proceed incrementally; if good global initial poses are available, a global sparse reconstruction approach may be chosen instead.

The laser point cloud data is then registered with the sparse three-dimensional point cloud data to obtain the pose of the laser radar within the sparse three-dimensional point cloud, from which the relative pose between the laser radar and a reference camera among the plurality of cameras is calculated, i.e. the pose calibration information between the laser radar and the reference camera. Because the relative poses between the cameras are fixed, the pose calibration information between the laser radar and the other cameras can be determined from the pose calibration information between the laser radar and the reference camera together with the relative pose information between the cameras. The pose calibration information of the laser radar and any camera comprises the relative relationship between the poses of the laser radar and that camera.
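Because every quantity involved is a rigid transform, determining the calibration for the other cameras reduces to a single composition of homogeneous matrices; a minimal sketch, with placeholder values standing in for the actual calibration outputs:

```python
import numpy as np

# T_lidar_to_ref: lidar pose relative to the reference camera (from the
# registration step); T_ref_to_camj: fixed rig transform from the reference
# camera to camera j (from the multi-camera calibration). Placeholders here.
T_lidar_to_ref = np.eye(4)
T_ref_to_camj = np.eye(4)

# Rigidity of the rig makes lidar -> camera j a single matrix product.
T_lidar_to_camj = T_ref_to_camj @ T_lidar_to_ref
```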

According to the method provided by the embodiment of the invention, sparse reconstruction is performed on the image sequences acquired by the plurality of cameras to obtain sparse three-dimensional point cloud data, and point cloud registration between the laser point cloud data and the sparse three-dimensional point cloud data yields the pose calibration information between the laser radar and the cameras. Higher-precision camera-lidar relative pose calibration results and camera intrinsic calibration results can thus be obtained, without depending on a calibration object, giving higher robustness.

Based on the above embodiment, step 110 specifically includes:

the method comprises the following steps that a vehicle surrounds in a preset scene in a shape like a Chinese character '8', parking is carried out at intervals in the surrounding process, data collection is carried out based on a plurality of cameras and a laser radar at the parking moment, and an image sequence and laser point cloud data are obtained;

wherein the plurality of cameras and the lidar are fixed to the vehicle.

In particular, because each camera in a multi-camera system faces a different direction, the fields of view of the multiple cameras may not overlap during data acquisition. To improve the accuracy of sparse reconstruction, the multi-camera rig and the laser radar are mounted on a vehicle, such as an unmanned vehicle, and the vehicle collects data in the preset scene along a figure-8 circuit, so that overlapping regions arise between the cameras' fields of view. This adds constraints between different cameras during optimization, allowing the multiple cameras to be bundled together for optimization and yielding more stable and accurate camera parameters.

Fig. 3 is a schematic diagram of the figure-8 circling motion provided by an embodiment of the present invention. As shown in fig. 3, the vehicle carrying the multi-camera rig starts from the starting position; camera 1 first captures the marked area on the right, camera 2 captures the same area as the rig's position changes, and cameras 3 and 4 capture it in turn as acquisition proceeds. Thus, when the vehicle carries the multi-camera system along a figure-8 path during data acquisition, overlapping regions between the cameras are obtained, which improves the accuracy of sparse reconstruction.

Secondly, the vehicle stops at intervals during data acquisition, and the cameras and the laser radar collect data at each parking moment, yielding the image sequence and the laser point cloud data. Stopping ensures that the spatial pose does not change while the different sensors collect their data, so no inter-sensor pose change arises from asynchronous triggering; at each parking moment the sensors are therefore in a fixed (rigid) relative pose relationship.

In addition, the figure-8 circuit is also a common driving pattern when calibrating inertial navigation equipment, so one data acquisition run can serve the calibration of several sensors at once without adding excessive acquisition workload. Acquiring data in a small scene also keeps the data volume small and reduces the computation required for sparse reconstruction.

Based on any of the above embodiments, registering the laser point cloud data and the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera of the plurality of cameras, specifically including:

determining pose calibration information between the laser radar and the reference camera based on single pose calibration information of the laser radar and the reference camera at each parking moment;

the single pose calibration information of the laser radar and the reference camera at any parking time is obtained by registering laser point cloud data acquired by the laser radar at the parking time with sparse three-dimensional point cloud data.

Specifically, during data acquisition the vehicle stops at intervals, and the multiple sensors collect data at each parking moment. For any parking moment, therefore, the laser point cloud data acquired by the laser radar at that moment can be registered with the sparse three-dimensional point cloud data to obtain the single pose calibration information of the laser radar and the reference camera at that moment. The single pose calibration information comprises the relative pose between the laser radar and the corresponding camera, namely a rotation matrix and a translation vector. During registration, initial point correspondences between the laser point cloud data p and the sparse three-dimensional point cloud data p' are obtained by nearest-neighbour search; suppose the i-th point p_i of the laser point cloud p corresponds to the point p_i' in the sparse three-dimensional point cloud p'. The transformation between the two point clouds, i.e. the single pose calibration information, can then be solved by minimizing:

(R, t) = \arg\min_{R, t} \sum_i \| R p_i + t - p_i' \|^2

wherein (R, t) is the single pose calibration information, R is a rotation matrix, and t is a translation vector.
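For fixed correspondences, this least-squares problem has the classical closed-form SVD solution (Kabsch/Umeyama); a self-contained sketch, assuming the matched point arrays have already been built:

```python
import numpy as np

def solve_rigid_transform(p, q):
    """Closed-form (R, t) minimizing sum_i ||R p[i] + t - q[i]||^2.

    p, q: (N, 3) arrays of corresponding points (laser and SfM points).
    """
    mu_p, mu_q = p.mean(axis=0), q.mean(axis=0)
    H = (p - mu_p).T @ (q - mu_q)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid reflection solutions
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```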

Using the same registration procedure, the single pose calibration information corresponding to each parking moment can be obtained. The values computed at different parking moments are not strictly equal, and, owing to matching errors and other factors, the single pose calibration information at some parking moments may contain large deviations. Therefore, to obtain a stable and consistent calibration result, the single pose calibration information over all parking moments can be averaged, and the average used as the pose calibration information between the laser radar and the reference camera.

Based on any one of the embodiments, determining the pose calibration information between the laser radar and the reference camera based on the single pose calibration information of the laser radar and the reference camera at each parking moment specifically includes:

determining the single pose calibration information of the laser radar and the reference camera at each parking moment in the current round by taking the pose calibration average value calculated in the previous round as the initial value, and calculating the pose calibration average value of the current round, until the variance of the currently determined single pose calibration information over the parking moments is not greater than a preset threshold or a maximum number of iterations is reached;

the pose calibration average value is the average of the single pose calibration information at the parking moments of the corresponding round.

Specifically, when a large registration error exists (for example, when the initial laser radar pose error is large), the single pose calibration information fluctuates considerably across parking moments, so the variance of the single pose calibration information over the parking moments can be used to measure the stability of the current solution. If this variance is large in the previous round, the pose calibration average value calculated in the previous round is taken as the initial value to update all laser radar poses, the single pose calibration information of the laser radar and the reference camera at each parking moment is re-determined for the current round, and the pose calibration average value of the current round is calculated; this is iterated until the variance of the currently determined single pose calibration information is not greater than a preset threshold (for example, 10^-6) or the maximum number of iterations is reached. Convergence typically takes 3 to 5 iterations.

The pose calibration average value is the average of the single pose calibration information over the parking moments of the corresponding round. Because the single pose calibration information comprises a rotation matrix and a translation vector, the average of the rotation matrices and the average of the translation vectors are computed separately. To average the rotation matrices R, each rotation matrix is converted to axis-angle form and the axis-angle representations of all rotation matrices are averaged; the mean in axis-angle form is then converted back into a rotation matrix via Rodrigues' formula, giving the average rotation matrix. The translation vectors t can be averaged directly.
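A short sketch of this averaging using SciPy's rotation utilities (the rotation-vector form is the axis-angle representation, and the conversion back to a matrix is exactly Rodrigues' formula); averaging axis-angle vectors is reasonable here because all estimates cluster around a common value:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def average_pose_calibrations(rotations, translations):
    """Average per-stop (R, t) estimates of the same lidar-camera transform."""
    rotvecs = Rotation.from_matrix(np.stack(rotations)).as_rotvec()   # axis-angle
    R_mean = Rotation.from_rotvec(rotvecs.mean(axis=0)).as_matrix()   # Rodrigues
    t_mean = np.stack(translations).mean(axis=0)                      # direct mean
    return R_mean, t_mean
```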

Based on any embodiment, the single pose calibration information of the laser radar and the reference camera at any parking moment is determined based on the following steps:

for the laser three-dimensional points in the laser point cloud data acquired at the parking moment, determining the image three-dimensional points in the sparse three-dimensional point cloud data that match those laser three-dimensional points;

and estimating the relative pose based on each laser three-dimensional point and its matched image three-dimensional point, to obtain the single pose calibration information at that parking moment.

Specifically, point cloud registration may be performed with the Iterative Closest Point (ICP) algorithm. For a given laser three-dimensional point in the laser point cloud data collected at the parking moment, the Euclidean distances to all image three-dimensional points in the sparse three-dimensional point cloud data are computed, and the image three-dimensional point with the smallest distance is taken as its match. Repeating this for every laser three-dimensional point yields the matched image three-dimensional points in the sparse three-dimensional point cloud data, forming two groups of matched point clouds.

The relative pose is then estimated from each laser three-dimensional point and its matched image three-dimensional point to obtain the single pose calibration information at the parking moment. A RANSAC-based relative pose estimation method may be adopted: a rotation matrix R and a translation vector t between the lidar pose at the parking moment and the reference camera pose are solved with RANSAC, the laser points are transformed with the estimated pose, and the errors between the two point clouds are recomputed.

If the error decreases only slightly compared with the previous iteration, or the maximum number of iterations is exceeded, the iteration terminates; otherwise, the three-dimensional point matching and relative pose estimation steps are repeated with the result of the current iteration as the initial value.
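Putting the matching and estimation steps together, the following is a minimal point-to-point ICP loop with KD-tree nearest-neighbour matching; the RANSAC robustification described above is omitted, so this is a simplified sketch rather than the full procedure:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(laser_pts, sfm_pts, R0=np.eye(3), t0=np.zeros(3),
        max_iters=50, tol=1e-8):
    """Minimal point-to-point ICP aligning laser_pts (N,3) onto sfm_pts (M,3)."""
    tree = cKDTree(sfm_pts)
    R, t = R0, t0
    prev_err = np.inf
    for _ in range(max_iters):
        moved = laser_pts @ R.T + t
        dist, idx = tree.query(moved)          # nearest-neighbour matching
        q = sfm_pts[idx]
        # Closed-form pose update on the current correspondences (Kabsch,
        # the same SVD solution sketched earlier).
        mu_p, mu_q = laser_pts.mean(axis=0), q.mean(axis=0)
        H = (laser_pts - mu_p).T @ (q - mu_q)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_q - R @ mu_p
        err = np.mean(dist ** 2)
        if prev_err - err < tol:               # error no longer decreasing
            break
        prev_err = err
    return R, t
```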

Based on any of the above embodiments, step 120 specifically includes:

respectively extracting the features of each image in the image sequence, and performing feature matching on each two images to obtain a matching point pair between each two images;

and constructing matching point tracks based on the matching point pairs between every two images, triangulating the matching point tracks, performing bundle adjustment, and reconstructing the sparse three-dimensional point cloud data.

Specifically, the SfM (Structure from Motion) algorithm may be employed for sparse reconstruction. Feature extraction is performed on each image in the image sequence, keypoints with stable properties are selected as feature points, and feature matching between every two images yields the matching point pairs. Matching point tracks are then constructed from the pairwise matching point pairs, the tracks are triangulated, bundle adjustment is performed, and the sparse three-dimensional point cloud data is reconstructed.
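As an illustration of the feature extraction and pairwise matching stage, a sketch using OpenCV's SIFT with Lowe's ratio test (one plausible choice of detector and matcher; the patent does not name specific ones):

```python
import cv2

def match_pair(img1_path, img2_path, ratio=0.75):
    """Extract SIFT features from two images and keep ratio-test matches."""
    sift = cv2.SIFT_create()
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = []
    for knn in matcher.knnMatch(des1, des2, k=2):
        if len(knn) < 2:
            continue
        m, n = knn
        if m.distance < ratio * n.distance:    # Lowe's ratio test
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs
```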

Based on any of the above embodiments, the reprojection error function used in the bundle adjustment process is:

E_1 = E_{1,1} + E_{1,2}

wherein E_{1,1} is the reprojection error function of the reference camera C_{i1}, and E_{1,2} is the reprojection error function of the other cameras C_{ij}, j = 2, 3, 4;

wherein k denotes the serial number of an image three-dimensional point, j denotes the serial number of an image, i denotes the serial number of an auxiliary camera, x_{j,k} is an image feature point, K_j is the camera intrinsic matrix, R_j and t_j are the rotation and translation transforming an image three-dimensional point in space into the reference camera frame, \hat{R}_i and \hat{t}_i are the rotation and translation from the reference camera to the other cameras, X_k is an image three-dimensional point, and r denotes the projection process.

Based on any of the above embodiments, fig. 4 is a schematic diagram of a camera and lidar pose calibration method provided by an embodiment of the present invention. As shown in fig. 4, the method includes an SfM sparse reconstruction step and an ICP registration step. The SfM sparse reconstruction step comprises feature extraction, feature matching, matching point track triangulation and bundle adjustment, yielding the intrinsic parameters and pose of each camera in the multi-camera system together with the reconstructed three-dimensional point cloud. The ICP registration step performs ICP registration between the laser point clouds at all parking moments and the reconstructed three-dimensional point cloud, obtaining an extrinsic calibration result for each parking moment, namely the pose calibration information between the laser radar and the reference camera at that moment; these results are averaged and solved iteratively until convergence.

Based on any of the above embodiments, fig. 5 is a schematic structural diagram of a camera and lidar pose calibration device based on point cloud registration provided by an embodiment of the present invention, as shown in fig. 5, the device includes: a data acquisition unit 510, a sparse reconstruction unit 520 and a point cloud registration unit 530.

The data acquisition unit 510 is configured to acquire an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar; wherein the plurality of cameras are rigidly fixed;

the sparse reconstruction unit 520 is configured to perform sparse reconstruction based on the image sequence to obtain sparse three-dimensional point cloud data of a preset scene;

the point cloud registration unit 530 is configured to register the laser point cloud data and the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera of the multiple cameras, and determine pose calibration information between the laser radar and other cameras based on relative pose information between the multiple cameras.

According to the device provided by the embodiment of the invention, sparse reconstruction is performed on the image sequences acquired by the plurality of cameras to obtain sparse three-dimensional point cloud data, and point cloud registration between the laser point cloud data and the sparse three-dimensional point cloud data yields the pose calibration information between the laser radar and the cameras. Higher-precision camera-lidar relative pose calibration results and camera intrinsic calibration results can thus be obtained, without depending on a calibration object, giving higher robustness.

Based on any of the above embodiments, the data acquisition unit 510 is specifically configured to:

driving a vehicle around the preset scene along a figure-8 path, stopping at intervals during the circuit, and performing data acquisition with the plurality of cameras and the laser radar at each parking moment to obtain the image sequence and the laser point cloud data;

wherein the plurality of cameras and the lidar are fixed to the vehicle.

Based on any of the above embodiments, registering the laser point cloud data and the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera of the plurality of cameras, specifically including:

determining pose calibration information between the laser radar and the reference camera based on single pose calibration information of the laser radar and the reference camera at each parking moment;

the single pose calibration information of the laser radar and the reference camera at any parking time is obtained by registering laser point cloud data acquired by the laser radar at the parking time with sparse three-dimensional point cloud data.

Based on any one of the embodiments, determining the pose calibration information between the laser radar and the reference camera based on the single pose calibration information of the laser radar and the reference camera at each parking moment specifically includes:

determining the single pose calibration information of the laser radar and the reference camera at each parking moment in the current round by taking the pose calibration average value calculated in the previous round as the initial value, and calculating the pose calibration average value of the current round, until the variance of the currently determined single pose calibration information over the parking moments is not greater than a preset threshold or a maximum number of iterations is reached;

the pose calibration average value is the average of the single pose calibration information at the parking moments of the corresponding round.

Based on any embodiment, the single pose calibration information of the laser radar and the reference camera at any parking moment is determined based on the following steps:

for the laser three-dimensional points in the laser point cloud data acquired at the parking moment, determining the image three-dimensional points in the sparse three-dimensional point cloud data that match those laser three-dimensional points;

and estimating the relative pose based on each laser three-dimensional point and its matched image three-dimensional point, to obtain the single pose calibration information at that parking moment.

Based on any of the above embodiments, the sparse reconstruction unit 520 is specifically configured to:

respectively extracting the features of each image in the image sequence, and performing feature matching on each two images to obtain a matching point pair between each two images;

and constructing matching point tracks based on the matching point pairs between every two images, triangulating the matching point tracks, performing bundle adjustment, and reconstructing the sparse three-dimensional point cloud data.

Based on any of the above embodiments, the reprojection error function used in the bundle adjustment process is:

E_1 = E_{1,1} + E_{1,2}

wherein E_{1,1} is the reprojection error function of the reference camera C_{i1}, and E_{1,2} is the reprojection error function of the other cameras C_{ij}, j = 2, 3, 4;

wherein k denotes the serial number of an image three-dimensional point, j denotes the serial number of an image, i denotes the serial number of an auxiliary camera, x_{j,k} is an image feature point, K_j is the camera intrinsic matrix, R_j and t_j are the rotation and translation transforming an image three-dimensional point in space into the reference camera frame, \hat{R}_i and \hat{t}_i are the rotation and translation from the reference camera to the other cameras, X_k is an image three-dimensional point, and r denotes the projection process.

Fig. 6 illustrates the physical structure of an electronic device. As shown in fig. 6, the electronic device may include: a processor 610, a communications interface 620, a memory 630 and a communication bus 640, wherein the processor 610, the communications interface 620 and the memory 630 communicate with one another via the communication bus 640. The processor 610 may invoke logic instructions in the memory 630 to perform the camera and laser radar pose calibration method based on point cloud registration, the method comprising: respectively acquiring an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar, wherein the plurality of cameras are rigidly fixed relative to one another; performing sparse reconstruction based on the image sequence to obtain sparse three-dimensional point cloud data of the preset scene; and registering the laser point cloud data with the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera among the plurality of cameras, and determining the pose calibration information between the laser radar and the other cameras based on the relative pose information between the plurality of cameras.

In addition, the logic instructions in the memory 630 may be implemented in software functional units and stored in a computer readable storage medium when the logic instructions are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.

In another aspect, the present invention further provides a computer program product, the computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to execute the camera and laser radar pose calibration method based on point cloud registration provided above, the method comprising: respectively acquiring an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar, wherein the plurality of cameras are rigidly fixed relative to one another; performing sparse reconstruction based on the image sequence to obtain sparse three-dimensional point cloud data of the preset scene; and registering the laser point cloud data with the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera among the plurality of cameras, and determining the pose calibration information between the laser radar and the other cameras based on the relative pose information between the plurality of cameras.

In yet another aspect, the present invention further provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the camera and laser radar pose calibration method based on point cloud registration described above, the method comprising: respectively acquiring an image sequence and laser point cloud data of a preset scene based on a plurality of cameras and a laser radar, wherein the plurality of cameras are rigidly fixed relative to one another; performing sparse reconstruction based on the image sequence to obtain sparse three-dimensional point cloud data of the preset scene; and registering the laser point cloud data with the sparse three-dimensional point cloud data to obtain pose calibration information between the laser radar and a reference camera among the plurality of cameras, and determining the pose calibration information between the laser radar and the other cameras based on the relative pose information between the plurality of cameras.

The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.

Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.

Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
