Combined calibration method and device for camera and laser radar and storage medium
Reading note: This technique, "Combined calibration method and device for camera and laser radar and storage medium", was designed and created by 唐得志, 赛影辉, 阴山慧, 肖飞, 陈开祥, and 李垚 on 2019-11-28. Main content: The application discloses a combined calibration method and device for a camera and a laser radar, and a storage medium, belonging to the technical field of computer vision. The method comprises: determining the spatial coordinates of N feature points of a target calibration target according to the laser point cloud of the target calibration target scanned by a laser radar, wherein N is determined by the shape of the target calibration target; detecting the plane coordinates of the N feature points through a camera; and determining the attitude information of the laser radar and the camera according to the spatial coordinates and the plane coordinates of the N feature points, so as to realize the combined calibration of the camera and the laser radar. By determining the spatial coordinates of the feature points from the laser point cloud of the target calibration target obtained by the laser radar, detecting the plane coordinates of the feature points in the image through the camera, and determining the attitude information of the laser radar and the camera from the spatial and plane coordinates, the calibration of the camera and the laser radar can be completed in a single pass, improving calibration efficiency and accuracy.
1. A combined calibration method for a camera and a laser radar is characterized by comprising the following steps:
according to laser point cloud of a target calibration target scanned by a laser radar, determining the spatial coordinates of N characteristic points of the target calibration target, wherein N is determined by the shape of the target calibration target;
detecting the plane coordinates of the N characteristic points through a camera;
and determining attitude information of the laser radar and the camera according to the space coordinates of the N characteristic points and the plane coordinates of the N characteristic points so as to realize the combined calibration of the camera and the laser radar.
2. The method of claim 1, wherein before determining the spatial coordinates of the N feature points of the target calibration target from the laser point cloud of the target calibration target scanned by the laser radar, further comprising:
acquiring laser point cloud of the laser radar;
and performing levelness calibration on the laser radar according to the laser point cloud.
3. The method of claim 2, wherein the performing levelness calibration on the laser radar according to the laser point cloud comprises:
projecting the laser point cloud to obtain a top view of the laser point cloud;
in the top view of the laser point cloud, determining a cross reticle centered on the position of the laser radar, wherein the cross reticle is symmetrical about the position of the laser radar;
in the top view of the laser point cloud, when any ring of the laser point cloud passes through the four vertices of the cross reticle, determining that the laser radar is level with the road surface on which the laser radar is located, so as to complete the levelness calibration of the laser radar.
4. The method of claim 1, wherein the determining the spatial coordinates of the N feature points of the target calibration target from the laser point cloud of the target calibration target scanned by the laser radar comprises:
clustering laser point clouds into a plurality of scanning line segments according to the distance between each laser radar point in the laser point clouds along the laser scanning direction and the change of the laser radar point direction;
clustering the scanning line segments into object segments to obtain a laser radar point set of a target plane of the target calibration target;
according to the point cloud intensity value and the intensity threshold value of the laser point cloud, eliminating interference points from the laser radar point set;
determining N boundary points of a preset area in the target calibration target according to the reflection intensity difference value of each laser radar point in the laser point cloud;
and determining the space coordinates of the intersection points of the boundary lines corresponding to the N boundary points as the space coordinates of the N characteristic points of the target calibration target.
5. The method of claim 4, wherein determining N boundary points of a preset area in the target calibration target based on the difference of the reflection intensity of each lidar point in the laser point cloud comprises:
for a laser radar subset of an ith scanning laser in a plurality of scanning lasers, determining whether a reflection intensity difference value between any two adjacent laser radar points in the laser radar subset is greater than a difference threshold value, wherein i is a positive integer greater than or equal to 1;
and when the reflection intensity difference value between any two adjacent laser radar points is greater than the difference threshold value, determining the average coordinate value of any two adjacent laser radar points as the coordinate value of any boundary point.
6. A camera and lidar combined calibration apparatus, the apparatus comprising:
the first determination module is used for determining the spatial coordinates of N characteristic points of a target calibration target according to the laser point cloud of the target calibration target scanned by a laser radar, wherein N is determined by the shape of the target calibration target;
the detection module is used for detecting the plane coordinates of the N characteristic points through a camera;
and the second determining module is used for determining the attitude information of the laser radar and the camera according to the space coordinates of the N characteristic points and the plane coordinates of the N characteristic points so as to realize the combined calibration of the camera and the laser radar.
7. The apparatus of claim 6, wherein the apparatus further comprises:
the acquisition module is used for acquiring laser point cloud of the laser radar;
and the calibration module is used for performing levelness calibration on the laser radar according to the laser point cloud.
8. The apparatus of claim 7, wherein the calibration module comprises:
the projection submodule is used for projecting the laser point cloud to obtain a top view of the laser point cloud;
the first determining submodule is used for determining, in the top view of the laser point cloud, a cross reticle centered on the position of the laser radar, wherein the cross reticle is symmetrical about the position of the laser radar;
and the second determining submodule is used for determining, in the top view of the laser point cloud, that the laser radar is level with the road surface when any ring of the laser point cloud passes through the four vertices of the cross reticle, so as to complete the levelness calibration of the laser radar.
9. The apparatus of claim 6, wherein the first determining module comprises:
the first clustering submodule is used for clustering the laser point cloud into a plurality of scanning line segments according to the distance between each laser radar point in the laser point cloud along the laser scanning direction and the change of the laser radar point direction;
the second clustering submodule is used for clustering the scanning line segments into object segments to obtain a laser radar point set of a target plane of the target calibration target;
the eliminating sub-module is used for eliminating interference points from the laser radar point set according to the point cloud intensity value and the intensity threshold value of the laser point cloud;
the third determining submodule is used for determining N boundary points of a preset area in the target calibration target according to the reflection intensity difference value of each laser radar point in the laser point cloud;
and the fourth determining submodule is used for determining the space coordinates of the intersection points of the boundary lines corresponding to the N boundary points as the space coordinates of the N characteristic points of the target calibration target.
10. A computer-readable storage medium, characterized in that the storage medium has stored therein a computer program which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 5.
Technical Field
The present application relates to the field of computer vision technologies, and in particular, to a method and an apparatus for jointly calibrating a camera and a laser radar, and a storage medium.
Background
With the development of unmanned vehicles, lidar and cameras have been applied to the field of unmanned vehicles as an important part of environmental perception. The objects such as pedestrians and vehicles in the image can be detected through the laser radar and the camera, and the laser radar and the camera are usually calibrated in advance to ensure the detection accuracy.
Currently, during calibration, the vertices of a polygonal plate, which may be a chessboard or a triangular plate, are found in the point cloud obtained by a laser radar and in the image captured by a camera, and the vertices are estimated by constructing convex hulls of the extracted plate's point cloud to achieve calibration. However, since the vertices only capture geometric features of the plate's outer contour, and these features are formed by the discontinuity of the scanning lines, the point cloud is insufficiently utilized and the calibration is inaccurate.
Disclosure of Invention
The application provides a camera and laser radar combined calibration method, device and storage medium, which can solve the problem of inaccurate calibration in the related technology. The technical scheme is as follows:
in one aspect, a joint calibration method for a camera and a laser radar is provided, where the method includes:
according to laser point cloud of a target calibration target scanned by a laser radar, determining the spatial coordinates of N characteristic points of the target calibration target, wherein N is determined by the shape of the target calibration target;
detecting the plane coordinates of the N characteristic points through a camera;
and determining attitude information of the laser radar and the camera according to the space coordinates of the N characteristic points and the plane coordinates of the N characteristic points so as to realize the combined calibration of the camera and the laser radar.
In some embodiments, before determining the spatial coordinates of the N feature points of the target calibration target according to the laser point cloud of the target calibration target scanned by the laser radar, the method further includes:
acquiring laser point cloud of the laser radar;
and performing levelness calibration on the laser radar according to the laser point cloud.
In some embodiments, the performing levelness calibration on the laser radar according to the laser point cloud includes:
projecting the laser point cloud to obtain a top view of the laser point cloud;
in the top view of the laser point cloud, determining a cross reticle centered on the position of the laser radar, wherein the cross reticle is symmetrical about the position of the laser radar;
in the top view of the laser point cloud, when any ring of the laser point cloud passes through the four vertices of the cross reticle, determining that the laser radar is level with the road surface on which the laser radar is located, so as to complete the levelness calibration of the laser radar.
In some embodiments, the determining the spatial coordinates of the N feature points of the target calibration target according to the laser point cloud of the target calibration target scanned by the laser radar includes:
clustering laser point clouds into a plurality of scanning line segments according to the distance between each laser radar point in the laser point clouds along the laser scanning direction and the change of the laser radar point direction;
clustering the scanning line segments into object segments to obtain a laser radar point set of a target plane of the target calibration target;
according to the point cloud intensity value and the intensity threshold value of the laser point cloud, eliminating interference points from the laser radar point set;
determining N boundary points of a preset area in the target calibration target according to the reflection intensity difference value of each laser radar point in the laser point cloud;
and determining the space coordinates of the intersection points of the boundary lines corresponding to the N boundary points as the space coordinates of the N characteristic points of the target calibration target.
In some embodiments, the determining N boundary points of a preset region in the target calibration target according to the reflection intensity difference of each lidar point in the laser point cloud includes:
for a laser radar subset of an ith scanning laser in a plurality of scanning lasers, determining whether a reflection intensity difference value between any two adjacent laser radar points in the laser radar subset is greater than a difference threshold value, wherein i is a positive integer greater than or equal to 1;
and when the reflection intensity difference value between any two adjacent laser radar points is greater than the difference threshold value, determining the average coordinate value of any two adjacent laser radar points as the coordinate value of any boundary point.
In another aspect, a combined calibration apparatus for a camera and a lidar is provided, where the apparatus includes:
the first determination module is used for determining the spatial coordinates of N characteristic points of a target calibration target according to the laser point cloud of the target calibration target scanned by a laser radar, wherein N is determined by the shape of the target calibration target;
the detection module is used for detecting the plane coordinates of the N characteristic points through a camera;
and the second determining module is used for determining the attitude information of the laser radar and the camera according to the space coordinates of the N characteristic points and the plane coordinates of the N characteristic points so as to realize the combined calibration of the camera and the laser radar.
In some embodiments, the apparatus further comprises:
the acquisition module is used for acquiring laser point cloud of the laser radar;
and the calibration module is used for performing levelness calibration on the laser radar according to the laser point cloud.
In some embodiments, the calibration module comprises:
the projection submodule is used for projecting the laser point cloud to obtain a top view of the laser point cloud;
the first determining submodule is used for determining, in the top view of the laser point cloud, a cross reticle centered on the position of the laser radar, wherein the cross reticle is symmetrical about the position of the laser radar;
and the second determining submodule is used for determining, in the top view of the laser point cloud, that the laser radar is level with the road surface when any ring of the laser point cloud passes through the four vertices of the cross reticle, so as to complete the levelness calibration of the laser radar.
In some embodiments, the first determining module comprises:
the first clustering submodule is used for clustering the laser point cloud into a plurality of scanning line segments according to the distance between each laser radar point in the laser point cloud along the laser scanning direction and the change of the laser radar point direction;
the second clustering submodule is used for clustering the scanning line segments into object segments to obtain a laser radar point set of a target plane of the target calibration target;
the eliminating sub-module is used for eliminating interference points from the laser radar point set according to the point cloud intensity value and the intensity threshold value of the laser point cloud;
the third determining submodule is used for determining N boundary points of a preset area in the target calibration target according to the reflection intensity difference value of each laser radar point in the laser point cloud;
and the fourth determining submodule is used for determining the space coordinates of the intersection points of the boundary lines corresponding to the N boundary points as the space coordinates of the N characteristic points of the target calibration target.
In some embodiments, the third determination submodule is to:
for a laser radar subset of an ith scanning laser in a plurality of scanning lasers, determining whether a reflection intensity difference value between any two adjacent laser radar points in the laser radar subset is greater than a difference threshold value, wherein i is a positive integer greater than or equal to 1;
and when the reflection intensity difference value between any two adjacent laser radar points is greater than the difference threshold value, determining the average coordinate value of any two adjacent laser radar points as the coordinate value of any boundary point.
In another aspect, a terminal is provided, where the terminal includes a memory and a processor, the memory is used to store a computer program, and the processor is used to execute the computer program stored in the memory, so as to implement the steps of the camera and lidar joint calibration method described above.
In another aspect, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when being executed by a processor, implements the steps of the camera and lidar joint calibration method described above.
In another aspect, a computer program product comprising instructions is provided, which when run on a computer, causes the computer to perform the steps of the above-described method for joint calibration of a camera and a lidar.
The technical scheme provided by the application can at least bring the following beneficial effects:
in the application, the space coordinate of the characteristic point can be determined through the laser point cloud of the target calibration target obtained from the laser radar, the plane coordinate of the characteristic point in the image is detected through the camera, the attitude information of the laser radar and the camera is determined through the space coordinate and the plane coordinate, the calibration of the camera and the laser radar can be completed only once, and the calibration efficiency and accuracy are improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a schematic illustration of an implementation environment provided by an embodiment of the present application;
fig. 2 is a flowchart of a method for jointly calibrating a camera and a lidar according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another method for jointly calibrating a camera and a lidar according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a target calibration target according to an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a combined calibration device for a camera and a lidar, provided by an embodiment of the present application;
FIG. 6 is a schematic structural diagram of another combined calibration device for a camera and a lidar, provided by an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a calibration module according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a first determining module provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Before explaining the combined calibration method of the camera and the laser radar provided by the embodiment in detail, an application scenario and an implementation environment provided by the embodiment of the present application are introduced.
First, an application scenario related to the embodiment of the present application is described.
In order to realize unmanned driving, environment sensing is usually performed through a laser radar and a camera, and in order to ensure the accuracy of the environment sensing, the laser radar and the camera need to be calibrated in advance.
Currently, during calibration, the vertices of a polygonal plate, which may be a chessboard or a triangular plate, are found in the point cloud obtained by a laser radar and in the image captured by a camera, and the vertices are estimated by constructing convex hulls of the extracted plate's point cloud to achieve calibration. However, since the vertices only capture geometric features of the plate's outer contour, these features are formed by the discontinuity of the scanning lines, and the plate's internal information is not used, the point cloud is insufficiently utilized and the calibration is inaccurate.
Based on the scene, the embodiment of the application provides a camera and laser radar combined calibration method for improving the sufficiency and calibration accuracy of point cloud utilization.
Next, a system architecture according to an embodiment of the present application will be described.
Referring to FIG. 1, FIG. 1 is a schematic diagram illustrating an implementation environment in accordance with an example embodiment. The implementation environment includes at least one terminal 101 and an automobile 102.
The terminal 101 may be any electronic product capable of human-computer interaction with a user through one or more of a keyboard, a touch pad, a touch screen, a remote controller, voice interaction, or a handwriting device, for example, a PC (Personal Computer), a mobile phone, a smartphone, a PDA (Personal Digital Assistant), a wearable device, a pocket PC, a tablet computer, a smart car, a smart television, a smart speaker, and the like.
The automobile 102 may be any automobile having a lidar and a camera mounted thereon.
Those skilled in the art will appreciate that the terminal 101 and the automobile 102 are only examples, and that other existing or future terminals or automobiles, where applicable to the present application, should also be included within the scope of protection of the present application.
The combined calibration method for the camera and the lidar provided by the embodiments of the present application will be explained in detail with reference to the accompanying drawings.
Fig. 2 is a flowchart of a method for jointly calibrating a camera and a lidar according to an embodiment of the present disclosure, where the method is applied to a terminal. Referring to fig. 2, the method includes the following steps.
Step 201: according to the laser point cloud of a target calibration target scanned by a laser radar, determining the space coordinates of N characteristic points of the target calibration target, wherein N is determined by the shape of the target calibration target.
Step 201: the plane coordinates of the N feature points are detected by a camera.
Step 203: determining the attitude information of the laser radar and the camera according to the spatial coordinates and the plane coordinates of the N feature points, so as to realize the combined calibration of the camera and the laser radar.
In the method, the space coordinates of the characteristic points are determined through the laser point cloud of the target calibration target obtained from the laser radar, the plane coordinates of the characteristic points in the image are detected through the camera, the attitude information of the laser radar and the camera is determined through the space coordinates and the plane coordinates, the calibration of the camera and the laser radar can be completed only once, and the calibration efficiency and accuracy are improved.
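The attitude determination described above is a classical 3-D-to-2-D correspondence problem. The patent does not name a particular solver; for a planar calibration target one would typically use a PnP solver with known camera intrinsics (e.g. OpenCV's `solvePnP`). As a minimal sketch, under the assumption of non-coplanar feature points and no separately calibrated intrinsics, the Direct Linear Transform below recovers the full 3x4 projection matrix mapping the space coordinates to the plane coordinates; all function names here are illustrative, not from the source:

```python
import numpy as np

def dlt_projection(space_pts, plane_pts):
    """Direct Linear Transform: estimate the 3x4 projection matrix P such that
    [u, v, 1] ~ P @ [X, Y, Z, 1], from at least 6 non-degenerate 3-D/2-D
    correspondences. Each correspondence contributes two linear equations."""
    A = []
    for (X, Y, Z), (u, v) in zip(space_pts, plane_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution (up to scale) is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def reproject(P, X):
    """Project a 3-D point through P and dehomogenize to pixel coordinates."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```

Note that the feature points of a planar calibration target are coplanar, which is degenerate for a plain DLT; in that case a homography-based or iterative PnP solver with known intrinsics is the appropriate choice.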
In some embodiments, before determining the spatial coordinates of the N feature points of the target calibration target according to the laser point cloud of the target calibration target scanned by the laser radar, the method further includes:
acquiring laser point cloud of the laser radar;
and performing levelness calibration on the laser radar according to the laser point cloud.
In some embodiments, performing levelness calibration on the laser radar according to the laser point cloud comprises:
projecting the laser point cloud to obtain a top view of the laser point cloud;
in the top view of the laser point cloud, determining a cross reticle centered on the position of the laser radar, wherein the cross reticle is symmetrical about the position of the laser radar;
in the top view of the laser point cloud, when any ring of the laser point cloud passes through the four vertices of the cross reticle, determining that the laser radar is level with the road surface on which the laser radar is located, so as to complete the levelness calibration of the laser radar.
In some embodiments, determining the spatial coordinates of N feature points of a target calibration target according to a laser point cloud of the target calibration target scanned by a laser radar includes:
clustering the laser point cloud into a plurality of scanning line segments according to the distance between each laser radar point in the laser point cloud along the laser scanning direction and the change of the laser radar point direction;
clustering the scanning line segments into object segments to obtain a laser radar point set of a target plane of the target calibration target;
according to the point cloud intensity value and the intensity threshold value of the laser point cloud, eliminating interference points from the laser radar point set;
determining N boundary points of a preset area in the target calibration target according to the reflection intensity difference value of each laser radar point in the laser point cloud;
and determining the space coordinates of the intersection points of the boundary lines corresponding to the N boundary points as the space coordinates of the N characteristic points of the target calibration target.
In some embodiments, determining N boundary points of a preset region in the target calibration target according to the reflection intensity difference of each lidar point in the laser point cloud includes:
for a laser radar subset of an ith scanning laser in a plurality of scanning lasers, determining whether a reflection intensity difference value between any two adjacent laser radar points in the laser radar subset is greater than a difference threshold value, wherein i is a positive integer greater than or equal to 1;
and when the difference value of the reflection intensity between any two adjacent laser radar points is larger than the difference threshold value, determining the average coordinate value of any two adjacent laser radar points as the coordinate value of any boundary point.
All of the above optional technical solutions can be combined arbitrarily to form optional embodiments of the present application, which are not described in detail again here.
Fig. 3 is a flowchart of a method for calibrating a camera and a lidar according to an embodiment of the present disclosure, and referring to fig. 3, the method includes the following steps.
Step 301: and the terminal calibrates the levelness of the laser radar.
It should be noted that, when performing levelness calibration on the laser radar, the laser radar can be mounted on a tripod and the tripod fixed to the roof of the car. A relatively level road surface with few obstructions is selected, and the laser radar is operated to obtain a laser point cloud, which is projected onto a top view. A slider can be provided to adjust the size of the laser radar top view, and the position of the laser radar is marked on the top view.
As an example, the terminal may perform levelness calibration on the laser radar through a cross intersection method. That is, the terminal can acquire the laser point cloud of the laser radar and perform levelness calibration on the laser radar according to the laser point cloud.
As an example, the levelness calibration based on the laser point cloud may proceed as follows: projecting the laser point cloud to obtain a top view of the laser point cloud; in the top view of the laser point cloud, determining a cross reticle centered on the position of the laser radar and symmetrical about that position; and, in the top view of the laser point cloud, when any ring of the laser point cloud passes through the four vertices of the cross reticle, determining that the laser radar is level with the road surface on which it is located, thereby completing the levelness calibration of the laser radar.
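The cross intersection check above can be sketched as follows. This is a hedged, minimal interpretation (the function names and the tolerance value are assumptions, not from the source): the point cloud is projected to a top view, and a scan ring is tested for passing within a small tolerance of the four symmetric vertices of the cross reticle.

```python
import numpy as np

def top_view(points):
    """Project 3-D lidar points onto the horizontal plane (drop z)
    to obtain the top view used for the levelness check."""
    return np.asarray(points, dtype=float)[:, :2]

def ring_passes_vertices(ring_xy, vertices, tol=0.05):
    """Return True if the scan ring passes within `tol` of every vertex of the
    cross reticle (four symmetric end points centred on the lidar position)."""
    ring = np.asarray(ring_xy, dtype=float)
    return all(np.linalg.norm(ring - np.asarray(v, dtype=float), axis=1).min() <= tol
               for v in vertices)
```

For example, a level lidar produces a circular ring in the top view that touches all four vertices of a reticle of matching radius, while a tilted lidar produces an elliptical ring that misses at least one vertex.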
In some embodiments, the terminal may calibrate the levelness of the laser radar in the above manner, or in other manners. For example, because the laser radar generates a ring of annular point cloud on the top view of the laser point cloud, if the annular point cloud is close to a circle, the laser radar is approximately level with the road surface on which it is located. The terminal can therefore determine the similarity between the shape of the annular point cloud and a circle; when this similarity is greater than a similarity threshold, the laser radar is determined to be level with the road surface, completing the levelness calibration. When the similarity is less than or equal to the similarity threshold, the laser radar is determined not to be level with the road surface, the position of the laser radar is adjusted, and the terminal repeats the levelness calibration operation until the laser radar is level with the road surface.
It should be noted that the similarity threshold may be set in advance according to requirements, for example, the similarity threshold may be 90%, 95%, and so on.
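The ring-circularity check described above can be sketched as follows. This is a minimal illustration, not the patent's definition: the similarity measure (one minus the relative spread of the point radii about the lidar position) and the function names are assumptions.

```python
import numpy as np

def ring_circularity(points_xy, center=(0.0, 0.0)):
    """Similarity of a ring of top-view points to a perfect circle around
    `center`: 1 - (std of radii / mean radius), clipped into [0, 1]."""
    d = np.asarray(points_xy, dtype=float) - np.asarray(center, dtype=float)
    radii = np.hypot(d[:, 0], d[:, 1])
    mean_r = radii.mean()
    if mean_r == 0:
        return 0.0
    return float(np.clip(1.0 - radii.std() / mean_r, 0.0, 1.0))

def lidar_is_level(points_xy, threshold=0.95):
    """Level when the annular point cloud is sufficiently close to a circle."""
    return ring_circularity(points_xy) > threshold
```

A tilted lidar stretches the ring into an ellipse in the top view, which lowers the circularity score below the threshold and triggers a position adjustment.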
Step 302: the terminal determines the spatial coordinates of N feature points of the target calibration target according to the laser point cloud obtained by the laser radar scanning the target calibration target, where N is determined by the shape of the target calibration target.
It should be noted that the target calibration target is a calibration target preset for implementing calibration, and the shape of the target calibration target may be a triangle, a hollowed triangle, a quadrangle, and the like, for example, the target calibration target may be a quadrangle as shown in fig. 5.
In some embodiments, the terminal may know the shape of the target calibration target in advance to determine the value of N, or perform image recognition on the target calibration target through a camera mounted on the vehicle to determine the value of N.
As an example, the operation of determining, by the terminal, the spatial coordinates of N feature points of the target calibration target according to the laser point cloud of the laser radar may be: clustering the laser point cloud into a plurality of scanning line segments according to the distance between each laser radar point in the laser point cloud along the laser scanning direction and the change of the laser radar point direction; clustering a plurality of scanning line segments into object segments to obtain a laser radar point set of a target plane of a target calibration target; according to the point cloud intensity value and the intensity threshold value of the laser point cloud, eliminating interference points from the laser radar point set; determining N boundary points of a preset area in the target calibration target according to the reflection intensity difference value of each laser radar point in the laser point cloud; and determining the space coordinates of the intersection points of the boundary lines corresponding to the N boundary points as the space coordinates of the N characteristic points of the target calibration target.
It should be noted that the terminal may process the point cloud by a scanning line-based segmentation method, that is, the terminal clusters the laser point cloud into a plurality of scanning line segments according to the distance between each laser radar point in the laser point cloud along the laser scanning direction and the change of the laser radar point direction, and then clusters the plurality of scanning line segments into an object segment to obtain a laser radar point set of the target plane of the target calibration target.
It should be noted that the distance between each lidar point may refer to a distance between a preset number of consecutive points, and the preset number may be set in advance, for example, the preset number may be 5, 6, and so on.
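The scan-line clustering step can be sketched as follows: one scan line is split into segments wherever the gap between consecutive points jumps well above the local point spacing. The gap ratio and function name are illustrative assumptions.

```python
import numpy as np

def split_scan_line(points, gap_ratio=2.5):
    """Split one scan line (points ordered along the scan direction) into
    segments at large jumps in the distance between consecutive points."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 2:
        return [pts]
    gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    median_gap = np.median(gaps)
    # A gap far above the typical spacing marks a boundary between objects.
    breaks = np.where(gaps > gap_ratio * median_gap)[0] + 1
    return np.split(pts, breaks)
```

The resulting segments would then be grouped into object segments (for example by the principal-component similarity described next) to isolate the target plane.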
As an example, the terminal can decompose all three-dimensional points in the plurality of scanning line segments into three principal components (the three coordinate directions) by principal component analysis, and cluster the three-dimensional points into object segments according to similarity, to obtain the laser radar point set of the target plane of the target calibration target.
where p represents a lidar point and L represents the lidar. In some embodiments, since there may be interference points in the lidar point set, which would affect the accuracy of the calibration, the terminal may determine an intensity threshold to exclude the interference points. The intensity threshold may be an intensity threshold region.
Because the target plane has only two colors, black and white, the terminal can obtain a point cloud intensity map of the target plane by using the correspondence between the reflection intensity and the pattern color. According to the point cloud intensity map, taking the point cloud intensity value as the abscissa and the number of point clouds corresponding to each intensity value as the ordinate, the terminal can find the peak abscissas (r_l, r_h) of the point cloud counts on the left (low-intensity) and right (high-intensity) sides, and can then determine an intensity threshold region from the peak abscissas (r_l, r_h).
As an example, the terminal may determine the intensity threshold region by the following first formula.
In the first formula (1), r_l is the intensity value corresponding to the point cloud count peak on the left (low-intensity) side, r_h is the intensity value corresponding to the point cloud count peak on the right (high-intensity) side, τ_l is the lowest intensity threshold, and τ_h is the highest intensity threshold.
As an example, the terminal may compare the intensity value of each point in the lidar point set with the intensity threshold region to determine whether it is an interference point: when the intensity value of a point falls within the intensity threshold region, the point is determined to belong to the target plane; otherwise it is an interference point, and when an interference point is determined it is excluded. The N feature points are the intersection points of the boundary lines of the target calibration target, so the terminal needs to acquire the boundary lines, and to this end can determine the inner and outer boundary points of the preset area in the target calibration target.
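A minimal sketch of how the intensity peaks, the threshold region, and the interference-point filter might be realized. The margin around the peaks and the fixed split between the black and white modes of the histogram are illustrative assumptions, not the patent's formula (1).

```python
import numpy as np

def intensity_thresholds(intensities, margin=5, split=128):
    """Locate the point-count peaks of the black (low) and white (high)
    returns in the intensity histogram, then widen them by `margin` to
    obtain the threshold region [tau_l, tau_h]."""
    hist = np.bincount(np.asarray(intensities).astype(int), minlength=256)
    r_l = int(np.argmax(hist[:split]))           # low-side peak abscissa
    r_h = int(np.argmax(hist[split:]) + split)   # high-side peak abscissa
    return r_l - margin, r_h + margin

def keep_target_points(points, intensities, tau_l, tau_h):
    """Keep only points whose intensity lies inside the threshold region."""
    inten = np.asarray(intensities)
    mask = (inten >= tau_l) & (inten <= tau_h)
    return np.asarray(points)[mask]
```

Points whose returns fall outside the region around either peak are treated as interference and dropped before boundary extraction.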
It should be noted that, when the target calibration target is the calibration target shown in fig. 4, the preset area may be a black area shown in fig. 4.
As an example, the operation of determining, by the terminal, N boundary points of a preset region in the target calibration target according to the reflection intensity difference of each lidar point in the laser point cloud may be: for a laser radar subset of the ith scanning laser in the multiple scanning lasers, determining whether the reflection intensity difference value between any two adjacent laser radar points in the laser radar subset is greater than a difference threshold value, wherein i is a positive integer greater than or equal to 1; and when the reflection intensity difference value between any two adjacent laser radar points is greater than the difference threshold value, determining the average coordinate value of any two adjacent laser radar points as the coordinate value of any boundary point.
Because the laser radar scans with a plurality of scanning lasers, each scanning laser yields a number of point clouds, so the lidar point set comprises a plurality of lidar subsets, each corresponding to one scanning laser. The lidar subset of the ith scanning laser can be regarded as the ordered sequence of lidar points along that scan line.
The terminal may determine whether a difference in reflection intensity between any two adjacent lidar points in the lidar subset is greater than a difference threshold, that is, the terminal may determine whether a difference in reflection intensity between any two adjacent lidar points is greater than the difference threshold by using a second formula, and when the difference in reflection intensity between any two adjacent lidar points is greater than the difference threshold, it is determined that the boundary point is located between any two adjacent lidar points. Therefore, the average coordinate value of any two adjacent laser radar points can be determined as the coordinate value of any boundary point.
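The adjacent-difference test and midpoint rule described above can be sketched as follows for a single scan line (function name and array layout are illustrative):

```python
import numpy as np

def boundary_points(scan_points, intensities, tau):
    """For one scan line, wherever the reflection-intensity difference of
    two adjacent points exceeds the difference threshold tau, take the
    average of their coordinates as a boundary point."""
    pts = np.asarray(scan_points, dtype=float)
    inten = np.asarray(intensities, dtype=float)
    found = []
    for j in range(len(pts) - 1):
        if abs(inten[j] - inten[j + 1]) > tau:
            # The true edge lies between the two samples; use their midpoint.
            found.append((pts[j] + pts[j + 1]) / 2.0)
    return np.array(found)
```

Each black/white transition along the scan line therefore contributes one boundary point, placed halfway between the two straddling lidar samples.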
The second formula (2) is: |I_j - I_(j+1)| > τ, where I_j is the intensity value corresponding to the coordinates of the jth point, I_(j+1) is the intensity value corresponding to the coordinates of the (j+1)th point, and τ is the difference threshold. As an example, the terminal may determine the average coordinate value of the two adjacent lidar points by the following third formula.
The third formula (3) is: m_ik = (p_j + p_(j+1)) / 2, where m_ik is the average coordinate value, p_j is the coordinate of the jth point, and p_(j+1) is the coordinate of the (j+1)th point. In some embodiments, when the ith scanning laser intersects the preset area of the target calibration target, i.e., the black area, there may be two boundary points or four boundary points. When there are two boundary points, the scanning laser lies in the upper or lower part of the target calibration target (above or below the hollow area); when there are four boundary points, the scanning laser lies in the middle part of the target calibration target (across the hollow area). The terminal may define the boundary points of the black area detected by the laser radar as a vector: when there are two boundary points, the vector may be (m_i1, ×, ×, m_i4), where × indicates absent; when there are four boundary points, the vector may be (m_i1, m_i2, m_i3, m_i4). That is, after the black-area boundary points are written in vector form, the vectors may include (m_11, ×, ×, m_14), (m_21, m_22, m_23, m_24), …, (m_i1, ×, ×, m_i4).
Note that the vectors (m_11, m_21, …, m_i1) and (m_14, m_24, …, m_i4) represent the left and right outer boundary line points respectively, and (×, m_22, …, m_(i-1)2, ×) and (×, m_23, …, m_(i-1)3, ×) (here i > 1) represent the left and right inner boundary line points respectively, so the terminal can determine the eight inner and outer boundary lines and thereby obtain eight boundary line intersection points.
In some embodiments, the terminal can record the intersection points of the outer boundary lines of the black area in the lidar coordinate system, in counterclockwise order, as p_L1, p_L2, p_L3, p_L4, and the intersection points of the inner boundary lines, in counterclockwise order, as p′_L1, p′_L2, p′_L3, p′_L4, thereby obtaining eight three-dimensional feature points.
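Once the boundary points are grouped into lines, each line can be fitted and neighbouring lines intersected to obtain the feature points. A minimal two-dimensional sketch (in practice the three-dimensional boundary points would first be projected into the target plane; names and the total-least-squares fit are illustrative assumptions):

```python
import numpy as np

def fit_line(points_2d):
    """Fit a 2-D line a*x + b*y + c = 0 to points by total least squares."""
    pts = np.asarray(points_2d, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - centroid)
    direction = vh[0]                       # dominant direction of the points
    normal = np.array([-direction[1], direction[0]])
    c = -normal @ centroid
    return normal[0], normal[1], c

def intersect(l1, l2):
    """Intersection point of two lines given as (a, b, c) coefficients."""
    A = np.array([l1[:2], l2[:2]], dtype=float)
    b = -np.array([l1[2], l2[2]], dtype=float)
    return np.linalg.solve(A, b)
```

Applying this to the four outer and four inner boundary lines yields the eight intersection points recorded as p_L1 … p_L4 and p′_L1 … p′_L4.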
Step 303: and the terminal detects the plane coordinates of the N characteristic points through the camera.
It should be noted that the camera may obtain one frame of image containing the target calibration target; the terminal pre-processes the obtained image and detects the plane coordinates of the N feature points from the pre-processed image.
As an example, the operation of the terminal to pre-process the image may be: carrying out graying processing on the image to obtain a grayscale image; and carrying out binarization on the gray level image to obtain a binary image.
As an example, the operation of the terminal detecting the plane coordinates of the N feature points from the pre-processed image may be as follows: when the gray value of a pixel in the binary image is greater than a pixel threshold, the gray value of that pixel is set to the pixel threshold; otherwise the gray value is left unchanged. The terminal can then perform pixel-level corner detection in a 5 × 5 window with the Harris corner detection algorithm to obtain pixel-level corner points, and carry out a weighted centering operation in the optimal window with the Förstner operator to obtain sub-pixel corners and determine the plane coordinates of the N feature points.
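The grayscale-and-binarize pre-processing can be sketched as follows. This is a minimal numpy sketch; the luminance weights and threshold constant are illustrative assumptions, and the subsequent Harris and Förstner corner steps (available, for example, in OpenCV as cornerHarris and cornerSubPix) are omitted.

```python
import numpy as np

def to_binary(image_rgb, pixel_threshold=128):
    """Grayscale an RGB image, then binarize it: pixels at or above the
    threshold become 255, the rest become 0."""
    img = np.asarray(image_rgb, dtype=float)
    # Standard luminance weighting for the graying step.
    gray = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]
    return np.where(gray >= pixel_threshold, 255, 0).astype(np.uint8)
```

The binary image exposes the black/white pattern of the target so that corner detection only has to deal with two gray levels.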
In some embodiments, the terminal may set the number of feature points to 8 and, after determining the plane coordinates of the N feature points, output accurate corner coordinates. The terminal can record the feature points on the outer boundary lines of the black area in the camera coordinate system, in counterclockwise order, as p_C1, p_C2, p_C3, p_C4, and the feature points on the inner boundary lines, in counterclockwise order, as p′_C1, p′_C2, p′_C3, p′_C4, i.e., the plane coordinates of the eight feature points.
Step 304: and the terminal determines attitude information of the laser radar and the camera according to the space coordinates of the N characteristic points and the plane coordinates of the N characteristic points so as to realize the combined calibration of the camera and the laser radar.
As an example, the terminal may determine the absolute attitude information of the laser radar and the camera by a perspective-n-point method according to the spatial coordinates of the N feature points and the plane coordinates of the N feature points, so as to obtain the rotation matrix and the translation vector from the lidar coordinate system to the camera coordinate system. In some embodiments, the terminal may adjust the laser radar and the camera based on the attitude information of the laser radar and the camera.
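The perspective-n-point step can be sketched with a direct linear transform (DLT). This is a minimal illustration, not the patent's exact method: it assumes known camera intrinsics K and at least six correspondences in general (non-coplanar) position, whereas the eight target feature points are coplanar, for which a planar PnP variant would be used in practice; all names are illustrative.

```python
import numpy as np

def dlt_pose(points_3d, points_2d, K):
    """Recover the rotation R and translation t from 3-D/2-D point
    correspondences and camera intrinsics K via the direct linear
    transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vh = np.linalg.svd(np.asarray(A, dtype=float))
    P = vh[-1].reshape(3, 4)           # projection matrix, up to scale
    Rt = np.linalg.inv(K) @ P          # equals scale * [R | t]
    U, S, Vh = np.linalg.svd(Rt[:, :3])
    R = U @ Vh                         # nearest rotation matrix
    t = Rt[:, 3] / S.mean()            # divide out the unknown scale
    if np.linalg.det(R) < 0:           # resolve the overall sign ambiguity
        R, t = -R, -t
    return R, t
```

Feeding the eight lidar-frame spatial coordinates and the matching image-plane coordinates into such a solver yields the rotation matrix and translation vector that align the two sensors.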
In the embodiment of the application, the terminal can calibrate the levelness of the laser radar, determine the spatial coordinates of the feature points from the laser point cloud of the target calibration target obtained by the laser radar, detect the plane coordinates of the feature points in the image through the camera, and determine the attitude information of the laser radar and the camera from the spatial and plane coordinates. The calibration of the camera and the laser radar can thus be completed in a single pass, which improves calibration efficiency and accuracy.
After explaining the combined calibration method for the camera and the lidar provided by the embodiment of the present application, a combined calibration device for the camera and the lidar provided by the embodiment of the present application is introduced next.
Fig. 5 is a schematic structural diagram of a combined calibration apparatus for a camera and a lidar according to an embodiment of the present disclosure. The apparatus may be implemented as part or all of a terminal by software, hardware, or a combination of the two, and the terminal may be the terminal shown in fig. 1. Referring to fig. 5, the apparatus includes: a first determining module, configured to determine the spatial coordinates of the N feature points of the target calibration target according to the laser point cloud of the target calibration target scanned by the laser radar, where N is determined by the shape of the target calibration target; a detection module, configured to detect the plane coordinates of the N feature points through the camera; and a second determining module, configured to determine the attitude information of the laser radar and the camera according to the spatial coordinates and the plane coordinates of the N feature points, so as to realize the combined calibration of the camera and the laser radar.
In some embodiments, referring to fig. 6, the apparatus further comprises:
an obtaining module, configured to obtain the laser point cloud scanned by the laser radar; and a calibration module, configured to perform levelness calibration on the laser radar according to the laser point cloud.
In some embodiments, referring to fig. 7, the calibration module comprises: a projection sub-module, configured to project the laser point cloud to obtain a top view of the laser point cloud; a first determining sub-module, configured to determine, in the top view of the laser point cloud, a cross reticle that is symmetrical about the position of the laser radar, with that position as its center; and a second determining sub-module 5053, configured to determine, in the top view of the laser point cloud, that when the laser point cloud of the laser radar passes through the four vertices of the cross reticle, the laser radar is level with the road surface where it is located, so as to complete the levelness calibration of the laser radar.
In some embodiments, referring to fig. 8, the first determining module comprises: a first clustering sub-module, configured to cluster the laser point cloud into a plurality of scanning line segments according to the distance between the lidar points along the laser scanning direction and the change of the lidar point direction; a second clustering sub-module, configured to cluster the plurality of scanning line segments into object segments to obtain the lidar point set of the target plane of the target calibration target; an eliminating sub-module, configured to eliminate interference points from the lidar point set according to the point cloud intensity values and the intensity threshold; a third determining sub-module, configured to determine the N boundary points of the preset area in the target calibration target according to the reflection intensity difference of the lidar points; and a fourth determining sub-module, configured to determine the spatial coordinates of the intersection points of the boundary lines corresponding to the N boundary points as the spatial coordinates of the N feature points of the target calibration target.
In some embodiments, the third determining sub-module is configured to:
for a laser radar subset of an ith scanning laser in a plurality of scanning lasers, determining whether a reflection intensity difference value between any two adjacent laser radar points in the laser radar subset is greater than a difference threshold value, wherein i is a positive integer greater than or equal to 1;
and when the reflection intensity difference value between any two adjacent laser radar points is greater than the difference threshold value, determining the average coordinate value of any two adjacent laser radar points as the coordinate value of any boundary point.
In the embodiment of the application, the terminal can calibrate the levelness of the laser radar, determine the spatial coordinates of the feature points from the laser point cloud of the target calibration target obtained by the laser radar, detect the plane coordinates of the feature points in the image through the camera, and determine the attitude information of the laser radar and the camera from the spatial and plane coordinates. The calibration of the camera and the laser radar can thus be completed in a single pass, which improves calibration efficiency and accuracy.
It should be noted that: in the combined calibration device for the camera and the lidar, when the camera and the lidar are calibrated, only the division of the functional modules is used for illustration, and in practical application, the function distribution can be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the combined calibration device for the camera and the laser radar and the combined calibration method for the camera and the laser radar provided by the embodiments belong to the same concept, and specific implementation processes are detailed in the method embodiments and are not described herein again.
Fig. 9 is a block diagram of a terminal 900 according to an embodiment of the present disclosure. The terminal 900 may be a portable mobile terminal such as: a smartphone, a tablet, a laptop, or a desktop computer.
In general, the terminal 900 includes a processor and a memory. In some embodiments, the terminal 900 can also optionally include one or more peripheral devices.
In some embodiments, terminal 900 can also include one or more sensors 910.
Those skilled in the art will appreciate that the configuration shown in fig. 9 does not constitute a limitation of the terminal 900.
In some embodiments, a computer-readable storage medium is provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the joint calibration method for a camera and a lidar in the above embodiments. For example, the computer readable storage medium may be a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It is noted that the computer-readable storage medium referred to herein may be a non-volatile storage medium, in other words, a non-transitory storage medium.
It should be understood that all or part of the steps for implementing the above embodiments may be implemented by software, hardware, firmware or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The computer instructions may be stored in the computer-readable storage medium described above.
That is, in some embodiments, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the steps of the above-described method for joint calibration of a camera and a lidar.
The above-mentioned embodiments are provided not to limit the present application, and any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the protection scope of the present application.