Sensor fusion target detection method under bumpy road condition


Note: this technology, "Sensor fusion target detection method under bumpy road condition" (一种颠簸路况下的传感器融合目标检测方法), was designed and created by 王展, 王春燕, 吴刚, 刘晓强, 秦亚娟, 刘利峰, 张自宇 and 王一松 on 2020-12-16. Abstract: The invention discloses a sensor fusion target detection method under bumpy road conditions, comprising the following steps: processing and correcting the data detected by the lidar and the two global shutter cameras; performing feature extraction on the corrected lidar and global shutter camera data; using spatial synchronization to transform the feature-extracted lidar point cloud data from the lidar coordinate system into the pixel coordinate system of the image; using time synchronization to obtain the feature-extracted lidar point cloud data and global shutter camera data at the same moment; and obtaining the obstacle information output by combining the detection results of the lidar and the global shutter cameras at the current moment, the bump degree detected by the inertial measurement unit, and the prediction of the current moment made from the detection results of the previous n moments. The invention takes into account point cloud and image data correction on bumpy road surfaces, ensuring that the unmanned vehicle retains good accuracy under bumpy road conditions.

1. A sensor fusion target detection method under bumpy road conditions, characterized by comprising the following steps:

step 1): processing and correcting the data detected by the lidar and the two global shutter cameras;

step 2): performing feature extraction on the lidar and global shutter camera data corrected in step 1);

step 3): using spatial synchronization, transforming the feature-extracted lidar point cloud data from the lidar coordinate system into the pixel coordinate system of the image;

step 4): using time synchronization, obtaining the feature-extracted lidar point cloud data and global shutter camera data at the same moment;

step 5): obtaining the obstacle information output by combining the detection results of the lidar and the global shutter cameras at the current moment, the bump degree detected by the inertial measurement unit, and the prediction of the current moment made from the detection results of the previous n moments.

2. The sensor fusion target detection method under bumpy road conditions according to claim 1, wherein the lidar is arranged at the center of the top of the vehicle, and the two global shutter cameras are respectively arranged on the two sides of the bottom of the front windshield of the vehicle.

3. The sensor fusion target detection method under bumpy road conditions according to claim 1, wherein step 1) specifically comprises:

step 11): the lidar is corrected using an inertial measurement unit and a global positioning system: time synchronization with the global positioning system provides a timestamp for each laser point within a single frame; the high-frequency inertial measurement unit provides heading angle, yaw angle, pitch angle, vehicle speed and acceleration information; the multiple sets of consecutive acceleration and vehicle speed data covered by a single point cloud frame are interpolated to obtain time-varying functions, and an inverse operation is applied to the point cloud using the timestamps, the acceleration and the vehicle speed data to transform the point coordinates and avoid discrete deformation of the single-frame point cloud;

step 12): the point cloud information corrected in step 11) and the image information acquired by the global shutter cameras are combined, and the image data and point cloud data of the two frames are corrected and matched on the basis of the previous frame data using the data obtained by the inertial measurement unit.

4. The sensor fusion target detection method under bumpy road conditions according to claim 3, wherein the inverse operation in step 11) is specifically as follows: the coordinate position of the current point is corrected for the vehicle attitude difference accumulated from the timestamp of the initial point of the cloud to the timestamp of the current point, the vehicle attitude change being reflected in the heading angle, the yaw angle, the pitch angle and the longitudinal displacement; let the period of one scan of the high-frequency inertial measurement unit be T_imu, let the time from the initial point timestamp to the current point timestamp be t_lidar, let the heading, yaw and pitch angular rates interpolated (at the IMU sampling period T_imu) from the data measured by the high-frequency inertial measurement unit be ω_α, ω_β and ω_γ, and let the interpolated longitudinal acceleration be a_z; the attitude change [Δα Δβ Δγ] and the longitudinal displacement ΔZ are then obtained by integration over t_lidar:

Δα = ∫[0, t_lidar] ω_α dt,  Δβ = ∫[0, t_lidar] ω_β dt,  Δγ = ∫[0, t_lidar] ω_γ dt,  ΔZ = ∫[0, t_lidar] ( ∫[0, τ] a_z ds ) dτ

the coordinate correction matrix R of the point cloud is the rotation composed from the attitude change [Δα Δβ Δγ], and the corrected coordinates are:

[x' y' z']ᵀ = R·[x y z]ᵀ + [0 0 ΔZ]ᵀ

where [x' y' z']ᵀ are the corrected point cloud coordinates, R is the coordinate correction matrix of the point cloud, [x y z]ᵀ are the point cloud coordinates before correction, and [0 0 ΔZ]ᵀ is the longitudinal correction vector.

5. The sensor fusion target detection method under bumpy road conditions according to claim 3, wherein in step 12) inter-frame correction is performed: the real-time data measured by the high-frequency inertial measurement unit are recorded, the multiple sets of acceleration and vehicle speed data between two lidar/image frames are interpolated, and the angle change and the longitudinal displacement change are solved by integration; the two frames are then jointly corrected by matrix calculation and the corresponding relationship is calibrated.

6. The sensor fusion target detection method under bumpy road conditions according to claim 1, wherein step 2) specifically comprises:

step 21): the corrected point cloud data are segmented with a multi-feature multi-layer grid map, the grid map is clustered with a depth-value-based clustering method to obtain the obstacle edge information, and the obstacle center point information and distance information are retained;

step 22): the image data are processed with a stereoscopic vision disparity method and the deep-learning-based YOLOv3 algorithm, and the obstacle distance information and the position of the obstacle center in the image are output.

7. The sensor fusion target detection method under bumpy road conditions according to claim 6, wherein the grid division mentioned in step 21) divides the grids into three types, namely ground grids, obstacle grids and suspended grids, using the height feature and the intensity feature; the depth-value-based clustering changes the clustering distance threshold as the depth value changes, which alleviates the problems that distant objects are difficult to cluster and nearby objects are over-segmented.

8. The sensor fusion target detection method under bumpy road conditions according to claim 1, wherein the transformation from the radar coordinate system to the pixel coordinate system in step 3) proceeds as follows: the radar coordinate system is converted into the world coordinate system, the world coordinate system into the camera coordinate system, and finally the camera coordinate system into the pixel coordinate system, completing the coordinate unification of the detected data.

9. The sensor fusion target detection method under bumpy road conditions according to claim 1, wherein the time synchronization tool used in step 4) is the pulse generator of the global positioning system; the synchronization is performed by setting a trigger frequency according to the scanning frequency of the lidar, and each trigger acquires the lidar and global shutter camera data of the current frame; if the image has no data at that moment, interpolation is performed using the data of the previous and next moments.

10. The sensor fusion target detection method under bumpy road conditions according to claim 1, wherein the obstacle information output in step 5) is divided into three cases according to whether the current lidar and the global shutter camera both detect the obstacle information:

51) the lidar and the global shutter camera both detect the obstacle information;

The detection results are fused through a judgment function constructed jointly from the obstacle center point and the obstacle distance information:

F = w_u·|u_c − u_r| + w_r·|v_c − v_r| + w_dis·|d_c − d_r|

where (u_c, v_c) and (u_r, v_r) are the obstacle center-point coordinates detected by the camera and by the radar, respectively, in the pixel coordinate system, d_c and d_r are the distances to the obstacle measured by the global shutter camera and by the lidar, and w_u, w_r and w_dis are weights; the weight distribution is determined by the degree of bumping, which is in turn determined by the pitch angle, the yaw angle, the heading angle and the longitudinal acceleration;

A determination threshold F_u is set; when F < F_u the lidar and the global shutter camera are considered to have detected the same obstacle, and when F > F_u the detections are considered to belong to different obstacles;

The fused data and the predictions of the current moment made from the previous n moments are then combined by weighted average correction, and the following mathematical model is established:

the corrected estimate is Σ(i=0..n) k_i·X_i, with the weights k_i summing to 1, where X_0 is the data at the current moment and X_1~X_n are the data at the previous n moments; the weights are distributed according to whether the vehicle attitude changes abruptly; if the vehicle attitude changes abruptly at the current moment, then among the current moment and the previous n moments the moments at which an abrupt change occurs are given small weights in predicting the current moment, the moments at which no abrupt change occurs are given large weights, and the closer a moment is to the current moment the larger its weight; if no abrupt change occurs at the current moment, the current moment is given the largest weight and the weights of the other moments are distributed according to their distance from the current moment and whether an abrupt change occurred; finally, the fused and corrected data are output to the decision mechanism for path planning and intelligent steering;

52) only one of the lidar and the global shutter camera detects an obstacle target due to bumping;

When only one of the lidar and the global shutter camera works normally, the detection result of the normally working device at the current moment and the detection results of the previous n moments are used for weighted correction; the correction process is the same as the data correction used when the lidar and the global shutter camera are fused;

53) both the lidar and the global shutter camera lose the detection target due to bumping;

When both the lidar and the global shutter camera lose the detection target due to bumping, the predictions of the current moment made from the previous n moments are weighted-averaged to estimate the obstacle information at the current moment, namely the estimate is Σ(i=1..n) k_i·X_i with the weights k_i summing to 1, where X_1~X_n are the data at the previous n moments; the weights are distributed comprehensively according to whether an abrupt change occurs and according to the time from the current moment;

The prediction of the current moment from the i-th previous moment is computed from the obstacle information at the i-th moment together with the integration of the vertical acceleration, the vehicle speed, the heading angular velocity, the yaw angular velocity and the pitch angular velocity from the i-th moment to the current moment.

Technical Field

The invention belongs to the field of target detection and environment perception of unmanned technology, and particularly relates to a sensor fusion target detection method under bumpy road conditions.

Background

With the continuous improvement of technology, the vehicle industry in China is gradually entering the intelligent era. In the continuing development of intelligent vehicles, target detection, intelligent decision-making and low-level control have become the focus and the difficulty of intelligence development. At present, target detection work mostly concentrates on urban structured roads; however, when facing bumpy off-road surfaces, current algorithms struggle to provide good robustness.

Different sensors face different problems in a bumpy environment. For radar data, in a bumpy off-road environment the acquired data suffer from jitter, distortion, data loss and the like; the data collected by vision sensors also suffer from obstacle smearing caused by jitter. Therefore, target detection under bumpy road conditions requires special data processing methods and devices to achieve stable detection. In current research, target detection on bumpy road surfaces has received little attention, and a single radar sensor is mainly used, so robust detection under various conditions cannot be achieved well. For example, Chinese patent application No. CN201510634012.4, entitled "tracking device and tracking method for radar target position of vehicle on bumpy road", considers the problem of tracking failure on a bumpy road: it senses the bump condition of the road with a gyroscope and outputs the detected radar data, such as relative position, relative speed and following angle, in real time when the road surface is good or when the road bumps but the front target signal can still be detected. When the road bumps and the front vehicle target cannot be detected well, the data of the previous moment are used; this method does not correct the data and relies excessively on the previous moment, and when the road condition is poor and detection fails for consecutive frames, errors in the relative information and information lag can easily cause traffic accidents. Chinese patent application No. CN202010207660.2, entitled "method and system for detecting obstacle considering vehicle bump", emphasizes the processing of lidar signals, including single-frame attitude correction, single-frame longitudinal distortion correction, and multi-frame correction based on the IMU and the background. However, it ignores the target loss caused by bumping during detection, as well as the limitations and low detection accuracy of the lidar in rainy and foggy weather.

Therefore, existing obstacle detection systems pay relatively little attention to bumpy road conditions, mostly rely on a single sensor, and cannot adequately predict the relative information at the current moment when obstacle detection fails, which is needed to ensure the driving safety of the vehicle.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a sensor fusion target detection method under bumpy road conditions, so as to solve the problems of data distortion and data loss under bumpy road conditions in the prior art. The method fuses a lidar with two global shutter cameras: the image data are processed with a stereoscopic vision disparity method to detect whether an obstacle exists within a certain distance ahead, and the region of interest in which the obstacle lies is corrected and searched using the IMU (inertial measurement unit); the lidar data are processed with IMU single-frame correction, inter-frame correction and cluster analysis for feature extraction; the feature data of the lidar and of the cameras are then fused, the fused feature data are weighted together with the previous n moments, and detection schemes are provided for the cases in which the information from either sensor is lost.

In order to achieve the purpose, the technical scheme adopted by the invention is as follows:

the invention discloses a method for detecting a sensor fusion target under bumpy road conditions, which comprises the following steps:

step 1): processing and correcting data detected by the laser radar and the two global shutter cameras;

step 2): performing feature extraction on the laser radar and the two global shutter camera data corrected in the step 1);

step 3): transferring and transforming the laser radar point cloud data after the characteristic extraction from a laser radar coordinate system into a pixel coordinate system of an image by utilizing space synchronization;

step 4): obtaining laser radar point cloud data and global shutter camera data after feature extraction at the same time by utilizing time synchronization;

and 5) obtaining an obstacle information output result by the combined action of the detection result of the laser radar and the global shutter camera at the current moment, the detection bumping degree of the inertia measurement unit and the prediction result of the detection result of the previous n moments on the current moment.

Further, the lidar is arranged at the center of the top of the vehicle, and the two global shutter cameras are respectively arranged on the two sides of the bottom of the front windshield of the vehicle.

Further, the step 1) specifically includes:

step 11): the lidar is corrected using an inertial measurement unit and a global positioning system: time synchronization with the global positioning system provides a timestamp for each laser point within a single frame; the high-frequency inertial measurement unit provides heading angle, yaw angle, pitch angle, vehicle speed and acceleration information; the multiple sets of consecutive acceleration and vehicle speed data covered by a single point cloud frame are interpolated to obtain time-varying functions, and an inverse operation is applied to the point cloud using the timestamps, the acceleration and the vehicle speed data to transform the point coordinates and avoid discrete deformation of the single-frame point cloud;

step 12): the point cloud information corrected in step 11) and the image information acquired by the global shutter cameras are combined, and the image data and point cloud data of the two frames are corrected and matched on the basis of the previous frame data using the data obtained by the inertial measurement unit.

Further, the step 2) specifically includes:

step 21): the corrected point cloud data are segmented with a multi-feature multi-layer grid map, the grid map is clustered with a depth-value-based clustering method to obtain the obstacle edge information, and the obstacle center point information and distance information are retained;

step 22): the image data are processed with a stereoscopic vision disparity method and the deep-learning-based YOLOv3 algorithm, and the obstacle distance information and the position of the obstacle center in the image are output.

Further, the lidar and the global shutter cameras adopted in step 1) are a Velodyne HDL-64E mechanical lidar and Illunis CMV-50 cameras, respectively; using global shutter cameras effectively avoids the pixel distortion caused by bumping.

Further, the inverse operation in step 11) is specifically as follows: the coordinate position of the current point is corrected for the vehicle attitude difference accumulated from the timestamp of the initial point of the cloud to the timestamp of the current point, the vehicle attitude change being reflected in the heading angle, the yaw angle, the pitch angle and the longitudinal displacement; let the period of one scan of the high-frequency inertial measurement unit be T_imu, let the time from the initial point timestamp to the current point timestamp be t_lidar, let the heading, yaw and pitch angular rates interpolated (at the IMU sampling period T_imu) from the data measured by the high-frequency inertial measurement unit be ω_α, ω_β and ω_γ, and let the interpolated longitudinal acceleration be a_z; the attitude change [Δα Δβ Δγ] and the longitudinal displacement ΔZ are then obtained by integration over t_lidar:

Δα = ∫[0, t_lidar] ω_α dt,  Δβ = ∫[0, t_lidar] ω_β dt,  Δγ = ∫[0, t_lidar] ω_γ dt,  ΔZ = ∫[0, t_lidar] ( ∫[0, τ] a_z ds ) dτ

the coordinate correction matrix R of the point cloud is the rotation composed from the attitude change [Δα Δβ Δγ], and the corrected coordinates are:

[x' y' z']ᵀ = R·[x y z]ᵀ + [0 0 ΔZ]ᵀ

where [x' y' z']ᵀ are the corrected point cloud coordinates, R is the coordinate correction matrix of the point cloud, [x y z]ᵀ are the point cloud coordinates before correction, and [0 0 ΔZ]ᵀ is the longitudinal correction vector.

Further, the inter-frame correction mentioned in step 12) records the real-time data measured by the high-frequency inertial measurement unit, interpolates the multiple sets of acceleration and vehicle speed data between two lidar/image frames, and solves the angle change and the longitudinal displacement change by integration; the two frames are then jointly corrected by matrix calculation and the corresponding relationship is calibrated.

Further, the grid division mentioned in step 21) divides the grids into three types, namely ground grids, obstacle grids and suspended grids, using the height feature and the intensity feature; the depth-value-based clustering changes the clustering distance threshold as the depth value changes, which alleviates the problems that distant objects are difficult to cluster and nearby objects are over-segmented.

Further, in step 21) the grid map is clustered by a depth-value-based clustering method; the clustering threshold N_T is obtained from the formula N_T = [D_T / G], where G is the grid cell size; the connected region is then expanded to a size of (2N_T − 1) × (2N_T − 1) using the grid clustering threshold, and the grids within this connected region are decided to be the same obstacle; the distance threshold D_T is solved from a formula in which σ_r represents the measurement error of the lidar, Δφ is the horizontal angular resolution, r_{n−1} is the depth value of the obstacle-grid center point, and λ is a threshold parameter; determining the distance threshold according to the depth value r_n avoids over-segmentation at short range and under-segmentation at long range.

Further, the transformation from the radar coordinate system to the pixel coordinate system in step 3) proceeds as follows: the radar coordinate system is converted into the world coordinate system, the world coordinate system into the camera coordinate system, and finally the camera coordinate system into the pixel coordinate system, completing the coordinate unification of the detected data.

Further, the time synchronization tool used in step 4) is the pulse generator of the global positioning system; the synchronization is performed by setting a trigger frequency according to the scanning frequency of the lidar, and each trigger acquires the lidar and global shutter camera data of the current frame; if the image has no data at that moment, interpolation is performed using the data of the previous and next moments.

Further, the obstacle information output result in the step 5) is divided into three categories according to whether the current laser radar and the global shutter camera both detect the obstacle information:

(1) the lidar and the global shutter camera both detect the obstacle information;

The detection results are fused through a judgment function constructed jointly from the obstacle center point and the obstacle distance information:

F = w_u·|u_c − u_r| + w_r·|v_c − v_r| + w_dis·|d_c − d_r|

where (u_c, v_c) and (u_r, v_r) are the obstacle center-point coordinates detected by the camera and by the radar, respectively, in the pixel coordinate system, d_c and d_r are the distances to the obstacle measured by the global shutter camera and by the lidar, and w_u, w_r and w_dis are weights; the weight distribution is determined by the degree of bumping, which is in turn determined by the pitch angle, the yaw angle, the heading angle and the longitudinal acceleration;

A determination threshold F_u is set; when F < F_u the lidar and the global shutter camera are considered to have detected the same obstacle, and when F > F_u the detections are considered to belong to different obstacles;

The fused data and the predictions of the current moment made from the previous n moments are then combined by weighted average correction, and the following mathematical model is established:

the corrected estimate is Σ(i=0..n) k_i·X_i, with the weights k_i summing to 1, where X_0 is the data at the current moment and X_1~X_n are the data at the previous n moments; the weights are distributed according to whether the vehicle attitude changes abruptly; if the vehicle attitude changes abruptly at the current moment, then among the current moment and the previous n moments the moments at which an abrupt change occurs are given small weights in predicting the current moment, the moments at which no abrupt change occurs are given large weights, and the closer a moment is to the current moment the larger its weight; if no abrupt change occurs at the current moment, the current moment is given the largest weight and the weights of the other moments are distributed according to their distance from the current moment and whether an abrupt change occurred; finally, the fused and corrected data are output to the decision mechanism for path planning and intelligent steering;

(2) only one of the lidar and the global shutter camera detects an obstacle target due to bumping;

When only one of the lidar and the global shutter camera works normally, the detection result of the normally working device at the current moment and the detection results of the previous n moments are used for weighted correction; the correction process is the same as the data correction used when the lidar and the global shutter camera are fused;

(3) both the lidar and the global shutter camera lose the detection target due to bumping;

When both the lidar and the global shutter camera lose the detection target due to bumping, the predictions of the current moment made from the previous n moments are weighted-averaged to estimate the obstacle information at the current moment, namely the estimate is Σ(i=1..n) k_i·X_i with the weights k_i summing to 1, where X_1~X_n are the data at the previous n moments; the weights are distributed comprehensively according to whether an abrupt change occurs and according to the time from the current moment;

The prediction of the current moment from the i-th previous moment is computed from the obstacle information at the i-th moment together with the integration of the vertical acceleration, the vehicle speed, the heading angular velocity, the yaw angular velocity and the pitch angular velocity from the i-th moment to the current moment.

The invention has the beneficial effects that:

the invention considers the point cloud and image data correction under the bumpy road surface, and ensures that the unmanned vehicle still has better accuracy under the bumpy road condition; meanwhile, redundant detection is carried out by utilizing the laser radar and the global shutter camera, and high robustness of detection is ensured. And finally, the stability of target detection at any moment is ensured by utilizing the correction of the current moment at the previous n moments and considering the data prediction method under the failure of the laser radar and the global shutter camera.

Drawings

FIG. 1 is a schematic diagram of the arrangement of the lidar and the global shutter cameras;

FIG. 2 is a schematic view showing changes in the bump attitude of a vehicle;

FIG. 3 is a flow chart of single frame point cloud correction;

FIG. 4 is a schematic diagram of the operation of the pulse generator;

FIG. 5 is a data fusion flow diagram.

Detailed Description

In order to facilitate understanding of those skilled in the art, the present invention will be further described with reference to the following examples and drawings, which are not intended to limit the present invention.

The invention discloses a sensor fusion target detection method under bumpy road conditions, which comprises the following steps:

step 1): Processing and correcting the data detected by the lidar and the two global shutter cameras. The arrangement of the lidar and the two global shutter cameras is shown in fig. 1: the lidar is mounted at the center of the top of the vehicle, and the two global shutter cameras are respectively mounted on the two sides of the bottom of the front windshield.

The step 1) specifically comprises the following steps:

step 11): referring to fig. 3, single-frame lidar data are corrected using the inertial measurement unit and the global positioning system: time synchronization with the global positioning system provides a timestamp for each laser point within a single frame; the high-frequency inertial measurement unit provides heading angle, yaw angle, pitch angle, vehicle speed and acceleration information; the multiple sets of consecutive acceleration and vehicle speed data covered by a single point cloud frame are interpolated to obtain time-varying functions, and an inverse operation is applied to the point cloud using the timestamps, the acceleration and the vehicle speed data to transform the point coordinates and avoid discrete deformation of the single-frame point cloud;

step 12): the point cloud information corrected in step 11) and the image information acquired by the global shutter cameras are combined, and the image data and point cloud data of the two frames are corrected and matched on the basis of the previous frame data using the data obtained by the inertial measurement unit.

The lidar and the global shutter cameras adopted in step 1) are a Velodyne HDL-64E mechanical lidar and Illunis CMV-50 cameras, respectively; using global shutter cameras effectively avoids the pixel distortion caused by bumping.

The inverse operation in step 11) is illustrated in fig. 2 and proceeds as follows: the coordinate position of the current point is corrected for the vehicle attitude difference accumulated from the timestamp of the initial point of the cloud to the timestamp of the current point, the vehicle attitude change being reflected in the heading angle, the yaw angle, the pitch angle and the longitudinal displacement; let the period of one scan of the high-frequency inertial measurement unit be T_imu, let the time from the initial point timestamp to the current point timestamp be t_lidar, let the heading, yaw and pitch angular rates interpolated (at the IMU sampling period T_imu) from the data measured by the high-frequency inertial measurement unit be ω_α, ω_β and ω_γ, and let the interpolated longitudinal acceleration be a_z; the attitude change [Δα Δβ Δγ] and the longitudinal displacement ΔZ are then obtained by integration over t_lidar:

Δα = ∫[0, t_lidar] ω_α dt,  Δβ = ∫[0, t_lidar] ω_β dt,  Δγ = ∫[0, t_lidar] ω_γ dt,  ΔZ = ∫[0, t_lidar] ( ∫[0, τ] a_z ds ) dτ

the coordinate correction matrix R of the point cloud is the rotation composed from the attitude change [Δα Δβ Δγ], and the corrected coordinates are:

[x' y' z']ᵀ = R·[x y z]ᵀ + [0 0 ΔZ]ᵀ

where [x' y' z']ᵀ are the corrected point cloud coordinates, R is the coordinate correction matrix of the point cloud, [x y z]ᵀ are the point cloud coordinates before correction, and [0 0 ΔZ]ᵀ is the longitudinal correction vector.
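
As a non-limiting illustration of the single-frame correction above, the following Python sketch integrates interpolated IMU rates to obtain the attitude change and the longitudinal displacement and then applies the correction to one laser point. The function names, the choice of rotation axes and their composition order, and the sample inputs are assumptions made for this sketch rather than the reference implementation.

import numpy as np

def rotation_from_increments(d_alpha, d_beta, d_gamma):
    """Compose a rotation matrix from the heading, yaw and pitch increments.
    The axis assignment and the multiplication order are assumptions of this sketch."""
    ca, sa = np.cos(d_alpha), np.sin(d_alpha)
    cb, sb = np.cos(d_beta), np.sin(d_beta)
    cg, sg = np.cos(d_gamma), np.sin(d_gamma)
    Rz = np.array([[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]])   # heading about z
    Ry = np.array([[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]])   # yaw about y (assumed)
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]])   # pitch about x (assumed)
    return Rz @ Ry @ Rx

def correct_point(p, t_lidar, omega, a_z, samples=50):
    """Correct one laser point whose timestamp lags the frame start by t_lidar.

    p      : (3,) coordinates [x, y, z] before correction
    omega  : callable t -> (w_heading, w_yaw, w_pitch), interpolated IMU angular rates
    a_z    : callable t -> longitudinal acceleration, interpolated from the IMU
    """
    ts = np.linspace(0.0, t_lidar, samples)
    rates = np.array([omega(t) for t in ts])
    d_alpha, d_beta, d_gamma = np.trapz(rates, ts, axis=0)            # angle changes
    acc = np.array([a_z(t) for t in ts])
    vel = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * np.diff(ts))))
    delta_z = np.trapz(vel, ts)                                       # double integral of a_z
    R = rotation_from_increments(d_alpha, d_beta, d_gamma)
    return R @ np.asarray(p) + np.array([0.0, 0.0, delta_z])

# illustrative usage with constant rates and acceleration
p_corrected = correct_point([10.0, 2.0, 0.5], 0.05,
                            omega=lambda t: (0.01, 0.02, 0.03),
                            a_z=lambda t: 0.4)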

The inter-frame correction mentioned in step 12) records the real-time data measured by the high-frequency inertial measurement unit, interpolates the multiple sets of acceleration and vehicle speed data between two lidar/image frames, and solves the angle change and the longitudinal displacement change by integration; the two frames are then jointly corrected by matrix calculation and the corresponding relationship is calibrated.
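
A small sketch of the inter-frame correction idea, under the assumption that the IMU samples between the two lidar/image frames are available as arrays of timestamps, angular rates and longitudinal acceleration; it only shows how the angle change and displacement between the two frame timestamps could be accumulated by interpolation and integration, while the joint matrix correction and the calibration of the correspondence are omitted.

import numpy as np

def interframe_motion(imu_t, imu_rates, imu_acc_z, t_prev, t_curr, samples=100):
    """Accumulate attitude and longitudinal displacement change between two frames.

    imu_t     : (N,) IMU sample times
    imu_rates : (N, 3) heading, yaw and pitch angular rates
    imu_acc_z : (N,) longitudinal acceleration samples
    Returns ([d_heading, d_yaw, d_pitch], d_z) accumulated from t_prev to t_curr.
    """
    ts = np.linspace(t_prev, t_curr, samples)
    rates = np.column_stack([np.interp(ts, imu_t, imu_rates[:, k]) for k in range(3)])
    acc = np.interp(ts, imu_t, imu_acc_z)
    d_angles = np.trapz(rates, ts, axis=0)                 # angle change by integration
    vel = np.concatenate(([0.0], np.cumsum(0.5 * (acc[1:] + acc[:-1]) * np.diff(ts))))
    d_z = np.trapz(vel, ts)                                # displacement by double integration
    return d_angles, d_z

# illustrative usage: 200 ms of IMU data at constant rates, frames 0.1 s apart
t = np.linspace(0.0, 0.2, 21)
angles, dz = interframe_motion(t, np.full((21, 3), 0.1), np.full(21, 0.2), 0.0, 0.1)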

Step 2): performing feature extraction on the laser radar and the two global shutter camera data corrected in the step 1);

step 21): dividing the corrected point cloud data by using a multi-feature multi-layer grid map, clustering the grid map by using a depth value-based clustering method to obtain the edge information of the obstacle, and reserving the center point information and the distance information of the obstacle;

step 22): the image data are processed with a stereoscopic vision disparity method and the deep-learning-based YOLOv3 algorithm, and the obstacle distance information and the position of the obstacle center in the image are output.
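
A brief sketch of how the stereo disparity and a detector output could be combined into the obstacle center and distance, using the standard rectified-stereo relation Z = f·B/d; the focal length, baseline and bounding box values are placeholders, and the detector itself (YOLOv3) is not reproduced here.

import numpy as np

def disparity_to_distance(disparity_px, focal_px, baseline_m):
    """Rectified stereo: distance Z = focal length * baseline / disparity (disparity > 0)."""
    return focal_px * baseline_m / disparity_px

def obstacle_from_detection(bbox, disparity_map, focal_px, baseline_m):
    """Combine a detector bounding box (u_min, v_min, u_max, v_max) with the disparity map
    to output the obstacle center pixel and its distance."""
    u_c = 0.5 * (bbox[0] + bbox[2])
    v_c = 0.5 * (bbox[1] + bbox[3])
    d = disparity_map[int(v_c), int(u_c)]
    return (u_c, v_c), disparity_to_distance(d, focal_px, baseline_m)

# illustrative usage: constant 30 px disparity, placeholder intrinsics
disp = np.full((375, 1242), 30.0)
center, dist = obstacle_from_detection((600, 150, 700, 250), disp,
                                        focal_px=721.5, baseline_m=0.54)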

The grid division mentioned in step 21) divides the grids into three types, namely ground grids, obstacle grids and suspended grids, using the height feature and the intensity feature; the depth-value-based clustering changes the clustering distance threshold as the depth value changes, which alleviates the problems that distant objects are difficult to cluster and nearby objects are over-segmented.

In step 21), the grid map is clustered by a depth-value-based clustering method; the clustering threshold N_T is obtained from the formula N_T = [D_T / G], where G is the grid cell size; the connected region is then expanded to a size of (2N_T − 1) × (2N_T − 1) using the grid clustering threshold, and the grids within this connected region are decided to be the same obstacle; the distance threshold D_T is solved from a formula in which σ_r represents the measurement error of the lidar, Δφ is the horizontal angular resolution, r_{n−1} is the depth value of the obstacle-grid center point, and λ is a threshold parameter; determining the distance threshold according to the depth value r_n avoids over-segmentation at short range and under-segmentation at long range.
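
The exact expression for the distance threshold D_T is not reproduced in the text above, so the sketch below uses one plausible depth-adaptive form (the arc length subtended by the horizontal resolution at the previous depth plus a multiple of the lidar measurement error); the form of D_T, the parameter values and the grid size are assumptions, while the N_T = [D_T / G] rounding and the (2N_T − 1) × (2N_T − 1) expansion follow the description.

import math

def distance_threshold(r_prev, delta_phi, sigma_r, lam):
    """Depth-adaptive distance threshold D_T; assumed form, not the patent's exact formula."""
    return lam * r_prev * delta_phi + 3.0 * sigma_r

def grid_cluster_window(r_prev, delta_phi, sigma_r, lam, grid_size):
    """N_T = ceil(D_T / G); occupied grids inside a (2*N_T - 1) x (2*N_T - 1) window
    around a cell are merged into the same obstacle."""
    d_t = distance_threshold(r_prev, delta_phi, sigma_r, lam)
    n_t = max(1, math.ceil(d_t / grid_size))
    return n_t, (2 * n_t - 1, 2 * n_t - 1)

# illustrative usage: obstacle-grid center at 20 m, 0.4 deg resolution, 0.2 m grid cells
n_t, window = grid_cluster_window(20.0, math.radians(0.4), sigma_r=0.02, lam=2.0, grid_size=0.2)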

Step 3): transferring and transforming the laser radar point cloud data after the characteristic extraction from a laser radar coordinate system into a pixel coordinate system of an image by utilizing space synchronization;

the process required to be undergone by the step 3) for transferring from the radar coordinate system to the pixel coordinate system is as follows: and converting the radar coordinate system into a world coordinate system, converting the world coordinate system into a camera coordinate system, and finally converting the camera coordinate system into a pixel coordinate system to finish the coordinate unification of the detected data.

Step 4): obtaining laser radar point cloud data and global shutter camera data after feature extraction at the same time by utilizing time synchronization;

referring to fig. 4, the time synchronization tool is a pulse generator of the global positioning system, the synchronization mode is to set a trigger frequency according to a scanning frequency of the laser radar, acquire data of the laser radar and the global shutter camera of the current frame each time, and perform interpolation calculation by using data of the previous and subsequent times if the image has no data at the moment.

Step 5) obtaining an obstacle information output result by the combined action of the detection result of the laser radar and the global shutter camera at the current moment, the detection bumping degree of the inertia measurement unit and the prediction result of the detection result of the previous n moments on the current moment; wherein, the bumping degree is determined by the acceleration sudden change condition in each direction detected by the inertia measurement unit;

referring to fig. 5, the obstacle information output result is divided into three categories according to whether both the current lidar and the global shutter camera detect the obstacle information:

(1) the lidar and the global shutter camera both detect the obstacle information;

The detection results are fused through a judgment function constructed jointly from the obstacle center point and the obstacle distance information:

F = w_u·|u_c − u_r| + w_r·|v_c − v_r| + w_dis·|d_c − d_r|

where (u_c, v_c) and (u_r, v_r) are the obstacle center-point coordinates detected by the camera and by the radar, respectively, in the pixel coordinate system, d_c and d_r are the distances to the obstacle measured by the global shutter camera and by the lidar, and w_u, w_r and w_dis are weights; the weight distribution is determined by the degree of bumping, which is in turn determined by the pitch angle, the yaw angle, the heading angle and the longitudinal acceleration;

A determination threshold F_u is set; when F < F_u the lidar and the global shutter camera are considered to have detected the same obstacle, and when F > F_u the detections are considered to belong to different obstacles;

The fused data and the predictions of the current moment made from the previous n moments are then combined by weighted average correction, and the following mathematical model is established:

the corrected estimate is Σ(i=0..n) k_i·X_i, with the weights k_i summing to 1, where X_0 is the data at the current moment and X_1~X_n are the data at the previous n moments; the weights are distributed according to whether the vehicle attitude changes abruptly; if the vehicle attitude changes abruptly at the current moment, then among the current moment and the previous n moments the moments at which an abrupt change occurs are given small weights in predicting the current moment, the moments at which no abrupt change occurs are given large weights, and the closer a moment is to the current moment the larger its weight; if no abrupt change occurs at the current moment, the current moment is given the largest weight and the weights of the other moments are distributed according to their distance from the current moment and whether an abrupt change occurred; finally, the fused and corrected data are output to the decision mechanism for path planning and intelligent steering;
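
A combined sketch of case (1): the judgment function F, the F < F_u association test, a simple abrupt-attitude-change flag derived from the IMU, and the weighted average over the current moment and the previous n moments. The weight values, the threshold F_u, the abrupt-change test and the exponential decay of the temporal weights are illustrative assumptions; only the qualitative rules (weights summing to 1, abrupt moments down-weighted, recent moments up-weighted) come from the description above.

import numpy as np

def judgment_function(cam_det, lidar_det, w_u, w_r, w_dis):
    """cam_det / lidar_det: (u, v, distance) of the obstacle center in pixel coordinates."""
    u_c, v_c, d_c = cam_det
    u_r, v_r, d_r = lidar_det
    return w_u * abs(u_c - u_r) + w_r * abs(v_c - v_r) + w_dis * abs(d_c - d_r)

def same_obstacle(cam_det, lidar_det, weights=(0.4, 0.4, 0.2), f_u=30.0):
    """True when F < F_u, i.e. both sensors are taken to see the same obstacle."""
    return judgment_function(cam_det, lidar_det, *weights) < f_u

def abrupt_change(acc_xyz, threshold=3.0):
    """Illustrative bump flag: any axis acceleration deviating strongly from its mean."""
    acc = np.asarray(acc_xyz, dtype=float)
    return bool(np.any(np.abs(acc - acc.mean(axis=0)) > threshold))

def weighted_correction(x, abrupt, decay=0.8, penalty=0.2):
    """x: [X_0, X_1, ..., X_n], X_0 the current fused data, X_i the prediction of the
    current moment from the i-th previous moment; abrupt: matching booleans."""
    x = np.asarray(x, dtype=float)
    k = decay ** np.arange(len(x))              # closer to the current moment -> larger weight
    k = np.where(abrupt, k * penalty, k)        # abrupt-change moments get small weights
    k = k / k.sum()                             # weights sum to 1
    return float(np.dot(k, x))

# illustrative usage: the two sensors agree on one obstacle, then the fused distance is
# corrected with predictions from three earlier moments (abrupt change at the current moment)
if same_obstacle((642.0, 361.0, 12.1), (645.0, 358.0, 12.4)):
    corrected = weighted_correction([11.0, 12.1, 12.0, 11.9],
                                    abrupt=[True, False, False, False])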

(2) only one of the lidar and the global shutter camera detects an obstacle target due to bumping;

When only one of the lidar and the global shutter camera works normally, the detection result of the normally working device at the current moment and the detection results of the previous n moments are used for weighted correction; the correction process is the same as the data correction used when the lidar and the global shutter camera are fused;

(3) both the lidar and the global shutter camera lose the detection target due to bumping;

When both the lidar and the global shutter camera lose the detection target due to bumping, the predictions of the current moment made from the previous n moments are weighted-averaged to estimate the obstacle information at the current moment, namely the estimate is Σ(i=1..n) k_i·X_i with the weights k_i summing to 1, where X_1~X_n are the data at the previous n moments; the weights are distributed comprehensively according to whether an abrupt change occurs and according to the time from the current moment;

The prediction of the current moment from the i-th previous moment is computed from the obstacle information at the i-th moment together with the integration of the vertical acceleration, the vehicle speed, the heading angular velocity, the yaw angular velocity and the pitch angular velocity from the i-th moment to the current moment.
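
A minimal sketch of predicting the current obstacle distance from the i-th previous moment when both sensors fail, under the simplifying assumption that only the along-track closing motion matters: the stored distance is propagated with the integrated ego speed. The angular terms (heading, yaw, pitch) and the vertical acceleration mentioned above would additionally rotate and shift the predicted obstacle position; they are left out of this sketch, and the obstacle is assumed static.

import numpy as np

def predict_distance(d_i, t, ego_speed, obstacle_speed=0.0):
    """Propagate the obstacle distance measured at the i-th previous moment to now.

    d_i            : obstacle distance measured at the i-th previous moment
    t              : (M,) time samples from the i-th moment to the current moment
    ego_speed      : (M,) ego vehicle speed over that interval
    obstacle_speed : assumed obstacle speed along the same direction
    """
    closing = np.asarray(ego_speed, dtype=float) - obstacle_speed
    return d_i - np.trapz(closing, t)           # the gap shrinks by the integrated closing speed

# illustrative usage: 15 m obstacle, ego vehicle at 5 m/s for 0.3 s, static obstacle
t = np.linspace(0.0, 0.3, 31)
d_now = predict_distance(15.0, t, np.full(31, 5.0))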

While the invention has been described in terms of its preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention.
