Obstacle identification and positioning method based on fusion of monocular camera and millimeter wave radar

Document No.: 1002428 | Published: 2020-10-23

Reading note: This technology, "Obstacle identification and positioning method based on fusion of monocular camera and millimeter wave radar", was designed and created by 黄攀峰, 余航, 张帆, 孟中杰, 张夷斋 and 刘正雄 on 2020-07-15. Its main content: the invention relates to an obstacle identification and positioning method based on the fusion of a monocular camera and a millimeter wave radar, comprising sensor data acquisition and time synchronization, matching of the measurement objects of the millimeter wave radar and the monocular camera, and filtering and fusion of the two sensors' measurement data with an extended Kalman filter. By adopting multi-sensor fusion, the invention enables the robot to perceive richer and more accurate obstacle information than any single sensor can provide. Compared with fusion based purely on coordinate transformation, the extended Kalman filter data fusion used by the invention yields data closer to the true values and greatly reduces the influence of sensor measurement noise.

1. A method for identifying and positioning obstacles based on fusion of a monocular camera and a millimeter wave radar is characterized by comprising the following steps:

step 1: sensor data acquisition and time synchronization

1.1) acquiring data of a millimeter wave radar and a monocular camera, reading two-dimensional point cloud data according to a communication protocol provided by the millimeter wave radar, and reading image information of the monocular camera by using OpenCV;

1.2) carrying out time sequence registration on data of the millimeter wave radar and the monocular camera according to the timestamp information of the point cloud and the image, and ensuring that the time for fusing the data of the two sensors is consistent;

step 2: matching the measurement objects of the millimeter wave radar and the monocular camera;

2.1) identifying the obstacle in the visual image by using a deep learning model trained in advance to obtain a minimum rectangular frame enveloping the obstacle;

2.2) converting the radar point cloud coordinates $(x_{ri}, y_{ri})$ to pixel coordinates $(u_{ri}, v_{ri})$ according to the radar-camera extrinsic matrix and the camera intrinsic matrix, with the conversion relationship:

$$z_c \begin{bmatrix} u_{ri} \\ v_{ri} \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} x_{ri} \\ y_{ri} \\ H \end{bmatrix} + T \right)$$

wherein $z_c$ is the actual depth of the spatial point to the camera plane, K represents the camera intrinsic matrix, R and T respectively represent the rotation matrix and translation vector between the camera and the radar, which can be calibrated in advance or measured directly, and H represents the height difference between the millimeter wave radar and the camera;

2.3) for the centre pixel position $(u_{ci}, v_{ci})$ of each visually detected rectangular frame, traversing the projected radar points and taking the nearest point $(u_{ri}, v_{ri})$; each matched pair is an obstacle detected jointly by the millimeter wave radar and the monocular camera;

and step 3: filtering and fusing the measurement data of the two sensors by using an extended Kalman filtering algorithm;

3.1) prediction of the state quantity: the fused state quantity is the coordinates and velocity in the radar coordinate system, $x_f = (p_x, p_y, p_z, v_x, v_y)$; according to the Kalman filter equations, the predicted value of the state quantity and its uncertainty can be expressed as:

$$x'_f = F x_f, \qquad P' = F P F^T + Q$$

wherein $x'_f$ is the predicted value of $x_f$, F is the state-transition matrix of the linear system, P and P' are respectively the system uncertainty and its predicted value, and Q is the additional disturbance term of the linear model $x'_f = F x_f$;

3.2) millimeter wave radar measurement update: the position and velocity of the obstacle measured by the millimeter wave radar are $x_r = (x_r, y_r, v_{xr}, v_{yr})$; according to the Kalman filter equations, the updated values of the state quantity and its uncertainty can be expressed as:

$$K_g = P' H^T (H P' H^T + R_{radar})^{-1}, \qquad x'_{fr} = x'_f + K_g (x_r - H x'_f), \qquad P'_r = (I - K_g H) P'$$

wherein $x'_{fr}$ and $P'_r$ are the state quantity and uncertainty after updating with the millimeter wave radar measurement, H represents the conversion relationship between the state quantity and the millimeter wave radar measurement, and $R_{radar}$ represents the uncertainty of the millimeter wave radar measurement, which can be obtained from the radar's factory parameters;

3.3) monocular camera measurement update: the centre-pixel coordinate of the visually detected obstacle matched to the millimeter wave radar in step 2 is $x_s = (u, v)$; according to the extended Kalman filter equations, the updated values of the state quantity and its uncertainty can be expressed as:

$$K_g = P' H_j^T (H_j P' H_j^T + R_{cam})^{-1}, \qquad x'_{fs} = x'_f + K_g \left( x_s - f(x'_f) \right), \qquad P'_s = (I - K_g H_j) P'$$

wherein $f(x'_f)$ represents the coordinate transformation projecting the state quantity onto the pixel coordinate system; since a pinhole camera model is used, this transformation is a nonlinear mapping; $H_j$ is the Jacobian matrix of the pixel coordinates with respect to the state quantity in this nonlinear mapping; and $R_{cam}$, taken as an identity matrix, represents the uncertainty of the monocular camera measurement;

and 3.4) acquiring the measured values of the millimeter wave radar and the monocular camera in the time order registered in step 1 and continuously updating the state quantity, with a prediction step performed before each update, thereby obtaining more accurate obstacle position information.

2. The obstacle identification and positioning method based on the fusion of the monocular camera and the millimeter wave radar as recited in claim 1, wherein K in step 2.2 can be obtained by calibration with Zhang Zhengyou's method in MATLAB or OpenCV.

Technical Field

The invention relates to the technical field of multi-sensor fusion, in particular to a method for identifying and positioning an obstacle based on fusion of a monocular camera and a millimeter wave radar.

Background

With the development of intelligent robots such as self-driving vehicles and unmanned aerial vehicles, environment perception technology has made great progress. At present, the mature approach to target identification and positioning is to process images with deep learning algorithms, including the RCNN, YOLO and SSD families; these algorithms can identify obstacles with high accuracy and even obtain accurate object contour boundaries. However, the perception capability of a single vision sensor is limited and easily disturbed by the environment, so fusing other sensors with the vision sensor is what truly improves a robot's perception capability.

The millimeter wave radar is an active sensor that determines the distance to an object by emitting millimeter waves. The obstacle depth information it acquires has the advantages of high precision, wide measurement range, good reliability, strong environmental robustness and reasonable cost. Fusing the monocular camera with the millimeter wave radar yields both the contour information of the obstacle and its accurate three-dimensional coordinates, solving the problems that a monocular camera cannot obtain accurate depth information and that the millimeter wave radar point cloud is too sparse and disturbed by noise points.

Most existing methods for fusing a monocular camera and a millimeter wave radar proceed as follows: collect monocular camera data and millimeter wave radar data separately and perform time synchronization; obtain the contour information and pixel position of the obstacle using a deep learning algorithm, or directly by morphological processing of a binary image; then project the millimeter wave radar point cloud onto the pixel plane using pre-calibrated radar-camera extrinsics, and match it against the visually obtained contour positions to produce the final obstacle information. Although this fuses the measurements of the two sensors, it does not account for the noise of the millimeter wave radar, so the fused data may exhibit abrupt jumps, missed detections and similar phenomena, and the robustness of the whole algorithm is low. Fusing the sensor data with an extended Kalman filter algorithm instead yields more accurate coordinate information while guaranteeing measurement robustness.

Disclosure of Invention

Technical problem to be solved

To avoid the defects of the prior art, the invention provides an obstacle identification and positioning method based on data fusion of a monocular camera and a millimeter wave radar. By combining the advantages of the two sensors, the robot can acquire the accurate position and contour of obstacles in the environment, improving measurement accuracy and robustness over existing methods.

Technical scheme

A method for identifying and positioning obstacles based on fusion of a monocular camera and a millimeter wave radar is characterized by comprising the following steps:

step 1: sensor data acquisition and time synchronization

1.1) acquiring data of a millimeter wave radar and a monocular camera, reading two-dimensional point cloud data according to a communication protocol provided by the millimeter wave radar, and reading image information of the monocular camera by using OpenCV;

1.2) carrying out time sequence registration on data of the millimeter wave radar and the monocular camera according to the timestamp information of the point cloud and the image, and ensuring that the time for fusing the data of the two sensors is consistent;

step 2: matching the measurement objects of the millimeter wave radar and the monocular camera;

2.1) identifying the obstacle in the visual image by using a deep learning model trained in advance to obtain a minimum rectangular frame enveloping the obstacle;

2.2) converting the radar point cloud coordinates $(x_{ri}, y_{ri})$ to pixel coordinates $(u_{ri}, v_{ri})$ according to the radar-camera extrinsic matrix and the camera intrinsic matrix, with the conversion relationship:

$$z_c \begin{bmatrix} u_{ri} \\ v_{ri} \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} x_{ri} \\ y_{ri} \\ H \end{bmatrix} + T \right)$$

wherein $z_c$ is the actual depth of the spatial point to the camera plane, K represents the camera intrinsic matrix, R and T respectively represent the rotation matrix and translation vector between the camera and the radar, which can be calibrated in advance or measured directly, and H represents the height difference between the millimeter wave radar and the camera;

2.3) for the centre pixel position $(u_{ci}, v_{ci})$ of each visually detected rectangular frame, traversing the projected radar points and taking the nearest point $(u_{ri}, v_{ri})$; each matched pair is an obstacle detected jointly by the millimeter wave radar and the monocular camera;

and step 3: filtering and fusing the measurement data of the two sensors by using an extended Kalman filtering algorithm;

3.1) prediction of the state quantity: the fused state quantity is the coordinates and velocity in the radar coordinate system, $x_f = (p_x, p_y, p_z, v_x, v_y)$; according to the Kalman filter equations, the predicted value of the state quantity and its uncertainty can be expressed as:

$$x'_f = F x_f, \qquad P' = F P F^T + Q$$

wherein $x'_f$ is the predicted value of $x_f$, F is the state-transition matrix of the linear system, P and P' are respectively the system uncertainty and its predicted value, and Q is the additional disturbance term of the linear model $x'_f = F x_f$;

3.2) millimeter wave radar measurement update: the position and velocity of the obstacle measured by the millimeter wave radar are $x_r = (x_r, y_r, v_{xr}, v_{yr})$; according to the Kalman filter equations, the updated values of the state quantity and its uncertainty can be expressed as:

$$K_g = P' H^T (H P' H^T + R_{radar})^{-1}, \qquad x'_{fr} = x'_f + K_g (x_r - H x'_f), \qquad P'_r = (I - K_g H) P'$$

wherein $x'_{fr}$ and $P'_r$ are the state quantity and uncertainty after updating with the millimeter wave radar measurement, H represents the conversion relationship between the state quantity and the millimeter wave radar measurement, and $R_{radar}$ represents the uncertainty of the millimeter wave radar measurement, which can be obtained from the radar's factory parameters;

3.3) monocular camera measurement update: the centre-pixel coordinate of the visually detected obstacle matched to the millimeter wave radar in step 2 is $x_s = (u, v)$; according to the extended Kalman filter equations, the updated values of the state quantity and its uncertainty can be expressed as:

$$K_g = P' H_j^T (H_j P' H_j^T + R_{cam})^{-1}, \qquad x'_{fs} = x'_f + K_g \left( x_s - f(x'_f) \right), \qquad P'_s = (I - K_g H_j) P'$$

wherein $f(x'_f)$ represents the coordinate transformation projecting the state quantity onto the pixel coordinate system; since a pinhole camera model is used, this transformation is a nonlinear mapping; $H_j$ is the Jacobian matrix of the pixel coordinates with respect to the state quantity in this nonlinear mapping; and $R_{cam}$, taken as an identity matrix, represents the uncertainty of the monocular camera measurement;

and 3.4) acquiring the measured values of the millimeter wave radar and the monocular camera in the time order registered in step 1 and continuously updating the state quantity, with a prediction step performed before each update, thereby obtaining more accurate obstacle position information.

The K in step 2.2 can be obtained by calibration with Zhang Zhengyou's method in MATLAB or OpenCV.
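As a sketch, this calibration is available in OpenCV through cv2.findChessboardCorners and cv2.calibrateCamera; the 9x6 chessboard pattern and the "calib/*.png" image path below are placeholders, not values from the patent.

```python
import glob
import cv2
import numpy as np

# Hedged sketch of intrinsic calibration (Zhang Zhengyou's method) with OpenCV.
pattern = (9, 6)  # inner-corner count of the chessboard; a placeholder choice
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in glob.glob("calib/*.png"):          # placeholder image path
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

# K is the 3x3 intrinsic matrix used in step 2.2; dist holds distortion terms.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
```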

Advantageous effects

The invention provides an obstacle identification and positioning method based on the fusion of a monocular camera and a millimeter wave radar. By adopting multi-sensor fusion, the robot can perceive richer and more accurate obstacle information than with any single sensor. Compared with fusion based purely on coordinate transformation, the extended Kalman filter data fusion used by the invention yields data closer to the true values and greatly reduces the influence of sensor measurement noise.

Drawings

FIG. 1 is a flow chart of millimeter wave radar and monocular camera data fusion;

FIG. 2 shows the coordinate conversion relationship between the millimeter wave radar and the monocular camera.

Detailed Description

The invention will now be further described with reference to the following examples and drawings:

1) as shown in fig. 1, sensor data acquisition and time synchronization;

1.1) collecting data of a millimeter wave radar and a monocular camera, analyzing sensor data according to a communication protocol provided by the millimeter wave radar, acquiring two-dimensional point cloud information, and reading image information of the monocular camera by using OpenCV;

1.2) carrying out time sequence registration on data of the millimeter wave radar and the monocular camera according to the timestamp information of the point cloud and the image, and ensuring that the time for fusing the data of the two sensors is consistent;
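One straightforward realization of this time-sequence registration is nearest-timestamp matching. The sketch below is a minimal Python illustration, assuming each radar scan and camera frame carries a timestamp in seconds; the function name and the 50 ms tolerance are illustrative choices, not values from the patent.

```python
import bisect

def register_by_timestamp(radar_stamps, image_stamps, tol=0.05):
    """For each radar scan, find the camera frame closest in time.

    radar_stamps, image_stamps: sorted lists of float timestamps (seconds).
    Returns a list of (radar_idx, image_idx) pairs within `tol` seconds.
    """
    pairs = []
    for i, t in enumerate(radar_stamps):
        j = bisect.bisect_left(image_stamps, t)
        # Compare the neighbours on both sides of the insertion point.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(image_stamps)]
        if not candidates:
            continue
        best = min(candidates, key=lambda k: abs(image_stamps[k] - t))
        if abs(image_stamps[best] - t) <= tol:
            pairs.append((i, best))
    return pairs
```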

2) as shown in fig. 1, matching the measurement objects of the millimeter wave radar and the monocular camera;

2.1) acquiring pictures containing obstacle information in advance and training a deep learning model based on YOLO-v3, so that the model can identify obstacles in the visual image and output the minimum rectangular frame enveloping each obstacle;
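As an illustration of this step, a trained YOLO-v3 Darknet model can be run with OpenCV's DNN module as sketched below; the file names and the 0.5 confidence threshold are placeholders, and the parsing assumes the standard Darknet output layout (normalized centre, size, objectness and class scores per row).

```python
import cv2
import numpy as np

# Hedged sketch: run a YOLO-v3 Darknet model with OpenCV's DNN module.
# "yolov3.cfg" / "yolov3.weights" are placeholder file names.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def detect_boxes(image, conf_thresh=0.5):
    """Return minimum enclosing rectangles (x, y, width, height) of detections."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())
    boxes = []
    for out in outs:
        for row in out:            # row: [cx, cy, bw, bh, objectness, scores...]
            score = row[4] * row[5:].max()
            if score > conf_thresh:
                cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
                boxes.append((int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)))
    return boxes
```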

2.2) the coordinate conversion relationship between the millimeter wave radar and the monocular camera is shown in fig. 2, where $O_r X_r Y_r Z_r$, $O_c X_c Y_c Z_c$ and $O_s X_s Y_s$ respectively denote the radar coordinate system, the camera coordinate system and the pixel coordinate system; the radar point cloud coordinates $(x_{ri}, y_{ri})$ are converted to pixel coordinates $(u_{ri}, v_{ri})$ according to the radar-camera extrinsic matrix and the camera intrinsic matrix, with the conversion relationship:

$$z_c \begin{bmatrix} u_{ri} \\ v_{ri} \\ 1 \end{bmatrix} = K \left( R \begin{bmatrix} x_{ri} \\ y_{ri} \\ H \end{bmatrix} + T \right)$$

wherein $z_c$ is the actual depth of the spatial point to the camera plane; K represents the camera intrinsic matrix and can be calibrated with Zhang Zhengyou's method in MATLAB or OpenCV; R and T respectively represent the rotation matrix and translation vector between the camera and the radar, which can be calibrated in advance or measured directly; and H represents the height difference between the millimeter wave radar and the camera, which can be measured directly.
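The projection can be implemented in a few lines, as sketched below. K, R, T and H are assumed to have been calibrated or measured as described above; treating each radar return as the three-dimensional point (x_ri, y_ri, H) before the rigid transform is this sketch's reading of the patent's equation and should be adapted to the actual sensor geometry.

```python
import numpy as np

def radar_to_pixel(pts_2d, K, R, T, H):
    """Project 2-D radar returns into the image plane.

    pts_2d: (N, 2) array of radar-plane coordinates (x_ri, y_ri).
    K: 3x3 intrinsic matrix; R, T: radar-to-camera rotation/translation;
    H: radar-camera height difference. The (x_ri, y_ri, H) point model
    is an assumption of this sketch.
    """
    pts_3d = np.column_stack([pts_2d, np.full(len(pts_2d), H)])  # (N, 3)
    cam = pts_3d @ R.T + T          # points expressed in the camera frame
    uvw = cam @ K.T                 # homogeneous pixel coordinates z_c*(u, v, 1)
    return uvw[:, :2] / uvw[:, 2:3] # (u_ri, v_ri) after dividing by z_c
```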

2.3) for the centre pixel position $(u_{ci}, v_{ci})$ of each visually detected rectangular frame, traversing the projected radar points and taking the nearest point $(u_{ri}, v_{ri})$; each matched pair is an obstacle detected jointly by the millimeter wave radar and the monocular camera;
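A direct implementation of this matching is a nearest-neighbour search from each box centre to the projected radar points, as in the following sketch (all names are illustrative):

```python
import numpy as np

def match_radar_to_boxes(boxes, radar_uv):
    """boxes: list of (x, y, w, h); radar_uv: (N, 2) projected radar points.

    Returns, for each box, the index of the radar point closest to the
    box centre; these pairs are the obstacles seen by both sensors.
    """
    matches = []
    for (x, y, w, h) in boxes:
        center = np.array([x + w / 2.0, y + h / 2.0])   # (u_ci, v_ci)
        d = np.linalg.norm(radar_uv - center, axis=1)
        matches.append(int(np.argmin(d)))
    return matches
```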

3) as shown on the right side of fig. 1, the measurement data of the two sensors are filtered and fused by using an extended kalman filtering algorithm;

3.1) prediction of the state quantity: the fused state quantity is the coordinates and velocity in the radar coordinate system, $x_f = (p_x, p_y, p_z, v_x, v_y)$; according to the Kalman filter equations, the predicted value of the state quantity and its uncertainty can be expressed as:

$$x'_f = F x_f, \qquad P' = F P F^T + Q$$

wherein $x'_f$ is the predicted value of $x_f$, and P and P' respectively represent the system uncertainty and its predicted value. F represents the state-transition matrix of the linear system; since the system is linear with constant velocity over a time step $\Delta t$, it is known that:

$$F = \begin{bmatrix} 1 & 0 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & 0 & \Delta t \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}$$

q represents a linear system x'f=FxfDue to the assumption of a linear systemThe disturbance variable is therefore related to the acceleration:

Figure BDA0002584682680000062

where v ═ Ga denotes the acceleration-to-velocity increment.
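The prediction step can be sketched as follows, assuming a time step dt between measurements and a scalar acceleration variance sigma_a as the tuning parameter for Q (both are illustrative assumptions, not values from the patent):

```python
import numpy as np

def predict(x, P, dt, sigma_a=1.0):
    """Constant-velocity prediction for x = (px, py, pz, vx, vy)."""
    F = np.eye(5)
    F[0, 3] = dt                  # px += vx * dt
    F[1, 4] = dt                  # py += vy * dt
    # G maps a planar acceleration (ax, ay) into the state increment v = G a.
    G = np.array([[0.5 * dt**2, 0.0],
                  [0.0, 0.5 * dt**2],
                  [0.0, 0.0],
                  [dt, 0.0],
                  [0.0, dt]])
    Q = G @ G.T * sigma_a**2      # process noise driven by the acceleration
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred
```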

3.2) millimeter wave radar measurement update: the position and velocity of the obstacle measured by the millimeter wave radar are $x_r = (x_r, y_r, v_{xr}, v_{yr})$; according to the Kalman filter equations, the updated values of the state quantity and its uncertainty can be expressed as:

$$K_g = P' H^T (H P' H^T + R_{radar})^{-1}, \qquad x'_{fr} = x'_f + K_g (x_r - H x'_f), \qquad P'_r = (I - K_g H) P'$$

wherein $x'_{fr}$ and $P'_r$ are the state quantity and uncertainty updated with the millimeter wave radar measurement; $H = [I, 0]$ represents the conversion relationship between the state quantity and the millimeter wave radar measurement; and $R_{radar}$ represents the uncertainty of the millimeter wave radar measurement, which can be obtained from the radar's factory parameters.
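A sketch of this linear measurement update follows; the explicit form of H encodes one concrete reading of H = [I, 0], namely that the radar observes (p_x, p_y, v_x, v_y) of the state:

```python
import numpy as np

def radar_update(x_pred, P_pred, z_radar, R_radar):
    """Kalman update with the radar measurement z_radar = (x_r, y_r, v_xr, v_yr)."""
    # H selects (px, py, vx, vy) from the state (px, py, pz, vx, vy).
    H = np.array([[1, 0, 0, 0, 0],
                  [0, 1, 0, 0, 0],
                  [0, 0, 0, 1, 0],
                  [0, 0, 0, 0, 1]], dtype=float)
    y = z_radar - H @ x_pred                 # innovation
    S = H @ P_pred @ H.T + R_radar           # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(5) - K @ H) @ P_pred
    return x_new, P_new
```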

3.3) monocular camera measurement update: the centre-pixel coordinate of the visually detected obstacle matched to the millimeter wave radar in step 2 is $x_s = (u, v)$; according to the extended Kalman filter equations, the updated values of the state quantity and its uncertainty can be expressed as:

$$K_g = P' H_j^T (H_j P' H_j^T + R_{cam})^{-1}, \qquad x'_{fs} = x'_f + K_g \left( x_s - f(x'_f) \right), \qquad P'_s = (I - K_g H_j) P'$$

wherein $R_{cam}$, taken as an identity matrix, represents the uncertainty of the monocular camera measurement. $f(x'_f)$ represents the coordinate transformation projecting the state quantity onto the pixel coordinate system; since a pinhole camera model is used, the transformation is a nonlinear mapping:

$$f(x_f) = [u, v]^T, \qquad z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \, [R \mid T] \begin{bmatrix} p_x \\ p_y \\ p_z \\ 1 \end{bmatrix}$$

$H_j$ represents the Jacobian matrix of the pixel coordinates with respect to the state quantity in this nonlinear mapping:

$$H_j = \left. \frac{\partial f(x_f)}{\partial x_f} \right|_{x_f = x'_f}$$

And 3.4) acquiring the measured values of the millimeter wave radar and the monocular camera in the time order registered in step 1 and continuously updating the state quantity, with a prediction step performed before each update, thereby obtaining more accurate obstacle position information.
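For completeness, the following sketch covers the camera update of step 3.3 that, together with the prediction and radar-update sketches above, drives the loop of step 3.4. R_c and T_c denote the extrinsics used in f; computing H_j by numerical differentiation is an illustrative shortcut for the analytic Jacobian described above.

```python
import numpy as np

def project(x, K, R_c, T_c):
    """Pinhole projection f(x): state position (px, py, pz) -> pixel (u, v)."""
    p_cam = R_c @ x[:3] + T_c
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

def camera_update(x_pred, P_pred, z_pix, K, R_c, T_c, R_cam=np.eye(2)):
    """EKF update with the matched box-centre pixel z_pix = (u, v).

    R_cam defaults to the identity matrix, as stated in step 3.3; the
    numerical Jacobian below stands in for the analytic H_j.
    """
    eps = 1e-5
    f0 = project(x_pred, K, R_c, T_c)
    Hj = np.zeros((2, 5))
    for i in range(5):             # velocity columns come out zero, as expected
        dx = np.zeros(5)
        dx[i] = eps
        Hj[:, i] = (project(x_pred + dx, K, R_c, T_c) - f0) / eps
    y = z_pix - f0                               # innovation
    S = Hj @ P_pred @ Hj.T + R_cam               # innovation covariance
    Kg = P_pred @ Hj.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x_pred + Kg @ y
    P_new = (np.eye(5) - Kg @ Hj) @ P_pred
    return x_new, P_new
```

In the fusion loop of step 3.4, predict() is called before each radar_update() or camera_update(), with measurements consumed in the timestamp order established in step 1.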
