Target detection method and system based on camera and radar

Document No.: 1534017    Publication date: 2020-02-14

Description: This technology, "a target detection method and system based on a camera and a radar", was designed and created by 陈晓光, 阎峰, 王智新, 史龙, 李斌 and 吴穗宁 on 2019-10-18. Its main content is as follows: the invention discloses a target detection method and system based on a camera and a radar. The method comprises: first, acquiring radar data detected by the radar and an image synchronously acquired by the camera; then, performing stability detection on the radar targets in the radar data and screening out effective radar targets; next, detecting the camera targets in the image and the target information of the camera targets by a deep learning method; and finally, fusing the effective radar targets and the camera targets according to an intersection-over-minimum (IOM) ratio formula, and screening and outputting the fused target results. The detection method has stronger robustness, enables all-weather detection, has a lower false alarm rate, and yields more accurate and comprehensive fused target results.

1. A camera and radar based target detection method, characterized in that the method comprises,

acquiring radar data detected by a radar and an image synchronously acquired by a camera;

performing stability detection on radar targets in the radar data, and screening out effective radar targets;

detecting a camera target and target information of the camera target in an image by a deep learning method;

and fusing the effective radar target and the camera target according to an intersection-over-minimum (IOM) ratio formula, and screening and outputting a fused target result.

2. The object detection method according to claim 1, characterized in that the method further comprises,

installing and adjusting the positions of the radar and the camera so that the radar and the camera have a common field of view;

establishing an image plane coordinate system, a radar plane coordinate system and a target plane coordinate system;

and calibrating the structural parameters of the image plane and the radar plane.

3. The object detection method according to claim 2, wherein the radar data includes position coordinates, speed, distance and azimuth information of the radar object in a radar plane coordinate system;

the target information of the camera target comprises a candidate frame position, a type and a size;

the fused target result includes the size and type of the camera target and the distance and speed of the corresponding valid radar target.

4. The object detection method of claim 3, wherein the calibrating the structural parameters of the image plane and the radar plane comprises,

placing a planar target on the ground within the common field of view, shooting the target with the camera, and extracting at least 4 feature points from the target image;

obtaining a homography matrix H_ti between the image plane coordinate system and the target plane coordinate system, based on the coordinates of the target in the image plane coordinate system and in the target plane coordinate system;

obtaining a homography matrix H_tr from the target plane coordinate system to the radar plane coordinate system, based on the placement angle of the planar target relative to the radar plane coordinate system and the translation of the origin of the target plane coordinate system relative to the radar plane coordinate system;

and obtaining the conversion H matrix between the radar plane and the image plane based on the homography matrices H_ti and H_tr.

5. The method of claim 1, wherein the performing stability detection on radar targets in the radar data to screen out valid radar targets comprises,

determining whether the radar target is first present, wherein,

if the radar target appears for the first time, the continuous occurrence frequency of the radar target is equal to 1,

if the radar target does not appear for the first time, judging whether the radar target appears in the last frame or not, wherein,

if the radar target appears in the last frame, accumulating and adding 1 to the continuous occurrence times of the radar target;

if the radar target does not appear in the last frame, rejecting the radar target;

judging whether the continuous occurrence frequency of the radar target is greater than or equal to a first preset value or not, wherein,

if the continuous occurrence frequency of the radar target is greater than or equal to a first preset value, the radar target is a stably detected target and is output as an effective radar target;

and if the continuous occurrence frequency of the radar target is less than a first preset value, waiting for the next frame, and repeating the judgment.

6. The method of claim 5, wherein the performing stability detection on radar targets in the radar data to screen out valid radar targets further comprises the following steps when the stably detected targets disappear in the current frame:

judging whether the number of times of stably detecting the disappearance of the target is greater than or equal to a second preset value or not, wherein,

if the number of times that the stably detected target disappears is greater than or equal to a second preset value, rejecting the stably detected target;

if the number of times that the stably detected target disappears is smaller than a second preset value, adding 1 to the accumulated number of disappearances, and judging whether the stably detected target reappears in the next frame; wherein,

if the stably detected target reappears in the next frame, clearing the accumulated value of the disappearance times;

and if the stably detected target does not reappear in the next frame, repeatedly executing the steps.

7. The object detection method according to any one of claims 1 to 6, wherein fusing the effective radar target and the camera target according to the intersection-over-minimum (IOM) ratio formula, and screening and outputting the fused target result comprises,

acquiring the intersection-over-minimum ratio of the effective radar target and the camera target according to the IOM calculation formula;

judging whether the IOM is larger than a third preset value, wherein,

if the IOM is larger than the third preset value, the effective radar target and the camera target are the same target, and 1 is added to the accumulated judgment times;

judging whether the judging times are more than or equal to a fourth preset value or not, wherein,

if the judgment times are larger than or equal to a fourth preset value, outputting a fusion target result;

and if the judgment times are smaller than a fourth preset value, repeating the steps on the effective radar target and the camera target in the next frame.

8. The method of claim 7, wherein the intersection-over-minimum ratio of the effective radar target to the camera target is:

the ratio of the overlapping area of the effective radar target candidate frame and the camera target candidate frame to the smaller of the area of the effective radar target candidate frame and the area of the camera target candidate frame.

9. The object detection method according to claim 8, further comprising obtaining the candidate frame of the effective radar target:

converting the effective radar target from radar coordinates to the image plane coordinate system based on the H matrix, wherein the candidate frame of the effective radar target is scale·[w, h], wherein,

scale is the scaling factor of the candidate frame and is determined as a function of y such that scale = 1/m at y = r_min and scale = n at y = r_max,

wherein r_min is the minimum value of the radar detection range, r_max is the maximum value of the radar detection range, m and n are constants that are integers greater than 0, and y is the y-coordinate of the effective radar target in the radar plane coordinate system;

w and h are respectively the width and height of the candidate frame corresponding to a target set at the middle value (r_min + r_max)/2 of the radar detection depth range.

10. A camera and radar based object detection system, the system comprising a radar and a camera, characterized in that it further comprises,

the acquisition unit is used for acquiring radar data detected by a radar and images synchronously acquired by a camera;

the filtering unit is used for performing stability detection on the radar target in the radar data and screening out an effective radar target;

an image detection unit for detecting a camera target and target information of the camera target in an image by a deep learning method;

and the fusion unit is used for fusing the effective radar target and the camera target according to an intersection-over-minimum (IOM) ratio formula, and screening and outputting a fused target result.

11. The object detection system according to claim 10, characterized in that the system further comprises a setting unit for:

establishing an image plane coordinate system, a radar plane coordinate system and a target plane coordinate system;

and calibrating the structural parameters of the image plane and the radar plane, and obtaining a conversion relation H matrix from the radar plane to the image plane.

12. The object detection system of claim 11, wherein the radar data includes position coordinates, speed, distance and azimuth information of the radar target in a radar plane coordinate system;

the target information of the camera target comprises a candidate frame position, a type and a size;

the fused target result includes the size and type of the camera target and the distance and speed of the corresponding valid radar target.

13. The object detection system of claim 10, wherein the filtering unit is further configured to perform the steps of:

determining whether the radar target is first present, wherein,

if the radar target appears for the first time, the continuous occurrence frequency of the radar target is equal to 1,

if the radar target does not appear for the first time, judging whether the radar target appears in the last frame or not, wherein,

if the radar target appears in the last frame, accumulating and adding 1 to the continuous occurrence times of the radar target;

if the radar target does not appear in the last frame, rejecting the radar target;

judging whether the continuous occurrence frequency of the radar target is greater than or equal to a first preset value or not, wherein,

if the continuous occurrence frequency of the radar target is greater than or equal to a first preset value, the radar target is a stably detected target and is output as an effective radar target;

and if the continuous occurrence frequency of the radar target is less than a first preset value, waiting for the next frame, and repeating the judgment.

14. The object detection system of claim 10, wherein the filtering unit is further configured to perform the following steps when the stably detected object disappears in the current frame:

judging whether the number of times of stably detecting the disappearance of the target is greater than or equal to a second preset value or not, wherein,

if the number of times that the stably detected target disappears is greater than or equal to a second preset value, rejecting the stably detected target;

if the number of times that the stably detected target disappears is smaller than a second preset value, adding 1 to the accumulated number of disappearances, and judging whether the stably detected target reappears in the next frame; wherein,

if the stably detected target reappears in the next frame, clearing the accumulated value of the disappearance times;

and if the stably detected target does not reappear in the next frame, repeatedly executing the steps.

15. The object detection system according to any of claims 10-14, wherein the fusion unit is further configured to perform the steps of:

acquiring the intersection-over-minimum (IOM) ratio of the effective radar target and the camera target according to the IOM calculation formula;

judging whether the IOM is larger than a third preset value, wherein,

if the IOM is larger than the third preset value, the effective radar target and the camera target are the same target, and 1 is added to the accumulated judgment times;

judging whether the judging times are more than or equal to a fourth preset value or not, wherein,

if the judgment times are larger than or equal to a fourth preset value, outputting a fusion target result;

and if the judgment times are smaller than a fourth preset value, repeating the steps on the effective radar target and the camera target in the next frame.

16. The object detection system of claim 15, wherein the intersection-over-minimum ratio of the effective radar target to the camera target is:

the ratio of the overlapping area of the effective radar target candidate frame and the camera target candidate frame to the smaller of the area of the effective radar target candidate frame and the area of the camera target candidate frame.

17. The object detection system of claim 16, further comprising a processing unit to obtain candidate boxes for the valid radar target:

converting the effective radar target from radar coordinates to the image plane coordinate system based on the H matrix, wherein the candidate frame of the effective radar target is scale·[w, h], wherein,

scale is the scaling factor of the candidate frame and is determined as a function of y such that scale = 1/m at y = r_min and scale = n at y = r_max,

wherein r_min is the minimum value of the radar detection range, r_max is the maximum value of the radar detection range, m and n are constants that are integers greater than 0, and y is the y-coordinate of the effective radar target in the radar plane coordinate system;

w and h are respectively the width and height of the candidate frame corresponding to a target set at the middle value (r_min + r_max)/2 of the radar detection depth range.

Technical Field

The invention belongs to the technical field of target detection, and particularly relates to a target detection method and system based on a camera and a radar.

Background

Existing target detection schemes can be roughly divided into two categories: vision-based and non-vision-based. Vision-based target detection is usually limited to judging whether a target exists and its approximate position, and cannot determine the precise position and speed of the target. Non-visual target detection, such as radar, can only obtain the position and speed of the target, and cannot obtain its type and size. In modern production and daily life, however, it is often necessary to obtain the type and size of a target together with its position and speed. For example, in the detection of objects intruding at a tramcar crossing, if the type of the target and its position and speed can be known at the same time, a judgment can be made according to the specific situation and losses can be reduced.

In view of the above problems, how to effectively achieve comprehensive, real-time target detection has become an increasingly urgent technical problem to be solved.

Disclosure of Invention

In view of the above problems, the present invention provides a target detection method and system based on a camera and a radar, wherein the detection method has a lower false alarm rate, and the obtained fused target result is more accurate and comprehensive.

The invention aims to provide a target detection method based on a camera and a radar, which comprises the following steps,

acquiring radar data detected by a radar and an image synchronously acquired by a camera;

performing stability detection on radar targets in the radar data, and screening out effective radar targets;

detecting a camera target and target information of the camera target in an image by a deep learning method;

and fusing the effective radar target and the camera target according to an intersection-over-minimum (IOM) ratio formula, and screening and outputting a fused target result.

Further, the method may further comprise,

installing and adjusting the positions of the radar and the camera so that the radar and the camera have a common field of view;

establishing an image plane coordinate system, a radar plane coordinate system and a target plane coordinate system;

and calibrating the structural parameters of the image plane and the radar plane.

Further, the radar data comprises position coordinates, speed, distance and azimuth information of the radar target under a radar plane coordinate system;

the target information of the camera target comprises a candidate frame position, a type and a size;

the fused target result includes the size and type of the camera target and the distance and speed of the corresponding valid radar target.

Further, the calibrating the structural parameters of the image plane and the radar plane comprises,

placing a planar target on the ground within the common field of view, shooting the target with the camera, and extracting at least 4 feature points from the target image;

obtaining a homography matrix H_ti between the image plane coordinate system and the target plane coordinate system, based on the coordinates of the target in the image plane coordinate system and in the target plane coordinate system;

obtaining a homography matrix H_tr from the target plane coordinate system to the radar plane coordinate system, based on the placement angle of the planar target relative to the radar plane coordinate system and the translation of the origin of the target plane coordinate system relative to the radar plane coordinate system;

and obtaining the conversion H matrix between the radar plane and the image plane based on the homography matrices H_ti and H_tr.

Further, the detecting the stability of the radar target in the radar data and screening out the effective radar target comprises,

determining whether the radar target is first present, wherein,

if the radar target appears for the first time, the continuous occurrence frequency of the radar target is equal to 1,

if the radar target does not appear for the first time, judging whether the radar target appears in the last frame or not, wherein,

if the radar target appears in the last frame, accumulating and adding 1 to the continuous occurrence times of the radar target;

if the radar target does not appear in the last frame, rejecting the radar target;

judging whether the continuous occurrence frequency of the radar target is greater than or equal to a first preset value or not, wherein,

if the continuous occurrence frequency of the radar target is greater than or equal to a first preset value, the radar target is a stably detected target and is output as an effective radar target;

and if the continuous occurrence frequency of the radar target is less than a first preset value, waiting for the next frame to repeat the judgment.

Further, the performing stability detection on the radar target in the radar data and screening out an effective radar target further includes the following steps when the stably detected target disappears in the current frame:

judging whether the number of times of stably detecting the disappearance of the target is greater than or equal to a second preset value or not, wherein,

if the number of times that the stably detected target disappears is greater than or equal to a second preset value, rejecting the stably detected target;

if the number of times that the stably detected target disappears is smaller than a second preset value, adding 1 to the accumulated number of disappearances, and judging whether the stably detected target reappears in the next frame; wherein,

if the stably detected target reappears in the next frame, clearing the accumulated value of the disappearance times;

and if the stably detected target does not reappear in the next frame, repeatedly executing the steps.

Further, fusing the effective radar target and the camera target according to the intersection-over-minimum (IOM) ratio formula, and screening and outputting the fused result comprises,

acquiring the intersection-over-minimum ratio of the effective radar target and the camera target according to the IOM calculation formula;

judging whether the IOM is larger than a third preset value, wherein,

if the IOM is larger than the third preset value, the effective radar target and the camera target are the same target, and 1 is added to the accumulated judgment times;

judging whether the judging times are more than or equal to a fourth preset value or not, wherein,

if the judgment times are larger than or equal to a fourth preset value, outputting a fusion target result;

and if the judgment times are smaller than a fourth preset value, repeating the steps on the effective radar target and the camera target in the next frame.

Further, the intersection-over-minimum ratio of the effective radar target to the camera target is:

the ratio of the overlapping area of the effective radar target candidate frame and the camera target candidate frame to the smaller of the area of the effective radar target candidate frame and the area of the camera target candidate frame.

Further, the method further comprises obtaining a candidate box for the valid radar target:

converting the effective radar target from radar coordinates to the image plane coordinate system based on the H matrix, wherein the candidate frame of the effective radar target is scale·[w, h], wherein,

scale is the scaling factor of the candidate frame and is determined as a function of y such that scale = 1/m at y = r_min and scale = n at y = r_max,

wherein r_min is the minimum value of the radar detection range, r_max is the maximum value of the radar detection range, m and n are constants that are integers greater than 0, and y is the y-coordinate of the effective radar target in the radar plane coordinate system;

w and h are respectively the width and height of the candidate frame corresponding to a target set at the middle value (r_min + r_max)/2 of the radar detection depth range.

It is another object of the present invention to provide a camera and radar based object detection system, said system comprising a radar and a camera, and further comprising,

the acquisition unit is used for acquiring radar data detected by a radar and images synchronously acquired by a camera;

the filtering unit is used for performing stability detection on the radar target in the radar data and screening out an effective radar target;

an image detection unit for detecting a camera target and target information of the camera target in an image by a deep learning method;

and the fusion unit is used for fusing the effective radar target and the camera target according to an intersection-over-minimum (IOM) ratio formula, and screening and outputting a fused target result.

Further, the system further comprises a setting unit for:

establishing an image plane coordinate system, a radar plane coordinate system and a target plane coordinate system;

and calibrating the structural parameters of the image plane and the radar plane, and obtaining a conversion relation H matrix from the radar plane to the image plane.

Further, the radar data comprises position coordinates, speed, distance and azimuth information of the radar target under a radar plane coordinate system;

the target information of the camera target comprises a candidate frame position, a type and a size;

the fused target result includes the size and type of the camera target and the distance and speed of the corresponding valid radar target.

Further, the filtering unit is further configured to perform the following steps:

determining whether the radar target is first present, wherein,

if the radar target appears for the first time, the continuous occurrence frequency of the radar target is equal to 1,

if the radar target does not appear for the first time, judging whether the radar target appears in the last frame or not, wherein,

if the radar target appears in the last frame, accumulating and adding 1 to the continuous occurrence times of the radar target;

if the radar target does not appear in the last frame, rejecting the radar target;

judging whether the continuous occurrence frequency of the radar target is greater than or equal to a first preset value or not, wherein,

if the continuous occurrence frequency of the radar target is greater than or equal to a first preset value, the radar target is a stably detected target and is output as an effective radar target;

and if the continuous occurrence frequency of the radar target is less than a first preset value, waiting for the next frame to repeat the judgment.

Further, the filtering unit is further configured to, when the stably detected target disappears in the current frame, perform the following steps:

judging whether the number of times of stably detecting the disappearance of the target is greater than or equal to a second preset value or not, wherein,

if the number of times that the stably detected target disappears is greater than or equal to a second preset value, rejecting the stably detected target;

if the number of times that the stably detected target disappears is smaller than a second preset value, adding 1 to the accumulated number of disappearances, and judging whether the stably detected target reappears in the next frame; wherein,

if the stably detected target reappears in the next frame, clearing the accumulated value of the disappearance times;

and if the stably detected target does not reappear in the next frame, repeatedly executing the steps.

Further, the fusion unit is further configured to perform the following steps:

acquiring the intersection-over-minimum (IOM) ratio of the effective radar target and the camera target according to the IOM calculation formula;

judging whether the IOM is larger than a third preset value, wherein,

if the IOM is larger than the third preset value, the effective radar target and the camera target are the same target, and 1 is added to the accumulated judgment times;

judging whether the judging times are more than or equal to a fourth preset value or not, wherein,

if the judgment times are larger than or equal to a fourth preset value, outputting a fusion target result;

and if the judgment times are smaller than a fourth preset value, repeating the steps on the effective radar target and the camera target in the next frame.

Further, the intersection-over-minimum ratio of the effective radar target to the camera target is:

the ratio of the overlapping area of the effective radar target candidate frame and the camera target candidate frame to the smaller of the area of the effective radar target candidate frame and the area of the camera target candidate frame.

Further, the system further comprises a processing unit for obtaining candidate boxes for the valid radar target:

converting the effective radar target from radar coordinates to the image plane coordinate system based on the H matrix, wherein the candidate frame of the effective radar target is scale·[w, h], wherein,

scale is the scaling factor of the candidate frame and is determined as a function of y such that scale = 1/m at y = r_min and scale = n at y = r_max,

wherein r_min is the minimum value of the radar detection range, r_max is the maximum value of the radar detection range, m and n are constants that are integers greater than 0, and y is the y-coordinate of the effective radar target in the radar plane coordinate system;

w and h are respectively the width and height of the candidate frame corresponding to a target set at the middle value (r_min + r_max)/2 of the radar detection depth range.

Compared with a detection system based on a camera alone or on a radar alone, the target detection method of the present invention has stronger robustness, can achieve all-weather detection, and has a lower false alarm rate; by combining the advantages of the camera and the radar and fusing their detection results, the obtained target information is more accurate and comprehensive.

Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.

Fig. 1 is a schematic flow chart illustrating a camera and radar-based target detection method in an embodiment of the present invention;

FIG. 2 shows a schematic view of a field setup of a radar and camera in an embodiment of the invention;

FIG. 3 is a schematic diagram illustrating a radar target filtering process according to an embodiment of the present invention;

FIG. 4 is a schematic diagram illustrating another radar target filtering process in an embodiment of the present invention;

FIG. 5 is a schematic flow chart illustrating fusion of valid radar targets with camera targets according to an embodiment of the present invention;

FIG. 6 is a schematic diagram of the determination of an effective radar target candidate frame and a camera target candidate frame according to an embodiment of the present invention;

fig. 7 shows a schematic structural diagram of a target detection system based on a camera and a radar in an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

As shown in fig. 1, an embodiment of the present invention introduces a target detection method based on a camera and a radar. The method includes: first, acquiring radar data detected by the radar and an image acquired synchronously by the camera; second, performing stability detection on the radar targets in the radar data to screen out effective radar targets; then, detecting the camera targets in the image and the target information of the camera targets by a deep learning method; and finally, fusing the effective radar targets and the camera targets according to an intersection-over-minimum (IOM) ratio formula, and screening and outputting the fused target results. Compared with a detection system based on a camera alone or on a radar alone, the detection method has stronger robustness, can achieve all-weather detection, and has a lower false alarm rate; by combining the advantages of the camera and the radar and fusing the detection results, the obtained target information is more accurate and comprehensive.

In this embodiment, the method further includes: first, installing and adjusting the positions of the radar and the camera so that they have a common field of view; then, establishing an image plane coordinate system, a radar plane coordinate system and a target plane coordinate system; and finally, calibrating the structural parameters of the image plane and the radar plane. Specifically, as shown in fig. 2, the system includes a camera and a radar. The detection plane of the radar is perpendicular to the ground; the angle between the camera and the radar is adjusted according to the actual shooting field of view, and the relative position of the camera and the radar is fixed after the adjustment is completed. The image plane coordinate system is O-uv, the radar plane coordinate system is O_r-x_r y_r, and the target plane coordinate system is O_t-x_t y_t. Since the radar detection area is only a horizontal plane, the radar plane coordinate system can be placed on the ground; in the embodiment of the invention, the radar plane coordinate system can also be regarded as the world coordinate system. The radar plane coordinate system takes the top view of the detection plane as its viewing angle, with the rightward direction as the positive x-axis and the forward direction as the positive y-axis; it should be noted that, in the embodiment of the invention, the rightward and forward directions are given with reference to fig. 2, but are not limited thereto and are related to the radar plane. Preferably, the system further comprises an upper computer; sensors are arranged in the radar and the camera, the data collected by the sensors are transmitted to the upper computer, and the upper computer processes the image data and the radar data in real time, completes the data fusion of the image data and the radar data, and outputs the size, category, position and speed information of the target.

After the coordinate systems are established, the system calibrates the structural parameters of the image plane and the radar plane, which specifically comprises the following process:

First, a planar target is placed on the ground within the field of view, the camera shoots the target, at least 4 feature points are extracted from the target image, and the homography matrix H_ti between the image plane coordinate system and the target plane coordinate system fixed to the target is calculated, satisfying

p = H_ti · P_t    (1)

where p is the coordinate of the camera target in the image, and P_t is the coordinate of the camera target in real space, i.e. in the target plane coordinate system.

Then, the placement angle of the target relative to the radar plane coordinate system is measured with a measuring ruler, and the translation of the origin of the target plane coordinate system relative to the radar plane coordinate system is calculated, thereby obtaining the homography matrix H_tr from the target plane coordinate system to the radar plane coordinate system, so that

P_r = H_tr · P_t    (2)

where P_r is the coordinate of the camera target in the radar plane coordinate system.

Then, from formulas (1) and (2):

p = H_ti · H_tr^(-1) · P_r    (3)

According to formula (3), a target detected by the radar can be converted from the radar plane coordinate system to the image plane coordinate system, and the detection result is thereby mapped from the radar plane to the image plane. Further, in the embodiment of the present invention, the structural parameters include the conversion H matrix between the radar plane and the image plane, which from formula (3) is

H = H_ti · H_tr^(-1)    (4)

Through the H matrix, the target coordinates detected by the radar can be converted from the radar plane coordinate system to the image plane coordinate system, so that the radar detection result and the camera detection result are fused in the image plane coordinate system.
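For illustration, a minimal sketch of this calibration and conversion, assuming OpenCV's findHomography for estimating H_ti and using made-up point coordinates, placement angle and translation in place of real measurements:

```python
import numpy as np
import cv2

# Sketch of the calibration in formulas (1)-(4). The point coordinates,
# placement angle and translation below are example values, not measurements
# from the embodiment.

# >= 4 feature points of the planar target: pixel coordinates in the image
# plane and metric coordinates in the target plane coordinate system.
img_pts = np.array([[320, 410], [560, 405], [565, 300], [330, 295]], dtype=np.float32)
tgt_pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]], dtype=np.float32)

# Formula (1): p = H_ti * P_t, estimated from the point correspondences.
H_ti, _ = cv2.findHomography(tgt_pts, img_pts)

# Formula (2): P_r = H_tr * P_t, built from the measured placement angle and
# the translation of the target-plane origin in the radar plane.
theta = np.deg2rad(15.0)            # placement angle (example value)
tx, ty = 0.5, 6.0                   # origin translation in metres (example values)
H_tr = np.array([[np.cos(theta), -np.sin(theta), tx],
                 [np.sin(theta),  np.cos(theta), ty],
                 [0.0,            0.0,           1.0]])

# Formulas (3)-(4): H = H_ti * H_tr^(-1) maps radar-plane coordinates to the image plane.
H = H_ti @ np.linalg.inv(H_tr)

def radar_to_image(x_r, y_r, H=H):
    """Map a radar-plane point (x_r, y_r) to pixel coordinates (u, v)."""
    p = H @ np.array([x_r, y_r, 1.0])
    return p[0] / p[2], p[1] / p[2]
```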

In this embodiment, the radar uploads the detected radar data to the upper computer at a certain frequency, and the radar data includes the position coordinates, speed, distance and azimuth information of the radar target in the radar plane coordinate system. The camera also acquires images at a certain frequency, and the target information of the camera target obtained through deep learning comprises the position, type and size of the candidate frame; the fused target result includes the size and type of the camera target and the distance and speed of the corresponding effective radar target. The camera target and the corresponding effective radar target in a fused target result are the same target. Preferably, the uploading frequency is 8 Hz. Further, the radar data and the camera image are stored in global variables of the system, and the radar targets and camera targets are fused based on these global variables. It should be noted that the type in the target information refers to the category of the target; for example, the types of targets may include humans, animals and cars, among others.
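For illustration, the per-frame data described above could be held in simple containers of the following kind (the field names are assumptions for this sketch, not identifiers from the embodiment):

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class RadarTarget:
    target_id: int            # identifier assigned by the radar
    x: float                  # position in the radar plane coordinate system (m)
    y: float
    speed: float              # m/s
    distance: float           # m
    azimuth: float            # degrees

@dataclass
class CameraTarget:
    box: Tuple[float, float, float, float]   # candidate frame (u, v, w, h) in the image plane
    category: str                            # type of the target, e.g. "person", "car"
    size: Tuple[float, float]                # size of the target

@dataclass
class FusedTarget:
    size: Tuple[float, float]  # from the camera target
    category: str              # from the camera target
    distance: float            # from the matched effective radar target
    speed: float               # from the matched effective radar target
```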

In this embodiment, false detections and missed detections of radar targets may be caused by environmental interference, for example by perimeter reflection echoes. Stability detection of the radar targets in the radar data is therefore performed; it is in effect a filtering method for the radar detection data, which can effectively remove invalid targets and retain effective radar targets, thereby reducing false detections and missed detections. As shown in fig. 3, checking whether a target detected by the radar is a false detection specifically includes the following steps:

S11, determining whether a target detected by the radar appears for the first time; if so, setting the consecutive-occurrence count cnt_app of the detected target to 1; if not, executing step S12;

S12, judging whether the target detected by the radar appeared in the previous frame; if it did, executing step S13; if it did not, discarding the target detected by the radar and clearing its consecutive-occurrence count, i.e. cnt_app = 0;

S13, incrementing the consecutive-occurrence count of the radar target by 1 (i.e. cnt_app = cnt_app + 1), and executing step S14;

S14, judging whether the consecutive-occurrence count cnt_app is greater than or equal to n_app; if cnt_app ≥ n_app, the target detected by the radar is a stable target, and step S15 is executed; if cnt_app < n_app, temporarily storing the target and waiting for the next frame to repeat the judgment of steps S11-S14;

S15, outputting the target detected by the radar as an effective radar target, or adding the effective radar target to the radar output list, and returning the list result after all radar targets in the current frame have been checked.

Judging whether the consecutive-occurrence count of a target detected by the radar is greater than or equal to n_app is equivalent to judging whether the target has been detected continuously for n_app frames; if it has, the target is considered to be in a stable detection state, which reduces false detections.

In this embodiment, during radar target detection, a stably detected target (one that has appeared for at least n_app frames) may disappear in a certain frame. To reduce missed detections in this case, as shown in fig. 4, after a stably detected target disappears in the current frame the method specifically includes the following steps:

S21, judging whether the disappearance count cnt_rem of the stably detected target is greater than or equal to n_rem; if cnt_rem ≥ n_rem, removing the stably detected target from the display list, i.e. deleting it from the radar output list; if cnt_rem < n_rem, executing step S22;

S22, incrementing the disappearance count cnt_rem of the stably detected target by 1 (i.e. cnt_rem = cnt_rem + 1); the stably detected target is still kept in the radar display list (also called the output list) and is assumed to keep moving at its last observed speed; then executing step S23;

S23, determining whether the stably detected target reappears in the next frame; if it reappears, clearing cnt_rem to zero, i.e. cnt_rem = 0; if it does not reappear, executing step S21 again.

In this embodiment, the radar targets in the output list are all effective radar targets. The radar target detection process not only filters targets according to whether they appear continuously, but also handles the temporary disappearance of stably detected targets, thereby ensuring the accuracy of radar target detection.
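The filtering logic of figs. 3 and 4 can be summarized in a short sketch; keeping per-target counters in dictionaries keyed by a radar target id, and the example values of n_app and n_rem, are assumptions made only for illustration:

```python
# Sketch of the stability filter (steps S11-S15 and S21-S23).

N_APP = 3   # first preset value n_app (example value)
N_REM = 3   # second preset value n_rem (example value)

cnt_app = {}       # consecutive-occurrence count per target id
cnt_rem = {}       # disappearance count per stably detected target id
stable = set()     # ids of stably detected targets
prev_ids = set()   # ids seen in the previous frame

def filter_radar_frame(frame_ids):
    """Return the ids output as effective radar targets for this frame."""
    global prev_ids
    frame_ids = set(frame_ids)
    output = set()

    # S11-S15: consecutive-appearance check for targets present in this frame.
    for tid in frame_ids:
        if tid not in cnt_app:                  # S11: first appearance
            cnt_app[tid] = 1
        elif tid in prev_ids or tid in stable:  # S12/S13: continued track
            cnt_app[tid] += 1
        else:                                   # S12: gap before becoming stable
            cnt_app[tid] = 0
            continue
        if cnt_app[tid] >= N_APP:               # S14/S15: stable target -> output
            stable.add(tid)
            cnt_rem.setdefault(tid, 0)
            output.add(tid)

    # S21-S23: tolerate short disappearances of stably detected targets.
    for tid in list(stable):
        if tid in frame_ids:
            cnt_rem[tid] = 0                    # S23: reappeared, reset the miss count
        elif cnt_rem[tid] >= N_REM:             # S21: missing too long -> remove
            stable.discard(tid)
            cnt_app.pop(tid, None)
            cnt_rem.pop(tid, None)
        else:                                   # S22: count the miss, keep in the output
            cnt_rem[tid] += 1
            output.add(tid)

    prev_ids = frame_ids
    return output
```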

In this embodiment, each effective radar target in each frame is fused with all camera targets. Specifically, as shown in fig. 5, fusing the effective radar target and the camera target according to the intersection-over-minimum (IOM) formula and screening the fused target result includes the following steps:

S31, acquiring the intersection-over-minimum ratio IOM of the effective radar target and the camera target according to the IOM calculation formula;

S32, judging whether the IOM is larger than a preset value T_fusion; if IOM > T_fusion, the effective radar target and the camera target are determined to be the same target, the judgment count cnt_n is incremented by 1 (i.e. cnt_n = cnt_n + 1), and step S33 is executed; if IOM ≤ T_fusion, the fusion fails;

S33, judging whether the judgment count cnt_n is greater than or equal to the preset count n_fusion; if cnt_n ≥ n_fusion, outputting the fused target result; if cnt_n < n_fusion, executing step S34;

S34, repeating steps S31-S33 for the effective radar target and the camera target in the next frame. A radar target and a camera target that belong to the same target are thus verified over multiple frames before fusion, which ensures the accuracy of target fusion.

In this example, T_fusion can be taken as 0.5, and n_fusion is an integer greater than 0.

In this embodiment, the intersection-over-minimum ratio of the effective radar target to the camera target is the ratio of the overlapping area of the effective radar target candidate frame and the camera target candidate frame to the smaller of the effective radar target candidate frame area S_A and the camera target candidate frame area S_B, and is calculated according to the formula:

IOM = S_I / min(S_A, S_B)    (5)

where S_I is the overlapping area of the two candidate frames.

As shown in fig. 6, the candidate frames of the radar target and the camera target are both rectangles, so the degree of overlap of the two rectangles is calculated and used to judge whether they represent the same region, and hence whether the effective radar target and the camera target are the same target, reducing fusion errors.
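A minimal sketch of the IOM of formula (5) and the per-frame confirmation of steps S31-S34, assuming axis-aligned (u, v, w, h) rectangles in the image plane; how the judgment count behaves after a failed frame is not specified above, so the sketch simply leaves it unchanged:

```python
T_FUSION = 0.5   # third preset value T_fusion
N_FUSION = 3     # fourth preset value n_fusion (example value)

def iom(box_a, box_b):
    """Intersection-over-minimum of two boxes given as (u, v, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    return (iw * ih) / min(aw * ah, bw * bh)

def update_fusion(cnt_n, radar_box, camera_box):
    """One per-frame fusion check (steps S31-S33) for a radar/camera pair.
    Returns the updated judgment count and whether the fused result is output."""
    if iom(radar_box, camera_box) > T_FUSION:   # S32: treated as the same target
        cnt_n += 1
        return cnt_n, cnt_n >= N_FUSION         # S33: output once confirmed n_fusion times
    return cnt_n, False                         # fusion fails for this frame
```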

Therefore, in order to fuse the radar target and the camera target, it is further required to acquire a candidate frame of the valid radar target, specifically:

converting the effective radar target from radar coordinates to the image plane coordinate system based on the H matrix, wherein the candidate frame of the effective radar target is scale·[w, h], wherein,

scale is the scaling factor of the candidate frame and is determined as a function of the target's depth, such that scale = 1/m at y = r_min and scale = n at y = r_max,

where r_min is the minimum value of the radar detection range, r_max is the maximum value of the radar detection range, m and n are constants that are integers greater than 0, and y is the y-coordinate of the effective radar target in the radar plane coordinate system. Specifically, to calculate the IOM, the radar target needs to be extended from a point to an area. As shown in fig. 6, let the radar detection range be [r_min, r_max] (not shown in the figure); w and h are respectively the width and height of the candidate frame corresponding to a target set at the middle value (r_min + r_max)/2 of the radar detection depth range; at r_min the candidate frame of the radar target is 1/m of the size of this middle-value frame, and at r_max it is n times that size. The size of the target candidate frame at an arbitrary depth can thus be determined accordingly.
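For illustration only, a sketch of constructing the radar candidate frame scale·[w, h]; because the explicit scale formula is not reproduced here, a linear interpolation between the stated endpoint values is assumed, and centring the frame on the projected radar point is likewise an assumption:

```python
import numpy as np

R_MIN, R_MAX = 2.0, 50.0   # radar detection range in metres (example values)
M, N = 2, 3                # the constants m and n (example values)
W_MID, H_MID = 40.0, 80.0  # frame width/height in pixels at depth (R_MIN + R_MAX) / 2

def radar_candidate_frame(x_r, y_r, H):
    """Candidate frame (u, v, w, h) in the image plane for an effective radar
    target at (x_r, y_r), using the radar-to-image H matrix from formula (4)."""
    scale = 1.0 / M + (N - 1.0 / M) * (y_r - R_MIN) / (R_MAX - R_MIN)  # assumed form
    w, h = scale * W_MID, scale * H_MID
    p = H @ np.array([x_r, y_r, 1.0])          # map the radar point to the image plane
    u, v = p[0] / p[2], p[1] / p[2]
    return u - w / 2.0, v - h / 2.0, w, h      # frame centred on the projected point
```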

The embodiment of the invention also discloses a target detection system based on a camera and a radar. As shown in fig. 7, the system comprises the radar and the camera, and further comprises an acquisition unit, a filtering unit, an image detection unit, a fusion unit, a setting unit and a processing unit. The acquisition unit is used for acquiring the radar data detected by the radar and the image synchronously acquired by the camera; the filtering unit is used for performing stability detection on the radar targets in the radar data and screening out effective radar targets; the image detection unit is used for detecting the camera targets in the image and the target information of the camera targets by a deep learning method; and the fusion unit is used for fusing the effective radar targets and the camera targets according to the intersection-over-minimum ratio formula, and screening and outputting the fused target results. The setting unit is used for first establishing an image plane coordinate system, a radar plane coordinate system and a target plane coordinate system, and then calibrating the structural parameters of the image plane and the radar plane and obtaining the conversion H matrix from the radar plane to the image plane.

In this embodiment, the radar data includes position coordinates, speed, distance, and azimuth information of the radar target in a radar plane coordinate system; the target information of the camera target comprises a candidate frame position, a type and a size; the fused target result includes the size and type of the camera target and the distance and speed of the corresponding valid radar target.

The filtering unit is further configured to perform the above steps S11-S15 and S21-S23.

The fusion unit is also used for executing steps S31-S34. The radar target detection process not only filters targets according to whether they appear continuously, but also handles the temporary disappearance of stably detected targets, thereby ensuring the accuracy of radar target detection.

The intersection-over-minimum ratio of the effective radar target to the camera target is the ratio of the overlapping area of the effective radar target candidate frame and the camera target candidate frame to the smaller of the effective radar target candidate frame area S_A and the camera target candidate frame area S_B, and is calculated according to the formula:

IOM = S_I / min(S_A, S_B)    (5)

where S_I is the overlapping area of the two candidate frames.

As shown in fig. 6, the candidate frames of the radar target and the camera target are both rectangles, so the degree of overlap of the two rectangles is calculated and used to judge whether they represent the same region, and hence whether the effective radar target and the camera target are the same target, reducing fusion errors.

Therefore, in order to fuse the radar target and the camera target, the candidate frame of the effective radar target also needs to be obtained. Specifically, the processing unit is configured to convert the effective radar target from radar coordinates to the image plane coordinate system based on the H matrix, wherein the candidate frame of the effective radar target is scale·[w, h], wherein,

w is the width of the candidate frame, h is the height of the candidate frame, and scale is the scaling factor of the candidate frame, determined as a function of the target's depth such that scale = 1/m at y = r_min and scale = n at y = r_max,

where r_min is the minimum value of the radar detection range, r_max is the maximum value of the radar detection range, m and n are constants that are integers greater than 0, and y is the y-coordinate of the effective radar target in the radar plane coordinate system. Specifically, to calculate the IOM, the radar target needs to be extended from a point to an area. As shown in fig. 6, let the radar detection range be [r_min, r_max] (not shown in the figure); w and h are respectively the width and height of the candidate frame corresponding to a target set at the middle value (r_min + r_max)/2 of the radar detection depth range; at r_min the candidate frame of the radar target is 1/m of the size of this middle-value frame, and at r_max it is n times that size. The size of the target candidate frame at an arbitrary depth can thus be determined accordingly.

Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
