Obstacle sensing method and device and electronic equipment


Abstract

(Obstacle sensing method and device and electronic equipment; designed and created by 黎明慧, 廖毅雄, 马福龙, and 刘明 on 2021-01-29.)

The embodiments of the present invention relate to the technical field of automatic driving, and in particular to an obstacle sensing method, an obstacle sensing apparatus, and an electronic device. The method comprises: acquiring a point cloud image of a single-line radar and a monocular camera image; performing target detection on the monocular camera image to obtain the obstacle type and prediction frames in the monocular camera image; synchronizing the point cloud image and the monocular camera image; clustering the point clouds in the point cloud image to obtain mark frames; putting the point cloud and the mark frames into the monocular camera image to obtain the pixel coordinates of the point cloud; acquiring the target obstacles whose prediction frame and mark frame overlap; and determining the distance between each acquired target obstacle and the vehicle. Because the target obstacle identified from the monocular camera image is accurate, a reliable prediction frame is obtained, and the distance between the target obstacle and the vehicle provided with the single-line radar can be accurately determined from the pixel coordinates of the single-line radar point cloud in the monocular camera image.

1. An obstacle sensing method applied to a vehicle, wherein the vehicle is provided with a single line radar and a monocular camera, characterized in that the method comprises:

acquiring a point cloud image and a monocular camera image of a single line radar;

carrying out target detection on the monocular camera image to obtain the type of the obstacle and a prediction frame in the monocular camera image;

performing synchronous processing on the point cloud image and the monocular camera image;

clustering the point clouds in the point cloud images to obtain a mark frame;

putting the point cloud and the mark frame into the monocular camera image, and obtaining a pixel coordinate of the point cloud;

acquiring a target obstacle whose prediction frame and mark frame overlap;

and determining the distance between the acquired target obstacle and the vehicle according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame.

2. The method of claim 1, wherein the step of synchronizing the point cloud image and the monocular camera image further comprises:

when the single-line radar detects point cloud, setting a timestamp for the point cloud image detected by the single-line radar according to a preset clock source;

when the monocular camera detects an image, setting a timestamp for the image of the monocular camera detected by the monocular camera according to the preset clock source;

and taking the point cloud image and the monocular camera image of which the time stamp interval is smaller than a preset threshold value as the same frame data.

3. The method of claim 1, wherein the step of determining the distance between the acquired target obstacle and the vehicle according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame further comprises:

obtaining the orientation of the target obstacle according to the pixel coordinates of the target point cloud;

obtaining a 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame;

acquiring a target vertex with the minimum distance to the single-line radar among the eight vertexes of the 3D detection frame;

and determining the distance between the target obstacle and the vehicle according to the distance between the target vertex and the single line radar.

4. The method of claim 3, wherein the step of obtaining a 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame further comprises:

acquiring four vertexes of the prediction frame;

and generating the 3D detection frame by taking the four vertexes as diagonal points of the 3D detection frame according to the orientation of the target obstacle.

5. The method according to claim 4, wherein the step of generating the 3D detection frame with the four vertices as diagonal points of the 3D detection frame according to the orientation of the target obstacle further comprises:

according to the pixel coordinates of the point cloud, acquiring the pixel coordinates (x_min, y_min) of the point closest to the single-line radar among the four vertexes, the width w of the prediction frame, and the height h of the prediction frame;

and generating the eight vertexes of the 3D detection frame according to the pixel coordinates (x_min, y_min) of the point closest to the single-line radar among the four vertexes, the width w of the prediction frame, and the height h of the prediction frame, wherein the lines connecting the eight vertexes form the 3D detection frame.

6. The method of claim 5, wherein the coordinates of the eight vertexes of the 3D detection frame are respectively: (x_min, y_min), (x_min, y_min+h), (x_min+w, y_min), (x_min+w, y_min+h), (x_min+0.8*w, y_min+0.1*h), (x_min+0.8*w, y_min+1.1*h), (x_min+0.2*w, y_min-0.15*h), and (x_min+0.2*w, y_min+0.85*h).

7. An obstacle sensing apparatus applied to a vehicle, wherein the vehicle is provided with a single line radar and a monocular camera, characterized in that the apparatus comprises:

the first acquisition module is used for acquiring a point cloud image of a single line radar and a monocular camera image;

the detection module is used for carrying out target detection on the monocular camera image to obtain the type of the obstacle and a prediction frame in the monocular camera image;

the synchronization module is used for carrying out synchronization processing on the point cloud image and the monocular camera image;

the clustering module is used for clustering the point cloud in the point cloud image to obtain a mark frame;

the coordinate conversion module is used for putting the point cloud and the mark frame into the monocular camera image and obtaining the pixel coordinate of the point cloud;

the second acquisition module is used for acquiring a target obstacle whose prediction frame and mark frame overlap;

and the determining module is used for determining the distance between the acquired target obstacle and the vehicle according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame.

8. The apparatus of claim 7, wherein the synchronization module comprises:

the first setting unit is used for setting a timestamp for the point cloud image detected by the single-line radar according to a preset clock source when the single-line radar detects the point cloud;

the second setting unit is used for setting a timestamp for the monocular camera image detected by the monocular camera according to the preset clock source when the monocular camera detects the image;

and the synchronization unit is used for taking the point cloud image and the monocular camera image of which the interval of the time stamps is smaller than a preset threshold value as the same frame data.

9. The apparatus of claim 7, wherein the determining module comprises:

the first acquisition unit is used for acquiring the orientation of the target obstacle according to the pixel coordinates of the target point cloud;

a second obtaining unit, configured to obtain a 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame;

a third obtaining unit, configured to obtain a target vertex with a smallest distance to the single line radar from among eight vertices of the 3D detection frame;

and the determining unit is used for determining the distance between the target obstacle and the vehicle according to the distance between the target vertex and the single line radar.

10. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform the method of any of claims 1-6.

Technical Field

The embodiment of the invention relates to the technical field of automatic driving, in particular to a method and a device for sensing obstacles and electronic equipment.

Background

An autonomous vehicle is a vehicle that can start, run, and stop without a driver. With the development of unmanned driving technology, autonomous vehicles will gradually become part of people's daily life. Automatic driving relies on the vehicle's perception of surrounding obstacles. To perceive the obstacles around an autonomous vehicle, a multi-line lidar is usually mounted on it; the multi-line lidar scans a 3D model of the environment around the vehicle, and a related algorithm compares the changes between the environment of the previous frame and that of the next frame, so that surrounding vehicles and pedestrians can easily be detected.

However, in implementing the embodiments of the present invention, the inventors found that the multi-line lidar is expensive and increases the cost of the autonomous vehicle, whereas the single-line radar is inexpensive but, because the point cloud it acquires is sparse, obstacles such as vehicles and pedestrians around the autonomous vehicle are difficult to detect from it.

Disclosure of Invention

In view of the above problems, embodiments of the present invention provide an obstacle sensing method, an obstacle sensing apparatus, and an electronic device, which overcome or at least partially solve the above problems.

According to an aspect of the embodiments of the present invention, there is provided an obstacle sensing method applied to a vehicle, wherein the vehicle is provided with a single-line radar and a monocular camera, the method including: acquiring a point cloud image of the single-line radar and a monocular camera image; performing target detection on the monocular camera image to obtain the obstacle type and a prediction frame in the monocular camera image; performing synchronous processing on the point cloud image and the monocular camera image; clustering the point clouds in the point cloud image to obtain a mark frame; putting the point cloud and the mark frame into the monocular camera image, and obtaining the pixel coordinates of the point cloud; acquiring a target obstacle whose prediction frame and mark frame overlap; and determining the distance between the acquired target obstacle and the vehicle according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame.

In an optional manner, the step of synchronizing the point cloud image and the monocular camera image further includes: when the single-line radar detects point cloud, setting a timestamp for the point cloud image detected by the single-line radar according to a preset clock source; when the monocular camera detects an image, setting a timestamp for the image of the monocular camera detected by the monocular camera according to the preset clock source; and taking the point cloud image and the monocular camera image of which the time stamp interval is smaller than a preset threshold value as the same frame data.

In an optional manner, the step of determining the distance between the acquired target obstacle and the vehicle according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame further includes: obtaining the orientation of the target obstacle according to the pixel coordinates of the target point cloud; obtaining a 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame; acquiring a target vertex with the minimum distance to the single-line radar in eight vertexes of the 3D detection frame; and determining the distance between the target obstacle and the vehicle according to the distance between the target vertex and the single line radar.

In an optional manner, the step of obtaining a 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame further includes: acquiring four vertexes of the prediction frame; and generating the 3D detection frame by taking the four vertexes as diagonal points of the 3D detection frame according to the orientation of the target obstacle.

In an optional manner, the step of generating the 3D detection frame by taking the four vertexes as diagonal points of the 3D detection frame according to the orientation of the target obstacle further includes: acquiring, according to the pixel coordinates of the point cloud, the pixel coordinates (x_min, y_min) of the point closest to the single-line radar among the four vertexes, the width w of the prediction frame, and the height h of the prediction frame; and generating the eight vertexes of the 3D detection frame according to the pixel coordinates (x_min, y_min) of the point closest to the single-line radar among the four vertexes, the width w of the prediction frame, and the height h of the prediction frame, wherein the lines connecting the eight vertexes form the 3D detection frame.

In an alternative manner, the coordinates of the eight vertexes of the 3D detection frame are: (x_min, y_min), (x_min, y_min+h), (x_min+w, y_min), (x_min+w, y_min+h), (x_min+0.8*w, y_min+0.1*h), (x_min+0.8*w, y_min+1.1*h), (x_min+0.2*w, y_min-0.15*h), and (x_min+0.2*w, y_min+0.85*h).

According to an aspect of the embodiments of the present invention, there is provided an obstacle sensing apparatus applied to a vehicle provided with a single-line radar and a monocular camera, the apparatus including: a first acquisition module, used for acquiring a point cloud image of the single-line radar and a monocular camera image; a detection module, used for performing target detection on the monocular camera image to obtain the obstacle type and a prediction frame in the monocular camera image; a synchronization module, used for performing synchronous processing on the point cloud image and the monocular camera image; a clustering module, used for clustering the point clouds in the point cloud image to obtain a mark frame; a coordinate conversion module, used for putting the point cloud and the mark frame into the monocular camera image and obtaining the pixel coordinates of the point cloud; a second acquisition module, used for acquiring a target obstacle whose prediction frame and mark frame overlap; and a determining module, used for determining the distance between the acquired target obstacle and the vehicle according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame.

In an alternative form, the synchronization module includes: the first setting unit is used for setting a timestamp for the point cloud image detected by the single-line radar according to a preset clock source when the single-line radar detects the point cloud; the second setting unit is used for setting a timestamp for the monocular camera image detected by the monocular camera according to the preset clock source when the monocular camera detects the image; and the synchronization unit is used for taking the point cloud image and the monocular camera image of which the interval of the time stamps is smaller than a preset threshold value as the same frame data.

In an alternative, the determining module includes: a first acquisition unit, used for acquiring the orientation of the target obstacle according to the pixel coordinates of the target point cloud; a second obtaining unit, configured to obtain a 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame; a third obtaining unit, configured to obtain a target vertex with the smallest distance to the single-line radar from among the eight vertexes of the 3D detection frame; and a determining unit, used for determining the distance between the target obstacle and the vehicle according to the distance between the target vertex and the single-line radar.

In an optional manner, the second obtaining unit is specifically configured to: acquiring four vertexes of the prediction frame; and generating the 3D detection frame by taking the four vertexes as diagonal points of the 3D detection frame according to the orientation of the target obstacle.

In an optional manner, the second obtaining unit is further configured to: acquire, according to the pixel coordinates of the point cloud, the pixel coordinates (x_min, y_min) of the point closest to the single-line radar among the four vertexes, the width w of the prediction frame, and the height h of the prediction frame; and generate the eight vertexes of the 3D detection frame according to the pixel coordinates (x_min, y_min) of the point closest to the single-line radar among the four vertexes, the width w of the prediction frame, and the height h of the prediction frame, wherein the lines connecting the eight vertexes form the 3D detection frame.

In an alternative manner, the coordinates of the eight vertexes of the 3D detection frame are: (x_min, y_min), (x_min, y_min+h), (x_min+w, y_min), (x_min+w, y_min+h), (x_min+0.8*w, y_min+0.1*h), (x_min+0.8*w, y_min+1.1*h), (x_min+0.2*w, y_min-0.15*h), and (x_min+0.2*w, y_min+0.85*h).

According to an aspect of an embodiment of the present invention, there is provided an electronic apparatus including: at least one processor, and a memory communicatively coupled to the at least one processor, the memory storing instructions executable by the at least one processor to enable the at least one processor to perform a method as described above.

The embodiments of the present invention have the following beneficial effects. Unlike existing obstacle sensing methods, the method can accurately identify obstacles from the monocular camera image and box each obstacle to obtain a prediction frame; cluster the point cloud of the single-line radar and box each cluster to obtain a mark frame; put the point cloud and the mark frames into the monocular camera image to obtain the pixel coordinates of the point cloud; acquire the obstacles whose prediction frame and mark frame overlap; and determine the distance between each acquired target obstacle and the single-line radar according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame, thereby accurately determining the distance between the target obstacle and the vehicle provided with the single-line radar. This overcomes the drawback that obstacles identified solely from the point cloud acquired by the single-line radar are inaccurate, which in turn makes the distance between such an obstacle and the single-line radar inaccurate.

Drawings

One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements, and the figures are not to scale unless otherwise specified.

Fig. 1 is a schematic flow chart of a method for sensing an obstacle according to an embodiment of the present invention;

FIG. 2 is a schematic flow chart of a method for synchronously processing a point cloud image and a monocular camera image according to an embodiment of the present invention;

FIG. 3 is a schematic flow chart diagram illustrating a method for determining a distance between an acquired target obstacle and a vehicle according to an embodiment of the present invention;

fig. 4 is a schematic diagram of an obstacle sensing apparatus according to an embodiment of the present invention;

fig. 5 is a schematic diagram of a hardware structure of an electronic device that executes an obstacle sensing method according to an embodiment of the present invention.

Detailed Description

In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for descriptive purposes only.

In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.

Example one

Referring to fig. 1, fig. 1 is a schematic flow chart of a method for sensing an obstacle according to an embodiment of the present invention, the method being applied to a vehicle, and the method including the following steps:

Step S10, acquiring the point cloud image of the single-line radar and the monocular camera image.

The single-line radar and the monocular camera are arranged on the vehicle; their positions on the vehicle may be the same or different.

The single-line radar is used for acquiring a point cloud image of obstacles and the distance between each point in the point cloud image and the single-line radar. The coordinates of each point in the world coordinate system are calculated from its distance to the single-line radar. The origin of the world coordinate system may be the position of the single-line radar.

For example, if the distance between the vehicle head and a point is defined as the distance between the vehicle and that point, the distance between the point and the single-line radar can be converted into the distance between the vehicle and the point based on the position of the single-line radar on the vehicle.
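As an illustration, a minimal sketch of this computation, assuming the single-line radar reports each return as a (range, bearing) pair in its scan plane, that the world origin is placed at the radar itself, and that the radar-to-head mounting offset is a hypothetical value:

```python
import numpy as np

# Minimal sketch, assuming the single-line radar reports (range, bearing)
# returns in its scan plane and the world origin is the radar itself.
def scan_to_world(ranges, bearings):
    """Convert radar returns to 2D world coordinates (x forward, y left)."""
    r = np.asarray(ranges)
    b = np.asarray(bearings)
    return np.stack([r * np.cos(b), r * np.sin(b)], axis=1)

# Hypothetical distance from the radar mount to the vehicle head, in meters.
RADAR_TO_HEAD_OFFSET = 0.5

def point_distance_to_head(point_xy):
    """Distance from a point cloud point to the vehicle head."""
    return np.linalg.norm(point_xy) - RADAR_TO_HEAD_OFFSET
```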

The monocular camera is used to acquire images of obstacles.

Step S20, performing target detection on the monocular camera image to obtain the obstacle type and a prediction frame in the monocular camera image.

Target detection on the monocular camera image can use a preset model, which is a deep learning model; for example, the darknet-based YOLOv4 algorithm can be adopted, and the weight file obtained by training YOLOv4 is accelerated by TensorRT to obtain a .trt file. Specifically, the monocular camera image is input to the YOLOv4 model, and the obstacles in the image are detected and recognized with the TensorRT-accelerated model.

When an obstacle is recognized by the preset model, its type is obtained at the same time; the obstacle type may be a car, a pedestrian, an electric bicycle, a minivan, and the like.

The prediction frame is the bounding box drawn around an obstacle found in target detection.

If a plurality of obstacles are recognized in the monocular camera image, a bounding box is drawn around each obstacle separately, and each box is a prediction frame; that is, as many prediction frames are obtained as there are obstacles recognized in the monocular camera image.
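As a sketch of this detection step: the text runs a TensorRT-accelerated YOLOv4 engine, and as a stand-in with the same inputs and outputs, the darknet cfg and weights can be loaded through OpenCV's dnn module (the file paths and thresholds here are assumptions):

```python
import cv2

# Stand-in for the TensorRT-accelerated detector: load darknet YOLOv4 through
# OpenCV's dnn module. Paths and thresholds are illustrative assumptions.
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1.0 / 255, swapRB=True)

def detect_obstacles(image):
    """Return one (class_id, confidence, prediction frame) per obstacle."""
    class_ids, confidences, boxes = model.detect(
        image, confThreshold=0.5, nmsThreshold=0.4)
    # Each box is (x, y, w, h) in pixel coordinates: one prediction frame
    # per obstacle recognized in the monocular camera image.
    return list(zip(class_ids, confidences, boxes))
```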

Step S30, performing synchronous processing on the point cloud image and the monocular camera image.

In some embodiments, referring to fig. 2, the step of synchronizing the point cloud image and the monocular camera image, i.e., step S30, further includes the steps of:

step S301, when the single-line radar detects a point cloud, a timestamp is set for the point cloud image detected by the single-line radar according to a preset clock source.

The preset clock source can be a clock of a master control computer which is respectively connected with the single-line radar and the monocular camera to carry out data interaction, or a clock of a vehicle, or a clock of a Beidou satellite navigation system.

Step S302, when the monocular camera detects an image, a timestamp is set for the monocular camera image detected by the monocular camera according to the preset clock source.

Step S303, the point cloud image and the monocular camera image with the time stamp interval smaller than a preset threshold are used as the same frame data.

In some embodiments, a point cloud image detected by the single-line radar and a monocular camera image detected by the monocular camera whose timestamps differ by less than 3 ms may be used as the same frame data; that is, the point cloud image and the monocular camera image are processed synchronously.

Because the detection periods, the data return periods, and the like differ between the two sensors, the point cloud image detected by the single-line radar and the monocular camera image detected by the monocular camera do not correspond exactly in time. By setting timestamps and a preset threshold, the point cloud image of the single-line radar and the monocular camera image can be associated in time, so that obstacle sensing can be performed accurately on the point cloud image and the monocular camera image belonging to the same frame data.
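A minimal sketch of this pairing, assuming each sensor message carries a timestamp in seconds taken from the shared clock source (the message layout is illustrative):

```python
SYNC_THRESHOLD_S = 0.003  # 3 ms, the example threshold from the text

def pair_same_frame(radar_msgs, camera_msgs):
    """Pair point cloud images and camera images whose timestamps differ by
    less than the preset threshold; each pair is treated as one frame."""
    pairs = []
    for radar in radar_msgs:
        # Nearest camera image in time to this point cloud image.
        camera = min(camera_msgs, key=lambda c: abs(c["stamp"] - radar["stamp"]))
        if abs(camera["stamp"] - radar["stamp"]) < SYNC_THRESHOLD_S:
            pairs.append((radar, camera))
    return pairs
```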

Step S40, clustering the point clouds in the point cloud image to obtain mark frames.

Euclidean clustering can be adopted to cluster the points in the point cloud image. In essence, a target point closest to the single-line radar is obtained, and other points within a certain radius of the target point are regarded as belonging to the same cluster, that is, the same obstacle.

For safety, the minimum distance from the single-line radar over the points of a cluster is taken as the distance between the cluster and the single-line radar, that is, the distance between the obstacle corresponding to the cluster and the single-line radar. However, because the point cloud acquired by the single-line radar is sparse, clusters cannot always be separated accurately, so the obstacle distance obtained from the single-line radar point cloud alone is not accurate. It can, however, serve as reference data for checking the accuracy of the distance between the acquired target obstacle and the vehicle determined in step S70 from the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame.

For example, the distance between the acquired target obstacle and the vehicle determined in step S70 from the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame is taken as the detection value, and the distance between the target obstacle and the vehicle obtained from the reference data is taken as the reference value. The detection value is judged inaccurate when the absolute value of the ratio of the difference between the reference value and the detection value to the detection value is greater than a preset value, and judged accurate when that absolute value is less than or equal to the preset value.

The preset value itself can be calibrated in advance. Taking the actual distance between a measured object and the vehicle as the actual value, the accuracy of a detection value is first judged against this known actual value. For each of a plurality of measured objects whose detection values are accurate, the absolute value of the ratio of the difference between the reference value and the detection value to the detection value is computed, and the maximum of these absolute values over all the measured objects is taken as the preset value.
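Expressed as a small sketch (the function names are illustrative):

```python
def detection_is_accurate(detected, reference, preset):
    """Accuracy test described above: the detection value is accepted when
    |(reference - detected) / detected| does not exceed the preset value."""
    return abs((reference - detected) / detected) <= preset

def calibrate_preset(detected_values, reference_values):
    """Calibrate the preset value from measured objects whose detection
    values are known (against ground truth) to be accurate."""
    return max(abs((r - d) / d)
               for d, r in zip(detected_values, reference_values))
```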

A mark frame is the bounding box drawn around a cluster.

If clustering the point cloud yields a plurality of clusters, a bounding box is drawn around each cluster separately, and each box is a mark frame; that is, as many mark frames are obtained as there are clusters.
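A minimal sketch of Euclidean clustering on the 2D points of a single-line radar scan, and of boxing each cluster into a mark frame (the radius value is an assumption):

```python
import numpy as np

def euclidean_cluster(points, radius=0.5):
    """Group points so that any point within `radius` of a cluster joins it."""
    points = np.asarray(points)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        members, frontier = [seed], [seed]
        while frontier:
            idx = frontier.pop()
            near = [j for j in list(unvisited)
                    if np.linalg.norm(points[j] - points[idx]) < radius]
            for j in near:
                unvisited.remove(j)
            members.extend(near)
            frontier.extend(near)
        clusters.append(points[members])
    return clusters

def mark_frame(cluster):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) of one cluster."""
    return (*cluster.min(axis=0), *cluster.max(axis=0))
```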

Step S50, putting the point cloud and the mark frames into the monocular camera image, and obtaining the pixel coordinates of the point cloud.

Putting the point cloud and the mark frame into the image means converting the coordinates of the point cloud in the world coordinate system into the pixel coordinate system of the monocular camera image, and likewise converting the coordinates of the points on the border of the mark frame in the world coordinate system into the pixel coordinate system of the monocular camera image.

The conversion of the coordinates of a point in the world coordinate system into the pixel coordinate system of an image is prior art and is not described in detail here.
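For completeness, a sketch of the standard pinhole projection used for this conversion; the intrinsic matrix K and the radar-to-camera extrinsics (R, t) are illustrative values that would come from calibration in practice:

```python
import numpy as np

# Illustrative calibration values; in practice these come from camera
# calibration and from the radar-camera extrinsic calibration.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)      # rotation from the radar (world) frame to the camera frame
t = np.zeros(3)    # translation between the two frames

def world_to_pixel(point_world):
    """Project a 3D point in the radar (world) frame to pixel coordinates.

    The point is assumed to end up in the camera axis convention
    (z forward) after applying (R, t).
    """
    p_cam = R @ np.asarray(point_world) + t
    u, v, w = K @ p_cam
    return u / w, v / w
```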

Step S60, acquiring the target obstacles whose prediction frame and mark frame overlap.

The target obstacles whose prediction frame and mark frame overlap are acquired on the monocular camera image.

When a plurality of target obstacles are recognized in the monocular camera image, there are a plurality of prediction frames and a plurality of mark frames. However, because the monocular camera is sensitive to lighting conditions and the like, there may be special obstacles that cannot be recognized in the monocular camera image but are detected by the single-line radar; such a special obstacle has a mark frame but no prediction frame, or its prediction frame and mark frame do not overlap.

For a target obstacle recognized by both the monocular camera and the single-line radar, that is, one having both a prediction frame and a mark frame, the coverage area of its prediction frame overlaps the coverage area of its mark frame.

For the special obstacles that cannot be recognized in the monocular camera image but are detected by the single-line radar, the distance between the special obstacle and the vehicle can be sensed only from the point cloud detected by the single-line radar, and the type of the special obstacle remains unknown.
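A sketch of the overlap test, treating prediction frames and mark frames as axis-aligned (x, y, w, h) boxes in pixel coordinates:

```python
def boxes_overlap(box_a, box_b):
    """Two axis-aligned boxes overlap iff they intersect on both axes."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def match_target_obstacles(prediction_frames, mark_frames):
    """Return the (prediction frame, mark frame) pairs that overlap; each
    pair corresponds to one target obstacle."""
    return [(p, m) for p in prediction_frames for m in mark_frames
            if boxes_overlap(p, m)]
```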

Step S70, determining the distance between the acquired target obstacle and the vehicle according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame.

When a plurality of target obstacles are acquired in step S60, the distance between each of them and the vehicle may be determined separately.

Referring to fig. 3, step S70 specifically includes the following steps:

and S701, acquiring the orientation of the target obstacle according to the pixel coordinates of the target point cloud.

Step S702, obtaining a 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame.

It should be noted that the monocular camera cannot by itself measure the distance between a target obstacle and the camera, so the pixel coordinates of the prediction frame of an obstacle in the pixel coordinate system of the monocular camera image carry little meaning on their own; the distance between the target obstacle and the monocular camera cannot be obtained from the pixel coordinates of the prediction frame of the target obstacle. However, after the coordinates of the point cloud in the world coordinate system are converted into pixel coordinates, because the distance between each point and the single-line radar is known and the world coordinates of the point cloud were set according to that distance, the distance between a point and the single-line radar can be deduced in reverse from the point's pixel coordinates; further, the distance between any point in the pixel coordinate system and the single-line radar, and the distance between any two points in the pixel coordinate system, can be deduced.

In some embodiments, the world coordinate system may take the position of the single-line radar as its origin; the monocular camera and the single-line radar may be arranged at the same position on the vehicle, and the origin of the pixel coordinate system may be set at the position of the single-line radar and the monocular camera. In that case, calculating the distance between any point in the pixel coordinate system and the single-line radar is simple, and the computational load on the system is light. In other embodiments, the world coordinate system may still take the position of the single-line radar as its origin while the monocular camera and the single-line radar are arranged at different positions on the vehicle, and the origin of the pixel coordinate system is set at the position of the monocular camera; the world coordinates of the point cloud in the single-line radar point cloud image are then converted into pixel coordinates according to the extrinsic parameters of the monocular camera and the single-line radar, that is, their relative positions.

Generally, both the prediction frame and the mark frame are rectangular. One alternative for the step of obtaining the 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame is to acquire the four vertexes of the prediction frame and, according to the orientation of the target obstacle, generate the 3D detection frame with the four vertexes as diagonal points of the 3D detection frame.

One alternative for generating the 3D detection frame with the four vertexes as diagonal points according to the orientation of the target obstacle is to acquire, from the pixel coordinates of the point cloud, the pixel coordinates (x_min, y_min) of the point closest to the single-line radar among the four vertexes, together with the width w and the height h of the prediction frame, and then to generate the eight vertexes of the 3D detection frame from the pixel coordinates (x_min, y_min), the width w of the prediction frame, and the height h of the prediction frame, where the lines connecting the eight vertexes form the 3D detection frame.

When the origin of the pixel coordinate system is set at the position of the single-line radar and the monocular camera, the distance between each of the four vertexes of the prediction frame and the single-line radar is the distance between that vertex and the origin of the pixel coordinate system; that is, once the distances between the four vertexes of the prediction frame and the origin of the pixel coordinate system are obtained, the distances between the four vertexes and the single-line radar are obtained without additional conversion. After these distances are obtained, the vertex closest to the single-line radar can be found and its coordinates recorded as (x_min, y_min).

The width w of the prediction frame and the height h of the prediction frame can be obtained from the coordinates of the four vertexes of the prediction frame.

The eight vertexes of the 3D detection frame may be generated according to the type of obstacle recognized in the monocular camera image, the pixel coordinates (x_min, y_min) of the prediction frame vertex closest to the single-line radar, the width w of the prediction frame, and the height h of the prediction frame; the pixel coordinates of the eight vertexes in the pixel coordinate system are thus obtained, and the lines connecting the eight vertexes form the 3D detection frame.

The pixel coordinates of the eight vertexes may be determined empirically, and may be, for example, (x_min, y_min), (x_min, y_min+h), (x_min+w, y_min), (x_min+w, y_min+h), (x_min+0.8*w, y_min+0.1*h), (x_min+0.8*w, y_min+1.1*h), (x_min+0.2*w, y_min-0.15*h), and (x_min+0.2*w, y_min+0.85*h).
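The empirical construction above can be written directly as a sketch:

```python
def detection_frame_vertices(x_min, y_min, w, h):
    """Eight vertexes of the 3D detection frame, using the example offsets
    from the text; (x_min, y_min) is the prediction frame vertex closest to
    the single-line radar, w and h the prediction frame width and height."""
    return [
        (x_min,           y_min),
        (x_min,           y_min + h),
        (x_min + w,       y_min),
        (x_min + w,       y_min + h),
        (x_min + 0.8 * w, y_min + 0.1 * h),
        (x_min + 0.8 * w, y_min + 1.1 * h),
        (x_min + 0.2 * w, y_min - 0.15 * h),
        (x_min + 0.2 * w, y_min + 0.85 * h),
    ]
```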

In some embodiments, the method for obtaining the 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame may further include obtaining a central point of the prediction frame and pixel coordinates of the central point, and generating eight vertices of the 3D detection frame according to the pixel coordinates of the central point, the width w of the prediction frame, and the height h of the prediction frame, where a connection line of the eight vertices is the 3D detection frame.

Because the 3D detection frame frames the target obstacle accurately, the subsequently measured distance between the target obstacle and the single-line radar, and hence the distance between the target obstacle and the vehicle, is accurate.

Step S703, acquiring a target vertex with the smallest distance to the single-line radar among the eight vertexes of the 3D detection frame.

After the pixel coordinates of the eight vertexes of the 3D detection frame are obtained, the distance between each vertex and the single-line radar can be computed, and the target vertex with the minimum distance to the single-line radar among the eight vertexes of the 3D detection frame is obtained.

Step S704, determining the distance between the target obstacle and the vehicle according to the distance between the target vertex and the single line radar.

Because the purpose of obstacle sensing is to assist vehicle driving and avoid collision between the vehicle and an obstacle, from the safety standpoint the distance between the target vertex and the single-line radar is taken as the distance between the acquired target obstacle and the single-line radar. The distance between the target obstacle and the vehicle can then be determined according to the positional relationship between the single-line radar and the vehicle.

For example, the distance between the target obstacle and the vehicle may be chosen in advance to mean the distance between the target obstacle and the head of the vehicle, the distance between the target obstacle and the center of the vehicle, or something else. Taking the vehicle-head definition: if the single-line radar is arranged at the vehicle head, the distance between the target obstacle and the single-line radar is the distance between the target obstacle and the vehicle; if the single-line radar is arranged elsewhere on the vehicle, such as the roof, then after the distance between the target obstacle and the single-line radar is obtained, the distance between the target obstacle and the vehicle can be obtained by conversion using the distance between the single-line radar and the vehicle head.
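A sketch of steps S703 and S704 together; `pixel_to_radar_distance` stands in for the inverse of the projection used in step S50 and, like the mounting offset, is an assumption for illustration:

```python
RADAR_TO_HEAD_OFFSET = 0.5  # meters; hypothetical radar-to-head offset

def obstacle_distance_to_vehicle(vertices, pixel_to_radar_distance):
    """Take the 3D detection frame vertex nearest the single-line radar
    (for safety) and convert its distance to a vehicle head distance."""
    nearest = min(pixel_to_radar_distance(v) for v in vertices)
    return nearest - RADAR_TO_HEAD_OFFSET
```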

In the embodiment of the present invention, a point cloud image of the single-line radar and a monocular camera image are acquired; target detection is performed on the monocular camera image to obtain the obstacle type and the prediction frames in the monocular camera image; the point cloud image and the monocular camera image are synchronized; the point clouds in the point cloud image are clustered to obtain mark frames; the point cloud and the mark frames are put into the monocular camera image to obtain the pixel coordinates of the point cloud; the target obstacles whose prediction frame and mark frame overlap are acquired; and the distance between each acquired target obstacle and the vehicle is determined according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame. Because the target obstacle can be accurately identified from the monocular camera image and accurately boxed to obtain the prediction frame, the distance between the target obstacle and the vehicle provided with the single-line radar can be accurately determined according to the pixel coordinates of the single-line radar point cloud in the monocular camera image and the prediction frame.

Example two

Referring to fig. 4, fig. 4 is a schematic diagram of an obstacle sensing apparatus according to an embodiment of the present invention, where the apparatus 400 is applied to a vehicle, where the vehicle is provided with a single line radar and a monocular camera, and the apparatus 400 includes: a first obtaining module 401, configured to obtain a point cloud image of a single line radar and a monocular camera image; a detection module 402, configured to perform target detection on the monocular camera image to obtain an obstacle type and a prediction frame in the monocular camera image; a synchronization module 403, configured to perform synchronization processing on the point cloud image and the monocular camera image; a clustering module 404, configured to cluster point clouds in the point cloud images to obtain mark frames; a coordinate conversion module 405, configured to put the point cloud and the mark frame into the monocular camera image, and obtain a pixel coordinate of the point cloud; a second obtaining module 406, configured to obtain a target obstacle where the prediction frame and the mark frame overlap; a determining module 407, configured to determine, according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame, a distance between the acquired target obstacle and the vehicle.

In some embodiments, the synchronization module 403 includes: a first setting unit 4031, configured to set a timestamp on the point cloud image detected by the single-line radar according to a preset clock source when the point cloud is detected by the single-line radar; a second setting unit 4032, configured to set, when the monocular camera detects an image, a timestamp for the monocular camera image detected by the monocular camera according to the preset clock source; a synchronizing unit 4033, configured to use the point cloud image and the monocular camera image with the time stamp interval smaller than a preset threshold as the same frame data.

In some embodiments, the determining module 407 comprises: a first obtaining unit 4071, configured to obtain the orientation of the target obstacle according to the pixel coordinates of the target point cloud; a second obtaining unit 4072, configured to obtain a 3D detection frame of the target obstacle according to the orientation of the target obstacle and the prediction frame; a third obtaining unit 4073, configured to obtain a target vertex with the smallest distance to the single-line radar from among the eight vertexes of the 3D detection frame; and a determining unit 4074, configured to determine the distance between the target obstacle and the vehicle according to the distance between the target vertex and the single-line radar.

In some embodiments, the second obtaining unit 4072 is specifically configured to: acquiring four vertexes of the prediction frame; and generating the 3D detection frame by taking the four vertexes as diagonal points of the 3D detection frame according to the orientation of the target obstacle.

In some embodiments, the second obtaining unit 4072 is further configured to: acquire, according to the pixel coordinates of the point cloud, the pixel coordinates (x_min, y_min) of the point closest to the single-line radar among the four vertexes, the width w of the prediction frame, and the height h of the prediction frame; and generate the eight vertexes of the 3D detection frame according to the pixel coordinates (x_min, y_min) of the point closest to the single-line radar among the four vertexes, the width w of the prediction frame, and the height h of the prediction frame, wherein the lines connecting the eight vertexes form the 3D detection frame.

In some embodiments, the coordinates of the eight vertexes of the 3D detection frame are: (x_min, y_min), (x_min, y_min+h), (x_min+w, y_min), (x_min+w, y_min+h), (x_min+0.8*w, y_min+0.1*h), (x_min+0.8*w, y_min+1.1*h), (x_min+0.2*w, y_min-0.15*h), and (x_min+0.2*w, y_min+0.85*h).

In the embodiment of the present invention, the first obtaining module 401 acquires the point cloud image of the single-line radar and the monocular camera image; the detection module 402 performs target detection on the monocular camera image to obtain the obstacle type and the prediction frames in the monocular camera image; the synchronization module 403 performs synchronous processing on the point cloud image and the monocular camera image; the clustering module 404 clusters the point clouds in the point cloud image to obtain mark frames; the coordinate conversion module 405 puts the point cloud and the mark frames into the monocular camera image and obtains the pixel coordinates of the point cloud; the second obtaining module 406 acquires the target obstacles whose prediction frame and mark frame overlap; and the determining module 407 determines the distance between each acquired target obstacle and the vehicle according to the pixel coordinates of the target point cloud corresponding to the target obstacle and the prediction frame. Because the target obstacle can be accurately identified from the monocular camera image and boxed to obtain the prediction frame, the distance between the target obstacle and the vehicle provided with the single-line radar can be accurately determined according to the pixel coordinates of the single-line radar point cloud in the monocular camera image and the prediction frame.

Example three

Referring to fig. 5, fig. 5 is a schematic diagram of the hardware structure of an electronic device that executes the obstacle sensing method according to an embodiment of the present invention. The electronic device 500 includes one or more processors 501 and a memory 502; one processor is taken as an example in fig. 5.

The processor 501 and the memory 502 may be connected by a bus or other means, and in the embodiment of the present invention, the bus connection is taken as an example.

The memory 502, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules (e.g., the modules shown in fig. 4) corresponding to the obstacle sensing method in the embodiment of the present invention. The processor 501 executes various functional applications and data processing of the obstacle sensing apparatus by running a nonvolatile software program, instructions and modules stored in the memory 502, that is, implements the obstacle sensing method of the above-described method embodiment.

The memory 502 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the obstacle sensing device, and the like. Further, the memory 502 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 502 may optionally include memory located remotely from the processor 501, which may be connected to the obstacle sensing device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The one or more modules are stored in the memory 502 and when executed by the one or more processors 501 perform the obstacle sensing method in any of the method embodiments described above.

The above product can execute the method provided by the embodiments of the present invention, and has the corresponding functional modules and beneficial effects for executing the method. For technical details not described in this embodiment, reference may be made to the method provided by the embodiments of the present invention.

Embodiments of the present invention provide a non-volatile computer-readable storage medium, where computer-executable instructions are stored in the non-volatile computer-readable storage medium, and the computer-executable instructions are executed by an electronic device to perform the obstacle sensing method in any of the above method embodiments.

Embodiments of the present invention provide a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the obstacle sensing method of any of the above method embodiments.

The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.

Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a computer readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
