Method and system for detecting position and orientation of target vehicle and storage medium


Reading note: this invention, "Method and system for detecting position and orientation of target vehicle and storage medium" (一种目标车辆位置和朝向的检测方法、系统及存储介质), was designed and created by 刘前飞, 刘康, 张三林 and 蔡璐珑 on 2020-04-24.

Abstract: The invention provides a method for detecting the position and orientation of a target vehicle, comprising the following steps: step S10, collecting a front view image of the host vehicle through a vehicle-mounted camera; step S11, preprocessing the front view image collected by the vehicle-mounted camera; step S12, performing image motion compensation on the forward-looking image according to a vehicle-mounted inertial measurement device; step S13, converting the position of each target vehicle in the motion-compensated front view into a top view according to an inverse perspective transformation rule; step S14, inputting the top view into a pre-trained convolutional neural network to obtain the position and orientation information of each target vehicle. The invention also provides a corresponding system and a storage medium. By implementing the invention, the accuracy of vision-based detection of the distance and orientation of target vehicles can be greatly improved.

1. A method of detecting a position and orientation of a target vehicle, comprising the steps of:

step S10, a front view image of the vehicle is collected through a vehicle-mounted camera, and the front view image comprises an image of at least one other vehicle;

step S11, preprocessing the forward-looking image collected by the vehicle-mounted camera to obtain a forward-looking image conforming to a preset size;

step S12, obtaining information representing vehicle attitude change in real time according to vehicle-mounted inertial measurement equipment, and performing image motion compensation on the forward-looking image according to the information representing the vehicle attitude change;

step S13, converting the front view after image motion compensation into a top view according to the inverse perspective transformation rule;

and step S14, inputting the converted top view into a pre-trained convolutional neural network to obtain the position and orientation information of each target vehicle.

2. The method of claim 1, wherein the step S12 includes:

step S120, acquiring information representing vehicle attitude change in real time according to vehicle-mounted inertial measurement equipment, wherein the information representing the vehicle attitude change is triaxial angular rate and acceleration;

step S121, obtaining a camera motion compensation parameter matrix Q according to the information representing the vehicle attitude change and the camera extrinsic parameters:

$$Q = \begin{bmatrix} R_{11} & R_{12} & t_x \\ R_{21} & R_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

wherein R11, R12, R21, R22 are coordinate rotation parameters and tx, ty are coordinate translation parameters;

step S122, using the camera motion compensation parameter matrix Q to perform image motion compensation on the forward-looking image according to the following formula:

$$\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} = Q \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of each position in the forward-looking image before compensation and (u', v') are the coordinates of each position in the forward-looking image after compensation.

3. The method according to claim 2, wherein the step S13 is specifically:

converting the position of each target vehicle in the motion-compensated front view from image space to a top view whose distance scale is linear in the vehicle coordinate system, using a homography transformation matrix and calculating with the following formula:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H \begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}$$

wherein (u', v') are the coordinates of each position in the compensated forward-looking image and (x, y) are the coordinates of the corresponding point in the top view after the inverse perspective transformation; H is a predetermined homography transformation matrix.

4. The method of claim 3, wherein the step S14 further comprises:

step S140, inputting the converted top view into a pre-trained convolutional neural network, and outputting the center point coordinates (bx, by) of the two-dimensional rectangular frame of the target vehicle, the width bw and height bh of the rectangular frame, and the attitude orientation angle bo of the target vehicle relative to the host vehicle in the top view;

step S141, filtering the outputs of the convolutional neural network by an intersection-over-union (IoU) parameter, retaining for each target vehicle the two-dimensional contour parameters with the highest predicted probability and removing the remaining two-dimensional contour parameters;

step S142, calculating the coordinates of the grounding point position of the target vehicle in the vehicle coordinate system according to the following formula, and outputting them together with the attitude orientation angle:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of the lowest edge point of the rectangular frame of the target vehicle in the top view, (x, y, 1) are the homogeneous coordinates of the corresponding point in the vehicle coordinate system, s is a homogeneous scale factor, K is the camera intrinsic parameter matrix, and T is a transformation matrix.

5. A system for detecting a position and orientation of a target vehicle, comprising:

an image acquisition unit, used for acquiring a forward-looking image of the host vehicle through a vehicle-mounted camera, wherein the forward-looking image comprises an image of at least one vehicle other than the host vehicle;

a preprocessing unit, used for preprocessing the forward-looking image acquired by the vehicle-mounted camera to obtain a forward-looking image conforming to a preset size;

the motion compensation unit is used for acquiring information representing vehicle attitude change in real time according to vehicle-mounted inertial measurement equipment and performing image motion compensation on the forward-looking image according to the information representing the vehicle attitude change;

the inverse perspective transformation unit is used for converting the front view subjected to the image motion compensation into a top view according to an inverse perspective transformation rule;

and the position and orientation obtaining unit is used for inputting the converted top view into a pre-trained convolutional neural network to obtain the position and orientation information of each target vehicle.

6. The system of claim 5, wherein the motion compensation unit comprises:

the attitude information acquisition unit is used for acquiring information representing vehicle attitude change in real time according to vehicle-mounted inertial measurement equipment, wherein the information representing the vehicle attitude change is triaxial angular rate and acceleration;

a compensation parameter matrix obtaining unit, configured to obtain a camera motion compensation parameter matrix Q according to the information representing the vehicle attitude change and the camera extrinsic parameters:

$$Q = \begin{bmatrix} R_{11} & R_{12} & t_x \\ R_{21} & R_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

wherein R11, R12, R21, R22 are coordinate rotation parameters and tx, ty are coordinate translation parameters;

a compensation calculating unit, configured to perform image motion compensation on the forward-looking image with the camera motion compensation parameter matrix Q according to the following formula:

$$\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} = Q \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of each position in the forward-looking image before compensation and (u', v') are the coordinates of each position in the forward-looking image after compensation.

7. The system of claim 7, wherein the inverse perspective transformation unit is specifically configured to convert the position of each target vehicle in the motion-compensated front view from image space to a top view whose distance scale is linear in the vehicle coordinate system, using the homography transformation matrix H and calculating with the following formula:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H \begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}$$

wherein (u', v') are the coordinates of each position in the compensated forward-looking image and (x, y) are the coordinates of the corresponding point in the top view after the inverse perspective transformation; H is a predetermined homography transformation matrix.

8. The system of claim 7, wherein the position and orientation obtaining unit further comprises:

a neural network processing unit, for inputting the converted top view into a pre-trained convolutional neural network and outputting the center point coordinates (bx, by) of the two-dimensional rectangular frame of the target vehicle, the width bw and height bh of the rectangular frame, and the attitude orientation angle bo of the target vehicle relative to the host vehicle in the top view;

a filtering unit, used for filtering the outputs of the convolutional neural network by an intersection-over-union (IoU) parameter, retaining for each target vehicle the two-dimensional contour parameters with the highest predicted probability and removing the remaining two-dimensional contour parameters;

a coordinate calculation unit, for calculating the coordinates of the grounding point position of the target vehicle in the vehicle coordinate system according to the following formula and outputting them together with the attitude orientation angle:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of the lowest edge point of the rectangular frame of the target vehicle in the top view, (x, y, 1) are the homogeneous coordinates of the corresponding point in the vehicle coordinate system, s is a homogeneous scale factor, K is the camera intrinsic parameter matrix, and T is a transformation matrix.

9. A computer-readable storage medium having stored thereon computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-4.

Technical Field

The invention relates to the technical field of intelligent driving, in particular to a method and a system for detecting the position and the orientation of a target vehicle and a storage medium.

Background

In the intelligent driving of an automobile, the distances to targets ahead of and behind the vehicle must be detected from the driving environment. Current vision-based target detection methods mainly work as follows: a two-dimensional rectangular box (bounding box) of each vehicle target is obtained in the front view by a convolutional neural network (CNN), such as YOLO, SSD or Faster R-CNN. The general processing flow comprises the following steps: first, preprocessing operations such as resizing are applied to the input forward-looking image; then, neural network inference is performed on the preprocessed front view to obtain all candidate two-dimensional rectangular frames (bounding boxes) of the target vehicles; next, in the post-processing stage, duplicate two-dimensional rectangular frames are filtered out for each vehicle target; finally, the lower boundary of the remaining two-dimensional rectangular frame is taken as the grounding point coordinate of the vehicle target in the image and converted into the vehicle coordinate system to output the corresponding position distance.

However, this existing processing method has several drawbacks:

First, distance measurement of the vehicle target position is inaccurate, with large errors. In the front view, the lower boundary of the vehicle target's two-dimensional rectangular frame is not the actual grounding point of the vehicle, so the detected position distance of the target vehicle has a large error relative to the true value, and the farther the target vehicle is from the host vehicle, the larger the distance measurement error.

Second, the attitude orientation of the target vehicle cannot be effectively detected. In the front view, often only the two-dimensional width and height of the vehicle target are detected, and it is difficult to detect the attitude orientation of the target vehicle.

Therefore, existing vehicle target detection based on the front view suffers from a motion attitude that is hard to measure and a large position distance error.

Disclosure of Invention

The present invention is directed to a method, a system, and a storage medium for detecting the position and orientation of a target vehicle, which can improve the accuracy of detecting the position and distance of the target vehicle and can also detect the attitude orientation of the target vehicle.

As an aspect of the present invention, there is provided a method of detecting a position and an orientation of a target vehicle, comprising the steps of:

step S10, a front view image of the vehicle is collected through a vehicle-mounted camera, and the front view image comprises an image of at least one other vehicle;

step S11, preprocessing the forward-looking image collected by the vehicle-mounted camera to obtain a forward-looking image conforming to a preset size;

step S12, obtaining information representing vehicle attitude change in real time according to vehicle-mounted inertial measurement equipment, and performing image motion compensation on the forward-looking image according to the information representing the vehicle attitude change;

step S13, converting the position of each target vehicle in the front view after image motion compensation from image space to a top view with the distance scale in linear relation with the vehicle coordinate system according to the inverse perspective transformation rule;

and step S14, inputting the converted top view into a pre-trained convolutional neural network to obtain the position and orientation information of each target vehicle.

Wherein the step S12 includes:

step S120, acquiring information representing vehicle attitude change in real time according to vehicle-mounted inertial measurement equipment, wherein the information representing the vehicle attitude change is triaxial angular rate and acceleration;

step S121, obtaining a camera motion compensation parameter matrix Q according to the information representing the vehicle attitude change and the camera extrinsic parameters:

$$Q = \begin{bmatrix} R_{11} & R_{12} & t_x \\ R_{21} & R_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

wherein R11, R12, R21, R22 are coordinate rotation parameters and tx, ty are coordinate translation parameters; these parameters are obtained by pre-calculation or calibration;

step S122, using the camera motion compensation parameter matrix Q to perform image motion compensation on the forward-looking image according to the following formula:

$$\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} = Q \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of each position in the forward-looking image before compensation and (u', v') are the coordinates of each position in the forward-looking image after compensation.

Wherein, the step S13 specifically includes:

calculating with a homography transformation matrix H, using the following formula, to convert the position of each target vehicle in the motion-compensated front view from image space to a top view whose distance scale is linear in the vehicle coordinate system:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H \begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}$$

wherein (u', v') are the coordinates of each position in the compensated forward-looking image and (x, y) are the coordinates of the corresponding point in the top view after the inverse perspective transformation; H is a predetermined homography transformation matrix, obtained by pre-calculation or calibration.

Wherein the step S14 further includes:

step S140, inputting the converted top view into a pre-trained convolutional neural network, and outputting the center point coordinates (bx, by) of the two-dimensional rectangular frame of the target vehicle, the width bw and height bh of the rectangular frame, and the attitude orientation angle bo of the target vehicle relative to the host vehicle in the top view;

step S141, filtering the outputs of the convolutional neural network by an intersection-over-union (IoU) parameter, retaining for each target vehicle the two-dimensional contour parameters with the highest predicted probability and removing the remaining two-dimensional contour parameters;

step S142, calculating the coordinates of the grounding point position of the target vehicle in the vehicle coordinate system according to the following formula, and outputting them together with the attitude orientation angle:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of the lowest edge point of the rectangular frame of the target vehicle in the top view, (x, y, 1) are the homogeneous coordinates of the corresponding point in the vehicle coordinate system, and s is a homogeneous scale factor; K is the camera intrinsic parameter matrix and T is a transformation matrix, both obtained by pre-calculation or calibration.

Accordingly, as another aspect of the present invention, a target vehicle position and orientation detection system includes:

an image acquisition unit, used for acquiring a forward-looking image of the host vehicle through a vehicle-mounted camera, wherein the forward-looking image comprises an image of at least one vehicle other than the host vehicle;

a preprocessing unit, used for preprocessing the forward-looking image acquired by the vehicle-mounted camera to obtain a forward-looking image conforming to a preset size;

the motion compensation unit is used for acquiring information representing vehicle attitude change in real time according to vehicle-mounted inertial measurement equipment and performing image motion compensation on the forward-looking image according to the information representing the vehicle attitude change;

the inverse perspective transformation unit is used for converting the position of each target vehicle in the front view after image motion compensation from an image space to a top view of which the distance scale and the vehicle coordinate system have a linear relation according to an inverse perspective transformation rule;

and the position and orientation obtaining unit is used for inputting the converted top view into a pre-trained convolutional neural network to obtain the position and orientation information of each target vehicle.

Wherein the motion compensation unit comprises:

the attitude information acquisition unit is used for acquiring information representing vehicle attitude change in real time according to vehicle-mounted inertial measurement equipment, wherein the information representing the vehicle attitude change is triaxial angular rate and acceleration;

a compensation parameter matrix obtaining unit, configured to obtain a camera motion compensation parameter matrix Q according to the information representing the vehicle attitude change and the camera extrinsic parameters:

$$Q = \begin{bmatrix} R_{11} & R_{12} & t_x \\ R_{21} & R_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

wherein R11, R12, R21, R22 are coordinate rotation parameters and tx, ty are coordinate translation parameters;

a compensation calculating unit, configured to perform image motion compensation on the forward-looking image with the camera motion compensation parameter matrix Q according to the following formula:

$$\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} = Q \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of each position in the forward-looking image before compensation and (u', v') are the coordinates of each position in the forward-looking image after compensation.

The inverse perspective transformation unit is specifically configured to calculate with a homography transformation matrix H, using the following formula, and convert the position of each target vehicle in the motion-compensated front view from image space to a top view whose distance scale is linear in the vehicle coordinate system:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H \begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}$$

wherein (u', v') are the coordinates of each position in the compensated forward-looking image and (x, y) are the coordinates of the corresponding point in the top view after the inverse perspective transformation; H is a predetermined homography transformation matrix.

Wherein the position and orientation obtaining unit further comprises:

a neural network processing unit, for inputting the converted top view into a pre-trained convolutional neural network and outputting the center point coordinates (bx, by) of the two-dimensional rectangular frame of the target vehicle, the width bw and height bh of the rectangular frame, and the attitude orientation angle bo of the target vehicle relative to the host vehicle in the top view;

a filtering unit, used for filtering the outputs of the convolutional neural network by an intersection-over-union (IoU) parameter, retaining for each target vehicle the two-dimensional contour parameters with the highest predicted probability and removing the remaining two-dimensional contour parameters;

a coordinate calculation unit, for calculating the coordinates of the grounding point position of the target vehicle in the vehicle coordinate system according to the following formula and outputting them together with the attitude orientation angle:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of the lowest edge point of the rectangular frame of the target vehicle in the top view, (x, y, 1) are the homogeneous coordinates of the corresponding point in the vehicle coordinate system, s is a homogeneous scale factor, K is the camera intrinsic parameter matrix, and T is a transformation matrix.

Accordingly, as a further aspect of the present invention, there is also provided a computer-readable storage medium storing computer instructions which, when run on a computer, cause the computer to perform the aforementioned method.

The embodiment of the invention has the following beneficial effects:

the embodiment of the invention provides a method, a system and a storage medium for detecting the position and the orientation of a target vehicle. The position deviation of the vehicle target in the forward-looking image caused by the vibration of the camera in the self-movement process of the vehicle is eliminated through image motion compensation, and the final position distance detection precision of the vehicle target is improved;

the position distance and the attitude orientation of the vehicle target are detected by converting the front view image into the top view image, the attitude orientation of the vehicle target can be more directly reflected in the top view, and the distance scale of the top view is in linear proportional relation with the vehicle coordinate system;

in the detection output of the convolutional neural network to the vehicle target, the prediction of the attitude orientation angle of the vehicle target is increased, and the motion attitude orientation of the vehicle target is ensured to be more accurate.

Drawings

In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is within the scope of the present invention for those skilled in the art to obtain other drawings based on the drawings without inventive exercise.

FIG. 1 is a schematic illustration of a main flow chart of one embodiment of a method for detecting a position and an orientation of a target vehicle according to the present invention;

FIG. 2 is a more detailed flowchart of step S12 in FIG. 1;

FIG. 3 is a schematic diagram illustrating a comparison between the pictures before and after the inverse perspective transformation involved in step S13 in FIG. 1;

FIG. 4 is a more detailed flowchart of step S14 in FIG. 1;

FIG. 5 is a schematic diagram of the output results referred to in FIG. 4;

FIG. 6 is a schematic diagram of an embodiment of a system for detecting a position and an orientation of a target vehicle according to the present invention;

FIG. 7 is a schematic diagram of the motion compensation unit in FIG. 6;

fig. 8 is a schematic structural diagram of the position and orientation obtaining unit in fig. 6.

Detailed Description

In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings.

As shown in fig. 1, a main flow diagram of an embodiment of a method for detecting a position and an orientation of a target vehicle according to the present invention is shown; referring to fig. 2 to 5 together, in this embodiment, the present invention provides a method for detecting a position and an orientation of a target vehicle, including the following steps:

step S10, a front view image of the host vehicle is collected through a vehicle-mounted camera, and the front view image comprises an image of at least one vehicle other than the host vehicle;

step S11, preprocessing the front view image collected by the vehicle-mounted camera to obtain a front view image conforming to a preset size, wherein the preprocessing may be, for example, scaling of the image size;
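As a minimal illustration of this preprocessing step, the sketch below scales a captured frame to a fixed network input size with OpenCV; the 640×640 target size is an assumed placeholder, since the invention does not fix a particular preset size.

```python
import cv2

NET_INPUT_SIZE = (640, 640)  # assumed preset size; the invention does not fix a value

def preprocess(frame):
    """Scale the raw forward-looking frame to the preset network input size."""
    return cv2.resize(frame, NET_INPUT_SIZE, interpolation=cv2.INTER_LINEAR)
```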

step S12, obtaining information representing vehicle attitude change in real time according to vehicle-mounted Inertial Measurement Unit (IMU), and performing image motion compensation on the forward-looking image according to the information representing the vehicle attitude change;

it will be appreciated that the camera mounted on the vehicle will tend to change attitude relative to the ground due to movement of the vehicle, i.e. the pitch or roll angle of the camera relative to the ground will change. Corresponding attitude change can be obtained in real time through inertial measurement equipment installed on the vehicle, and in order to reduce the position error of a vehicle target in a forward-looking image caused by the attitude change of a camera, the forward-looking image needs to be subjected to motion compensation according to attitude change information.

Specifically, in one example, the step S12 includes:

step S120, acquiring information representing vehicle attitude change in real time according to vehicle-mounted inertial measurement equipment, wherein the information representing the vehicle attitude change is triaxial angular rate and acceleration;

step S121, obtaining a camera motion compensation parameter matrix Q according to the information representing the vehicle attitude change and the camera extrinsic parameters:

$$Q = \begin{bmatrix} R_{11} & R_{12} & t_x \\ R_{21} & R_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

wherein R11, R12, R21, R22 are coordinate rotation parameters and tx, ty are coordinate translation parameters; these parameters are obtained by pre-calculation or calibration;

step S122, using the camera motion compensation parameter matrix Q to perform image motion compensation on the forward-looking image according to the following formula:

$$\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} = Q \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of each position in the forward-looking image before compensation and (u', v') are the coordinates of each position in the forward-looking image after compensation.
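A minimal sketch of this compensation, assuming the rotation block R11..R22 and the translation (tx, ty) have already been derived from the IMU angular rates/accelerations and the camera extrinsics (that derivation is not detailed here):

```python
import cv2
import numpy as np

def compensate_motion(frame, R, t):
    """Warp the forward-looking image with the compensation matrix Q.

    R: 2x2 coordinate rotation block (R11, R12; R21, R22)
    t: (tx, ty) coordinate translation parameters
    Both are assumed to come from the IMU reading and the camera extrinsics.
    """
    Q = np.array([[R[0, 0], R[0, 1], t[0]],
                  [R[1, 0], R[1, 1], t[1]],
                  [0.0,     0.0,     1.0]])
    h, w = frame.shape[:2]
    # applies (u', v', 1)^T = Q (u, v, 1)^T to every pixel position
    return cv2.warpPerspective(frame, Q, (w, h))
```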

Step S13, converting the position of each target vehicle in the front view after image motion compensation from image space to a top view with the distance scale in linear relation with the vehicle coordinate system according to the inverse perspective transformation rule;

specifically, in an example, the step S13 specifically includes:

calculating with a homography transformation matrix H, using the following formula, to convert the position of each target vehicle in the motion-compensated front view from image space to a top view whose distance scale is linear in the vehicle coordinate system:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H \begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}$$

wherein (u', v') are the coordinates of each position in the compensated forward-looking image and (x, y) are the coordinates of the corresponding point in the top view after the inverse perspective transformation; H is a predetermined homography transformation matrix, obtained by pre-calculation or calibration.

The specific transformation effect can be seen with reference to fig. 3.
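A minimal sketch of this inverse perspective mapping, assuming a pre-calibrated 3×3 homography H; the output resolution below is an assumed placeholder whose pixel pitch is linear in metres, as the text describes, and the single-point helper normalizes the homogeneous scale:

```python
import cv2
import numpy as np

def to_top_view(front_view, H, out_size=(400, 600)):
    """Warp the compensated front view into the bird's-eye (top) view.

    H is the predetermined homography obtained by calibration;
    out_size is an assumed top-view resolution (width, height).
    """
    return cv2.warpPerspective(front_view, H, out_size)

def map_point(H, u, v):
    """Map a single compensated front-view point (u', v') to top-view (x, y)."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]
```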

Step S14, inputting the converted top view into a pre-trained convolutional neural network to obtain the position and orientation information of each target vehicle. The convolutional neural network is trained in advance and can be used to detect and infer the contours of the target vehicles in the top view.

Specifically, in one example, the step S14 further includes:

step S140, inputting the converted top view into a pre-trained convolutional neural network, and outputting the center point coordinates (bx, by) of the two-dimensional rectangular frame (bounding box) of the target vehicle, the width bw and height bh of the rectangular frame, and the attitude orientation angle bo of the target vehicle relative to the host vehicle in the top view. It will be appreciated that in this step all possible two-dimensional rectangular frames of each target vehicle may be obtained, i.e. a plurality of two-dimensional rectangular frames is obtained.
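For concreteness, the five per-detection outputs named in step S140 can be collected in a small record such as the sketch below; the field names are illustrative assumptions, and the score field holds the predicted probability used by the filtering in the next step.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    bx: float     # center x of the 2-D rectangular frame in the top view
    by: float     # center y of the 2-D rectangular frame in the top view
    bw: float     # width of the rectangular frame
    bh: float     # height of the rectangular frame
    bo: float     # attitude orientation angle relative to the host vehicle
    score: float  # predicted probability, used by the IoU filtering step
```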

step S141, filtering the outputs of the convolutional neural network by an intersection-over-union (IoU) parameter, retaining for each target vehicle the two-dimensional contour parameters with the highest predicted probability and removing the remaining two-dimensional contour parameters;
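The filtering in step S141 amounts to non-maximum suppression. A minimal sketch, reusing the Detection record above and assuming an axis-aligned IoU on the top-view frames (the orientation angle is ignored in the overlap test, which is a simplification not specified by the source):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned frames (bx, by, bw, bh)."""
    ax1, ay1 = a.bx - a.bw / 2, a.by - a.bh / 2
    ax2, ay2 = a.bx + a.bw / 2, a.by + a.bh / 2
    bx1, by1 = b.bx - b.bw / 2, b.by - b.bh / 2
    bx2, by2 = b.bx + b.bw / 2, b.by + b.bh / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a.bw * a.bh + b.bw * b.bh - inter
    return inter / union if union > 0 else 0.0

def filter_detections(dets, iou_thresh=0.5):
    """Keep, per target, only the highest-probability frame (greedy NMS)."""
    dets = sorted(dets, key=lambda d: d.score, reverse=True)
    kept = []
    for d in dets:
        if all(iou(d, k) < iou_thresh for k in kept):
            kept.append(d)
    return kept
```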

step S142, calculating the coordinates of the grounding point position of the target vehicle in the vehicle coordinate system according to the following formula, and outputting them together with the attitude orientation angle:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of the lowest edge point of the rectangular frame of the target vehicle in the top view, (x, y, 1) are the homogeneous coordinates of the corresponding point in the vehicle coordinate system, and s is a homogeneous scale factor; K is the camera intrinsic parameter matrix and T is a transformation matrix, both obtained by pre-calculation or calibration.

It can be understood that the attitude orientation angle bo between the vehicle target and the host vehicle has already been obtained in the previous step. For the position distance detection of the vehicle target, only the coordinates of the grounding point position of the vehicle target in the vehicle coordinate system need to be calculated.
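A minimal sketch of the step S142 computation under the formula reconstructed above; the symbols K (camera intrinsic parameter matrix) and T (transformation matrix) follow that reconstruction and are assumed to be pre-calibrated 3×3 matrices:

```python
import numpy as np

def ground_point_in_vehicle_frame(u, v, K, T):
    """Solve s * (u, v, 1)^T = K @ T @ (x, y, 1)^T for (x, y).

    (u, v): lowest edge point of the target's rectangular frame in the top view.
    Returns the grounding-point coordinates in the vehicle coordinate system.
    """
    M = K @ T                              # combined 3x3 mapping, assumed invertible
    p = np.linalg.inv(M) @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]        # normalization removes the scale factor s
```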

FIG. 5 illustrates the output of the neural network processing for one target vehicle, according to one example; the solid-line box represents the outline of the target vehicle in the top view, and the dotted-line box is the contour of the target vehicle output after processing by the convolutional neural network.

FIG. 6 is a schematic structural diagram of an embodiment of a system for detecting a position and an orientation of a target vehicle according to the present invention; referring to fig. 7 and 8 together, in the present embodiment, the present invention provides a system 1 for detecting a position and an orientation of a target vehicle, including:

the image acquisition unit 11 is used for acquiring a forward-looking image of the host vehicle through the vehicle-mounted camera, wherein the forward-looking image comprises an image of at least one vehicle other than the host vehicle;

the preprocessing unit 12 is used for preprocessing the forward-looking image acquired by the vehicle-mounted camera to obtain a forward-looking image conforming to a preset size;

the motion compensation unit 13 is configured to obtain information representing vehicle attitude change in real time according to a vehicle-mounted inertial measurement device, and perform image motion compensation on the forward-looking image according to the information representing vehicle attitude change;

the inverse perspective transformation unit 14 is used for transforming the position of each target vehicle in the front view after image motion compensation from an image space to a top view with a linear relation between a distance scale and a vehicle coordinate system according to an inverse perspective transformation rule;

and a position and orientation obtaining unit 15, configured to input the converted top view into a pre-trained convolutional neural network, and obtain position and orientation information of each target vehicle.

More specifically, in one example, the motion compensation unit 13 includes:

the attitude information obtaining unit 130 is configured to obtain information representing a vehicle attitude change in real time according to a vehicle-mounted inertial measurement device, where the information representing the vehicle attitude change is a triaxial angular rate and an acceleration;

a compensation parameter matrix obtaining unit 131, configured to obtain a camera motion compensation parameter matrix Q according to the information representing the vehicle attitude change and the camera extrinsic parameters:

$$Q = \begin{bmatrix} R_{11} & R_{12} & t_x \\ R_{21} & R_{22} & t_y \\ 0 & 0 & 1 \end{bmatrix}$$

wherein R11, R12, R21, R22 are coordinate rotation parameters and tx, ty are coordinate translation parameters;

a compensation calculating unit 132, configured to perform image motion compensation on the forward-looking image with the camera motion compensation parameter matrix Q according to the following formula:

$$\begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix} = Q \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of each position in the forward-looking image before compensation and (u', v') are the coordinates of each position in the forward-looking image after compensation.

More specifically, in one example, the inverse perspective transformation unit 14 is specifically configured to convert the position of each target vehicle in the motion-compensated front view from image space to a top view whose distance scale is linear in the vehicle coordinate system, using a homography transformation matrix H and the following formula:

$$\begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = H \begin{bmatrix} u' \\ v' \\ 1 \end{bmatrix}$$

wherein (u', v') are the coordinates of each position in the compensated forward-looking image and (x, y) are the coordinates of the corresponding point in the top view after the inverse perspective transformation; H is a predetermined homography transformation matrix.

More specifically, in one example, the position and orientation obtaining unit 15 further includes:

a neural network processing unit 150, for inputting the converted top view into a pre-trained convolutional neural network and outputting the center point coordinates (bx, by) of the two-dimensional rectangular frame of the target vehicle, the width bw and height bh of the rectangular frame, and the attitude orientation angle bo of the target vehicle relative to the host vehicle in the top view; in particular, reference may be made to what is shown in fig. 5;

the filtering unit 151, configured to filter the outputs of the convolutional neural network by an intersection-over-union (IoU) parameter, retaining for each target vehicle the two-dimensional contour parameters with the highest predicted probability and removing the remaining two-dimensional contour parameters;

a coordinate calculation unit 152, for calculating the coordinates of the grounding point position of the target vehicle in the vehicle coordinate system according to the following formula and outputting them together with the attitude orientation angle:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\,T \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

wherein (u, v) are the coordinates of the lowest edge point of the rectangular frame of the target vehicle in the top view, (x, y, 1) are the homogeneous coordinates of the corresponding point in the vehicle coordinate system, s is a homogeneous scale factor, K is the camera intrinsic parameter matrix, and T is a transformation matrix.

For more details, reference may be made to the foregoing description of fig. 1 to 5, which is not repeated herein.

Based on the same inventive concept, embodiments of the present invention further provide a computer-readable storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the method for detecting the position and orientation of the target vehicle described in fig. 1 to 5 in the above method embodiment of the present invention.

The embodiment of the invention has the following beneficial effects:

the embodiment of the invention provides a method, a system and a storage medium for detecting the position and the orientation of a target vehicle. The position deviation of the vehicle target in the forward-looking image caused by the vibration of the camera in the self-movement process of the vehicle is eliminated through image motion compensation, and the final position distance detection precision of the vehicle target is improved;

the position distance and attitude orientation detection of the vehicle target is performed by converting the forward-looking image into the downward-looking image. The attitude and the direction of the vehicle target can be reflected more directly in the top view. The distance scale of the top view is in linear proportional relation with the vehicle coordinate system, the actual distance of the vehicle target can be directly obtained as long as the position of the two-dimensional outline frame of the vehicle target is detected, and the position distance of the vehicle target in the vehicle coordinate system can be obtained without coordinate space conversion like the existing method;

in the detection output of the convolutional neural network to the vehicle target, the prediction of the attitude orientation angle of the vehicle target is increased, and the motion attitude orientation of the vehicle target is ensured to be more accurate.

As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiment, but on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
