Vehicle driving assistance method and device based on vehicle-mounted real-time monitoring

Document No.: 1899005  Publication date: 2021-11-30

Reading note: This technology, "Vehicle driving assistance method and device based on vehicle-mounted real-time monitoring" (一种基于车载实时监控的车辆驾驶辅助方法及装置), was designed and created by 陈世彬 and 丁应俊 on 2021-08-31. Abstract: The invention discloses a vehicle driving assistance method and device based on vehicle-mounted real-time monitoring. The method comprises: acquiring real-time video of the vehicle's surroundings to obtain real-time video data of the vehicle's surroundings; performing video stitching to form 360-degree panoramic real-time video data; splitting the panoramic video into frames to form panoramic real-time video frame data; identifying target data in the panoramic real-time video frame data, and determining the position of the target data relative to the vehicle's current position and the category of the target data; performing target recognition and/or target trajectory prediction to obtain a recognition result and/or a target trajectory prediction result for the target data; and issuing a driving assistance warning to the driver of the vehicle based on the current state of the vehicle and the recognition result and/or target trajectory prediction result. In embodiments of the invention, effective driving assistance warnings can be issued according to the complex environment outside the vehicle, improving the driving safety of the vehicle.

1. A vehicle driving assistance method based on vehicle-mounted real-time monitoring, characterized by comprising the following steps:

acquiring real-time video of the vehicle's surroundings by means of camera equipment arranged around the vehicle, to obtain real-time video data of the vehicle's surroundings;

performing video stitching on the real-time video data of the vehicle's surroundings to form 360-degree panoramic real-time video data;

splitting the 360-degree panoramic real-time video data into frames to form panoramic real-time video frame data;

identifying target data in the panoramic real-time video frame data, and determining the position of the target data relative to the vehicle's current position and the category of the target data, wherein the categories of target data comprise target traffic sign data, target vehicle data and target pedestrian data;

performing target recognition and/or target trajectory prediction based on the position of the target data relative to the vehicle's current position and the category of the target data, to obtain a recognition result and/or a target trajectory prediction result for the target data;

and issuing a driving assistance warning to the driver of the vehicle based on the current state of the vehicle and the recognition result and/or target trajectory prediction result for the target data.

2. The vehicle driving assistance method according to claim 1, wherein acquiring real-time video of the vehicle's surroundings by means of camera equipment arranged around the vehicle, to obtain real-time video data of the vehicle's surroundings, comprises:

after the vehicle is started, activating the camera equipment arranged around the vehicle to acquire real-time video of the vehicle's surroundings, and tagging the acquired real-time video data according to the position of each acquisition camera on the vehicle, to form the real-time video data of the vehicle's surroundings.

3. The vehicle driving assistance method according to claim 1, wherein performing video stitching on the real-time video data of the vehicle's surroundings to form 360-degree panoramic real-time video data comprises:

obtaining the distortion coefficients of the camera equipment arranged around the vehicle;

calculating transformation coefficients between the cameras arranged around the vehicle from the distortion coefficients;

and performing fusion stitching on the real-time video data of the vehicle's surroundings based on the transformation coefficients, to form the 360-degree panoramic real-time video data.

4. The vehicle driving assistance method according to claim 3, wherein performing fusion stitching on the real-time video data of the vehicle's surroundings based on the transformation coefficients, to form the 360-degree panoramic real-time video data, comprises:

de-distorting the real-time video data of the vehicle's surroundings using the transformation coefficients, to obtain de-distorted real-time video data;

stitching the de-distorted real-time video data to obtain stitched real-time video data;

performing curved-surface projection on the stitched real-time video data to obtain curved-surface-projected real-time video data;

and performing multi-band fusion on the curved-surface-projected real-time video data to form the 360-degree panoramic real-time video data.

5. The vehicle driving assistance method according to claim 1, wherein splitting the 360-degree panoramic real-time video data into frames to form panoramic real-time video frame data comprises:

splitting the 360-degree panoramic real-time video data into frames to obtain first panoramic real-time video frame data;

and performing interval frame extraction and redundancy removal on the first panoramic real-time video frame data, to form the panoramic real-time video frame data.

6. The vehicle driving assistance method according to claim 1, wherein identifying target data in the panoramic real-time video frame data, and determining the position of the target data relative to the vehicle's current position and the category of the target data, comprises:

performing target recognition on the panoramic real-time video frame data based on a convolutional neural network model, to obtain identified target data;

completing occluded regions of the identified target data based on an image stitching algorithm, to obtain completed target data;

determining the position of the completed target data relative to the vehicle's current position based on the position of the completed target data within the panoramic real-time video frame;

and performing fuzzy classification matching on the completed target data, to determine the category of the completed target data.

7. The vehicle driving assistance method according to claim 6, wherein performing target recognition on the panoramic real-time video frame data based on the convolutional neural network model, to obtain identified target data, comprises:

inputting the panoramic real-time video frame data into a convolutional neural network model, and extracting target features of different scales in the forward-propagation network of the convolutional neural network model, to obtain target features of different scales;

screening and localizing preliminary candidate boxes from the target features of different scales via a Region Proposal Network (RPN), and discarding candidate boxes that contain no targets of the classes of interest;

inputting the retained candidate boxes into a deconvolution network, and outputting corrected images of the same size as the original target data;

and inputting the corrected images into a fully connected network and a fully connected layer for target recognition, to obtain the identified target data.

8. The vehicle driving assistance method according to claim 1, wherein performing target recognition and/or target trajectory prediction based on the position of the target data relative to the vehicle's current position and the category of the target data comprises:

when only target traffic sign data exists in the target data and the traffic sign is directly or laterally in front of the vehicle's current position, recognising the target traffic sign data;

when target traffic sign data together with target vehicle data and/or target pedestrian data exists in the target data and the traffic sign is directly or laterally in front of the vehicle's current position, recognising the target traffic sign data and performing target trajectory prediction on the target vehicle data and/or target pedestrian data;

when target traffic sign data together with target vehicle data and/or target pedestrian data exists in the target data and the traffic sign is not directly or laterally in front of the vehicle's current position, performing target trajectory prediction on the target vehicle data and/or target pedestrian data;

and when only target vehicle data and/or target pedestrian data exists in the target data, performing target trajectory prediction on the target vehicle data and/or target pedestrian data.

9. The vehicle driving assistance method according to claim 1, wherein issuing a driving assistance warning to the driver of the vehicle based on the current state of the vehicle and the recognition result and/or target trajectory prediction result for the target data comprises:

displaying and pushing the current state of the vehicle and the recognition result and/or target trajectory prediction result for the target data to the driver on the vehicle's central control screen, and playing a voice prompt as an auxiliary alert.

10. A vehicle driving assistance apparatus based on vehicle-mounted real-time monitoring, characterized in that the apparatus comprises:

a video acquisition module, configured to acquire real-time video of the vehicle's surroundings by means of camera equipment arranged around the vehicle, to obtain real-time video data of the vehicle's surroundings;

a video stitching module, configured to perform video stitching on the real-time video data of the vehicle's surroundings, to form 360-degree panoramic real-time video data;

a video framing module, configured to split the 360-degree panoramic real-time video data into frames, to form panoramic real-time video frame data;

a determination module, configured to identify target data in the panoramic real-time video frame data, and to determine the position of the target data relative to the vehicle's current position and the category of the target data, wherein the categories of target data comprise target traffic sign data, target vehicle data and target pedestrian data;

a target recognition and trajectory prediction module, configured to perform target recognition and/or target trajectory prediction based on the position of the target data relative to the vehicle's current position and the category of the target data, to obtain a recognition result and/or a target trajectory prediction result for the target data;

and an auxiliary early-warning module, configured to issue a driving assistance warning to the driver of the vehicle based on the current state of the vehicle and the recognition result and/or target trajectory prediction result for the target data.

Technical Field

The invention relates to the technical field of image processing, and in particular to a vehicle driving assistance method and device based on vehicle-mounted real-time monitoring.

Background

Existing automotive driving assistance is generally divided into several systems, such as lane keeping assistance, automatic parking assistance, brake assistance, reversing assistance and general driving assistance. Existing vehicle-mounted 360-degree panoramic video monitoring is usually tied to the reversing assistance system: it is activated only after the driver engages reverse gear, and in other situations it must be enabled manually. The open problem is therefore how, during normal driving, vehicle-mounted 360-degree panoramic video monitoring can be integrated into the relevant driving assistance functions so as to further improve driving safety.

Disclosure of Invention

The invention aims to overcome the defects of the prior art by providing a vehicle driving assistance method and device based on vehicle-mounted real-time monitoring, which can issue effective driving assistance warnings according to the complex environment outside the vehicle and improve the driving safety of the vehicle.

In order to solve the above technical problem, an embodiment of the present invention provides a vehicle driving assistance method based on vehicle-mounted real-time monitoring, the method comprising:

acquiring real-time video of the vehicle's surroundings by means of camera equipment arranged around the vehicle, to obtain real-time video data of the vehicle's surroundings;

performing video stitching on the real-time video data of the vehicle's surroundings to form 360-degree panoramic real-time video data;

splitting the 360-degree panoramic real-time video data into frames to form panoramic real-time video frame data;

identifying target data in the panoramic real-time video frame data, and determining the position of the target data relative to the vehicle's current position and the category of the target data, wherein the categories of target data comprise target traffic sign data, target vehicle data and target pedestrian data;

performing target recognition and/or target trajectory prediction based on the position of the target data relative to the vehicle's current position and the category of the target data, to obtain a recognition result and/or a target trajectory prediction result for the target data;

and issuing a driving assistance warning to the driver of the vehicle based on the current state of the vehicle and the recognition result and/or target trajectory prediction result for the target data.

Optionally, acquiring real-time video of the vehicle's surroundings by means of camera equipment arranged around the vehicle, to obtain real-time video data of the vehicle's surroundings, comprises:

after the vehicle is started, activating the camera equipment arranged around the vehicle to acquire real-time video of the vehicle's surroundings, and tagging the acquired real-time video data according to the position of each acquisition camera on the vehicle, to form the real-time video data of the vehicle's surroundings.

Optionally, performing video stitching on the real-time video data of the vehicle's surroundings to form 360-degree panoramic real-time video data comprises:

obtaining the distortion coefficients of the camera equipment arranged around the vehicle;

calculating transformation coefficients between the cameras arranged around the vehicle from the distortion coefficients;

and performing fusion stitching on the real-time video data of the vehicle's surroundings based on the transformation coefficients, to form the 360-degree panoramic real-time video data.

Optionally, performing fusion stitching on the real-time video data of the vehicle's surroundings based on the transformation coefficients, to form the 360-degree panoramic real-time video data, comprises:

de-distorting the real-time video data of the vehicle's surroundings using the transformation coefficients, to obtain de-distorted real-time video data;

stitching the de-distorted real-time video data to obtain stitched real-time video data;

performing curved-surface projection on the stitched real-time video data to obtain curved-surface-projected real-time video data;

and performing multi-band fusion on the curved-surface-projected real-time video data to form the 360-degree panoramic real-time video data.

Optionally, splitting the 360-degree panoramic real-time video data into frames to form panoramic real-time video frame data comprises:

splitting the 360-degree panoramic real-time video data into frames to obtain first panoramic real-time video frame data;

and performing interval frame extraction and redundancy removal on the first panoramic real-time video frame data, to form the panoramic real-time video frame data.

Optionally, identifying target data in the panoramic real-time video frame data, and determining the position of the target data relative to the vehicle's current position and the category of the target data, comprises:

performing target recognition on the panoramic real-time video frame data based on a convolutional neural network model, to obtain identified target data;

completing occluded regions of the identified target data based on an image stitching algorithm, to obtain completed target data;

determining the position of the completed target data relative to the vehicle's current position based on the position of the completed target data within the panoramic real-time video frame;

and performing fuzzy classification matching on the completed target data, to determine the category of the completed target data.

Optionally, performing target recognition on the panoramic real-time video frame data based on the convolutional neural network model, to obtain identified target data, comprises:

inputting the panoramic real-time video frame data into a convolutional neural network model, and extracting target features of different scales in the forward-propagation network of the convolutional neural network model, to obtain target features of different scales;

screening and localizing preliminary candidate boxes from the target features of different scales via a Region Proposal Network (RPN), and discarding candidate boxes that contain no targets of the classes of interest;

inputting the retained candidate boxes into a deconvolution network, and outputting corrected images of the same size as the original target data;

and inputting the corrected images into a fully connected network and a fully connected layer for target recognition, to obtain the identified target data.

Optionally, performing target recognition and/or target trajectory prediction based on the position of the target data relative to the vehicle's current position and the category of the target data comprises:

when only target traffic sign data exists in the target data and the traffic sign is directly or laterally in front of the vehicle's current position, recognising the target traffic sign data;

when target traffic sign data together with target vehicle data and/or target pedestrian data exists in the target data and the traffic sign is directly or laterally in front of the vehicle's current position, recognising the target traffic sign data and performing target trajectory prediction on the target vehicle data and/or target pedestrian data;

when target traffic sign data together with target vehicle data and/or target pedestrian data exists in the target data and the traffic sign is not directly or laterally in front of the vehicle's current position, performing target trajectory prediction on the target vehicle data and/or target pedestrian data;

and when only target vehicle data and/or target pedestrian data exists in the target data, performing target trajectory prediction on the target vehicle data and/or target pedestrian data.

Optionally, issuing a driving assistance warning to the driver of the vehicle based on the current state of the vehicle and the recognition result and/or target trajectory prediction result for the target data comprises:

displaying and pushing the current state of the vehicle and the recognition result and/or target trajectory prediction result for the target data to the driver on the vehicle's central control screen, and playing a voice prompt as an auxiliary alert.

In addition, an embodiment of the invention further provides a vehicle driving assistance apparatus based on vehicle-mounted real-time monitoring, the apparatus comprising:

a video acquisition module, configured to acquire real-time video of the vehicle's surroundings by means of camera equipment arranged around the vehicle, to obtain real-time video data of the vehicle's surroundings;

a video stitching module, configured to perform video stitching on the real-time video data of the vehicle's surroundings, to form 360-degree panoramic real-time video data;

a video framing module, configured to split the 360-degree panoramic real-time video data into frames, to form panoramic real-time video frame data;

a determination module, configured to identify target data in the panoramic real-time video frame data, and to determine the position of the target data relative to the vehicle's current position and the category of the target data, wherein the categories of target data comprise target traffic sign data, target vehicle data and target pedestrian data;

a target recognition and trajectory prediction module, configured to perform target recognition and/or target trajectory prediction based on the position of the target data relative to the vehicle's current position and the category of the target data, to obtain a recognition result and/or a target trajectory prediction result for the target data;

and an auxiliary early-warning module, configured to issue a driving assistance warning to the driver of the vehicle based on the current state of the vehicle and the recognition result and/or target trajectory prediction result for the target data.

In the embodiments of the invention, driving assistance warnings can be issued effectively according to the complex environment outside the vehicle, thereby improving the driving safety of the vehicle. Target recognition is performed on the 360-degree panoramic real-time video data that is formed, so that driving assistance warnings can be issued based on the target recognition results and the current state of the vehicle, safeguarding the driver of the vehicle.

Drawings

In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.

Fig. 1 is a schematic flow chart of a vehicle driving assistance method based on vehicle-mounted real-time monitoring in an embodiment of the invention;

fig. 2 is a schematic structural composition diagram of a vehicle driving assistance device based on vehicle-mounted real-time monitoring in an embodiment of the invention.

Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments derived by those skilled in the art from the embodiments given herein without creative effort shall fall within the scope of protection of the present invention.

Embodiment 1

Referring to fig. 1, fig. 1 is a schematic flow chart of a vehicle driving assistance method based on vehicle-mounted real-time monitoring according to an embodiment of the present invention.

As shown in fig. 1, a vehicle driving assistance method based on vehicle-mounted real-time monitoring includes:

s11: the method comprises the steps that real-time video acquisition processing of the periphery of a vehicle is carried out on the basis of camera equipment arranged on the periphery of the vehicle, and real-time video data of the periphery of the vehicle are obtained;

In a specific implementation of the invention, acquiring real-time video of the vehicle's surroundings by means of camera equipment arranged around the vehicle, to obtain real-time video data of the vehicle's surroundings, comprises: after the vehicle is started, activating the camera equipment arranged around the vehicle to acquire real-time video of the vehicle's surroundings, and tagging the acquired real-time video data according to the position of each acquisition camera on the vehicle, to form the real-time video data of the vehicle's surroundings.

Specifically, a plurality of camera devices are arranged on the vehicle, with at least one camera directly in front of, directly behind, and on each of the left and right sides of the vehicle, so that the video data they collect can be composed into a 360-degree panoramic video image. After the vehicle is started, the cameras arranged around the vehicle are activated to acquire real-time video of the surroundings, and the acquired real-time video data is tagged according to the position of each acquisition camera on the vehicle to form the real-time video data of the vehicle's surroundings. The tagging is generally done per camera device, which makes it easy to determine later the direction of a target in the video relative to the vehicle's position and whether it affects driving, thereby improving the efficiency of subsequent video processing.
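As an illustrative sketch (not part of the patent text), the per-camera tagging described above can be modelled as stamping each captured frame with the mounting position of the camera that produced it; the four-camera layout and field names below are assumptions:

```python
from dataclasses import dataclass
from typing import List

# Hypothetical layout: the patent only requires at least one camera on each
# of the four sides of the vehicle, so this mapping is an assumption.
CAMERA_POSITIONS = {0: "front", 1: "rear", 2: "left", 3: "right"}

@dataclass
class TaggedFrame:
    camera_id: int
    position: str      # mounting position relative to the vehicle
    timestamp_ms: int
    pixels: bytes      # raw frame payload (placeholder for real image data)

def tag_frame(camera_id: int, timestamp_ms: int, pixels: bytes) -> TaggedFrame:
    """Mark a captured frame with the position of the camera that produced it."""
    return TaggedFrame(camera_id, CAMERA_POSITIONS[camera_id], timestamp_ms, pixels)

# Two frames captured at the same instant by the front and left cameras.
frames: List[TaggedFrame] = [tag_frame(0, 40, b""), tag_frame(2, 40, b"")]
```

The position tag is what later lets the warning logic say that a target is, for example, to the left of the vehicle without re-deriving that from image content.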

S12: performing video splicing processing based on the real-time video data around the vehicle to form 360-degree panoramic real-time video data;

In a specific implementation of the invention, performing video stitching on the real-time video data of the vehicle's surroundings to form 360-degree panoramic real-time video data comprises: obtaining the distortion coefficients of the camera equipment arranged around the vehicle; calculating transformation coefficients between the cameras arranged around the vehicle from the distortion coefficients; and performing fusion stitching on the real-time video data of the vehicle's surroundings based on the transformation coefficients, to form the 360-degree panoramic real-time video data.

Further, performing fusion stitching on the real-time video data of the vehicle's surroundings based on the transformation coefficients, to form the 360-degree panoramic real-time video data, comprises: de-distorting the real-time video data of the vehicle's surroundings using the transformation coefficients, to obtain de-distorted real-time video data; stitching the de-distorted real-time video data to obtain stitched real-time video data; performing curved-surface projection on the stitched real-time video data to obtain curved-surface-projected real-time video data; and performing multi-band fusion on the curved-surface-projected real-time video data to form the 360-degree panoramic real-time video data.

Specifically, each camera device arranged on the vehicle has its own distortion coefficients, which therefore need to be read out and obtained; transformation coefficients between all the cameras arranged around the vehicle are then calculated from the obtained distortion coefficients; finally, the real-time video data of the vehicle's surroundings is fusion-stitched according to the transformation coefficients to obtain the 360-degree panoramic real-time video data.

During fusion stitching, the real-time video data of the vehicle's surroundings is first de-distorted using the transformation coefficients to obtain de-distorted real-time video data; the de-distorted real-time video data is then stitched to obtain stitched real-time video data; curved-surface projection is applied to the stitched real-time video data; and finally multi-band fusion is applied to the projected data to form the 360-degree panoramic real-time video data. Stitching the video with this algorithm achieves seamless joins, leaving no overlapping seam region between the stitched videos, which facilitates subsequent target recognition and tracking and improves processing efficiency.
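The fusion step at the seam can be illustrated with a much simpler single-band stand-in: a linear feather blend across the overlap between two adjacent camera strips. The sketch below works on 1-D grayscale rows and is not the patent's actual multi-band (frequency-band-wise) algorithm; the function name and overlap model are assumptions:

```python
def feather_blend(left, right, overlap):
    """Blend two horizontally adjacent image rows that share `overlap` pixels.

    `left` and `right` are lists of grayscale values; the last `overlap`
    pixels of `left` cover the same scene as the first `overlap` pixels of
    `right`. A linear feather is a single-band simplification of the
    multi-band fusion described above.
    """
    assert overlap > 0
    body_l = left[:-overlap]            # pixels seen only by the left camera
    body_r = right[overlap:]            # pixels seen only by the right camera
    seam = []
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)     # weight ramps from left to right camera
        seam.append((1 - w) * left[len(left) - overlap + i] + w * right[i])
    return body_l + seam + body_r

# Two strips disagreeing in the overlap (e.g. due to exposure differences)
# are joined without a hard edge.
row = feather_blend([10, 10, 20, 20], [40, 40, 50, 50], overlap=2)
```

A real multi-band fusion would build Laplacian pyramids of both strips and blend each frequency band with a differently-sized feather, which hides seams better than this single ramp.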

S13: framing the 360-degree panoramic real-time video data to form panoramic real-time video frame data;

In a specific implementation of the invention, splitting the 360-degree panoramic real-time video data into frames to form panoramic real-time video frame data comprises: splitting the 360-degree panoramic real-time video data into frames to obtain first panoramic real-time video frame data; and performing interval frame extraction and redundancy removal on the first panoramic real-time video frame data, to form the panoramic real-time video frame data.

Specifically, the 360-degree panoramic real-time video data needs to be split into frames; the split is generally performed according to the capture frame rate of the camera equipment, and since the frame rates of the cameras arranged on the vehicle are identical, the first panoramic real-time video frame data can be obtained directly. The first panoramic real-time video frame data is then subjected to interval frame extraction and redundancy removal to form the panoramic real-time video frame data; removing part of the redundant frames improves computational efficiency while preserving the accuracy of recognition and tracking.
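A minimal sketch of the interval frame extraction and redundancy removal steps might look as follows. The patent does not specify how redundant frames are detected, so the mean-absolute-difference criterion and the `threshold` value are illustrative assumptions:

```python
def decimate(frames, step):
    """Interval frame extraction: keep every `step`-th frame."""
    return frames[::step]

def drop_redundant(frames, threshold):
    """Redundancy removal: drop a frame whose mean absolute pixel difference
    from the last kept frame is below `threshold`.

    Frames are lists of grayscale pixel values; `threshold` is an assumed
    tuning parameter, not a value from the patent.
    """
    kept = [frames[0]]
    for f in frames[1:]:
        last = kept[-1]
        mad = sum(abs(a - b) for a, b in zip(f, last)) / len(f)
        if mad >= threshold:
            kept.append(f)
    return kept

sampled = decimate([[0], [1], [2], [3], [4], [5]], step=2)
# The middle frame barely differs from the first, so it is dropped.
kept = drop_redundant([[10, 10], [10, 11], [60, 60]], threshold=5.0)
```

Comparing each frame against the last *kept* frame (rather than its immediate predecessor) prevents slow drift from being discarded as redundancy.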

S14: identifying target data in the panoramic real-time video frame data, and determining the relative position of the target data at the current position of the vehicle and the category of the target data, wherein the category of the target data comprises target traffic identification data, target vehicle data and target pedestrian data;

in a specific implementation process of the present invention, identifying the target data in the panoramic real-time video frame data, and determining the relative position of the target data with respect to the current position of the vehicle and the category of the target data, includes: performing target data identification processing on the panoramic real-time video frame data based on a convolutional neural network model to obtain identified target data; performing occluded-region completion processing on the identified target data based on an image splicing algorithm to obtain completed target data; determining the relative position of the completed target data with respect to the current position of the vehicle based on the position of the completed target data in the panoramic real-time video frame; and performing fuzzy classification matching on the completed target data to determine the category to which the completed target data belongs.

Further, performing the target data identification processing on the panoramic real-time video frame data based on the convolutional neural network model to obtain the identified target data includes: inputting the panoramic real-time video frame data into the convolutional neural network model, and extracting target features of different dimensions in a forward propagation network of the convolutional neural network model; screening and locating preliminary candidate boxes from the target features of different dimensions through an RPN (Region Proposal Network), and removing candidate boxes that contain no targets of the relevant classes; inputting the remaining candidate boxes into a deconvolution network, and outputting a corrected image with the same size as the original target data; and inputting the corrected image into a fully connected network and a fully connected layer for target data identification processing to obtain the identified target data.
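The specification does not state how candidate boxes are screened after the RPN stage; a common stand-in is score-based non-maximum suppression, sketched here purely for illustration (the `(x1, y1, x2, y2)` box format and the IoU threshold are assumptions):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Non-maximum suppression over scored candidate boxes.
    boxes: (N, 4) array of (x1, y1, x2, y2); returns kept indices."""
    order = np.argsort(scores)[::-1]                   # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # intersection of box i with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]                 # drop overlapping boxes
    return keep
```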

Specifically, a convolutional neural network model is first constructed; the model comprises a forward propagation network, an RPN, a deconvolution network, a fully connected network and a fully connected layer, wherein an image correction module for correcting images is arranged in the deconvolution network.

Then, target data identification processing is performed on the panoramic real-time video frame data by the convolutional neural network model to obtain identified target data; occluded-region completion processing is then performed on the identified target data by an image splicing algorithm to obtain completed target data; the relative position of the completed target data with respect to the current position of the vehicle is then determined according to the position of the completed target data in the panoramic real-time video frame; and finally, fuzzy classification matching is performed on the completed target data to determine the category to which it belongs, the categories specifically comprising target traffic identification data, target vehicle data and target pedestrian data.

When identifying the target data in the panoramic real-time video frame data, the panoramic real-time video frame data is input into the convolutional neural network model, and target features of different dimensions are extracted in the forward propagation network of the model; preliminary candidate boxes are screened and located from the target features of different dimensions through the RPN (Region Proposal Network), and candidate boxes that contain no targets of the relevant classes are removed; the remaining candidate boxes are input into the deconvolution network, which outputs a corrected image with the same size as the original target data; and the corrected image is input into the fully connected network and the fully connected layer for target data identification processing to obtain the identified target data.

When the panoramic real-time video frame data is propagated forward in the forward propagation network of the convolutional neural network model during the extraction of the target features of different dimensions, batch normalization and instance normalization are performed on it in sequence, and the target features of different dimensions are then extracted. The purpose of the instance normalization is to reduce the interference of illumination on the extraction of target features of different dimensions and to improve the accuracy with which the convolutional neural network model extracts target information from a target image in a complex environment.
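The batch-then-instance normalization sequence described above can be sketched as follows (the learnable scale and shift parameters of both normalizations are omitted for brevity):

```python
import numpy as np

def batch_then_instance_norm(x, eps=1e-5):
    """x: (N, C, H, W) feature tensor. Batch normalization over (N, H, W)
    per channel, followed by instance normalization over (H, W) per
    sample and channel, matching the order described in the text."""
    mu_b = x.mean(axis=(0, 2, 3), keepdims=True)       # per-channel statistics
    var_b = x.var(axis=(0, 2, 3), keepdims=True)
    x = (x - mu_b) / np.sqrt(var_b + eps)
    mu_i = x.mean(axis=(2, 3), keepdims=True)          # per-instance statistics
    var_i = x.var(axis=(2, 3), keepdims=True)
    return (x - mu_i) / np.sqrt(var_i + eps)
```

Instance normalization removes per-image contrast and brightness statistics, which is why it reduces the influence of illumination on the extracted features.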

S15: performing target identification and/or target track prediction processing based on the relative position of the target data at the current position of the vehicle and the category of the target data to obtain an identification result and/or a target track prediction result of the target data;

in a specific implementation process of the present invention, performing the target identification and/or target trajectory prediction processing based on the relative position of the target data with respect to the current position of the vehicle and the category of the target data includes: when only target traffic identification data exists in the target data and the target traffic identification data is directly in front of or diagonally in front of the current position of the vehicle, performing recognition processing on the target traffic identification data; when target traffic identification data and target vehicle data and/or target pedestrian data exist in the target data and the target traffic identification data is directly in front of or diagonally in front of the current position of the vehicle, performing recognition processing on the target traffic identification data and performing target trajectory prediction processing on the target vehicle data and/or the target pedestrian data; when target traffic identification data and target vehicle data and/or target pedestrian data exist in the target data but the target traffic identification data is not directly in front of or diagonally in front of the current position of the vehicle, performing target trajectory prediction processing on the target vehicle data and/or the target pedestrian data; and when only target vehicle data and/or target pedestrian data exist in the target data, performing target trajectory prediction processing on the target vehicle data and/or the target pedestrian data.

Specifically, the relevant processing is performed according to which of target traffic identification data, target vehicle data and target pedestrian data exist in the target data. For example, when only target traffic identification data exists and it is directly in front of or diagonally in front of the current position of the vehicle, the target traffic identification data is recognized; when target traffic identification data and target vehicle data and/or target pedestrian data exist and the target traffic identification data is directly in front of or diagonally in front of the current position of the vehicle, the target traffic identification data is recognized and target trajectory prediction is performed on the target vehicle data and/or the target pedestrian data; when the target traffic identification data is not directly in front of or diagonally in front of the current position of the vehicle, only target trajectory prediction is performed on the target vehicle data and/or the target pedestrian data; and when the target data comprises only target vehicle data and/or target pedestrian data, target trajectory prediction is performed on them.

When recognizing the target traffic identification data, a trained traffic sign detection network model is first adopted: the panoramic real-time video frame data containing the target traffic identification data is input into the trained traffic sign detection network model, regions that may be traffic signs are detected, and the traffic sign is located in the panoramic real-time video frame data to obtain a first frame image of the traffic sign. SIFT key points of the first traffic sign frame image are extracted; a traffic sign candidate region is drawn on the next frame image according to the position of the first traffic sign frame image, and SIFT key points of the candidate region image are extracted. A SIFT matching search is then performed on the candidate region image using the SIFT key points of the first traffic sign frame image, so as to find the position and bounding box of the traffic sign in the next frame and obtain the second traffic sign frame image; the third traffic sign frame image is obtained in the same way. Image features are extracted from the three traffic sign region images by a pre-trained traffic sign feature extraction network to obtain three sets of image feature data; the three sets of image feature data are then fused, and the fused features are classified by a pre-trained traffic sign recognition network, so that the traffic sign category to which the traffic sign region belongs can be determined.
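The SIFT key-point matching between the first traffic sign image and the candidate region of the next frame is typically done with a ratio test over descriptor distances; a minimal sketch follows (descriptor extraction itself is assumed to come from an existing SIFT implementation, e.g. OpenCV's, and the 0.75 ratio is an illustrative assumption):

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.75):
    """Ratio-test matching between two descriptor sets (one row per
    descriptor). A match (i, j) is kept only when the nearest descriptor
    in `desc_b` is clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)     # distances to all of b
        order = np.argsort(dists)
        if len(order) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

The bounding box of the sign in the next frame can then be estimated from the matched key-point locations.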

When the target trajectory prediction processing is performed on the target vehicle data and/or the target pedestrian data, the target trajectory is predicted by Kalman filtering. Before the trajectory prediction, the occluded part of the target vehicle data and/or the target pedestrian data first needs to be corrected; in the present application, the correction is performed by completing the occluded region in a jigsaw-like manner, specifically comprising: obtaining an instance segmentation result, and extracting the contour of a segmentation mask region from the target vehicle data and/or the target pedestrian data according to the instance segmentation result; fitting the contour by the least squares method to obtain a temporary rough fitting result, traversing each coordinate point on the contour, calculating the distance from the center of the fitting result to each point on the contour, and finding the local maximum points of these distances; finding the maximum points of the distances from the pixel points on the outer contour segment of the fitting result to its center, and taking the two maximum points with the largest distances; repeating the above steps to obtain approximate shape fitting results for all segmentation mask regions; and finally obtaining a corrected image of the target vehicle data and/or the target pedestrian data according to the approximate shape fitting results of all the segmentation mask regions.
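The Kalman-filter trajectory prediction can be sketched with a constant-velocity model over observed 2-D positions; the process and measurement noise covariances below are illustrative assumptions, not values from the specification:

```python
import numpy as np

def kalman_predict(positions, dt=1.0, steps=1):
    """Filter a sequence of observed (x, y) positions with a
    constant-velocity Kalman filter, then extrapolate `steps`
    future positions. State is (x, y, vx, vy)."""
    F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                  [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)   # motion model
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)   # observe position
    Q = np.eye(4) * 1e-4                                      # process noise
    R = np.eye(2) * 1e-2                                      # measurement noise
    x = np.array([positions[0][0], positions[0][1], 0.0, 0.0])
    P = np.eye(4)
    for z in positions[1:]:
        x = F @ x                                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                        # Kalman gain
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)      # update
        P = (np.eye(4) - K @ H) @ P
    preds = []
    for _ in range(steps):                                    # extrapolate
        x = F @ x
        preds.append((x[0], x[1]))
    return preds
```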

S16: carrying out driving assistance early warning for a driving user of the vehicle based on the current state of the vehicle and the recognition result and/or target trajectory prediction result of the target data.

In a specific implementation process of the present invention, performing the driving assistance early warning for the driving user of the vehicle based on the current state of the vehicle and the recognition result and/or target trajectory prediction result of the target data includes: displaying and pushing the current state and the recognition result and/or target trajectory prediction result of the target data to the driving user on the vehicle central control screen, and playing a voice prompt.

Specifically, the current state of the vehicle includes, but is not limited to, driving forward, reversing, preparing to change lanes, braking, driving, making a U-turn, and the like; the current state and the recognition result and/or target trajectory prediction result of the target data are displayed and pushed to the driving user on the vehicle central control screen, and a voice prompt is played.
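The warning decision itself is not formalized in the specification; a toy rule, stated purely as an assumption, might combine the vehicle state with a target's predicted relative position:

```python
import math

def should_warn(vehicle_state, predicted_rel_pos, danger_radius=5.0):
    """Toy decision rule (an assumption, not the patent's logic): warn
    when the vehicle is in a moving state and a target's predicted
    position lies within `danger_radius` metres of the vehicle.
    The state names and radius are made-up illustrative values."""
    moving = vehicle_state in {"forward", "reverse", "lane_change", "turning"}
    return moving and math.hypot(*predicted_rel_pos) < danger_radius
```

A production system would instead intersect the predicted target trajectory with the vehicle's own planned path over a time horizon.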

In the embodiment of the invention, driving assistance early warning can be performed effectively according to the complex environment outside the vehicle, thereby improving the driving safety of the vehicle; target recognition is performed on the formed 360-degree panoramic real-time video data, so that driving assistance early warning can be given based on the target recognition result and the current state of the vehicle, ensuring the safety of the driver.

Example two

Referring to fig. 2, fig. 2 is a schematic structural diagram of a vehicle driving assistance device based on vehicle-mounted real-time monitoring according to an embodiment of the present invention.

As shown in fig. 2, a vehicle driving assistance apparatus based on-vehicle real-time monitoring, the apparatus includes:

the video acquisition module 21: used for acquiring real-time video around the vehicle based on the camera equipment arranged around the vehicle, so as to obtain real-time video data around the vehicle;

in a specific implementation process of the present invention, the acquiring and processing of the real-time video around the vehicle based on the camera devices arranged around the vehicle to obtain the real-time video data around the vehicle includes: after the vehicle is started, camera equipment arranged around the vehicle is started to acquire and process real-time videos around the vehicle, and the acquired real-time video data are marked according to the position of the acquisition camera arranged in the vehicle to form real-time video data around the vehicle.

Specifically, a plurality of camera devices are arranged on a vehicle, at least one camera device is arranged right in front of the vehicle, right behind the vehicle and on the left side and the right side of the vehicle respectively, and video data collected by the camera devices can form a 360-degree panoramic video image; after the vehicle is started, starting camera equipment arranged around the vehicle to acquire and process real-time video around the vehicle, and marking the acquired real-time video data according to the position of the acquisition camera arranged in the vehicle to form real-time video data around the vehicle; the marking can be generally carried out according to camera equipment arranged on the vehicle, so that the relative direction of the target in the video relative to the position of the vehicle can be conveniently determined subsequently, and whether the driving is influenced or not is determined; thereby improving the processing efficiency of subsequent video data.
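Marking each stream with its camera's mounting position, so that a detection can later be mapped to a direction relative to the vehicle, might look like the following sketch (the camera identifiers and position names are illustrative assumptions):

```python
def label_streams(frames_by_camera):
    """Tag every frame with the mounting position of the camera that
    captured it. `frames_by_camera` maps a camera id to its frames;
    the id-to-position table below is a made-up example."""
    positions = {"cam_front": "front", "cam_rear": "rear",
                 "cam_left": "left", "cam_right": "right"}
    return [{"position": positions[cam], "frame": frame}
            for cam, frames in frames_by_camera.items()
            for frame in frames]
```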

The video stitching module 22: used for performing video splicing processing based on the real-time video data around the vehicle to form 360-degree panoramic real-time video data;

in a specific implementation process of the present invention, the video stitching processing is performed based on the real-time video data around the vehicle to form 360-degree panoramic real-time video data, and the method includes: obtaining distortion coefficients of camera equipment arranged around the vehicle; calculating transformation coefficients between the distortion coefficients and camera equipment arranged around the vehicle; and performing fusion splicing processing on the real-time video data around the vehicle based on the transformation coefficient to form 360-degree panoramic real-time video data.

Further, the fusion and splicing processing is performed on the real-time video data around the vehicle based on the transformation coefficient to form 360-degree panoramic real-time video data, and the method includes: carrying out distortion removal processing on the real-time video data around the vehicle by using the transformation coefficient to obtain real-time video data with distortion removed; splicing the real-time video data after distortion removal to obtain spliced real-time video data; performing curved surface projection processing on the spliced real-time video data to obtain curved surface projected real-time video data; and performing multiband fusion processing on the real-time video data subjected to the curved surface projection to form 360-degree panoramic real-time video data.

Specifically, each camera device arranged on the vehicle has its own distortion coefficients, which therefore need to be read out; transformation coefficients between the obtained distortion coefficients and all the camera devices arranged around the vehicle are then calculated; and the real-time video data around the vehicle is fusion-spliced according to the transformation coefficients to obtain 360-degree panoramic real-time video data.

During the fusion and splicing processing, distortion removal is first performed on the real-time video data around the vehicle by using the transformation coefficients to obtain undistorted real-time video data; the undistorted real-time video data are then spliced to obtain spliced real-time video data; curved-surface projection processing is performed on the spliced real-time video data to obtain projected real-time video data; and finally, multi-band fusion processing is performed on the projected real-time video data to form the 360-degree panoramic real-time video data. Splicing the videos with this algorithm achieves seamless splicing, so that no overlapping seam remains between the spliced videos, which facilitates subsequent target identification and tracking and improves processing efficiency.

The video framing module 23: used for performing framing processing on the 360-degree panoramic real-time video data to form panoramic real-time video frame data;

in a specific implementation process of the present invention, framing the 360-degree panoramic real-time video data to form the panoramic real-time video frame data includes: performing framing processing on the 360-degree panoramic real-time video data to obtain first panoramic real-time video frame data; and performing interval frame extraction and redundancy removal on the first panoramic real-time video frame data to form the panoramic real-time video frame data.

Specifically, the 360-degree panoramic real-time video data needs to be divided into frames; framing is generally performed according to the shooting frequency of the camera equipment, and since the shooting frequencies of the cameras arranged on the vehicle are consistent, the first panoramic real-time video frame data can be obtained directly. Because consecutive frames captured at a high frame rate are largely redundant, the first panoramic real-time video frame data is then subjected to interval frame extraction and redundancy removal to form the panoramic real-time video frame data; removing part of the redundant frames improves the calculation efficiency while preserving the accuracy of calculation, identification and tracking.

The determination module 24: used for identifying target data in the panoramic real-time video frame data and determining the relative position of the target data with respect to the current position of the vehicle and the category of the target data, wherein the category of the target data comprises target traffic identification data, target vehicle data and target pedestrian data;

in a specific implementation process of the present invention, identifying the target data in the panoramic real-time video frame data, and determining the relative position of the target data with respect to the current position of the vehicle and the category of the target data, includes: performing target data identification processing on the panoramic real-time video frame data based on a convolutional neural network model to obtain identified target data; performing occluded-region completion processing on the identified target data based on an image splicing algorithm to obtain completed target data; determining the relative position of the completed target data with respect to the current position of the vehicle based on the position of the completed target data in the panoramic real-time video frame; and performing fuzzy classification matching on the completed target data to determine the category to which the completed target data belongs.

Further, performing the target data identification processing on the panoramic real-time video frame data based on the convolutional neural network model to obtain the identified target data includes: inputting the panoramic real-time video frame data into the convolutional neural network model, and extracting target features of different dimensions in a forward propagation network of the convolutional neural network model; screening and locating preliminary candidate boxes from the target features of different dimensions through an RPN (Region Proposal Network), and removing candidate boxes that contain no targets of the relevant classes; inputting the remaining candidate boxes into a deconvolution network, and outputting a corrected image with the same size as the original target data; and inputting the corrected image into a fully connected network and a fully connected layer for target data identification processing to obtain the identified target data.

Specifically, a convolutional neural network model is first constructed; the model comprises a forward propagation network, an RPN, a deconvolution network, a fully connected network and a fully connected layer, wherein an image correction module for correcting images is arranged in the deconvolution network.

Then, target data identification processing is performed on the panoramic real-time video frame data by the convolutional neural network model to obtain identified target data; occluded-region completion processing is then performed on the identified target data by an image splicing algorithm to obtain completed target data; the relative position of the completed target data with respect to the current position of the vehicle is then determined according to the position of the completed target data in the panoramic real-time video frame; and finally, fuzzy classification matching is performed on the completed target data to determine the category to which it belongs, the categories specifically comprising target traffic identification data, target vehicle data and target pedestrian data.

When identifying the target data in the panoramic real-time video frame data, the panoramic real-time video frame data is input into the convolutional neural network model, and target features of different dimensions are extracted in the forward propagation network of the model; preliminary candidate boxes are screened and located from the target features of different dimensions through the RPN (Region Proposal Network), and candidate boxes that contain no targets of the relevant classes are removed; the remaining candidate boxes are input into the deconvolution network, which outputs a corrected image with the same size as the original target data; and the corrected image is input into the fully connected network and the fully connected layer for target data identification processing to obtain the identified target data.

When the panoramic real-time video frame data is propagated forward in the forward propagation network of the convolutional neural network model during the extraction of the target features of different dimensions, batch normalization and instance normalization are performed on it in sequence, and the target features of different dimensions are then extracted. The purpose of the instance normalization is to reduce the interference of illumination on the extraction of target features of different dimensions and to improve the accuracy with which the convolutional neural network model extracts target information from a target image in a complex environment.

The target identification and trajectory prediction module 25: used for performing target identification and/or target trajectory prediction processing based on the relative position of the target data with respect to the current position of the vehicle and the category of the target data, so as to obtain a recognition result and/or a target trajectory prediction result of the target data;

in a specific implementation process of the present invention, performing the target identification and/or target trajectory prediction processing based on the relative position of the target data with respect to the current position of the vehicle and the category of the target data includes: when only target traffic identification data exists in the target data and the target traffic identification data is directly in front of or diagonally in front of the current position of the vehicle, performing recognition processing on the target traffic identification data; when target traffic identification data and target vehicle data and/or target pedestrian data exist in the target data and the target traffic identification data is directly in front of or diagonally in front of the current position of the vehicle, performing recognition processing on the target traffic identification data and performing target trajectory prediction processing on the target vehicle data and/or the target pedestrian data; when target traffic identification data and target vehicle data and/or target pedestrian data exist in the target data but the target traffic identification data is not directly in front of or diagonally in front of the current position of the vehicle, performing target trajectory prediction processing on the target vehicle data and/or the target pedestrian data; and when only target vehicle data and/or target pedestrian data exist in the target data, performing target trajectory prediction processing on the target vehicle data and/or the target pedestrian data.

Specifically, the relevant processing is performed according to which of target traffic identification data, target vehicle data and target pedestrian data exist in the target data. For example, when only target traffic identification data exists and it is directly in front of or diagonally in front of the current position of the vehicle, the target traffic identification data is recognized; when target traffic identification data and target vehicle data and/or target pedestrian data exist and the target traffic identification data is directly in front of or diagonally in front of the current position of the vehicle, the target traffic identification data is recognized and target trajectory prediction is performed on the target vehicle data and/or the target pedestrian data; when the target traffic identification data is not directly in front of or diagonally in front of the current position of the vehicle, only target trajectory prediction is performed on the target vehicle data and/or the target pedestrian data; and when the target data comprises only target vehicle data and/or target pedestrian data, target trajectory prediction is performed on them.

When recognizing the target traffic identification data, a trained traffic sign detection network model is first adopted: the panoramic real-time video frame data containing the target traffic identification data is input into the trained traffic sign detection network model, regions that may be traffic signs are detected, and the traffic sign is located in the panoramic real-time video frame data to obtain a first frame image of the traffic sign. SIFT key points of the first traffic sign frame image are extracted; a traffic sign candidate region is drawn on the next frame image according to the position of the first traffic sign frame image, and SIFT key points of the candidate region image are extracted. A SIFT matching search is then performed on the candidate region image using the SIFT key points of the first traffic sign frame image, so as to find the position and bounding box of the traffic sign in the next frame and obtain the second traffic sign frame image; the third traffic sign frame image is obtained in the same way. Image features are extracted from the three traffic sign region images by a pre-trained traffic sign feature extraction network to obtain three sets of image feature data; the three sets of image feature data are then fused, and the fused features are classified by a pre-trained traffic sign recognition network, so that the traffic sign category to which the traffic sign region belongs can be determined.

When the target trajectory prediction processing is performed on the target vehicle data and/or the target pedestrian data, the target trajectory is predicted by Kalman filtering. Before the trajectory prediction, the occluded part of the target vehicle data and/or the target pedestrian data first needs to be corrected; in the present application, the correction is performed by completing the occluded region in a jigsaw-like manner, specifically comprising: obtaining an instance segmentation result, and extracting the contour of a segmentation mask region from the target vehicle data and/or the target pedestrian data according to the instance segmentation result; fitting the contour by the least squares method to obtain a temporary rough fitting result, traversing each coordinate point on the contour, calculating the distance from the center of the fitting result to each point on the contour, and finding the local maximum points of these distances; finding the maximum points of the distances from the pixel points on the outer contour segment of the fitting result to its center, and taking the two maximum points with the largest distances; repeating the above steps to obtain approximate shape fitting results for all segmentation mask regions; and finally obtaining a corrected image of the target vehicle data and/or the target pedestrian data according to the approximate shape fitting results of all the segmentation mask regions.

The auxiliary early warning module 26: used for carrying out driving assistance early warning for a driving user of the vehicle based on the current state of the vehicle and the recognition result and/or the target trajectory prediction result of the target data.

In a specific implementation of the present invention, carrying out driving assistance early warning for a driving user of the vehicle based on the current state of the vehicle and the recognition result and/or the target trajectory prediction result of the target data includes: displaying and pushing the current state and the recognition result and/or the target trajectory prediction result of the target data to the driving user on the vehicle's central control screen, and playing a voice-assisted prompt.

Specifically, the current state of the vehicle includes, but is not limited to, driving forward, reversing, preparing to change lanes, braking, driving, and turning around; according to the current state and the recognition result and/or the target trajectory prediction result of the target data, the information is displayed and pushed to the driving user on the central control screen, accompanied by a voice-assisted prompt.
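The warning logic described above can be sketched as a rule table mapping the vehicle state and the target analysis to a prompt for the central control screen and voice playback. The states, the 5-metre threshold, and the message strings below are illustrative assumptions, not values taken from this document.

```python
def driving_warning(vehicle_state, target_category, predicted_distance_m):
    """Return a text prompt for screen display / voice playback, or ''.

    vehicle_state: e.g. 'forward', 'reversing', 'preparing_lane_change'
    target_category: e.g. 'pedestrian', 'vehicle', 'traffic_sign'
    predicted_distance_m: predicted distance to the target, or None.
    All names and the 5 m threshold are hypothetical examples.
    """
    # A target predicted to come very close always triggers a warning.
    if predicted_distance_m is not None and predicted_distance_m < 5.0:
        return f"Warning: {target_category} within {predicted_distance_m:.1f} m"
    # State-specific rules for more distant targets.
    if vehicle_state == "reversing" and target_category == "pedestrian":
        return "Caution: pedestrian behind vehicle"
    if vehicle_state == "preparing_lane_change" and target_category == "vehicle":
        return "Caution: vehicle in adjacent lane"
    return ""
```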

In the embodiment of the invention, driving assistance early warning can be carried out effectively according to the complex environment outside the vehicle, thereby improving the driving safety of the vehicle; target recognition is performed on the 360-degree panoramic real-time video data that is formed, so that driving assistance early warning can be issued based on the target recognition result and the current state of the vehicle, ensuring the safety of the driver of the vehicle.

Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, or the like.

The vehicle driving assistance method and device based on vehicle-mounted real-time monitoring provided by the embodiments of the invention have been described in detail above. A specific example is used herein to explain the principle and implementation of the invention, and the description of the embodiments is only intended to help understand the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
