Binocular vision three-dimensional optical image and ground-based radar data fusion method and system

Document No.: 1935980    Publication date: 2021-12-07

Reading note: this technology, "Binocular vision three-dimensional optical image and ground-based radar data fusion method and system", was designed and created by 黄平平 (Huang Pingping), 谭维贤 (Tan Weixian), 乞耀龙 (Qi Yaolong) and 陈曙光 (Chen Shuguang) on 2021-10-11. Its main content is as follows: the invention provides a method and a system for fusing binocular vision three-dimensional optical images and ground-based radar data, wherein the method comprises: calibrating a binocular system and capturing images with it to obtain optical topographic map data, i.e. optical depth data; setting file paths and parameters through a computer; reading the corresponding files according to the settings in the parameter and data path file and generating three-dimensional optical image data from those files; judging whether the radar type is a linear scanning radar or a rotary scanning radar by reading the information in the parameter and data path file; converting the three-dimensional optical image data to a spherical coordinate system and performing angle adjustment in that system; reading the radar data and fusing them with the three-dimensional optical image data in the spherical coordinate system, matching them under the azimuth-direction and range-direction constraints; and displaying the result of fusing the radar data with the three-dimensional optical image data. The method improves the timeliness and portability of ground-based radar monitoring applications.

1. A binocular vision three-dimensional optical image and ground-based radar data fusion method is characterized by comprising the following steps:

S1: data acquisition: calibrating a binocular system and capturing images with it to obtain optical depth data;

S2: setting and initializing system parameters: setting a file path and parameters through a computer;

S3: generating three-dimensional optical image data: reading the corresponding files according to the set file path and parameters, and generating three-dimensional optical image data from those files;

S4: selecting a radar type mode: judging whether the radar type is a linear scanning radar or a rotary scanning radar by reading the set file path and parameter information, and selecting the fusion algorithm for the three-dimensional optical image data and the radar data according to the radar type;

S5: processing the three-dimensional optical image data: converting the three-dimensional optical image data to a spherical coordinate system, and performing angle adjustment in the spherical coordinate system according to the geographic coordinates measured by the different radar types; reading the radar data, fusing them with the three-dimensional optical image data in the spherical coordinate system, and matching them under the azimuth-direction and range-direction constraints;

S6: verifying the fusion result: displaying the result of fusing the radar data with the three-dimensional optical image data according to a set display method.

2. The binocular vision three-dimensional optical image and ground-based radar data fusion method according to claim 1, wherein the file path and parameters comprise: the path of the parameter and data path file, the path and file type for generating the three-dimensional optical image data, the path and name for reading the radar data, the output path for the fusion result, and the file type parameter settings.

3. The binocular vision three-dimensional optical image and ground-based radar data fusion method according to claim 1, wherein before the three-dimensional optical image data is converted into a spherical coordinate system, the method further comprises:

data preprocessing: converting the three-dimensional optical image data from an 8-page storage format into an 8-column storage format.

4. The binocular vision three-dimensional optical image and ground-based radar data fusion method according to claim 3, wherein, for a linear scanning radar, the specific method for converting the three-dimensional optical image data to a spherical coordinate system comprises:

firstly, the coordinates of the two end points of the linear track are obtained, the center point of the linear track is calculated, and that center point is taken as the origin of the linear scanning radar; the origin of the linear scanning radar is then taken as the new origin of the preprocessed three-dimensional optical image data, so that the three-dimensional optical image data and the radar data are in the same coordinate system; next, the three-dimensional optical image data in this common coordinate system are converted to a spherical coordinate system, with the results expressed in degrees; finally, the initial horizontal-angle position of the three-dimensional optical image data is converted to the same initial horizontal-angle position as the radar data.

5. The binocular vision three-dimensional optical image and ground-based radar data fusion method according to claim 4, wherein the specific method for adjusting the angles of the geographic coordinates measured by the linear scanning radar in the spherical coordinate system comprises: subtracting the theta values of the three-dimensional optical image data in the spherical coordinate system from 180 degrees, and, taking the radar scan start position as the reference, shifting the three-dimensional optical image data to that reference position to obtain the initial adjustment data; adding or subtracting 360 degrees to the initial adjustment data so that the angles fall between 0 and 360 degrees, and then subtracting 90 degrees to obtain the intermediate data; and finally subtracting 360 degrees from any intermediate values greater than 180 degrees to obtain the final angle adjustment data.

6. The binocular vision three-dimensional optical image and ground-based radar data fusion method according to claim 5, wherein, for a rotary scanning radar, the specific method for converting the three-dimensional optical image data to a spherical coordinate system comprises:

firstly, the rotary scanning radar takes its turntable as the origin; the rotation center of the rotary scanning radar is taken as the new origin of the preprocessed three-dimensional optical image data, so that the three-dimensional optical image data and the radar data are in the same coordinate system; next, the three-dimensional optical image data in this common coordinate system are converted to a spherical coordinate system, with the results expressed in degrees; finally, the initial horizontal-angle position of the three-dimensional optical image data is converted to the same initial horizontal-angle position as the radar data.

7. The binocular vision three-dimensional optical image and ground-based radar data fusion method according to claim 6, wherein the specific method for adjusting the angles of the geographic coordinates measured by the rotary scanning radar in the spherical coordinate system comprises: subtracting the theta values of the three-dimensional optical image data in the spherical coordinate system from 180 degrees, and, taking the radar scan start position as the reference, shifting the three-dimensional optical image data to that reference position to obtain the initial adjustment data; and adding or subtracting 360 degrees to the initial adjustment data so that the angles fall between 0 and 360 degrees, thereby obtaining the final angle adjustment data.

8. The binocular vision three-dimensional optical image and ground-based radar data fusion method according to claim 7, wherein the specific method for reading radar data comprises the following steps:

reading the radar data, wherein the read data are organized along the azimuth direction and the range direction;

processing the read radar data: first extracting the scan angle information of the radar image, defining these data as the polar angle and converting them to degrees; then extracting the scan distance information of the radar image and defining these data as the radial distance;

obtaining the maximum and minimum values of the polar angle and of the radial distance in the radar data, and selecting the polar angle range to be processed: first storing aside the data below the minimum polar angle and above the maximum polar angle, and selecting for processing the polar angle data lying between the minimum and the maximum; likewise storing aside the radial distance data below the minimum radial distance and above the maximum radial distance; and finally extracting the data to be processed and processing them separately to improve the operation speed;

the specific method for matching and fusing under the azimuth-direction and range-direction constraints comprises:

fusing the processed three-dimensional optical image data and the radar data point by point: a minimum function is called with the length of the selected three-dimensional optical image data as the loop count, its arguments being the absolute value of the radial distance difference and the absolute value of the polar angle difference between the three-dimensional optical image data and the radar data, and its return value being the row and column numbers of each successfully fused point.

9. A binocular vision three-dimensional optical image and ground-based radar data fusion system, characterized by comprising: a data acquisition module, a system parameter setting and initializing module, a three-dimensional optical image data generating module, a radar type mode selecting module, a three-dimensional optical image data processing module and a fusion result verifying module;

the data acquisition module: obtaining optical topographic map data by calibrating a binocular system and capturing images with it;

the system parameter setting and initializing module: setting a file path and parameters through a computer;

the three-dimensional optical image data generating module: reading the corresponding files according to the set file path and parameters, and generating three-dimensional optical image data from those files;

the radar type mode selecting module: judging whether the radar type is a linear scanning radar or a rotary scanning radar by reading the set file path and parameter information, and selecting the fusion algorithm for the three-dimensional optical image data and the radar data according to the radar type;

the three-dimensional optical image data processing module: converting the three-dimensional optical image data to a spherical coordinate system, and performing angle adjustment in the spherical coordinate system according to the geographic coordinates measured by the different radar types; reading the radar data, fusing them with the three-dimensional optical image data in the spherical coordinate system, and matching them under the azimuth-direction and range-direction constraints;

the fusion result verifying module: displaying the result of fusing the radar data with the three-dimensional optical image data according to a set display method.

10. A memory storing one or more programs executable by one or more processors to implement the steps of the binocular vision three-dimensional optical image and ground-based radar data fusion method according to any one of claims 1 to 8.

Technical Field

The invention relates to the technical field of machine vision, and in particular to a method and a system for fusing binocular vision three-dimensional optical images and ground-based radar data.

Background

Binocular stereo vision is an important branch of machine vision that measures by directly imitating the way human eyes observe and process a scene. It is a non-contact measurement method that is fast, accurate and simple to operate, and it is particularly suitable for hazardous working environments unfit for manual operation or for situations where human vision cannot meet the requirements. With the rapid development of artificial intelligence and computer technology, a series of achievements have been made in the field of binocular vision at home and abroad. These achievements are mainly applied to bionic robots, driverless cars and the like. For example, Boston Dynamics in the United States and the State Key Laboratory of Intelligent Technology and Systems at Tsinghua University have designed bionic (humanoid) robots capable of avoiding obstacles; the robots designed at Tsinghua University can even walk, jog, cross obstacles, pick up objects and perform other higher-level human-like motions. For driverless cars, Zhang Feng of Nanjing University of Aeronautics and Astronautics and others have used binocular stereo vision to measure safe inter-vehicle distance, ensuring safe driving. Existing products and applications on the market, however, operate at close range, whereas landslide monitoring with a ground-based radar requires medium- or long-range three-dimensional modeling equipment for terrain data acquisition. Because the landslide area cannot be judged visually from the two-dimensional ground-based radar data, the invention displays it intuitively by fusing the three-dimensional optical image data acquired and processed by binocular vision with the ground-based radar data.

Existing ground-based monitoring radars have several problems. Their two-dimensional data are given in azimuth and range coordinates, so a landslide area is hard to identify visually and its specific geographic location is difficult to analyze. The two-dimensional data must therefore be mapped onto three-dimensional optical image data, which can display the landslide area intuitively. However, most existing methods for acquiring high-precision three-dimensional optical image data, such as unmanned aerial vehicle (UAV) surveys and manual point marking, are costly and time-consuming, and some regions have special restrictions such as UAV no-fly zones, so accurate three-dimensional optical image data cannot be acquired in time.

Disclosure of Invention

The invention aims to provide a binocular vision three-dimensional optical image and ground-based radar data fusion method and system to solve the above technical problems in the prior art.

The invention provides a method for fusing binocular vision three-dimensional optical images and ground-based radar data, which comprises the following steps:

S1: data acquisition: calibrating a binocular system and capturing images with it to obtain optical depth data;

S2: setting and initializing system parameters: setting a file path and parameters through a computer;

S3: generating three-dimensional optical image data: reading the corresponding files according to the set file path and parameters, and generating three-dimensional optical image data from those files;

S4: selecting a radar type mode: judging whether the radar type is a linear scanning radar or a rotary scanning radar by reading the set file path and parameter information, and selecting the fusion algorithm for the three-dimensional optical image data and the radar data according to the radar type;

S5: processing the three-dimensional optical image data: converting the three-dimensional optical image data to a spherical coordinate system, and performing angle adjustment in the spherical coordinate system according to the geographic coordinates measured by the different radar types; reading the radar data, fusing them with the three-dimensional optical image data in the spherical coordinate system, and matching them under the azimuth-direction and range-direction constraints;

S6: verifying the fusion result: displaying the result of fusing the radar data with the three-dimensional optical image data according to a set display method.

According to the method provided by the first aspect of the present invention, the file path and parameters comprise: the path of the parameter and data path file, the path and file type for generating the three-dimensional optical image data, the path and name for reading the radar data, the output path for the fusion result, and the file type parameter settings.

According to the method provided by the first aspect of the present invention, before converting the three-dimensional optical image data into the spherical coordinate system, the method further includes:

data preprocessing: converting the three-dimensional optical image data from an 8-page storage format into an 8-column storage format.

According to the method provided by the first aspect of the invention, for the linear scanning radar, the specific method for converting the three-dimensional optical image data to a spherical coordinate system comprises the following steps:

firstly, the coordinates of the two end points of the linear track are obtained, the center point of the linear track is calculated, and that center point is taken as the origin of the linear scanning radar; the origin of the linear scanning radar is then taken as the new origin of the preprocessed three-dimensional optical image data, so that the three-dimensional optical image data and the radar data are in the same coordinate system; next, the three-dimensional optical image data in this common coordinate system are converted to a spherical coordinate system, with the results expressed in degrees; finally, the initial horizontal-angle position of the three-dimensional optical image data is converted to the same initial horizontal-angle position as the radar data.

According to the method provided by the first aspect of the present invention, the specific method for adjusting the angles of the geographic coordinates measured by the linear scanning radar in the spherical coordinate system comprises: subtracting the theta values of the three-dimensional optical image data in the spherical coordinate system from 180 degrees, and, taking the radar scan start position as the reference, shifting the three-dimensional optical image data to that reference position to obtain the initial adjustment data; adding or subtracting 360 degrees to the initial adjustment data so that the angles fall between 0 and 360 degrees, and then subtracting 90 degrees to obtain the intermediate data; and finally subtracting 360 degrees from any intermediate values greater than 180 degrees to obtain the final angle adjustment data.

According to the method provided by the first aspect of the invention, for the rotary scanning radar, the specific method for converting the three-dimensional optical image data into the spherical coordinate system comprises the following steps:

firstly, the rotary scanning radar takes its turntable as the origin; the rotation center of the rotary scanning radar is taken as the new origin of the preprocessed three-dimensional optical image data, so that the three-dimensional optical image data and the radar data are in the same coordinate system; next, the three-dimensional optical image data in this common coordinate system are converted to a spherical coordinate system, with the results expressed in degrees; finally, the initial horizontal-angle position of the three-dimensional optical image data is converted to the same initial horizontal-angle position as the radar data.

According to the method provided by the first aspect of the present invention, the specific method for adjusting the angles of the geographic coordinates measured by the rotary scanning radar in the spherical coordinate system comprises: subtracting the theta values of the three-dimensional optical image data in the spherical coordinate system from 180 degrees, and, taking the radar scan start position as the reference, shifting the three-dimensional optical image data to that reference position to obtain the initial adjustment data; and adding or subtracting 360 degrees to the initial adjustment data so that the angles fall between 0 and 360 degrees, thereby obtaining the final angle adjustment data.

According to the method provided by the first aspect of the present invention, the specific method for reading radar data includes:

reading the radar data, wherein the read data are organized along the azimuth direction and the range direction;

processing the read radar data: first extracting the scan angle information of the radar image, defining these data as the polar angle and converting them to degrees; then extracting the scan distance information of the radar image and defining these data as the radial distance;

obtaining the maximum and minimum values of the polar angle and of the radial distance in the radar data, and selecting the polar angle range to be processed: first storing aside the data below the minimum polar angle and above the maximum polar angle, and selecting for processing the polar angle data lying between the minimum and the maximum; likewise storing aside the radial distance data below the minimum radial distance and above the maximum radial distance; and finally extracting the data to be processed and processing them separately to improve the operation speed;

according to the method provided by the first aspect of the present invention, a specific method for performing matching fusion according to the constraints of the azimuth direction and the distance direction includes:

fusing the processed three-dimensional optical image data and the radar data point by point: a minimum function is called with the length of the selected three-dimensional optical image data as the loop count, its arguments being the absolute value of the radial distance difference and the absolute value of the polar angle difference between the three-dimensional optical image data and the radar data, and its return value being the row and column numbers of each successfully fused point.

A second aspect of the invention provides a binocular vision three-dimensional optical image and ground-based radar data fusion system,

the system comprises: a data acquisition module, a system parameter setting and initializing module, a three-dimensional optical image data generating module, a radar type mode selecting module, a three-dimensional optical image data processing module and a fusion result verifying module;

the data acquisition module: obtaining optical topographic map data by calibrating a binocular system and capturing images with it;

the system parameter setting and initializing module: setting a file path and parameters through a computer;

the three-dimensional optical image data generating module: reading the corresponding files according to the set file path and parameters, and generating three-dimensional optical image data from those files;

the radar type mode selecting module: judging whether the radar type is a linear scanning radar or a rotary scanning radar by reading the set file path and parameter information, and selecting the fusion algorithm for the three-dimensional optical image data and the radar data according to the radar type;

the three-dimensional optical image data processing module: converting the three-dimensional optical image data to a spherical coordinate system, and performing angle adjustment in the spherical coordinate system according to the geographic coordinates measured by the different radar types; reading the radar data, fusing them with the three-dimensional optical image data in the spherical coordinate system, and matching them under the azimuth-direction and range-direction constraints;

the fusion result verifying module: displaying the result of fusing the radar data with the three-dimensional optical image data according to a set display method.

A third aspect of the present invention provides a memory storing one or more programs executable by one or more processors to implement the steps of the binocular vision three-dimensional optical image and ground-based radar data fusion method according to the first aspect.

Compared with the prior art, the technical scheme provided by the application has the following advantages:

the three-dimensional optical image is acquired through binocular vision, and three-dimensional color optical stereo image data with high precision can be processed more quickly. By adopting the fusion of binocular vision and radar data, the timeliness and the portability of the foundation radar in the slope monitoring application process are improved. Binocular vision three-dimensional imaging is coded through the angle of image, and the imaging effect accords with radar scanning range, the reduction that can be better fuses the angle error that exists.

Drawings

In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having letter suffixes or different letter suffixes may represent different instances of similar components. The drawings illustrate various embodiments generally by way of example and not by way of limitation, and together with the description and claims serve to explain the disclosed embodiments. The same reference numbers will be used throughout the drawings to refer to the same or like parts, where appropriate. Such embodiments are illustrative, and are not intended to be exhaustive or exclusive embodiments of the present apparatus or method.

FIG. 1 is a flow chart of a binocular vision three-dimensional optical image and ground-based radar data fusion method adopted by the invention;

FIG. 2 is a structural diagram of the binocular vision three-dimensional optical image and ground-based radar data fusion system adopted by the invention.

Detailed Description

The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.

Embodiment 1:

FIG. 1 is a flowchart of the binocular vision three-dimensional optical image and ground-based radar data fusion method adopted by the present invention. As shown in FIG. 1, embodiment 1 of the present invention provides a method for fusing binocular vision three-dimensional optical images and ground-based radar data, comprising:

S1: data acquisition: calibrating a binocular system and capturing images with it to obtain optical depth data;

S2: setting and initializing system parameters: setting a file path and parameters through a computer;

in some embodiments, the file path and parameters include: the path of the parameter and data path file, the path and file type for generating the three-dimensional optical image data, the path and name for reading the radar data, the output path for the fusion result, and the file type parameter settings;

S3: generating three-dimensional optical image data: reading the corresponding files according to the set file path and the settings in the parameter and data path file, and generating three-dimensional optical image data from those files;

S4: selecting a radar type mode: judging whether the radar type is a linear scanning radar or a rotary scanning radar by reading the set file path and the information in the parameter and data path file, and selecting the fusion algorithm for the three-dimensional optical image data and the radar data according to the radar type;

S5: processing the three-dimensional optical image data: converting the 8-page storage format of the three-dimensional optical image data into an 8-column storage format, extracting the valid data points for subsequent processing and temporarily storing the invalid data, as sketched below; converting the three-dimensional optical image data to a spherical coordinate system, and performing angle adjustment in the spherical coordinate system according to the geographic coordinates measured by the different radar types; reading the radar data, fusing them with the three-dimensional optical image data in the spherical coordinate system, and matching them under the azimuth-direction and range-direction constraints.
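A minimal sketch of this preprocessing step in Python/NumPy, assuming the eight "pages" arrive as an H x W x 8 array whose first three pages hold the X, Y, Z coordinates and whose invalid points are zero-filled (the page order and the invalid-point marker are assumptions, not specified above):

```python
import numpy as np

def pages_to_columns(cube, invalid_value=0.0):
    """Flatten an H x W x 8 page cube into an N x 8 column table and
    split it into valid points and temporarily stored invalid points."""
    h, w, p = cube.shape                     # expect p == 8
    table = cube.reshape(h * w, p)
    # A point whose X, Y, Z all equal invalid_value is treated as invalid
    # (assumed convention); it is kept aside for re-attachment later.
    valid = ~np.all(table[:, :3] == invalid_value, axis=1)
    return table[valid], table[~valid]
```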

In some embodiments, for a linear scanning radar, the specific method for converting the three-dimensional optical image data to a spherical coordinate system includes:

firstly, the coordinates of the two end points of the linear track are obtained, the center point of the linear track is calculated, and that center point is taken as the origin of the linear scanning radar; this origin is then taken as the new origin of the preprocessed three-dimensional optical image data, so that the three-dimensional optical image data and the radar data are in the same coordinate system; next, the three-dimensional optical image data in this common coordinate system are converted to a spherical coordinate system, with the results expressed in degrees; finally, the initial horizontal-angle position of the three-dimensional optical image data is converted to the same initial horizontal-angle position as the radar data;
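The conversion might be sketched as follows, under assumed conventions (r = radial distance, theta = horizontal angle, phi = elevation, all angles in degrees); for the rotary scanning radar described below, the same function applies with the turntable center passed as both track end points:

```python
import numpy as np

def to_radar_spherical(points_xyz, track_end_a, track_end_b):
    """Re-origin the optical points (N x 3) at the center of the linear
    track, then convert them to spherical coordinates in degrees."""
    origin = (np.asarray(track_end_a, float) + np.asarray(track_end_b, float)) / 2.0
    x, y, z = (points_xyz - origin).T        # shared coordinate system
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.degrees(np.arctan2(y, x))     # horizontal angle
    phi = np.degrees(np.arcsin(z / np.maximum(r, 1e-12)))  # elevation
    return np.column_stack([r, theta, phi])
```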

Since the radar scans clockwise from left to right, while the usual coordinate system runs counterclockwise from right to left, the specific method for adjusting the angles of the geographic coordinates measured by the linear scanning radar in the spherical coordinate system comprises the following steps: the theta values of the three-dimensional optical image data in the spherical coordinate system are subtracted from 180 degrees so that the data follow the usual coordinate convention, and the three-dimensional optical image data are shifted to the radar scan start position, taken as the reference, giving the initial adjustment data; because this operation can produce values above 360 degrees or below 0 degrees, 360 degrees is added or subtracted to the initial adjustment data so that the angles fall between 0 and 360 degrees, and 90 degrees is then subtracted to give the intermediate data; finally, 360 degrees is subtracted from any intermediate values greater than 180 degrees to give the final angle adjustment data.
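The adjustment chain can be written directly; a sketch, assuming the shift to the reference position is a subtraction of the radar scan start angle (radar_start_deg is a hypothetical parameter name):

```python
import numpy as np

def adjust_angles_linear(theta_deg, radar_start_deg):
    """Angle adjustment for the linear scanning radar as described above."""
    # Mirror about 180 deg: the radar scans clockwise, the usual
    # convention is counterclockwise.
    t = 180.0 - np.asarray(theta_deg, dtype=float)
    t = t - radar_start_deg                  # shift to the scan start
    t = np.mod(t, 360.0) - 90.0              # wrap to [0, 360), offset 90
    t = np.where(t > 180.0, t - 360.0, t)    # fold values above 180
    return t
```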

Further, for the rotary scanning radar, the specific method for converting the three-dimensional optical image data to a spherical coordinate system comprises the following steps:

firstly, the rotary scanning radar takes its turntable as the origin; the rotation center of the rotary scanning radar is taken as the new origin of the preprocessed three-dimensional optical image data, so that the three-dimensional optical image data and the radar data are in the same coordinate system; next, the three-dimensional optical image data in this common coordinate system are converted to a spherical coordinate system, with the results expressed in degrees; finally, the initial horizontal-angle position of the three-dimensional optical image data is converted to the same initial horizontal-angle position as the radar data;

Since the radar scans clockwise from left to right, while the usual coordinate system runs counterclockwise from right to left, the specific method for adjusting the angles of the geographic coordinates measured by the rotary scanning radar in the spherical coordinate system comprises the following steps: the theta values of the three-dimensional optical image data in the spherical coordinate system are subtracted from 180 degrees so that the data follow the usual coordinate convention, and the three-dimensional optical image data are shifted to the radar scan start position, taken as the reference, giving the initial adjustment data; because this operation can produce values above 360 degrees or below 0 degrees, 360 degrees is added or subtracted to the initial adjustment data so that the angles fall between 0 and 360 degrees, giving the final angle adjustment data.
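The rotary case is the linear case without the 90-degree offset and the final fold; a sketch under the same assumptions:

```python
import numpy as np

def adjust_angles_rotary(theta_deg, radar_start_deg):
    """Angle adjustment for the rotary scanning radar: mirror for the
    clockwise scan, shift to the scan start, wrap into [0, 360)."""
    t = 180.0 - np.asarray(theta_deg, dtype=float) - radar_start_deg
    return np.mod(t, 360.0)   # add/subtract 360 until within 0..360
```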

Preferably, the specific method for reading radar data includes:

reading the radar data, wherein the read data are organized along the azimuth direction and the range direction;

processing the read radar data: first extracting the scan angle information of the radar image, defining these data as the polar angle and converting them to degrees; then extracting the scan distance information of the radar image and defining these data as the radial distance;

Since some invalid data may appear in the three-dimensional color optical stereo image data, only the valid data are processed. The maximum and minimum values of the polar angle and of the radial distance in the radar data are obtained, and the polar angle range to be processed is selected: the data below the minimum polar angle and above the maximum polar angle are first stored aside, and the polar angle data lying between the minimum and the maximum are selected for processing; the radial distance data below the minimum radial distance and above the maximum radial distance are likewise stored aside, and the data to be processed are finally extracted.
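A sketch of this windowing step, assuming the optical points are stored as columns [radial distance, polar angle, ...] (the column layout is an assumption):

```python
import numpy as np

def select_overlap(optical_rtp, radar_polar_deg, radar_radial_m):
    """Keep the optical points inside the radar's polar-angle and
    radial-distance span; points outside are stored for later."""
    a_min, a_max = radar_polar_deg.min(), radar_polar_deg.max()
    r_min, r_max = radar_radial_m.min(), radar_radial_m.max()
    r, theta = optical_rtp[:, 0], optical_rtp[:, 1]
    inside = (theta >= a_min) & (theta <= a_max) & (r >= r_min) & (r <= r_max)
    # Processing only the overlap keeps the matching loop small and fast.
    return optical_rtp[inside], optical_rtp[~inside]
```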

The specific method for matching and fusing under the azimuth-direction and range-direction constraints comprises the following steps:

fusing the processed three-dimensional optical image data and the radar data point by point: a minimum function is called with the length of the selected three-dimensional optical image data as the loop count, its arguments being the absolute value of the radial distance difference and the absolute value of the polar angle difference between the three-dimensional optical image data and the radar data, and its return value being the row and column numbers of each successfully fused point.
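One way to realize this loop, assuming the radar image is a 2-D grid radar_values indexed [range bin, azimuth bin] with axis vectors radar_radial_m and radar_polar_deg (all names are placeholders):

```python
import numpy as np

def fuse(optical_rtp, radar_polar_deg, radar_radial_m, radar_values):
    """For each optical point, take the radar cell with the minimum
    absolute radial-distance and polar-angle differences, and attach
    its value (e.g. deformation) as an extra column."""
    fused = np.empty((len(optical_rtp), optical_rtp.shape[1] + 1))
    for i, point in enumerate(optical_rtp):  # loop count = optical length
        row = np.argmin(np.abs(radar_radial_m - point[0]))   # range bin
        col = np.argmin(np.abs(radar_polar_deg - point[1]))  # azimuth bin
        fused[i] = np.append(point, radar_values[row, col])
    return fused
```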

In some embodiments, the results of the above operations are saved: the stored invalid data are placed into the result matrix first, followed by the successfully fused data. Because the whole data set is referenced to the equator and the prime meridian, the displayed image is inconvenient to observe; the reference is therefore shifted to the processed area by taking the mean values of the first two columns of the result, which makes observation easier. Finally, the data are saved as a .dat file;
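A sketch of this save step, reading the re-referencing as a subtraction of the column means (an assumption; the text above only says the means of the first two columns are used):

```python
import numpy as np

def recenter_and_save(fused, stored_invalid, path="fusion_result.dat"):
    """Re-attach the stored invalid points (padded with NaN where no
    radar value exists), shift the first two columns to a local
    reference by subtracting their means, and save as a .dat file."""
    pad = np.full((len(stored_invalid), 1), np.nan)
    result = np.vstack([np.hstack([stored_invalid, pad]), fused])
    result[:, 0] -= np.nanmean(result[:, 0])
    result[:, 1] -= np.nanmean(result[:, 1])
    result.tofile(path)                      # raw binary .dat file
    return result
```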

S6: verifying the fusion result: displaying the result of fusing the radar data with the three-dimensional optical image data according to a set display method;

the three-dimensional optical image data are displayed, and the radar deformation data corresponding to the matched row and column numbers are shown on them as color differences.
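A display sketch with matplotlib, assuming each fused row is [radial distance, horizontal angle, elevation, deformation] (a hypothetical layout), with color encoding the radar deformation:

```python
import numpy as np
import matplotlib.pyplot as plt

def show_fusion(fused):
    """Back-project the fused spherical points to Cartesian and render
    them colored by the radar deformation value in the last column."""
    r = fused[:, 0]
    theta, phi = np.radians(fused[:, 1]), np.radians(fused[:, 2])
    x = r * np.cos(phi) * np.cos(theta)
    y = r * np.cos(phi) * np.sin(theta)
    z = r * np.sin(phi)
    ax = plt.figure().add_subplot(projection="3d")
    sc = ax.scatter(x, y, z, c=fused[:, -1], s=1, cmap="jet")
    plt.colorbar(sc, label="deformation")
    plt.show()
```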

Embodiment 2:

Based on the binocular vision three-dimensional optical image and ground-based radar data fusion method described in embodiment 1, a specific application and implementation of the method is described as follows:

and (4) deploying equipment, firstly, performing data acquisition by adopting binocular vision, and then performing radar data acquisition by using a rotary scanning radar. Firstly, binocular vision equipment is used for carrying out the processes of calibration, image shooting, data acquisition and the like, and a rotary scanning radar is deployed for carrying out radar data acquisition. The monitoring target is a mountain body with a middle and long distance, and the rapid three-dimensional modeling of the mountain body is realized. And the radar data collected by the rotary scanning radar is fused with the data.

Firstly, the binocular vision equipment is calibrated: 20 groups of calibration plate pictures are taken and processed to obtain the intrinsic parameter matrices and extrinsic parameters of the two cameras, with the calibration angle set to the initial 0-degree position. After calibration, the binocular vision rig is rotated 90 degrees clockwise and a group of mountain topographic images is taken; the three-dimensional angle information of the optical depth data is acquired, and color information is attached to the three-dimensional optical image data. The rotary scanning radar is then put in place to acquire radar data from the initial 0-degree position over a 0-180-degree range, and the radar data are processed to obtain deformation data of the mountain at one-minute intervals over two hours.
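The calibration step can be performed with OpenCV's standard stereo pipeline; a sketch, assuming a 9x6 chessboard with 30 mm squares and image pairs named left_00.png ... right_19.png (board geometry and file names are placeholders):

```python
import cv2
import numpy as np

pattern, square_mm = (9, 6), 30.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_pts, left_pts, right_pts = [], [], []
for i in range(20):                          # the 20 calibration groups
    imgL = cv2.imread(f"left_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    imgR = cv2.imread(f"right_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    okL, cL = cv2.findChessboardCorners(imgL, pattern)
    okR, cR = cv2.findChessboardCorners(imgR, pattern)
    if okL and okR:
        obj_pts.append(objp); left_pts.append(cL); right_pts.append(cR)

size = imgL.shape[::-1]
# Per-camera intrinsic matrices, then the stereo extrinsics R, T
# (the "intrinsic parameter matrices and extrinsic parameters").
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```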

The center of the three-dimensional optical image data acquired by binocular vision, rotated 90 degrees clockwise, is taken as the initial position; the acquisition range is 71.55-108.45 degrees. The radar deformation data within this range and the three-dimensional optical image data acquired by binocular vision are extracted and fused according to the corresponding polar angles and radial distances. The fused result is finally displayed by software.

The three-dimensional optical image data acquired in this implementation reach high precision and can be matched and fused accurately with the data acquired by the rotary scanning radar. The method achieves good three-dimensional imaging of medium-to-long-range terrain, speeds up the acquisition of three-dimensional optical image data, and allows the radar data to be matched and fused with the optical data accurately, making the radar data easy to interpret.

Embodiment 3:

Based on the binocular vision three-dimensional optical image and ground-based radar data fusion method provided in embodiments 1-2, embodiment 3 provides a corresponding binocular vision three-dimensional optical image and ground-based radar data fusion system. FIG. 2 is a structural diagram of this system; as shown in FIG. 2, the system 200 comprises: a data acquisition module 201, a system parameter setting and initializing module 202, a three-dimensional optical image data generating module 203, a radar type mode selecting module 204, a three-dimensional optical image data processing module 205 and a fusion result verifying module 206.

The data acquisition module 201: calibrating a binocular system and capturing images with it to obtain optical depth data;

The system parameter setting and initializing module 202: setting a file path and parameters through a computer;

The three-dimensional optical image data generating module 203: reading the corresponding files according to the settings in the parameter and data path file, and generating three-dimensional optical image data from those files;

The radar type mode selecting module 204: judging whether the radar type is a linear scanning radar or a rotary scanning radar by reading the information in the parameter and data path file, and selecting the fusion algorithm for the three-dimensional optical image data and the radar data according to the radar type.

The three-dimensional optical image data processing module 205: converting the three-dimensional optical image data to a spherical coordinate system, and performing angle adjustment in the spherical coordinate system according to the geographic coordinates measured by the different radar types; reading the radar data, fusing them with the three-dimensional optical image data in the spherical coordinate system, and matching them under the azimuth-direction and range-direction constraints.

The fusion result verifying module 206: displaying the result of fusing the radar data with the three-dimensional optical image data according to a set display method.

Embodiment 4:

Embodiment 4 of the present invention provides a memory storing one or more programs executable by one or more processors to implement the steps of the binocular vision three-dimensional optical image and ground-based radar data fusion method of embodiments 1-2.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.

It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, this information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.

The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.

For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the invention. One of ordinary skill in the art can understand and implement it without inventive effort.

Embodiments of the subject matter and the functional operations described in this specification can be implemented in: digital electronic circuitry, tangibly embodied computer software or firmware, computer hardware including the structures disclosed in this specification and their structural equivalents, or a combination of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or additionally, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode and transmit information to suitable receiver apparatus for execution by the data processing apparatus. The computer memory may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.

The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform corresponding functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).

Computers suitable for executing computer programs include, for example, general and/or special purpose microprocessors, or any other type of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory and/or a random access memory. The basic components of a computer include a central processing unit for implementing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer does not necessarily have such a device. Moreover, a computer may be embedded in another device, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device such as a Universal Serial Bus (USB) flash drive, to name a few.

Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices), magnetic disks (e.g., an internal hard disk or a removable disk), magneto-optical disks, and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. In other instances, features described in connection with one embodiment may be implemented as discrete components or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Thus, particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. Further, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.

The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
