Precision detection method and device of positioning algorithm, computer equipment and storage medium

Document No.: 1111061  Publication date: 2020-09-29  Views: 8  Language: Chinese

Reading note: This technology, "Precision detection method and device of positioning algorithm, computer equipment and storage medium", was designed and created by 徐棨森 (Xu Qisen) on 2019-03-18. Abstract: The application relates to a precision detection method and device of a positioning algorithm, computer equipment and a storage medium. The method comprises the following steps: acquiring a real-time point cloud image of a moving object; matching the real-time point cloud image with a source point cloud image in a constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object; matching the real-time point cloud image with a source point cloud image in the point cloud map by adopting an offline positioning algorithm to obtain offline positioning data of the moving object, the positioning precision of the offline positioning algorithm being higher than that of the real-time positioning algorithm; and taking the offline positioning data as a reference value of the real-time positioning data, and determining the precision of the real-time positioning data according to the reference value and the real-time positioning data. Using the offline positioning data as the reference value of the real-time positioning data realizes effective detection of the precision of the real-time positioning algorithm.

1. A method of accuracy detection of a positioning algorithm, the method comprising:

acquiring a real-time point cloud image of a moving object;

matching the real-time point cloud image with a source point cloud image in a constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object;

matching the real-time point cloud image with a source point cloud image in the point cloud map by adopting an offline positioning algorithm to obtain offline positioning data of the moving object; the positioning precision of the off-line positioning algorithm is greater than that of the real-time positioning algorithm;

and taking the offline positioning data as a reference value of the real-time positioning data, and determining the precision of the real-time positioning data according to the reference value and the real-time positioning data.

2. The method of claim 1, wherein the point cloud map is constructed by:

acquiring a source point cloud image, and performing target detection to obtain a source obstacle point cloud image;

and filtering the source obstacle point cloud image in the source point cloud image, and constructing a point cloud map according to the filtered source point cloud image.

3. The method of claim 2, wherein the obtaining a source point cloud image, performing target detection, and obtaining a source obstacle point cloud image comprises:

acquiring a source point cloud image, and inputting the source point cloud image into a trained target detection model to obtain a source obstacle point cloud image; the target detection model is obtained by training according to a point cloud sample image containing an obstacle.

4. The method of claim 1, wherein the matching the real-time point cloud image with a source point cloud image in a constructed point cloud map using a real-time localization algorithm to obtain real-time localization data of the moving object comprises:

matching the real-time point cloud image with a source point cloud image in a constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image;

and taking the position information of the source point cloud image corresponding to the real-time point cloud image as the real-time positioning data of the moving object in the point cloud map.

5. The method of claim 1, wherein the using an offline positioning algorithm to match the real-time point cloud image with a source point cloud image in the point cloud map to obtain offline positioning data of the moving object comprises:

acquiring a real-time point cloud image, and inputting the real-time point cloud image into a trained target detection model in an offline positioning algorithm to obtain a target obstacle point cloud image;

filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image;

inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image; the matching model is obtained by training according to the real-time point cloud image and the corresponding source point cloud image;

and taking the position information of the source point cloud image corresponding to the real-time point cloud image as the offline positioning data of the moving object.

6. The method of claim 5, wherein inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image comprises:

and screening out the real-time point cloud image with the maximum matching value according to the matching value of each filtered real-time point cloud image, and outputting, as the output of the matching model, the source point cloud image corresponding to the real-time point cloud image with the maximum matching value.

7. The method of claim 5, wherein, in the matching model, the weight corresponding to the matching value of a static object point cloud is higher than the weight corresponding to the matching value of a dynamic object point cloud.

8. The method of claim 1, wherein the taking the offline positioning data as a reference value of the real-time positioning data, and determining the accuracy of the real-time positioning data according to the reference value and the real-time positioning data, comprises:

acquiring the real-time positioning data and the corresponding offline positioning data;

calculating the difference value between each real-time positioning data and the corresponding off-line positioning data;

and calculating the average value or the weighted average value or the median value of the difference values, and taking the average value or the weighted average value or the median value as the precision of the positioning algorithm.

9. An accuracy evaluation device for a positioning algorithm, the device comprising:

the image acquisition module is used for acquiring a real-time point cloud image of the moving object;

the real-time positioning module is used for matching the real-time point cloud image with a source point cloud image in a constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object;

the off-line positioning module is used for matching the real-time point cloud image with a source point cloud image in the point cloud map by adopting an off-line positioning algorithm to obtain off-line positioning data of the moving object; the positioning precision of the off-line positioning algorithm is greater than that of the real-time positioning algorithm;

and the precision calculation module is used for taking the offline positioning data as a reference value of the real-time positioning data and determining the precision of the real-time positioning data according to the reference value and the real-time positioning data.

10. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 8 when executing the computer program.

Technical Field

The present application relates to the field of unmanned driving technologies, and in particular, to a method and an apparatus for precision detection of a positioning algorithm, a computer device, and a storage medium.

Background

With the development of the unmanned driving field, a positioning module is used to obtain the position information of an unmanned vehicle in a map, wherein a positioning algorithm, combined with other auxiliary algorithms, realizes the main function of the positioning module.

However, current positioning algorithms cannot acquire a position reference value for the unmanned vehicle. When the accuracy of a positioning algorithm is evaluated, the measured error mixes map-construction error with positioning-algorithm error, so the accuracy of the vehicle's positioning algorithm cannot be detected in isolation.

Disclosure of Invention

In view of the above, it is necessary to provide a method and an apparatus for detecting accuracy of a positioning algorithm, a computer device, and a storage medium.

A method of accuracy detection of a positioning algorithm, the method comprising:

acquiring a real-time point cloud image of a moving object;

matching the real-time point cloud image with a source point cloud image in a constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object;

matching the real-time point cloud image with a source point cloud image in the point cloud map by adopting an offline positioning algorithm to obtain offline positioning data of the moving object; the positioning precision of the off-line positioning algorithm is greater than that of the real-time positioning algorithm;

and taking the offline positioning data as a reference value of the real-time positioning data, and determining the precision of the real-time positioning data according to the reference value and the real-time positioning data.

In one embodiment, the point cloud map is constructed in a manner that:

acquiring a source point cloud image, and performing target detection to obtain a source obstacle point cloud image;

and filtering the source obstacle point cloud image in the source point cloud image, and constructing a point cloud map according to the filtered source point cloud image.

In one embodiment, the obtaining a source point cloud image and performing target detection to obtain a source obstacle point cloud image includes:

acquiring a source point cloud image, and inputting the source point cloud image into a trained target detection model to obtain a source obstacle point cloud image; the target detection model is obtained by training according to a point cloud sample image containing an obstacle.

In one embodiment, the matching the real-time point cloud image with the source point cloud image in the constructed point cloud map by using a real-time positioning algorithm to obtain the real-time positioning data of the moving object includes:

matching the real-time point cloud image with a source point cloud image in a constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image;

and taking the position information of the source point cloud image corresponding to the real-time point cloud image as the real-time positioning data of the moving object in the point cloud map.

In one embodiment, the matching the real-time point cloud image and the source point cloud image in the point cloud map by using an offline positioning algorithm to obtain offline positioning data of the moving object includes:

acquiring a real-time point cloud image, and inputting the real-time point cloud image into a trained target detection model in an offline positioning algorithm to obtain a target obstacle point cloud image;

filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image;

inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image; the matching model is obtained by training according to the real-time point cloud image and the corresponding source point cloud image;

and taking the position information of the source point cloud image corresponding to the real-time point cloud image as the offline positioning data of the moving object.

In one embodiment, the inputting the filtered real-time point cloud image into a trained matching model, calculating a matching value of the filtered real-time point cloud image, and obtaining a source point cloud image corresponding to the filtered real-time point cloud image includes:

and screening out the real-time point cloud image with the maximum matching value according to the size of the matching value of each filtered real-time point cloud image, and outputting, as the output of the matching model, the source point cloud image corresponding to the real-time point cloud image with the maximum matching value.

In one embodiment, the weight corresponding to the matching value of the static object point cloud in the matching model is higher than the weight corresponding to the matching value of the dynamic object point cloud.

In one embodiment, the determining the accuracy of the real-time positioning data according to the reference value and the real-time positioning data by using the offline positioning data as the reference value of the real-time positioning data includes:

acquiring the real-time positioning data and the corresponding offline positioning data;

calculating the difference value between each real-time positioning data and the corresponding off-line positioning data;

and calculating the average value or the weighted average value or the median value of the difference values, and taking the average value or the weighted average value or the median value as the precision of the positioning algorithm.

An apparatus for accuracy detection of a positioning algorithm, the apparatus comprising:

the image acquisition module is used for acquiring a real-time point cloud image of the moving object;

the real-time positioning module is used for matching the real-time point cloud image with a source point cloud image in a constructed point cloud map by adopting a real-time positioning algorithm to obtain real-time positioning data of the moving object;

the off-line positioning module is used for matching the real-time point cloud image with a source point cloud image in the point cloud map by adopting an off-line positioning algorithm to obtain off-line positioning data of the moving object; the positioning precision of the off-line positioning algorithm is greater than that of the real-time positioning algorithm;

and the precision calculation module is used for taking the offline positioning data as a reference value of the real-time positioning data and determining the precision of the real-time positioning data according to the reference value and the real-time positioning data.

In one embodiment, the apparatus further comprises a mapping module; the map building module comprises:

the first target detection unit is used for acquiring a source point cloud image and carrying out target detection to obtain a source obstacle point cloud image;

the first obstacle filtering unit is used for filtering a source obstacle point cloud image in the source point cloud image;

and the construction unit is used for constructing a point cloud map according to the filtered source point cloud image.

In one embodiment, the first target detection unit is further configured to obtain a source point cloud image, input the source point cloud image to a trained target detection model, and obtain a source obstacle point cloud image; the target detection model is obtained by training according to a point cloud sample image containing an obstacle.

In one embodiment, the real-time positioning module includes:

the image matching unit is used for matching the real-time point cloud image with a source point cloud image in a constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image;

and the data acquisition unit is used for taking the position information of the source point cloud image corresponding to the real-time point cloud image as the real-time positioning data of the moving object in the point cloud map.

In one embodiment, the offline positioning module includes:

the second target detection unit is used for acquiring a real-time point cloud image, inputting the real-time point cloud image into a target detection model trained in an offline positioning algorithm, and obtaining a target obstacle point cloud image;

the second obstacle filtering unit is used for filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image;

the image matching unit is used for inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image, and the matching model is obtained by training according to the real-time point cloud image and the corresponding source point cloud image;

and the data acquisition unit is used for taking the position information of the source point cloud image corresponding to the real-time point cloud image as the offline positioning data of the moving object.

In one embodiment, the image matching unit is further configured to screen out the real-time point cloud image with the maximum matching value according to the size of the matching value of each filtered real-time point cloud image, and output, as the output of the matching model, the source point cloud image corresponding to the real-time point cloud image with the maximum matching value.

In one embodiment, the image matching unit is further configured to set a weight corresponding to a matching value of a static object point cloud in the matching model higher than a weight corresponding to a matching value of a dynamic object point cloud.

In one embodiment, the precision calculation module comprises:

the data acquisition unit is used for acquiring the real-time positioning data and the corresponding offline positioning data;

a difference calculation unit, configured to calculate a difference between each of the real-time positioning data and the corresponding offline positioning data;

and the precision determining unit is used for calculating the average value or the weighted average value or the median value of the difference values, and taking the average value or the weighted average value or the median value as the precision of the positioning algorithm.

A computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the accuracy detection steps of the positioning algorithm described above.

a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the accuracy detection step of the above-mentioned positioning algorithm:

according to the precision detection method, the device, the computer equipment and the storage medium of the positioning algorithm, through real-time positioning and offline positioning, offline positioning data are used as reference values of real-time positioning data, and precision is calculated according to the reference values and the corresponding real-time positioning data, so that precision detection of the positioning algorithm of the moving object is realized. In addition, the offline positioning data of the moving object is acquired by adopting a higher-precision offline positioning algorithm, so that the precision detection of the positioning algorithm is more accurate.

Drawings

FIG. 1 is a diagram illustrating an exemplary embodiment of a method for accuracy detection of a positioning algorithm;

FIG. 2 is a schematic flow chart of a method for accuracy detection of a positioning algorithm in one embodiment;

FIG. 3 is a schematic flow chart illustrating a manner in which a point cloud map is constructed according to one embodiment;

FIG. 4 is a flowchart illustrating the step of obtaining real-time positioning data according to an embodiment;

FIG. 5 is a flowchart illustrating the step of obtaining offline positioning data according to an embodiment;

FIG. 6 is a schematic flow chart of the accuracy calculation step of the positioning algorithm in one embodiment;

FIG. 7 is a schematic flow chart illustrating a method for accuracy detection of a positioning algorithm in accordance with another embodiment;

FIG. 8 is a block diagram of an exemplary embodiment of an apparatus for accuracy detection of a positioning algorithm;

FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.

Detailed Description

In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.

FIG. 1 is a diagram of an exemplary implementation of the accuracy testing of the positioning algorithm. The method for detecting the accuracy of the positioning algorithm provided by the embodiment of the application can be applied to the application environment shown in fig. 1. The computer device 100 may be a desktop terminal or a mobile terminal, and the mobile terminal may be a mobile phone, a tablet computer, a notebook computer, a wearable device, a personal digital assistant, and the like. The computer device 100 may also be implemented as a stand-alone server or as a server cluster comprising a plurality of servers.

Fig. 2 is a schematic flow chart of a method for detecting accuracy of a positioning algorithm in an embodiment. As shown in fig. 2, a method for detecting the accuracy of a positioning algorithm is provided, which is described by taking the method as an example applied to the computer device 100 in fig. 1, and includes the following steps:

step 202, acquiring a real-time point cloud image of the moving object.

Here, the moving object may refer to a mobile device that needs to acquire its own positioning data. The real-time point cloud image is the current frame image of the captured scene, obtained by the moving object according to the real-time positioning algorithm. A point cloud image is an image that includes depth information of the captured scene; it may be generated directly from point cloud data, or obtained by coordinate conversion from a depth image acquired by a depth camera. Point cloud data refers to the laser point information obtained by scanning the captured scene with a lidar. A depth image is an image whose pixel values are the depth values from the depth camera to the corresponding points in the captured scene.

Specifically, the computer device 100 acquires a current frame image of the captured scene according to a real-time localization algorithm of the moving object, and takes the current frame image as a real-time point cloud image.

Alternatively, the moving object may be an unmanned vehicle, a drone, a mobile robot, a mobile video monitoring device, or the like.

Optionally, the real-time point cloud image may be a three-dimensional image generated according to the point cloud data, or a three-dimensional image obtained by performing coordinate conversion according to a depth image acquired by a depth camera.
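By way of illustration, the coordinate conversion from a depth image to a point cloud mentioned above can be sketched as follows. This is a minimal example assuming a standard pinhole camera model; the intrinsics `fx`, `fy`, `cx`, `cy` and the example data are hypothetical and not part of the original disclosure:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3-D points
    with a pinhole model; pixels with zero depth are dropped."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    u, v = us.ravel(), vs.ravel()
    valid = z > 0
    x = (u[valid] - cx) * z[valid] / fx
    y = (v[valid] - cy) * z[valid] / fy
    return np.stack([x, y, z[valid]], axis=1)  # shape (N, 3)

# one valid pixel at column u=3, row v=2, depth 2 m
depth = np.zeros((4, 4))
depth[2, 3] = 2.0
pts = depth_to_point_cloud(depth, fx=100.0, fy=100.0, cx=2.0, cy=2.0)
```

The resulting array can then be treated as the real-time point cloud image described above.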

Step 204, matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain the real-time positioning data of the moving object.

The real-time positioning algorithm is an algorithm for acquiring the self-positioning data of the moving object in real time, and in this embodiment, the algorithm needs to detect the precision. The real-time location data may be data that includes real-time location information of a moving object. The source point cloud image may be a real-time point cloud image acquired in advance by the moving object, and the point cloud map may be a map formed by superimposing the source point cloud images frame by frame.

The real-time Positioning data may include GPS (Global Positioning System) data, GNSS (Global Navigation Satellite System) data, and may further include IMU (Inertial measurement unit) data. Wherein the IMU is used to measure angular velocity and acceleration of the object in three-dimensional space. Specifically, the computer device 100 acquires a source point cloud image in a constructed point cloud map, matches the real-time point cloud image with the source point cloud image in the constructed point cloud map by using a real-time positioning algorithm, obtains a source point cloud image corresponding to the real-time point cloud image in the constructed point cloud map, acquires positioning information of the source point cloud image, and uses the positioning information as real-time positioning data of the moving object. The matching refers to finding out a source point cloud image corresponding to the real-time point cloud image in the constructed point cloud map according to the comparative analysis of the image feature points.

Step 206, matching the real-time point cloud image with a source point cloud image in the point cloud map by using an offline positioning algorithm to obtain offline positioning data of the moving object; the positioning precision of the off-line positioning algorithm is greater than that of the real-time positioning algorithm.

The offline positioning algorithm may be an algorithm with a positioning accuracy greater than the real-time positioning accuracy. The offline positioning data may be positioning information obtained by an offline positioning algorithm for a moving object, and may include GPS data, GNSS data, and may also include IMU data.

Here, the IMU is used to measure the angular velocity and acceleration of the object in three-dimensional space. The real-time positioning data and the offline positioning data obtained from the same frame of real-time point cloud image correspond one to one. For example, for the real-time positioning data and offline positioning data obtained from the same frame: the angular velocity of the measured object in the real-time positioning data corresponds to its angular velocity in the offline positioning data, and the acceleration in the real-time positioning data corresponds to the acceleration in the offline positioning data.

Specifically, the computer device 100 acquires a real-time point cloud image, performs target detection on the real-time point cloud image to filter an obstacle in the real-time image, matches the real-time point cloud image after filtering with a source point cloud image in a point cloud map to obtain a source point cloud image corresponding to the real-time point cloud image in a constructed point cloud map, acquires positioning information of the source point cloud image, and uses the positioning information as offline positioning data of a moving object.

In an embodiment, because a plurality of obstacle point cloud images exist in the acquired real-time point cloud image, a large matching error occurs when the real-time point cloud image is compared with a source point cloud image in the point cloud map. The real-time point cloud image can therefore be processed offline, filtering out the obstacle point cloud images to obtain a filtered real-time point cloud image. Specifically, the target detection may input the real-time point cloud image into a trained target detection model, and perform obstacle detection and obstacle filtering to obtain the filtered real-time point cloud image.
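The obstacle-filtering step described above can be sketched as a boolean mask over per-point labels. The per-point class labels are assumed here to come from some trained detector; the class numbering is hypothetical:

```python
import numpy as np

def filter_obstacles(points, labels, obstacle_classes=(1, 2)):
    """Keep only the static background: drop every point whose per-point
    label (e.g. produced by a trained detector) is an obstacle class."""
    keep = ~np.isin(labels, obstacle_classes)
    return points[keep]

pts = np.array([[0.0, 0.0, 0.0],   # background
                [5.0, 1.0, 0.0],   # detected vehicle -> class 1
                [6.0, 2.0, 0.0]])  # background
labels = np.array([0, 1, 0])
filtered = filter_obstacles(pts, labels)
```

The same mask logic also covers the source-side filtering used during map construction.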

In another embodiment, the collected real-time point cloud image may be a local point cloud image: the shooting angle may be incomplete, or there may be rotational or translational misalignment. The collected real-time point cloud images therefore need to be registered offline, converting the point cloud images from various angles into the same coordinate system and splicing them into a complete point cloud image. The registration method may be ICP (Iterative Closest Point), NDT (Normal Distributions Transform), or the like. Specifically, the computer device 100 obtains a plurality of real-time point cloud images, obtains a registered real-time point cloud image by using a registration algorithm, matches the registered real-time point cloud image with a source point cloud image in the point cloud map, obtains the source point cloud image corresponding to the real-time point cloud image in the constructed point cloud map, acquires the positioning information of that source point cloud image, and uses the positioning information as the offline positioning data of the moving object.
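The ICP registration mentioned above repeatedly alternates between estimating correspondences and solving a closed-form rigid alignment over the matched pairs. A minimal sketch of that inner (Kabsch/SVD) step, under the simplifying assumption that correspondences are already known, is:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form (Kabsch/SVD) rigid transform R, t minimizing
    ||R @ src_i + t - dst_i||^2 over matched pairs -- the inner step
    that ICP repeats after re-estimating correspondences."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(1)
src = rng.normal(size=(20, 3))
theta = np.pi / 6                     # ground-truth 30-degree yaw
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R_est, t_est = best_rigid_transform(src, dst)
```

With exact correspondences the recovered transform matches the ground truth to numerical precision; full ICP wraps this step in a nearest-neighbor correspondence loop.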

Step 208, taking the offline positioning data as a reference value of the real-time positioning data, and determining the accuracy of the real-time positioning data according to the reference value and the real-time positioning data.

The reference value substitutes for the true value of a variable that cannot be obtained directly; by convention, an agreed reference value is treated as the true value, and this embodiment agrees to use the offline positioning data as the reference value of the real-time positioning data. Correspondingly, the offline GPS data serve as the reference value of the real-time GPS data; the offline GNSS data serve as the reference value of the real-time GNSS data; the offline IMU angular velocity serves as the reference value of the real-time IMU angular velocity; and the offline IMU acceleration serves as the reference value of the real-time IMU acceleration.

Here, the accuracy of the positioning algorithm refers to the accuracy of the positioning data it measures, and can be obtained by computing the difference between the offline positioning data and the real-time positioning data acquired by the computer device 100. In this precision detection method, through real-time positioning and offline positioning, the offline positioning data serve as reference values of the real-time positioning data, and the precision is calculated from the reference values and the real-time positioning data, thereby realizing precision detection of the moving object's positioning algorithm.
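The difference-then-summarize computation described above (and in claim 8) can be sketched as follows. The 2-D poses and the weights are illustrative assumptions; the same shape works for any of the positioning quantities mentioned:

```python
import numpy as np

def localization_accuracy(realtime, offline, weights=None, stat="mean"):
    """Per-frame error between real-time poses and their offline reference
    values, summarized as a mean, weighted mean, or median."""
    diff = np.linalg.norm(np.asarray(realtime) - np.asarray(offline), axis=1)
    if stat == "mean":
        return float(diff.mean())
    if stat == "weighted":
        w = np.asarray(weights, dtype=float)
        return float((diff * w).sum() / w.sum())
    if stat == "median":
        return float(np.median(diff))
    raise ValueError(f"unknown statistic: {stat}")

realtime = [[0.0, 0.0], [1.1, 0.0], [2.0, 0.3]]   # per-frame (x, y)
offline  = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]   # reference values
mean_err = localization_accuracy(realtime, offline, stat="mean")
```

The median variant is less sensitive to occasional gross matching failures, which is one reason the text offers all three summaries.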

In an embodiment, fig. 3 is a schematic flow chart illustrating a method for constructing a point cloud map according to an embodiment. As shown in fig. 3, there is provided a method of constructing a point cloud map, comprising the steps of:

and 302, acquiring a source point cloud image, and performing target detection to obtain a source obstacle point cloud image.

The source point cloud image can be a real-time point cloud image acquired by a test vehicle in advance, and the source obstacle point cloud image can be a point cloud image of an obstacle in the source point cloud image and can be obtained by performing comparison analysis on obstacle feature points of the source point cloud image.

In an embodiment, the obtaining a source point cloud image and performing target detection to obtain a source obstacle point cloud image includes: acquiring a source point cloud image, and inputting the source point cloud image into a trained target detection model to obtain a source obstacle point cloud image; the target detection model is obtained by training according to a point cloud sample image containing an obstacle. The point cloud sample image can be a pre-constructed sample image containing an obstacle point cloud image.

And 304, filtering the source obstacle point cloud image in the source point cloud image, and constructing a point cloud map according to the filtered source point cloud image.

The point cloud map may be obtained by stitching point clouds captured at different positions, where stitching refers to registering adjacent scanned point clouds into a common frame. The stitching may use the ICP (Iterative Closest Point) algorithm, or a global or local matching algorithm.
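As a rough illustration of the stitching idea, the sketch below implements a minimal 2-D point-to-point ICP in NumPy; the function name, the brute-force nearest-neighbour search, and the 2-D restriction are illustrative assumptions, not the production pipeline:

```python
import numpy as np

def icp_align(src, dst, iters=20):
    """Minimal 2-D point-to-point ICP: returns (R, t, aligned) mapping src onto dst."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # best rigid transform between the matched sets (Kabsch / SVD)
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:   # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step   # apply the incremental transform
        R, t = R_step @ R, R_step @ t + t_step
    return R, t, cur
```

Given a small initial offset relative to the point spacing, the nearest-neighbour correspondences are correct from the first iteration and the rigid transform is recovered directly.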

Optionally, the point cloud map may also be constructed by using a multi-view three-dimensional reconstruction technology, and optimizing source point cloud images shot at different angles in consideration of errors caused by observation at different angles, so as to obtain the point cloud map according to the optimized source point cloud images.

In one embodiment, the point cloud map is constructed as follows: a source point cloud image is acquired and screened, and the screened source point cloud images are superimposed frame by frame to obtain the point cloud map. Before screening, obstacles in the source point cloud image may be detected by the target detection model. The screening may select, according to the number of obstacles detected in each source point cloud image, only those images whose obstacle count is below a threshold. The point cloud images may also be filtered using methods such as bilateral filtering, Gaussian filtering, conditional filtering, pass-through filtering, and random sample consensus filtering.
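The screen-then-superimpose idea can be sketched as follows; the function name, the obstacle-count input, and the threshold value are hypothetical, and the frames are assumed to be already registered in a common coordinate frame:

```python
import numpy as np

def build_map(frames, obstacle_counts, max_obstacles=5):
    """Superimpose pre-registered source frames frame by frame, keeping only
    frames whose detected obstacle count is below the threshold."""
    kept = [f for f, n in zip(frames, obstacle_counts) if n < max_obstacles]
    return np.vstack(kept)  # concatenate the surviving point sets into one map
```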

In the point cloud map construction step, target detection is performed on the source point cloud image to obtain a source obstacle point cloud image; the source obstacle point cloud image is filtered out of the source point cloud image, and the filtered source point cloud images are superimposed frame by frame to obtain the point cloud map. The constructed map therefore contains few obstacle point clouds, which makes it easier to match the real-time point cloud image against the source point cloud images in the map.

In an embodiment, fig. 4 is a flowchart illustrating a step of acquiring real-time positioning data, as shown in fig. 4, step 204 includes:

and 402, matching the real-time point cloud image with a source point cloud image in the constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image.

The matching refers to comparing the feature points of the real-time point cloud image with those of every source point cloud image in the point cloud map, and selecting the source point cloud image with the highest similarity or the highest feature-point coincidence as the one corresponding to the real-time point cloud image.

Step 404, using the position information of the source point cloud image corresponding to the real-time point cloud image as the real-time positioning data of the moving object in the point cloud map.
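A minimal sketch of this feature-point matching, assuming each frame is an (N, 2) NumPy array of feature points and using the fraction of points with a near neighbour as the similarity score (the function name and tolerance are illustrative):

```python
import numpy as np

def match_frame(realtime_pts, source_frames, tol=0.5):
    """Return the index of the source frame with the highest feature-point
    overlap with the real-time frame, plus all per-frame scores."""
    def overlap(src):
        d2 = ((realtime_pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        # fraction of real-time feature points with a source point within tol
        return float((np.sqrt(d2.min(axis=1)) < tol).mean())
    scores = [overlap(s) for s in source_frames]
    return int(np.argmax(scores)), scores
```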

The real-time positioning data may include GPS data, GNSS data, and IMU data.

In the step of acquiring the real-time positioning data, a real-time positioning algorithm is adopted to match the real-time point cloud image with the source point cloud image in the constructed point cloud map to obtain the source point cloud image corresponding to the real-time point cloud image in the constructed point cloud map, and the positioning information of the source point cloud image is used as the real-time positioning data of the moving object, so that the acquisition of the real-time positioning data of the moving object is realized.

In an embodiment, fig. 5 is a flowchart illustrating a step of obtaining offline positioning data, and step 206 shown in fig. 5 includes:

step 502, acquiring a real-time point cloud image, inputting the real-time point cloud image into a trained target detection model in an offline positioning algorithm, and obtaining a target obstacle point cloud image.

The target detection model can be a trained model for detecting a target obstacle, and can be a neural network model trained according to feature points of a point cloud image of a source obstacle.

Step 504, filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image.

The target obstacle point cloud image can be a point cloud image of an obstacle in the real-time point cloud image, and can be obtained by performing contrast analysis on obstacle feature points of the real-time point cloud image.

Step 506, inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image; the matching model is obtained by training according to the real-time point cloud image and the corresponding source point cloud image.

The matching model can be trained by combining the characteristic points of the sample point cloud image.

And step 508, using the position information of the source point cloud image corresponding to the real-time point cloud image as the offline positioning data of the moving object.

In the step of obtaining the offline positioning data, the real-time point cloud image is obtained, the target detection is performed on the real-time point cloud image to filter out obstacles in the real-time image, the filtered real-time point cloud image is matched with a source point cloud image in the point cloud map, a source point cloud image corresponding to the real-time point cloud image in the constructed point cloud map is obtained, the positioning information of the source point cloud image is obtained, and the positioning information is used as the offline positioning data of the moving object, so that the offline positioning data is obtained.

In another embodiment, inputting the filtered real-time point cloud image into the trained matching model in the offline positioning algorithm to obtain the corresponding source point cloud image includes: screening out, according to the matching values of the filtered real-time point cloud images, the real-time point cloud image with the largest matching value, and outputting from the matching model the source point cloud image corresponding to that image. The matching value reflects the similarity between a filtered real-time point cloud image and its corresponding source point cloud image.

Alternatively, the matching value may be a matching score value, may be an image similarity percentage, and may also be a coincidence probability value of the image feature points.

In another embodiment, the weight corresponding to the matching value of the static object point cloud in the matching model is higher than the weight corresponding to the matching value of the dynamic object point cloud. The static object point cloud refers to a static object point cloud under a world coordinate system, and the dynamic object point cloud refers to a moving object point cloud under the world coordinate system.
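The weighting idea can be sketched as a simple combination of per-class match scores; the function name and the concrete weight values are assumptions used only for illustration:

```python
def weighted_match_score(static_score, dynamic_score, w_static=0.8, w_dynamic=0.2):
    """Combine per-class match scores so that static structure (buildings,
    road surface) dominates the total over dynamic objects (vehicles, people)."""
    return w_static * static_score + w_dynamic * dynamic_score
```

Because w_static > w_dynamic, a frame that matches well on static structure outranks one that only matches on moving objects.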

In an embodiment, fig. 6 is a schematic flowchart of the step of calculating the accuracy of the positioning algorithm, and as shown in fig. 6, step 208 includes:

step 602, obtaining the real-time positioning data and the corresponding offline positioning data.

The real-time positioning data and the corresponding off-line positioning data are obtained by processing the same frame of real-time point cloud image.

Step 604, calculating a difference between each of the real-time positioning data and the corresponding offline positioning data.

And taking at least one group of real-time positioning data and the corresponding off-line positioning data as calculation samples, and performing difference calculation to obtain at least one difference.

Step 606, calculating an average value, a weighted average value or a median value of the difference values, and taking the average value, the weighted average value or the median value as the precision of the positioning algorithm.

Optionally, an average of the difference values may be calculated and taken as the accuracy of the positioning algorithm; or a weighted average may be calculated and taken as the accuracy; or the median may be calculated and taken as the accuracy. One or more of the average, weighted average, and median may also jointly serve as the accuracy of the positioning algorithm.
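The three statistics can be sketched as follows (the function name and argument layout are illustrative, with the offline data playing the role of the reference value):

```python
import numpy as np

def positioning_precision(realtime, offline, mode="mean", weights=None):
    """Accuracy of the real-time algorithm as a statistic over the absolute
    differences between real-time data and the offline reference values."""
    diffs = np.abs(np.asarray(realtime) - np.asarray(offline))
    if mode == "mean":
        return float(diffs.mean())
    if mode == "weighted":
        return float(np.average(diffs, weights=weights))
    if mode == "median":
        return float(np.median(diffs))
    raise ValueError(f"unknown mode: {mode}")
```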

In the step of calculating the precision, the precision of the real-time positioning algorithm is determined according to the difference between the off-line positioning data and the real-time positioning data by calculating the difference, so that the precision calculation of the positioning algorithm is realized.

In another embodiment, fig. 7 is a flowchart illustrating a method for detecting accuracy of a positioning algorithm, as shown in fig. 7, the method includes the steps of:

step 702, constructing a point cloud map, and filtering a source obstacle point cloud image in the point cloud map through a target detection module.

The point cloud map can be built from laser point cloud data and further incorporates speed data, GPS data, GNSS data, and IMU data of the moving object. In this embodiment, the moving object is a test vehicle, and the point cloud map is constructed from acquired source point cloud images. The source obstacle point cloud image may be the point cloud image of an obstacle in the source point cloud image, an obstacle being an object that is not part of the static map structure.

Specifically, the computer device 100 detects the obstacle by using the target detection model to obtain the size and position of each source obstacle point cloud image, deducts the point cloud from the source point cloud image, and then performs map construction by using the filtered point cloud image.
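The deduction step can be sketched as removing every point that falls inside a detected obstacle bounding box; the axis-aligned (min, max) box format is an assumption for illustration:

```python
import numpy as np

def deduct_obstacles(points, boxes):
    """Remove points inside any detected obstacle bounding box.

    points: (N, 3) array of point coordinates.
    boxes:  list of (min_xyz, max_xyz) pairs from the target detection model
            (hypothetical output format).
    """
    keep = np.ones(len(points), dtype=bool)
    for lo, hi in boxes:
        inside = np.all((points >= lo) & (points <= hi), axis=1)
        keep &= ~inside  # drop points that fall inside this box
    return points[keep]
```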

And step 704, obtaining real-time positioning data by adopting a real-time positioning algorithm, and storing the real-time positioning data.

The real-time positioning algorithm can be an algorithm for acquiring the position information of the moving object in a map, and the real-time positioning data is stored in a hard disk or other storage media in a document file mode.

And 706, filtering the target obstacle of the real-time point cloud image to obtain a current frame image after filtering processing.

The target obstacle is an object that does not belong to the point cloud map and is likely to cause image matching errors, and therefore needs to be filtered out. The target obstacle point cloud image is the point cloud image of an obstacle in the real-time point cloud image.

Specifically, the computer device 100 detects target obstacles using the target detection model, obtains the size and position of each target obstacle point cloud image, removes those point clouds from the real-time point cloud image, and obtains the filtered current frame image.

And 708, taking the modified NDT matching algorithm as an offline positioning algorithm, and performing offline positioning on the filtered current frame image to obtain offline positioning data.

The NDT (Normal Distributions Transform) matching algorithm performs matching by applying a coordinate transformation to the input point cloud image. The NDT matching algorithm is modified as follows: the real-time point cloud is weighted per angle within the algorithm, so that points whose angular deviation exceeds a threshold contribute to the matching score with a weight smaller than a preset value.
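A sketch of the angle-based down-weighting, assuming per-point bearing angles in degrees; it only illustrates the weighting rule described above and does not reproduce the per-voxel NDT score itself:

```python
import numpy as np

def angle_weights(point_angles, ref_angle, max_dev_deg=30.0, low_weight=0.1):
    """Assign each point a score weight for the NDT match: points whose bearing
    deviates from the reference heading by more than the threshold contribute
    with a reduced weight (threshold and weight values are assumptions)."""
    # wrap angular deviation into [-180, 180) before taking its magnitude
    dev = np.abs((np.asarray(point_angles) - ref_angle + 180.0) % 360.0 - 180.0)
    return np.where(dev > max_dev_deg, low_weight, 1.0)
```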

And step 710, performing filtering processing on the offline positioning data obtained by the NDT matching algorithm by using an RTS smoothing algorithm to obtain the filtered offline positioning data.

The RTS (Rauch-Tung-Striebel) smoother is a fixed-interval smoothing algorithm built on the Kalman filter; it filters the offline positioning data to obtain offline positioning data with higher positioning accuracy.
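A minimal sketch of the forward-filter/backward-smoother structure for a 1-D random-walk state; the state model and the noise parameters are simplifying assumptions, and the actual pose smoother operates on the full positioning state:

```python
import numpy as np

def rts_smooth(z, q=0.01, r=1.0):
    """Forward Kalman filter followed by a Rauch-Tung-Striebel backward pass
    for a 1-D random-walk state (process noise q, measurement noise r)."""
    n = len(z)
    x, P = np.zeros(n), np.zeros(n)      # filtered mean / variance
    xp, Pp = np.zeros(n), np.zeros(n)    # predicted mean / variance
    x_est, P_est = z[0], r               # initialise from the first measurement
    for k in range(n):
        xp[k], Pp[k] = x_est, P_est + q          # predict (transition F = 1)
        K = Pp[k] / (Pp[k] + r)                  # Kalman gain (H = 1)
        x_est = xp[k] + K * (z[k] - xp[k])       # update with measurement
        P_est = (1 - K) * Pp[k]
        x[k], P[k] = x_est, P_est
    xs = x.copy()
    for k in range(n - 2, -1, -1):               # backward RTS pass
        C = P[k] / Pp[k + 1]                     # smoother gain
        xs[k] = x[k] + C * (xs[k + 1] - xp[k + 1])
    return xs
```

On a noisy but stationary trajectory the smoothed estimate has noticeably lower error than the raw measurements, which is the point of post-processing the offline data.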

And 712, screening the filtered off-line positioning data according to the matching value of each frame of real-time positioning image, taking the obtained result as a reference value, calculating the difference value between the reference value and the corresponding real-time positioning data, and obtaining the precision of the positioning algorithm according to the difference value.

The matching value of each frame of the real-time positioning image reflects its similarity to the corresponding source point cloud image. In this embodiment, the real-time positioning image with the highest similarity, i.e. the largest matching value, may be screened out, and the position data of the source point cloud image corresponding to that image is used as the reference value of its real-time positioning data. The error of the positioning algorithm may be obtained directly from the difference value, or the average, weighted average, or median of at least one difference value may be calculated and taken as the accuracy of the positioning algorithm.

According to the precision detection method of the positioning algorithm, the modified NDT matching algorithm is used as the offline positioning algorithm to obtain offline positioning data, then the RTS smoothing algorithm is used for filtering processing to obtain higher-precision offline positioning data which are used as reference values of real-time positioning data, and finally the reference values are compared with the real-time positioning data to obtain the precision of the positioning algorithm, so that the precision detection of the positioning algorithm is realized.

It should be understood that although the steps in the flow charts of fig. 1-7 are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly ordered and may be performed in other orders. Moreover, at least some of the steps in fig. 1-7 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.

In one embodiment, as shown in fig. 8, there is provided an accuracy detection apparatus of a positioning algorithm, including: an image acquisition module 802, a real-time positioning module 804, an offline positioning module 806, and an accuracy calculation module 808, wherein:

an image obtaining module 802, configured to obtain a real-time point cloud image of a moving object.

And the real-time positioning module 804 is configured to match the real-time point cloud image with a source point cloud image in the constructed point cloud map by using a real-time positioning algorithm, so as to obtain real-time positioning data of the moving object.

An offline positioning module 806, configured to match the real-time point cloud image with a source point cloud image in the point cloud map by using an offline positioning algorithm, so as to obtain offline positioning data of the moving object; the positioning precision of the off-line positioning algorithm is greater than that of the real-time positioning algorithm.

And the precision calculation module 808 is configured to use the offline positioning data as a reference value of the real-time positioning data, and determine the precision of the real-time positioning data according to the reference value and the real-time positioning data.

The device further comprises: and the map building module is used for acquiring a source point cloud image and building a point cloud map according to the source point cloud image.

Wherein, the map construction module includes: the first target detection unit is used for acquiring a source point cloud image and carrying out target detection to obtain a source obstacle point cloud image; the first obstacle filtering unit is used for filtering a source obstacle point cloud image in the source point cloud image; and the construction unit is used for constructing a point cloud map according to the filtered source point cloud image. The target detection unit further comprises a model application unit, the model application unit is used for inputting the acquired source point cloud image into a trained target detection model to obtain a source obstacle point cloud image, and the target detection model is obtained by training according to a point cloud sample image containing obstacles.

Wherein, the real-time positioning module 804 includes:

the image matching unit is used for matching the real-time point cloud image with a source point cloud image in a constructed point cloud map by adopting a real-time positioning algorithm to obtain a source point cloud image corresponding to the real-time point cloud image;

and the data acquisition unit is used for taking the position information of the source point cloud image corresponding to the real-time point cloud image as the real-time positioning data of the moving object in the point cloud map.

Wherein, the offline positioning module 806 includes:

the second target detection unit is used for acquiring a real-time point cloud image, inputting the real-time point cloud image into a target detection model trained in an offline positioning algorithm, and obtaining a target obstacle point cloud image;

the second obstacle filtering unit is used for filtering the target obstacle point cloud image from the real-time point cloud image to obtain a filtered real-time point cloud image;

the image matching unit is used for inputting the filtered real-time point cloud image into a trained matching model in an offline positioning algorithm to obtain a source point cloud image corresponding to the filtered real-time point cloud image, and the matching model is obtained by training according to the real-time point cloud image and the corresponding source point cloud image;

and the data acquisition unit is used for taking the position information of the source point cloud image corresponding to the real-time point cloud image as the offline positioning data of the moving object.

The image matching unit is further configured to screen out, according to the matching values of the filtered real-time point cloud images, the real-time point cloud image with the largest matching value, and to output from the matching model the source point cloud image corresponding to that image.

The image matching unit is further configured to set, in the matching model, the weight corresponding to the matching value of the static object point cloud to be higher than the weight corresponding to the matching value of the dynamic object point cloud.

Wherein, the precision calculation module 808 comprises:

the data acquisition unit is used for acquiring the real-time positioning data and the corresponding offline positioning data;

a difference calculation unit, configured to calculate a difference between each of the real-time positioning data and the corresponding offline positioning data;

and the precision determining unit is used for calculating the average value or the weighted average value or the median value of the difference values, and taking the average value or the weighted average value or the median value as the precision of the positioning algorithm.

For the specific definition of the precision detection device of the positioning algorithm, reference may be made to the above definition of the precision detection method of the positioning algorithm, which is not described herein again. The modules in the precision detection device of the positioning algorithm can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.

In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 9. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of accuracy detection for a positioning algorithm. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.

Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer devices to which the solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.

In one embodiment, a computer device is provided, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the accuracy detection method steps of the positioning algorithm when executing the computer program.

In an embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which computer program, when being executed by a processor, carries out the accuracy detection method steps of the above-mentioned positioning algorithm.

It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.

The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.
