Method, device, equipment and storage medium for testing an autonomous vehicle perception system

Document No.: 613792  Publication date: 2021-05-07  Original language: Chinese

Reading note: This technique, "Method, device, equipment and storage medium for testing an autonomous vehicle perception system", was created by Li Dan and Li Jianping on 2020-12-25. Abstract: The application discloses a method, device, equipment and storage medium for testing the perception system of an autonomous vehicle, relating to artificial-intelligence fields such as autonomous driving and intelligent transportation. The specific implementation scheme is as follows: obtain annotation data and perception data, where the annotation data are obtained by annotating obstacles in road test data and the perception data are obtained by the perception system perceiving the road test data; match the annotation data against the perception data to obtain successfully matched annotation and perception data and unsuccessfully matched annotation and perception data; filter the unsuccessfully matched annotation and perception data to determine first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle; and obtain a test result of the perception system from the successfully matched annotation and perception data together with the first annotation data and first perception data. The application excludes obstacles that have no influence on autonomous driving, which aids forward iteration of the perception system.

1. An autonomous vehicle perception system testing method, comprising:

obtaining annotation data and perception data, wherein the annotation data are obtained by annotating obstacles in road test data, and the perception data are obtained by the perception system perceiving the road test data;

matching the annotation data and the perception data to obtain a matching result, wherein the matching result comprises successfully matched annotation data and perception data, and unsuccessfully matched annotation data and perception data;

filtering the unsuccessfully matched annotation data and perception data respectively, and determining first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle;

and obtaining a test result of the perception system according to the successfully matched annotation data and perception data, and the first annotation data and the first perception data.

2. The method of claim 1, wherein the filtering the unsuccessfully matched annotation data and perception data to determine first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle comprises:

if an obstacle meets any one of the following conditions, filtering the data corresponding to that obstacle out of the unsuccessfully matched annotation data and perception data respectively, to obtain the first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle:

the distance between the obstacle and the autonomous vehicle is greater than a first preset distance;

the occlusion ratio of the obstacle is greater than a first preset value;

the type of the obstacle is a preset type, the preset type comprising a non-core type;

the size of the obstacle is smaller than a first preset size;

the number of pixels of the obstacle is smaller than a first preset number.

3. The method of claim 1, wherein the matching the annotation data and the perception data to obtain a matching result comprises:

filtering the annotation data and the perception data respectively, and determining second annotation data and second perception data corresponding to the obstacles participating in the evaluation of the perception system;

and matching the second annotation data and the second perception data to obtain the matching result.

4. The method according to claim 3, wherein the filtering the annotation data and the perception data to determine second annotation data and second perception data corresponding to the obstacles participating in the evaluation of the perception system comprises:

if an obstacle meets any one of the following conditions, filtering the data corresponding to that obstacle out of the annotation data and the perception data respectively, to obtain the second annotation data and second perception data corresponding to the obstacles participating in the evaluation of the perception system:

the distance between the obstacle and the autonomous vehicle is greater than a second preset distance;

the obstacle is behind the autonomous vehicle;

the obstacle is not within a road boundary;

the occlusion ratio of the obstacle is greater than a second preset value;

the obstacle is not in the same lane as the autonomous vehicle;

the type of the obstacle is a preset type, the preset type comprising water mist, vehicle exhaust, garbage cans and fences;

the size of the obstacle is smaller than a second preset size;

the number of pixels of the obstacle is smaller than a second preset number;

the observation composition of the obstacle is radar-only.

5. The method according to any one of claims 1 to 4, wherein the test result comprises a recognition accuracy, and obtaining the test result of the perception system according to the successfully matched annotation data and perception data and the first annotation data and the first perception data comprises:

determining the number of successfully matched obstacles according to the successfully matched annotation data and perception data;

obtaining the number of perceived obstacles according to the number of successfully matched obstacles and the first perception data;

and obtaining the recognition accuracy of the perception system according to the number of successfully matched obstacles and the number of perceived obstacles.

6. The method according to any one of claims 1 to 4, wherein the test result comprises a recall rate, and obtaining the test result of the perception system according to the successfully matched annotation data and perception data and the first annotation data and the first perception data comprises:

determining the number of successfully matched obstacles according to the successfully matched annotation data and perception data;

obtaining the number of annotated obstacles according to the number of successfully matched obstacles and the first annotation data;

and obtaining the recall rate of the perception system according to the number of successfully matched obstacles and the number of annotated obstacles.

7. The method according to any one of claims 1 to 4, wherein the test result comprises a classification accuracy, and obtaining the test result of the perception system according to the successfully matched annotation data and perception data and the first annotation data and the first perception data comprises:

determining the number of categories of successfully matched obstacles according to the successfully matched annotation data and perception data;

obtaining the number of categories of perceived obstacles according to the number of categories of successfully matched obstacles and the first perception data;

and obtaining the classification accuracy of the perception system according to the number of categories of successfully matched obstacles and the number of categories of perceived obstacles.

8. An autonomous vehicle perception system testing apparatus, comprising:

an acquisition module, configured to acquire annotation data and perception data, wherein the annotation data are obtained by annotating obstacles in road test data, and the perception data are obtained by the perception system perceiving the road test data;

a matching module, configured to match the annotation data and the perception data to obtain a matching result, wherein the matching result comprises successfully matched annotation data and perception data, and unsuccessfully matched annotation data and perception data;

a filtering module, configured to filter the unsuccessfully matched annotation data and perception data respectively and determine first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle;

and a determining module, configured to obtain a test result of the perception system according to the successfully matched annotation data and perception data, and the first annotation data and the first perception data.

9. The apparatus of claim 8, wherein the filtering module is specifically configured to:

when an obstacle meets any one of the following conditions, filter the data corresponding to that obstacle out of the unsuccessfully matched annotation data and perception data respectively, to obtain the first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle:

the distance between the obstacle and the autonomous vehicle is greater than a first preset distance;

the occlusion ratio of the obstacle is greater than a first preset value;

the type of the obstacle is a preset type, the preset type comprising a non-core type;

the size of the obstacle is smaller than a first preset size;

the number of pixels of the obstacle is smaller than a first preset number.

10. The apparatus of claim 8, wherein the matching module comprises:

a filtering unit, configured to filter the annotation data and the perception data respectively and determine second annotation data and second perception data corresponding to the obstacles participating in the evaluation of the perception system;

and a matching unit, configured to match the second annotation data and the second perception data to obtain the matching result.

11. The apparatus according to claim 10, wherein the filtering unit is specifically configured to:

when an obstacle meets any one of the following conditions, filter the data corresponding to that obstacle out of the annotation data and the perception data respectively, to obtain the second annotation data and second perception data corresponding to the obstacles participating in the evaluation of the perception system:

the distance between the obstacle and the autonomous vehicle is greater than a second preset distance;

the obstacle is behind the autonomous vehicle;

the obstacle is not within a road boundary;

the occlusion ratio of the obstacle is greater than a second preset value;

the obstacle is not in the same lane as the autonomous vehicle;

the type of the obstacle is a preset type, the preset type comprising water mist, vehicle exhaust, garbage cans and fences;

the size of the obstacle is smaller than a second preset size;

the number of pixels of the obstacle is smaller than a second preset number;

the observation composition of the obstacle is radar-only.

12. The apparatus of any one of claims 8 to 11, wherein the test result comprises a recognition accuracy, and the determining module comprises:

a first determining unit, configured to determine the number of successfully matched obstacles according to the successfully matched annotation data and perception data;

a second determining unit, configured to obtain the number of perceived obstacles according to the number of successfully matched obstacles and the first perception data;

and a third determining unit, configured to obtain the recognition accuracy of the perception system according to the number of successfully matched obstacles and the number of perceived obstacles.

13. The apparatus of any one of claims 8 to 11, wherein the test result comprises a recall rate, and the determining module comprises:

a first determining unit, configured to determine the number of successfully matched obstacles according to the successfully matched annotation data and perception data;

a fourth determining unit, configured to obtain the number of annotated obstacles according to the number of successfully matched obstacles and the first annotation data;

and a fifth determining unit, configured to obtain the recall rate of the perception system according to the number of successfully matched obstacles and the number of annotated obstacles.

14. The apparatus of any one of claims 8 to 11, wherein the test result comprises a classification accuracy, and the determining module comprises:

a sixth determining unit, configured to determine the number of categories of successfully matched obstacles according to the successfully matched annotation data and perception data;

a seventh determining unit, configured to obtain the number of categories of perceived obstacles according to the number of categories of successfully matched obstacles and the first perception data;

and an eighth determining unit, configured to obtain the classification accuracy of the perception system according to the number of categories of successfully matched obstacles and the number of categories of perceived obstacles.

15. An electronic device, comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein:

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 7.

16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 7.

17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.

18. An autonomous vehicle, comprising: a perception system and a control system, the perception system having been tested according to the method of any one of claims 1 to 7; the control system being configured to control driving of the autonomous vehicle according to the result output by the perception system.

Technical Field

The application relates to data processing, and in particular to a method, device, equipment and storage medium for testing an autonomous vehicle perception system, which can be used in artificial-intelligence fields such as autonomous driving and intelligent transportation.

Background

An autonomous vehicle, also called an unmanned vehicle, is a driverless intelligent vehicle that perceives through an integrated sensing system and is driven under the control of computer software. Autonomous vehicles are an important component of future intelligent transportation and one of the current research hotspots.

An autonomous vehicle typically includes a perception system and a control system. Through the perception system, the vehicle perceives its surroundings while driving, including obstacles such as other vehicles, people, animals and road signs. By combining this obstacle information with the control system, the vehicle achieves driverless operation. Therefore, to ensure driving safety, the accuracy of the perception system needs to be tested.

At present, the accuracy of a perception system is usually tested by matching the obstacles output by the perception system against annotated data: the full set of results output by the perception system is evaluated against the full set of annotated results, and the perception system is iterated based on the evaluation metrics produced by this full-set evaluation.

Disclosure of Invention

The application provides a method, a device, equipment and a storage medium for testing an automatic driving vehicle perception system, which are beneficial to forward iteration of the perception system.

According to a first aspect of the application, there is provided a method for testing a perception system of an autonomous vehicle, comprising:

Obtaining annotation data and perception data, wherein the annotation data are obtained by annotating obstacles in road test data, and the perception data are obtained by the perception system perceiving the road test data;

matching the annotation data and the perception data to obtain a matching result, wherein the matching result comprises successfully matched annotation data and perception data, and unsuccessfully matched annotation data and perception data;

filtering the unsuccessfully matched annotation data and perception data, and determining first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle;

and obtaining a test result of the perception system according to the successfully matched annotation data and perception data, and the first annotation data and the first perception data.

According to a second aspect of the present application, there is provided an autonomous vehicle perception system testing apparatus comprising:

an acquisition module, configured to acquire annotation data and perception data, wherein the annotation data are obtained by annotating obstacles in road test data, and the perception data are obtained by the perception system perceiving the road test data;

a matching module, configured to match the annotation data and the perception data to obtain a matching result, wherein the matching result comprises successfully matched annotation data and perception data, and unsuccessfully matched annotation data and perception data;

a filtering module, configured to filter the unsuccessfully matched annotation data and perception data respectively and determine first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle;

and a determining module, configured to obtain a test result of the perception system according to the successfully matched annotation data and perception data, and the first annotation data and the first perception data.

According to a third aspect of the present application, there is provided an electronic device comprising:

at least one processor; and

a memory communicatively coupled to the at least one processor; wherein:

the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.

According to a fourth aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method as described above.

According to a fifth aspect of the application, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the method as described above.

According to a sixth aspect of the present application, there is provided an autonomous vehicle, comprising: a perception system and a control system, the perception system having been tested according to the method described above; the control system being configured to control driving of the autonomous vehicle according to the result output by the perception system.

The technology of the present application solves the problem that iterating the perception system on evaluation metrics output by full-set evaluation cannot accurately reflect the influence of the perception results on autonomous driving behavior, so that the perception system tends toward what most benefits the stability and safety of autonomous driving rather than toward overall optimality.

It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.

Drawings

The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:

FIG. 1 is a schematic diagram according to a first embodiment of the present application;

FIG. 2 is a schematic diagram according to a second embodiment of the present application;

FIG. 3 is a schematic view of an application scenario of a sensing system obtained by a sensing system testing method for an autonomous vehicle according to an embodiment of the application;

FIG. 4 is a schematic illustration according to a third embodiment of the present application;

FIG. 5 is a schematic illustration according to a fourth embodiment of the present application;

FIG. 6 is a block diagram of an electronic device for implementing a method for testing an autonomous vehicle sensing system according to an embodiment of the present application;

FIG. 7 is a set-up scenario diagram of an autonomous vehicle perception system test that may implement embodiments of the present application.

Detailed Description

The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding; these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. Descriptions of well-known functions and constructions are omitted in the following for clarity and conciseness.

An obstacle mainly refers to an object encountered by the autonomous vehicle during driving that can influence autonomous driving, including but not limited to other vehicles, people, animals, road signs and other traffic elements.

When testing the accuracy of the perception system of an autonomous vehicle, an evaluation scheme that matches the perception data against the annotation data is generally adopted:

1) when an obstacle exists in both the perception data and the annotation data, the perception is considered correct;

2) when an obstacle appears in the annotation data but not in the perception data, the perception system is considered to have a missed detection;

3) when an obstacle appears in the perception data but not in the annotation data, the perception system is considered to have a false detection.

This evaluation scheme compares the full set of results output by the perception system (i.e., all of the perception data) against the full set of annotated results (i.e., all of the annotation data). In general, however, the obstacles covered by such an evaluation include many that do not actually affect the driving behavior of the autonomous vehicle. In that case, the output evaluation metrics, such as obstacle recall and recognition accuracy, cannot correctly reflect the influence of the perception results on the driving behavior of the autonomous vehicle, and iterating the perception system on those metrics may cause it to tend toward overall optimality rather than toward what most benefits driving stability and safety.

For example, a vehicle parked outside the road boundary does not affect the driving behavior of the autonomous vehicle, and its detection need not be a concern. Background obstacles such as fences need not be reported by the perception system in most cases, only when they intrude into the road. For a severely occluded vehicle, the positional accuracy of the ground-truth annotation is hard to determine, and the perception system may report it unstably because of the occlusion, so its influence on driving behavior is small and can be ignored. Water mist sprayed by a sprinkler truck is usually annotated in the annotation data, but ideally the perception system does not output water mist and similar obstacles, so it needs to be filtered out of the annotation data.

These four classes of examples show the complexity of the annotation data and the perception data. To establish an accurate evaluation of the perception system, the annotation data and the perception data must therefore be filtered precisely: obstacles that do not affect the driving behavior of the autonomous vehicle are filtered out, leaving the obstacles that have a real chance of interacting with it. Only by evaluating the perception system on the filtered obstacles can the influence of its iteration on autonomous driving be reflected more accurately.

On this basis, the present application provides a method, device, equipment and storage medium for testing an autonomous vehicle perception system, applicable to data-processing fields such as autonomous driving and intelligent transportation. By filtering out obstacles that do not affect the driving behavior of the autonomous vehicle, the perception system tends toward what most benefits the stability and safety of autonomous driving rather than toward overall optimality.

The following detailed examples are used to illustrate how the present application filters out obstacles that do not affect the driving behavior of an autonomous vehicle.

Example one

Fig. 1 is a schematic diagram according to the first embodiment of the present application. The embodiment provides a method for testing an autonomous vehicle perception system, which can be executed by an autonomous vehicle perception system testing apparatus. The apparatus may itself be an electronic device, or may be arranged in an electronic device, for example as a chip within it. Electronic devices here represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants (PDAs), servers, blade servers, mainframes and other appropriate computers.

As shown in fig. 1, the method for testing the perception system of the automatic driving vehicle comprises the following steps:

and S101, acquiring the annotation data and the perception data.

The annotation data are obtained by annotating obstacles in the road test data, and the perception data are obtained by the perception system perceiving the road test data.

The road test data are data collected by the autonomous vehicle during actual driving, including but not limited to radar point clouds, sensor data, positioning data and the like. In practice, annotators can manually annotate obstacles in the road test data through a client, obtaining the annotation data; in addition, the perception system perceives the obstacles in the road test data, obtaining the perception data.

For example, the perception data and the annotation data are stored at a preset location, such as a folder on the client. When the perception system is to be tested, a tester can locate the perception data and the annotation data via the folder path and send them to a server, and the server tests the perception system of the autonomous vehicle.
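As a purely illustrative sketch (the field names and types below are our assumptions, not terminology from the application), one obstacle record in the annotation or perception data obtained in step S101 might look like this:

```python
from dataclasses import dataclass

# Hypothetical record layout for one obstacle; every field name here is an
# illustrative assumption, not a structure defined by the application.
@dataclass(frozen=True)
class ObstacleRecord:
    obstacle_id: int        # identifier within its own data set
    category: str           # e.g. "vehicle", "pedestrian", "water_mist"
    position: tuple         # (x, y) in meters, in the ego-vehicle frame
    size: float             # footprint area in square meters
    occlusion_ratio: float  # 0.0 = fully visible, 1.0 = fully occluded
    pixel_count: int        # pixels the obstacle covers in the camera image
    sensors: frozenset      # subset of {"camera", "lidar", "radar"}

# One annotated obstacle, and the perception system's report of (possibly) the same one:
annotated = ObstacleRecord(1, "vehicle", (12.0, 1.5), 8.0, 0.1, 900,
                           frozenset({"camera", "lidar"}))
perceived = ObstacleRecord(7, "vehicle", (12.2, 1.4), 7.5, 0.1, 880,
                           frozenset({"camera", "lidar"}))
```

The two data sets use independent identifiers, which is why the matching step below pairs records by content (position and type) rather than by id.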

And S102, matching the annotation data and the perception data to obtain a matching result, wherein the matching result comprises successfully matched annotation data and perception data, and unsuccessfully matched annotation data and perception data.

In this step, the annotation data are used as the reference standard for matching against the perception data. The matched content may include, but is not limited to, the position and type of each obstacle.

At present, it is common to evaluate the perception system directly from the matching result obtained in this step, i.e., in the evaluation scheme described above. On that basis, the present method additionally filters the unsuccessfully matched obstacles in step S103: such obstacles are not necessarily all worth detecting, and many of them have very little influence on the driving behavior of the autonomous vehicle, so after filtering, the evaluation metrics can reflect the recognition of the obstacles that may actually interact with the autonomous vehicle.
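A minimal sketch of the matching step S102, under the assumption that obstacles are paired greedily by category and position with a hypothetical distance threshold (the application does not specify a matching algorithm; this is one plausible instance):

```python
import math

def match_frames(annotations, perceptions, max_dist=2.0):
    """Greedily pair annotated and perceived obstacles whose categories agree
    and whose positions lie within max_dist meters (threshold is an assumption).
    Returns (matched pairs, unmatched annotations, unmatched perceptions)."""
    matched, used = [], set()
    for ann in annotations:
        best, best_d = None, max_dist
        for i, per in enumerate(perceptions):
            if i in used or per["category"] != ann["category"]:
                continue
            d = math.dist(ann["position"], per["position"])
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            matched.append((ann, perceptions[best]))
    matched_ids = {id(a) for a, _ in matched}
    unmatched_ann = [a for a in annotations if id(a) not in matched_ids]
    unmatched_per = [p for i, p in enumerate(perceptions) if i not in used]
    return matched, unmatched_ann, unmatched_per

# Two annotated obstacles, only one of which the system perceived:
anns = [{"category": "vehicle", "position": (10.0, 0.0)},
        {"category": "pedestrian", "position": (5.0, 2.0)}]
pers = [{"category": "vehicle", "position": (10.5, 0.2)}]
pairs, miss_ann, miss_per = match_frames(anns, pers)
```

Here the unmatched pedestrian annotation is exactly the kind of record that step S103 then either filters out or counts as a missed detection.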

S103, filtering the unsuccessfully matched annotation data and perception data, and determining first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle.

The first annotation data are a subset of the unsuccessfully matched annotation data, and the first perception data are a subset of the unsuccessfully matched perception data. The filtering of the unsuccessfully matched annotation data and of the unsuccessfully matched perception data are independent of each other and can be performed simultaneously. The conditions on which the filtering is based are illustrated in the embodiments that follow.
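The five filtering conditions of claim 2 can be sketched as a single keep-predicate. All thresholds and the non-core type list below are illustrative assumptions, since the application deliberately leaves the "preset" values open:

```python
def interacts_with_ego(obs, *, max_dist=60.0, max_occlusion=0.8,
                       non_core_types=frozenset({"water_mist", "exhaust"}),
                       min_size=0.2, min_pixels=50):
    """Return True if the obstacle should be kept as one that interacts with
    the autonomous vehicle; False as soon as any claim-2 condition fires."""
    x, y = obs["position"]
    if (x * x + y * y) ** 0.5 > max_dist:        # beyond the first preset distance
        return False
    if obs["occlusion_ratio"] > max_occlusion:   # occluded beyond the first preset value
        return False
    if obs["category"] in non_core_types:        # preset (non-core) type
        return False
    if obs["size"] < min_size:                   # below the first preset size
        return False
    if obs["pixel_count"] < min_pixels:          # below the first preset pixel count
        return False
    return True

nearby_car = {"position": (10.0, 0.0), "occlusion_ratio": 0.1,
              "category": "vehicle", "size": 8.0, "pixel_count": 900}
distant_car = dict(nearby_car, position=(200.0, 0.0))
```

The unmatched annotation and perception data are then filtered independently, e.g. `first_annotation = [o for o in unmatched_ann if interacts_with_ego(o)]` and likewise for the perception side.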

And S104, obtaining a test result of the perception system according to the successfully matched annotation data and perception data, and the first annotation data and the first perception data.

For example, the test result of the perception system may include evaluation metrics such as recognition accuracy, recall rate and classification accuracy, each obtained as follows:

first, recognition accuracy

1) Determine the number of successfully matched obstacles according to the successfully matched annotation data and perception data;

2) obtain the number of perceived obstacles according to the number of successfully matched obstacles and the first perception data;

3) obtain the recognition accuracy of the perception system according to the number of successfully matched obstacles and the number of perceived obstacles.

For example, if the number of successfully matched obstacles is 90 and the number of perceived obstacles is 100, the recognition accuracy is: 90 ÷ 100 × 100% = 90%.

Second, recall rate

1) Determine the number of successfully matched obstacles according to the successfully matched annotation data and perception data;

2) obtain the number of annotated obstacles according to the number of successfully matched obstacles and the first annotation data;

3) obtain the recall rate of the perception system according to the number of successfully matched obstacles and the number of annotated obstacles.

For example, if the number of successfully matched obstacles is 85 and the number of annotated obstacles is 100, the recall rate is: 85 ÷ 100 × 100% = 85%.

Third, classification accuracy

1) Determine the number of categories of successfully matched obstacles according to the successfully matched annotation data and perception data;

2) obtain the number of categories of perceived obstacles according to the number of categories of successfully matched obstacles and the first perception data;

3) obtain the classification accuracy of the perception system according to the number of categories of successfully matched obstacles and the number of categories of perceived obstacles.

For example, if the number of successfully matched obstacle categories is 12 and the number of perceived obstacle categories is 15, the classification accuracy is: 12 ÷ 15 × 100% = 80%.
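The classification accuracy follows the same pattern, but over category counts rather than obstacle counts (names are illustrative):

```python
def classification_accuracy(num_matched_categories: int,
                            num_extra_perceived_categories: int) -> float:
    """Classification accuracy of the perception system, in percent."""
    num_perceived_categories = (num_matched_categories
                                + num_extra_perceived_categories)
    return num_matched_categories / num_perceived_categories * 100

print(classification_accuracy(12, 3))  # 80.0, matching the example above
```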

The method for testing the perception system of the autonomous vehicle first obtains annotation data and perception data; it then matches the annotation data against the perception data to obtain successfully matched annotation data and perception data as well as unsuccessfully matched annotation data and perception data, filters the unsuccessfully matched annotation data and perception data, and determines the first annotation data and first perception data corresponding to obstacles that interact with the autonomous vehicle; finally, it obtains the test result of the perception system from the successfully matched annotation data and perception data together with the first annotation data and first perception data. By filtering after matching to determine the first annotation data and first perception data corresponding to obstacles interacting with the autonomous vehicle, obstacles that have no influence on autonomous driving can be excluded, so that the obtained test result better matches the actual on-road perception performance, reflects the obstacle recognition situation more accurately, and facilitates forward iteration of the perception system.

On the basis of the above embodiment, when the actual filtering operation is performed, different filtering rules can be constructed by combining the spatial dimension and the attribute dimension to achieve a specific filtering purpose. Optionally, the spatial dimension may include the position of the obstacle, the occlusion relationship between obstacles, and the like; the attribute dimension may include the category and size of the obstacle, its observation composition, and so on. The observation composition of an obstacle refers to which kinds of sensor observations the obstacle appears in, such as camera observation, lidar observation, and/or radar observation. For example, an obstacle may be observed by both the lidar and the camera.

Illustratively, from the perspective of the spatial dimension, specific considerations may include, but are not limited to: the distance between the obstacle and the autonomous vehicle; the relative bearing between the obstacle and the autonomous vehicle; whether the obstacle is within the road boundary, determined from the high-precision map; the occlusion ratio of the obstacle, calculated from its length, width, and height and the perspective projection relation; whether the obstacle is in the same lane as, or a lane adjacent to, the autonomous vehicle, determined from the high-precision map; and so on.

In one specific implementation, step S103 described above may further include: if an obstacle meets any one of the following conditions, filtering the data corresponding to that obstacle out of the unsuccessfully matched annotation data and perception data respectively, to obtain the first annotation data and first perception data corresponding to obstacles interacting with the autonomous vehicle:

the distance between the obstacle and the autonomous vehicle is greater than a first preset distance;

the occlusion ratio of the obstacle is greater than a first preset value;

the category of the obstacle is a preset category, where the preset categories include non-core categories;

the size of the obstacle is smaller than a first preset size;

the number of pixel points of the obstacle is smaller than a first preset number.

It should be noted that the first preset distance, the first preset value, the first preset size, and the first preset number are set according to actual needs or historical experience, and the present application does not limit them.

Optionally, the core categories are person, vehicle, bicycle, animal, and the like; other categories, such as water mist and vehicle exhaust, can be regarded as non-core. The size of an obstacle refers to its three-dimensional size; the pixel points of an obstacle refer to its extent on the two-dimensional image, that is, its pixel count.

For example, an obstacle whose distance from the autonomous vehicle is greater than the first preset distance, or whose occlusion ratio is greater than the first preset value, has a negligible influence on the behavior of the autonomous vehicle and can therefore be ignored.
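A minimal sketch of the post-match filter described above. Obstacle records are modelled as plain dicts; all field names and threshold values are illustrative assumptions, since the application leaves the presets to actual needs or historical experience:

```python
# Post-match filter: keep only unmatched obstacles that may interact with
# the autonomous vehicle. All field names and threshold values below are
# illustrative assumptions, not values from the original application.
FIRST_PRESET_DISTANCE = 60.0    # metres
FIRST_PRESET_OCCLUSION = 0.8    # occlusion ratio (0..1)
FIRST_PRESET_SIZE = 0.3         # metres, smallest 3-D dimension
FIRST_PRESET_PIXELS = 20        # pixel count on the 2-D image
CORE_CATEGORIES = {"person", "vehicle", "bicycle", "animal"}

def interacts_with_vehicle(obstacle: dict) -> bool:
    """Return False if the obstacle meets any filtering condition."""
    if obstacle["distance"] > FIRST_PRESET_DISTANCE:
        return False
    if obstacle["occlusion_ratio"] > FIRST_PRESET_OCCLUSION:
        return False
    if obstacle["category"] not in CORE_CATEGORIES:   # non-core category
        return False
    if min(obstacle["size"]) < FIRST_PRESET_SIZE:     # 3-D size (l, w, h)
        return False
    if obstacle["num_pixels"] < FIRST_PRESET_PIXELS:
        return False
    return True

# Three unmatched obstacles: a nearby car, a distant car, and water mist.
unmatched = [
    {"distance": 20.0, "occlusion_ratio": 0.1, "category": "vehicle",
     "size": (4.5, 1.8, 1.5), "num_pixels": 900},
    {"distance": 120.0, "occlusion_ratio": 0.1, "category": "vehicle",
     "size": (4.5, 1.8, 1.5), "num_pixels": 900},
    {"distance": 15.0, "occlusion_ratio": 0.0, "category": "water_mist",
     "size": (2.0, 2.0, 2.0), "num_pixels": 500},
]
kept = [o for o in unmatched if interacts_with_vehicle(o)]
print(len(kept))  # 1 -- only the nearby car survives
```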

Example two

Fig. 2 is a schematic diagram according to a second embodiment of the present application. Referring to fig. 2, the method for testing the perception system of the autonomous vehicle of the embodiment of the present application may include the steps of:

S201, obtaining the annotation data and the perception data.

This step is similar to S101 and will not be described here.

S202, filtering the annotation data and the perception data respectively, and determining second annotation data and second perception data corresponding to the obstacles participating in the evaluation of the perception system.

This step filters the annotation data and the perception data before matching, so as to filter out obstacles that do not participate in the evaluation of the perception system, thereby reducing the amount of data to be matched.

And S203, matching the second annotation data and the second perception data to obtain a matching result.

Steps S202 and S203 together are a further refinement of step S102 described above.
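The application does not prescribe a particular matching algorithm for S203; a common choice is greedy nearest-neighbour (or IoU-based) matching with a gating threshold. A sketch under that assumption, with obstacles reduced to 2-D centre points:

```python
import math

def greedy_match(annotations, perceptions, max_dist=2.0):
    """Greedily pair annotated and perceived obstacles by centre distance.

    Each obstacle is an (x, y) centre; max_dist is an illustrative gating
    threshold. Returns (matched_pairs, unmatched_annotation_indices,
    unmatched_perception_indices).
    """
    candidates = []
    for i, a in enumerate(annotations):
        for j, p in enumerate(perceptions):
            d = math.dist(a, p)
            if d <= max_dist:
                candidates.append((d, i, j))
    candidates.sort()  # closest pairs first
    used_a, used_p, pairs = set(), set(), []
    for d, i, j in candidates:
        if i not in used_a and j not in used_p:
            pairs.append((i, j))
            used_a.add(i)
            used_p.add(j)
    unmatched_a = [i for i in range(len(annotations)) if i not in used_a]
    unmatched_p = [j for j in range(len(perceptions)) if j not in used_p]
    return pairs, unmatched_a, unmatched_p

pairs, ua, up = greedy_match([(0, 0), (10, 0)], [(0.5, 0), (30, 0)])
print(pairs, ua, up)  # [(0, 0)] [1] [1]
```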

And S204, filtering the unsuccessfully matched annotation data and perception data, and determining the first annotation data and first perception data corresponding to obstacles interacting with the autonomous vehicle.

This step is similar to S103 and will not be described here.

S205, obtaining a test result of the sensing system according to the successfully matched annotation data and sensing data, and the first annotation data and the first sensing data.

This step is similar to S104 and will not be described here.

In this embodiment, the second annotation data and second perception data corresponding to obstacles participating in the evaluation of the perception system are determined by filtering before matching, so that obstacles not participating in the evaluation are filtered out and the amount of data to be matched is reduced.

In some embodiments, S202 (respectively filtering the annotation data and the perception data to determine the second annotation data and second perception data corresponding to obstacles participating in the evaluation of the perception system) may further include:

if an obstacle meets any one of the following conditions, filtering the data corresponding to that obstacle out of the annotation data and the perception data respectively, to obtain the second annotation data and second perception data corresponding to obstacles participating in the evaluation of the perception system:

the distance between the obstacle and the autonomous vehicle is greater than a second preset distance;

the obstacle is behind the autonomous vehicle;

the obstacle is not within the road boundary;

the occlusion ratio of the obstacle is greater than a second preset value;

the obstacle is not in the same lane as the autonomous vehicle;

the category of the obstacle is a preset category, where the preset categories include water mist, vehicle exhaust, trash cans, and fences;

the size of the obstacle is smaller than a second preset size;

the number of pixel points of the obstacle is smaller than a second preset number;

the observation composition of the obstacle is radar-only.

As can be seen by comparison with the filtering conditions in the foregoing embodiment, some conditions of the two filtering passes are similar, for example, the condition that the distance between the obstacle and the autonomous vehicle is greater than the second preset distance (respectively, the first preset distance). For these similar conditions, the two passes can use different threshold values, i.e., the first preset distance differs from the second preset distance. Optionally, since the first preset distance is used for post-match filtering and the second preset distance for pre-match filtering, the post-match thresholds can be stricter than the pre-match ones; accordingly, the first preset distance is smaller than the second preset distance, the second preset size is larger than the first preset size, and the second preset number is larger than the first preset number.

During manual annotation, owing to the limitations of annotation tools, generally only obstacles in the point cloud observed by the lidar and in the images observed by the camera can be annotated; obstacles that appear only in radar data cannot. Therefore, if an obstacle in the perception data is observed only by radar, it should be filtered out.
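The pre-match filter of S202, including the radar-only condition just discussed, can be sketched as a predicate over obstacle records modelled as plain dicts; all field names and threshold values are illustrative assumptions:

```python
# Pre-match filter: remove obstacles that do not take part in the
# evaluation of the perception system. Field names and thresholds are
# illustrative assumptions, not values from the original application.
SECOND_PRESET_DISTANCE = 100.0  # metres
SECOND_PRESET_OCCLUSION = 0.95  # occlusion ratio (0..1)
SECOND_PRESET_SIZE = 0.5        # metres, smallest 3-D dimension
SECOND_PRESET_PIXELS = 30       # pixel count on the 2-D image
PRESET_CATEGORIES = {"water_mist", "vehicle_exhaust", "trash_can", "fence"}

def participates_in_evaluation(obstacle: dict) -> bool:
    """Return False if the obstacle meets any pre-match filter condition."""
    if obstacle["distance"] > SECOND_PRESET_DISTANCE:
        return False
    if obstacle["behind_vehicle"]:
        return False
    if not obstacle["in_road_boundary"]:
        return False
    if obstacle["occlusion_ratio"] > SECOND_PRESET_OCCLUSION:
        return False
    if not obstacle["same_lane"]:
        return False
    if obstacle["category"] in PRESET_CATEGORIES:
        return False
    if min(obstacle["size"]) < SECOND_PRESET_SIZE:
        return False
    if obstacle["num_pixels"] < SECOND_PRESET_PIXELS:
        return False
    if obstacle["observations"] == {"radar"}:  # radar-only: cannot be annotated
        return False
    return True

radar_only = {"distance": 30.0, "behind_vehicle": False,
              "in_road_boundary": True, "occlusion_ratio": 0.0,
              "same_lane": True, "category": "vehicle",
              "size": (4.5, 1.8, 1.5), "num_pixels": 800,
              "observations": {"radar"}}
car = dict(radar_only, observations={"lidar", "camera"})
print(participates_in_evaluation(radar_only), participates_in_evaluation(car))
# False True
```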

Under the above conditions, after the pre-match filtering, each obstacle retained in the second annotation data or the second perception data satisfies the following conditions:

the distance between the obstacle and the autonomous vehicle is less than or equal to the second preset distance;

the obstacle is in front of the autonomous vehicle;

the obstacle is within the road boundary;

the occlusion ratio of the obstacle is less than or equal to the second preset value;

the obstacle is in the same lane as the autonomous vehicle;

the category of the obstacle is not a preset category;

the size of the obstacle is greater than or equal to the second preset size;

the number of pixel points of the obstacle is greater than or equal to the second preset number;

the observation composition of the obstacle includes a lidar observation and/or a camera observation.

Post-match filtering means filtering the unsuccessfully matched annotation data and perception data after the second annotation data and the second perception data have been matched. After the post-match filtering, each obstacle in the first annotation data or the first perception data satisfies the following conditions:

the distance between the obstacle and the autonomous vehicle is less than or equal to the first preset distance;

the occlusion ratio of the obstacle is less than or equal to the first preset value;

the category of the obstacle is not a preset category, so that core categories are retained;

the size of the obstacle is greater than or equal to the first preset size;

the number of pixel points of the obstacle is greater than or equal to the first preset number.

By combining pre-match filtering with post-match filtering, the embodiments of the present application filter out, to the greatest extent, obstacles that have no influence on the behavior of the autonomous vehicle, while retaining, to the greatest extent, obstacles that may interact with the autonomous vehicle, namely core-category obstacles in the core area whose size, pixel count, occlusion ratio, and so on satisfy the corresponding preset conditions.

For example, as shown in FIG. 3, through pre-match filtering and post-match filtering, the obstacles identified as 31, 32, and 33 in FIG. 3 may all be filtered out, leaving the obstacles that interact with the autonomous vehicle, such as those identified as 34 and 35.

In this way, by filtering both before and after matching, obstacles that do not affect the autonomous vehicle are removed, which improves the reliability of the evaluation indicators output for the perception system, makes the indicators better match the actual on-road perception performance, reflects the obstacle recognition situation more accurately, and facilitates forward iteration of the perception system.
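Putting the pieces of this embodiment together, the overall flow (pre-match filter, match, post-match filter, then metrics) can be sketched end to end; every helper, field, and threshold here is a hypothetical stand-in for the scheme's configurable parts:

```python
def evaluate(annotations, perceptions, pre_filter, match, post_filter):
    """Pre-match filter, match, post-match filter, then compute the
    recognition accuracy and recall of the perception system (in percent).

    pre_filter/post_filter are predicates over obstacle records; match
    returns (pairs, unmatched_annotation_indices, unmatched_perception_indices).
    """
    anns = [a for a in annotations if pre_filter(a)]
    percs = [p for p in perceptions if pre_filter(p)]
    pairs, unmatched_a, unmatched_p = match(anns, percs)
    # Post-match filtering keeps only unmatched obstacles that may interact
    # with the autonomous vehicle (the first annotation/perception data).
    first_ann = [anns[i] for i in unmatched_a if post_filter(anns[i])]
    first_perc = [percs[j] for j in unmatched_p if post_filter(percs[j])]
    n = len(pairs)
    accuracy = n / (n + len(first_perc)) * 100
    recall = n / (n + len(first_ann)) * 100
    return accuracy, recall

def toy_match(anns, percs, gate=1.0):
    """Greedy 1-D nearest match within `gate` metres (illustrative)."""
    pairs, used_p = [], set()
    for i, a in enumerate(anns):
        for j, p in enumerate(percs):
            if j not in used_p and abs(a["x"] - p["x"]) <= gate:
                pairs.append((i, j))
                used_p.add(j)
                break
    matched_a = {i for i, _ in pairs}
    ua = [i for i in range(len(anns)) if i not in matched_a]
    up = [j for j in range(len(percs)) if j not in used_p]
    return pairs, ua, up

# Two annotated obstacles, two perceived; the far perceived one would be
# dropped by the post-match filter (modelled by the "keep" flag).
anns = [{"x": 0.0, "keep": True}, {"x": 50.0, "keep": True}]
percs = [{"x": 0.2, "keep": True}, {"x": 200.0, "keep": False}]
accuracy, recall = evaluate(anns, percs,
                            pre_filter=lambda o: True,
                            match=toy_match,
                            post_filter=lambda o: o["keep"])
print(accuracy, recall)  # 100.0 50.0
```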

EXAMPLE III

Fig. 4 is a schematic diagram according to a third embodiment of the present application. The embodiment provides a device for testing a perception system of an automatic driving vehicle. As shown in fig. 4, the autonomous vehicle sensing system testing apparatus 400 includes: an acquisition module 401, a matching module 402, a filtering module 403 and a determination module 404. Wherein:

the obtaining module 401 is configured to obtain annotation data and perception data, where the annotation data is obtained by annotating obstacles according to the drive test data, and the perception data is obtained by the perception system through perception according to the drive test data.

And the matching module 402 is configured to perform matching processing on the annotation data and the sensing data to obtain a matching result, where the matching result includes successfully matched annotation data and sensing data, and unsuccessfully matched annotation data and sensing data.

And a filtering module 403, configured to filter the unsuccessfully matched annotation data and sensing data, and determine first annotation data and first sensing data corresponding to an obstacle interacting with the autonomous vehicle.

And the determining module 404 is configured to obtain a test result of the sensing system according to the successfully matched annotation data and sensing data, and the first annotation data and the first sensing data.

The device for testing the perception system of the autonomous vehicle provided by the embodiment can be used for executing the embodiment of the method for testing the perception system of the autonomous vehicle, the implementation manner and the technical effect are similar, and the details are not repeated here.

In some embodiments, the filtering module 403 may be specifically configured to: when an obstacle meets any one of the following conditions, filter the data corresponding to that obstacle out of the unsuccessfully matched annotation data and perception data, to obtain the first annotation data and first perception data corresponding to obstacles interacting with the autonomous vehicle:

the distance between the obstacle and the autonomous vehicle is greater than a first preset distance;

the occlusion ratio of the obstacle is greater than a first preset value;

the category of the obstacle is a preset category, where the preset categories include non-core categories;

the size of the obstacle is smaller than a first preset size;

the number of pixel points of the obstacle is smaller than a first preset number.

Example four

Fig. 5 is a schematic diagram according to a fourth embodiment of the present application. Referring to fig. 5, in the autonomous vehicle sensing system testing apparatus 500 based on the structure shown in fig. 4, the matching module 402 may include:

the filtering unit 4021 is configured to filter the annotation data and the sensing data, and determine second annotation data and second sensing data corresponding to the obstacle participating in the evaluation of the sensing system;

the matching unit 4022 is configured to perform matching processing on the second annotation data and the second sensing data to obtain a matching result.

Optionally, the filtering unit 4021 may be specifically configured to: when an obstacle meets any one of the following conditions, filter the data corresponding to that obstacle out of the annotation data and the perception data respectively, to obtain the second annotation data and second perception data corresponding to obstacles participating in the evaluation of the perception system:

the distance between the obstacle and the autonomous vehicle is greater than a second preset distance;

the obstacle is behind the autonomous vehicle;

the obstacle is not within the road boundary;

the occlusion ratio of the obstacle is greater than a second preset value;

the obstacle is not in the same lane as the autonomous vehicle;

the category of the obstacle is a preset category, where the preset categories may include water mist, vehicle exhaust, trash cans, fences, and the like;

the size of the obstacle is smaller than a second preset size;

the number of pixel points of the obstacle is smaller than a second preset number;

the observation composition of the obstacle is radar-only.

In some embodiments, the test results include recognition accuracy. Still referring to fig. 5, the determining module 404 may include:

the first determining unit 4041 is configured to determine the number of successfully matched obstacles according to the successfully matched annotation data and sensing data;

the second determining unit 4042 is configured to obtain the number of perceived obstacles according to the number of obstacles successfully matched and the first perception data;

and a third determining unit 4043, configured to obtain the recognition accuracy of the perception system according to the number of successfully matched obstacles and the number of perceived obstacles.

In some embodiments, the test results include a recall rate. Still referring to fig. 5, the determining module 404 may include:

a first determining unit 4041, configured to determine, according to the successfully matched annotation data and sensing data, the number of successfully matched obstacles;

a fourth determining unit 4044, configured to obtain the number of annotated obstacles according to the number of successfully matched obstacles and the first annotation data;

a fifth determining unit 4045, configured to obtain the recall rate of the perception system according to the number of successfully matched obstacles and the number of annotated obstacles.

In some embodiments, the test results include classification accuracy. Still referring to fig. 5, the determining module 404 may further include:

a sixth determining unit 4046, configured to determine, according to the successfully matched annotation data and sensing data, the number of categories of the successfully matched obstacles;

a seventh determining unit 4047, configured to obtain the number of types of perceived obstacles according to the number of types of obstacles successfully matched and the first perception data;

an eighth determining unit 4048, configured to obtain the classification accuracy of the sensing system according to the number of successfully matched obstacle categories and the number of sensed obstacle categories.

According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.

There is also provided, in accordance with an embodiment of the present application, a computer program product, including: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any of the embodiments described above.

FIG. 6 illustrates a schematic block diagram of an example electronic device 600 that can be used to implement embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.

As shown in fig. 6, the electronic device 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. The RAM 603 can also store various programs and data necessary for the operation of the electronic device 600. The computing unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.

Various components in the electronic device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the electronic device 600 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.

The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 601 performs the various methods and processes described above, such as the autonomous vehicle perception system testing method. For example, in some embodiments, the autonomous vehicle perception system testing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into RAM 603 and executed by computing unit 601, one or more steps of the autonomous vehicle perception system testing method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the autonomous vehicle perception system testing method by any other suitable means (e.g., by means of firmware).

Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.

Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.

In the context of this application, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.

The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The Server can be a cloud Server, also called a cloud computing Server or a cloud host, and is a host product in a cloud computing service system, so as to solve the defects of high management difficulty and weak service expansibility in the traditional physical host and VPS service ("Virtual Private Server", or simply "VPS"). The server may also be a server of a distributed system, or a server incorporating a blockchain.

FIG. 7 is a schematic diagram of an application scenario of autonomous vehicle perception system testing in which embodiments of the present application may be implemented. Referring to fig. 7, the scenario includes an autonomous vehicle 71, a server 72, and a client 73, where the autonomous vehicle 71 and the server 72 are connected via a wireless network, and the server 72 and the client 73 are connected via a wired or wireless network.

The autonomous vehicle 71 is an intelligent vehicle that realizes unmanned driving through a computer system and integrates functions such as environmental perception and planning and decision-making. The autonomous vehicle 71 is equipped with a lidar, sensors, and monitoring devices to acquire information about the surrounding environment and traffic conditions.

When the visual perception algorithm is evaluated, the autonomous vehicle 71 drives on the road, and the lidar and monitoring devices such as cameras on the autonomous vehicle 71 collect and store the drive test data and send it to the server 72. The server 72 invokes the visual perception algorithm to process the drive test data, obtaining and outputting the obstacles identified by the visual perception algorithm, that is, the perception data. Specifically, the visual perception algorithm may be provided as a statically linked library; the server 72 invokes the statically linked library through a test program and generates the identified obstacles as the result.

The client 73 obtains the drive test data, and annotation personnel correct and annotate the identified obstacles with an annotation tool to obtain the actual positions and categories of the obstacles, which are sent to the server 72. Finally, the server 72 matches the obstacles identified by the visual perception algorithm against their actual positions and categories to obtain the evaluation result.

The scenario shown in fig. 7 is described taking this deployment of the autonomous vehicle perception system testing apparatus as an example, but the present application is not limited thereto; in addition, the numbers of servers, clients, and autonomous vehicles in the scenario are not limited and depend on actual requirements. For example, the server may be replaced with a server cluster, and so on.

According to the technical solution of the embodiments of the present application, annotation data and perception data are first obtained; the annotation data and perception data are then matched to obtain successfully matched annotation data and perception data as well as unsuccessfully matched annotation data and perception data, the unsuccessfully matched annotation data and perception data are filtered, and the first annotation data and first perception data corresponding to obstacles interacting with the autonomous vehicle are determined; finally, the test result of the perception system is obtained from the successfully matched annotation data and perception data together with the first annotation data and first perception data. By filtering after matching to determine the first annotation data and first perception data corresponding to obstacles interacting with the autonomous vehicle, obstacles with no influence on autonomous driving can be excluded, so that the obtained test result better matches the actual on-road perception performance, reflects the obstacle recognition situation more accurately, and facilitates forward iteration of the perception system.

There is also provided, in accordance with an embodiment of the present application, an autonomous vehicle, including a perception system and a control system, wherein the perception system is tested according to the solution provided by any one of the embodiments above, and the control system is configured to control the driving of the autonomous vehicle according to the results output by the perception system.

It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders; this is not limited herein as long as the desired results of the technical solutions disclosed in the present application can be achieved.

The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.
