Techniques for collaborative mapping between unmanned aerial vehicles and ground vehicles


Reading note: This technology, "Techniques for collaborative mapping between unmanned aerial vehicles and ground vehicles," was designed and created by 王铭钰 on 2019-03-08. Its main content is as follows: Collaborative mapping techniques using multiple vehicles are disclosed. Such a system may include a ground vehicle comprising a first computing device and a first scanning sensor, and an aerial vehicle comprising a second computing device and a second scanning sensor. The ground vehicle may obtain a first real-time map based on first scan data acquired using the first scanning sensor and transmit the first real-time map and location information to the aerial vehicle. The aerial vehicle may receive the first real-time map and the location information from the first computing device, obtain a second real-time map based on second scan data acquired using the second scanning sensor, and obtain a third real-time map based on the first real-time map and the second real-time map.

1. A system for collaborative mapping, comprising:

a ground vehicle comprising a first computing device;

a first scanning sensor coupled to the ground vehicle;

an aircraft comprising a second computing device;

a second scanning sensor coupled to the aircraft;

the first computing device comprises at least one processor and a ground vehicle mapping manager, the ground vehicle mapping manager comprising first instructions that, when executed by the processor, cause the ground vehicle mapping manager to:

obtaining a first real-time map based on first scan data acquired using the first scan sensor; and

sending the first real-time map and location information to the aircraft;

the second computing device includes at least one processor and an aircraft mapping manager, the aircraft mapping manager including second instructions that, when executed by the processor, cause the aircraft mapping manager to:

receiving the first real-time map and the location information from the first computing device;

obtaining a second real-time map based on second scan data acquired using the second scan sensor; and

a third real-time map is obtained based on the first real-time map and the second real-time map.

2. The system of claim 1, wherein the first real-time map is a map of higher precision than the second real-time map.

3. The system of claim 1, wherein to obtain a third real-time map based on the first and second real-time maps, the second instructions, when executed, further cause the aircraft mapping manager to:

determining an overlapping portion of the first real-time map and the second real-time map; and

merging the first real-time map and the second real-time map using the overlapping portion.

4. The system of claim 1, wherein the first scanning sensor comprises a LiDAR sensor and the second scanning sensor comprises a vision sensor.

5. The system of claim 4, wherein the first real-time map is constructed based on point cloud data obtained from the first scanning sensor and the second real-time map is constructed based on visual data obtained from the second scanning sensor.

6. The system of claim 5, wherein to obtain a first real-time map based on first scan data acquired from a perspective of the ground vehicle using the first scan sensor, the first instructions, when executed, further cause the ground vehicle mapping manager to:

obtaining the second real-time map from the aircraft;

converting the coordinates in the second real-time map to a coordinate system to match the first real-time map;

determining an overlapping portion of the first real-time map and the second real-time map in the coordinate system; and

transmitting the overlapping portion to the aircraft.

7. The system of claim 1, wherein the second instructions, when executed, further cause the aircraft mapping manager to:

determining that an available resource associated with the second computing device is below a threshold;

sending a request to the first computing device to generate a third real-time map, the request including the second real-time map; and

receiving the third real-time map from the first computing device.

8. The system of claim 1, further comprising:

a plurality of aerial vehicles in communication with the ground vehicle, wherein each aerial vehicle of the plurality of aerial vehicles provides a corresponding real-time map to the ground vehicle; and

wherein the first instructions, when executed, further cause the ground vehicle mapping manager to:

obtaining each corresponding real-time map from the plurality of aerial vehicles;

generating a fourth real-time map based on the first real-time map and each corresponding real-time map from the plurality of aerial vehicles; and

transmitting the fourth real-time map to each aerial vehicle of the plurality of aerial vehicles.

9. The system of claim 1, wherein the location information comprises global navigation satellite system ("GNSS") data received from the aircraft.

10. The system of claim 1, wherein the location information includes a real-time relative position determined using the first scanning sensor.

11. The system of claim 1, wherein the ground vehicle is an autonomous vehicle.

12. The system of claim 1, wherein to obtain a first real-time map based on first scan data acquired from a perspective of the ground vehicle using the first scan sensor, the first instructions, when executed, further cause the ground vehicle mapping manager to:

obtaining third scan data from a third scan sensor coupled to the ground vehicle, the first scan sensor comprising a LiDAR sensor and the third scan sensor comprising an imaging sensor; and

generating the first real-time map based on the first scan data and the third scan data using a simultaneous localization and mapping (SLAM) algorithm.

13. The system of claim 1, wherein to obtain a first real-time map based on first scan data acquired from a perspective of the ground vehicle using the first scan sensor, the first instructions, when executed, further cause the ground vehicle mapping manager to:

obtaining the second scan data from the aerial vehicle, the first scan sensor comprising a LiDAR sensor and the second scan sensor comprising an imaging sensor; and

generating the first real-time map based on the first scan data and the second scan data using a simultaneous localization and mapping (SLAM) algorithm.

14. The system of claim 1, wherein the first instructions, when executed, further cause the ground vehicle mapping manager to:

receiving a third map from the aircraft;

generating control data to navigate the aircraft based on the third map; and

transmitting the control data to the aircraft.

15. The system of claim 1, wherein the second instructions, when executed, further cause the aircraft mapping manager to:

generating control data to navigate the aircraft based on a third map.

16. A method for collaborative mapping, comprising:

receiving, by an aerial vehicle, a first real-time map and location information from a ground vehicle, wherein the first real-time map is based on first scan data acquired using a first scan sensor coupled to the ground vehicle;

obtaining a second real-time map based on second scan data acquired using a second scan sensor coupled to the aerial vehicle; and

generating a third real-time map based on the first real-time map and the second real-time map.

17. The method of claim 16, wherein the first real-time map is a map of higher precision than the second real-time map.

18. The method of claim 16, wherein generating a third real-time map based on the first real-time map and the second real-time map further comprises:

determining an overlapping portion of the first real-time map and the second real-time map; and

merging the first real-time map and the second real-time map using the overlapping portion.

19. The method of claim 16, wherein the first scanning sensor comprises a LiDAR sensor and the second scanning sensor comprises a vision sensor.

20. The method of claim 19, wherein the first real-time map is constructed based on point cloud data obtained from the first scanning sensor and the second real-time map is constructed based on visual data obtained from the second scanning sensor.

21. The method of claim 20, wherein receiving, by the aerial vehicle, the first real-time map from the ground vehicle, the first real-time map being based on first scan data acquired using the first scan sensor coupled to the ground vehicle, further comprises:

sending the second real-time map to the ground vehicle, the ground vehicle being configured to:

converting the coordinates in the second real-time map to a coordinate system to match the first real-time map;

determining an overlapping portion of the first real-time map and the second real-time map in the coordinate system; and

transmitting the overlapping portion to the aircraft.

22. The method of claim 16, further comprising:

determining that available resources associated with the aircraft are below a threshold;

sending a request to the ground vehicle to generate the third real-time map, the request including the second real-time map; and

receiving the third real-time map from the ground vehicle.

23. The method of claim 16, further comprising:

obtaining a plurality of real-time maps from a plurality of aerial vehicles in communication with the ground vehicle;

generating a fourth real-time map based on the first real-time map and each of the plurality of real-time maps; and

transmitting the fourth real-time map to each aerial vehicle of the plurality of aerial vehicles.

24. The method of claim 16, wherein receiving, by the aerial vehicle, the first real-time map from the ground vehicle further comprises:

receiving position information for the aircraft from the ground vehicle.

25. The method of claim 16, wherein the location information comprises global navigation satellite system ("GNSS") data received from the aerial vehicle.

26. The method of claim 16, wherein the location information comprises a real-time relative position determined using the first scanning sensor.

27. The method of claim 16, wherein the ground vehicle is an autonomous vehicle.

28. The method of claim 16, wherein the ground vehicle is configured to: obtaining third scan data from a third scan sensor coupled to the ground vehicle, the first scan sensor comprising a LiDAR sensor and the third scan sensor comprising an imaging sensor, and generating the first real-time map based on the first scan data and the third scan data using a simultaneous localization and mapping (SLAM) algorithm.

29. The method of claim 16, further comprising:

transmitting the second scan data to the ground vehicle, wherein the ground vehicle is configured to: generating the first real-time map based on the first scan data and the second scan data using a simultaneous localization and mapping (SLAM) algorithm, wherein the first scan sensor comprises a LiDAR sensor and the second scan sensor comprises an imaging sensor.

30. The method of claim 16, further comprising:

sending a third map to the ground vehicle, the ground vehicle being configured to:

generating control data to navigate the aircraft based on the third map; and

transmitting the control data to the aircraft; and

executing the control data.

31. The method of claim 16, further comprising:

generating control data to navigate the aircraft based on the third map.

32. A non-transitory computer-readable storage medium comprising instructions stored thereon, which when executed by one or more processors, cause the one or more processors to:

receiving, by an aerial vehicle, a first real-time map and location information from a ground vehicle, wherein the first real-time map is based on first scan data acquired using a first scan sensor coupled to the ground vehicle;

obtaining a second real-time map based on second scan data acquired using a second scan sensor coupled to the aerial vehicle; and

generating a third real-time map based on the first real-time map and the second real-time map.

33. The non-transitory computer-readable storage medium of claim 32, wherein the first real-time map is a map of higher precision than the second real-time map.

34. The non-transitory computer-readable storage medium of claim 32, wherein to generate the third real-time map based on the first and second real-time maps, the instructions, when executed, further cause the one or more processors to:

determining an overlapping portion of the first real-time map and the second real-time map; and

merging the first real-time map and the second real-time map using the overlapping portion.

35. The non-transitory computer-readable storage medium of claim 32, wherein the first scanning sensor comprises a LiDAR sensor and the second scanning sensor comprises a vision sensor.

36. The non-transitory computer-readable storage medium of claim 35, wherein the first real-time map is constructed based on point cloud data obtained from the first scanning sensor and the second real-time map is constructed based on visual data obtained from the second scanning sensor.

37. The non-transitory computer-readable storage medium of claim 36, wherein to receive, by the aerial vehicle, the first real-time map from the ground vehicle, the first real-time map being based on first scan data acquired using the first scan sensor coupled to the ground vehicle, the instructions, when executed, further cause the one or more processors to:

transmitting the second real-time map to the ground vehicle, wherein the ground vehicle is configured to:

converting the coordinates in the second real-time map to a coordinate system to match the first real-time map;

determining an overlapping portion of the first real-time map and the second real-time map in the coordinate system; and

transmitting the overlapping portion to the aircraft.

38. The non-transitory computer-readable storage medium of claim 32, wherein the instructions, when executed, further cause the one or more processors to:

determining that available resources associated with the aircraft are below a threshold;

sending a request to the ground vehicle to generate the third real-time map, the request including the second real-time map; and

receiving the third real-time map from the ground vehicle.

39. The non-transitory computer-readable storage medium of claim 32, wherein the instructions, when executed, further cause the one or more processors to:

obtaining a plurality of real-time maps from a plurality of aerial vehicles in communication with the ground vehicle;

generating a fourth real-time map based on the first real-time map and each of the plurality of real-time maps; and

transmitting the fourth real-time map to each aerial vehicle of the plurality of aerial vehicles.

40. The non-transitory computer-readable storage medium of claim 32, wherein the location information comprises global navigation satellite system ("GNSS") data received from the aerial vehicle.

41. The non-transitory computer-readable storage medium of claim 32, wherein the location information includes a real-time relative position determined using the first scanning sensor.

42. The non-transitory computer-readable storage medium of claim 32, wherein the ground vehicle is an autonomous vehicle.

43. The non-transitory computer-readable storage medium of claim 32, wherein to obtain a first real-time map based on first scan data acquired from a perspective of the ground vehicle using the first scan sensor, the instructions, when executed, further cause the one or more processors to:

obtaining third scan data from a third scan sensor coupled to the ground vehicle, the first scan sensor comprising a LiDAR sensor and the third scan sensor comprising an imaging sensor; and

generating the first real-time map based on the first scan data and the third scan data using a simultaneous localization and mapping (SLAM) algorithm.

44. The non-transitory computer-readable storage medium of claim 32, wherein to obtain a first real-time map based on first scan data acquired from a perspective of the ground vehicle using the first scan sensor, the instructions, when executed, further cause the one or more processors to:

obtaining the second scan data from the aerial vehicle, the first scan sensor comprising a LiDAR sensor and the second scan sensor comprising an imaging sensor; and

generating the first real-time map based on the first scan data and the second scan data using a simultaneous localization and mapping (SLAM) algorithm.

45. The non-transitory computer-readable storage medium of claim 32, wherein the instructions, when executed, further cause the one or more processors to:

receiving a third map from the aircraft;

generating control data to navigate the aircraft based on the third map; and

transmitting the control data to the aircraft.

46. The non-transitory computer-readable storage medium of claim 32, wherein the instructions, when executed, further cause the one or more processors to:

generating control data to navigate the aircraft based on a third map.

Technical Field

The disclosed embodiments relate generally to techniques for mapping and object detection, and more particularly, but not exclusively, to collaborative mapping between unmanned aerial vehicles and ground vehicles.

Background

Collaborative mapping techniques using multiple vehicles are disclosed. Such a system may include a ground vehicle comprising a first computing device and a first scanning sensor, and an aerial vehicle comprising a second computing device and a second scanning sensor. The ground vehicle may obtain a first real-time map based on first scan data acquired using the first scanning sensor and transmit the first real-time map and location information to the aerial vehicle. The aerial vehicle may receive the first real-time map and the location information from the first computing device, obtain a second real-time map based on second scan data acquired using the second scanning sensor, and obtain a third real-time map based on the first real-time map and the second real-time map.
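By way of example and not limitation, the map-merging operation summarized above can be sketched in a few lines of code. The following Python fragment is a minimal, hypothetical illustration rather than the claimed implementation: it assumes both real-time maps are two-dimensional occupancy grids already expressed in a common coordinate frame, with a non-negative (row, column) offset between their origins known from the exchanged location information, and it resolves the overlapping portion cell by cell.

import numpy as np

def merge_real_time_maps(ground_map, aerial_map, offset_rc):
    """Merge two occupancy-grid maps into a third map.

    ground_map : 2-D array from the ground vehicle (first real-time map).
    aerial_map : 2-D array from the aerial vehicle (second real-time map).
    offset_rc  : non-negative (row, col) position of aerial_map's origin in
                 ground_map's frame, assumed known from the location information.
    Cells hold occupancy probabilities in [0, 1]; 0.5 means "unknown".
    """
    gr, gc = ground_map.shape
    ar, ac = aerial_map.shape
    r0, c0 = offset_rc

    # Size the third map so that it can hold both inputs.
    merged = np.full((max(gr, r0 + ar), max(gc, c0 + ac)), 0.5)

    # Start from the (typically higher-precision) ground-vehicle map.
    merged[:gr, :gc] = ground_map

    # Paste the aerial map; within the overlapping portion keep whichever
    # cell is more informative (farther from the 0.5 "unknown" value).
    region = merged[r0:r0 + ar, c0:c0 + ac]
    take_aerial = np.abs(aerial_map - 0.5) > np.abs(region - 0.5)
    region[take_aerial] = aerial_map[take_aerial]
    return merged

A real system would also need to reconcile resolutions and orientations, and the claims further contemplate offloading the merge to the ground vehicle when the aerial vehicle's available resources fall below a threshold.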

Disclosure of Invention

Techniques for sharing sensor information between multiple vehicles are disclosed. Such a system may comprise: an aircraft comprising a first computing device and a first scanning sensor, and a ground vehicle comprising a second computing device and a second scanning sensor. The aircraft may use the first scanning sensor to obtain first scan data and send the first scan data to the second computing device. The ground vehicle may receive the first scan data from the first computing device, obtain second scan data from the second scanning sensor, identify overlapping portions of the first scan data and the second scan data based on at least one reference object in the scan data, and execute navigation control commands based on one or more roadway objects identified in the overlapping portions of the first scan data and the second scan data.
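By way of example and not limitation, identifying an overlapping portion of two scans using a shared reference object could proceed along the following lines. This Python sketch is purely illustrative and assumes the reference object has already been detected and matched in both scans; it also ignores rotation between the two frames, aligning only by translation.

import numpy as np

def find_overlap_by_reference(ground_points, aerial_points,
                              ref_in_ground, ref_in_aerial):
    """Roughly register two scans using a single shared reference object.

    ground_points, aerial_points : (N, 3) arrays of scan points.
    ref_in_ground, ref_in_aerial : (M, 3) points of the same reference object
        as observed in each scan (assumed already detected and matched).
    Returns the aerial points expressed in the ground frame and a boolean
    mask marking which of them fall inside the ground scan's bounding box,
    i.e. a crude estimate of the overlapping portion.
    """
    # Translation that moves the reference object's centroid in the aerial
    # frame onto its centroid in the ground frame (rotation is ignored here).
    t = ref_in_ground.mean(axis=0) - ref_in_aerial.mean(axis=0)
    aerial_in_ground = aerial_points + t

    lo = ground_points.min(axis=0)
    hi = ground_points.max(axis=0)
    overlap = np.all((aerial_in_ground >= lo) & (aerial_in_ground <= hi), axis=1)
    return aerial_in_ground, overlap

Roadway objects detected within the overlapping region could then be used to generate or veto navigation control commands.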

Drawings

FIG. 1 illustrates an example of an aircraft and a ground vehicle, in accordance with various embodiments.

Fig. 2A-2C illustrate examples of scanned data of a roadway environment obtained from an aircraft and a ground vehicle, in accordance with various embodiments.

FIG. 3 illustrates an example of a scan manager and a detection manager, in accordance with various embodiments.

Fig. 4 illustrates an example of a machine learning model for roadway object detection, in accordance with various embodiments.

Fig. 5 illustrates a flow diagram of a method of sharing sensor information between multiple vehicles in a movable object environment, in accordance with various embodiments.

FIG. 6 illustrates an example of an aircraft and a ground vehicle, in accordance with various embodiments.

FIG. 7 illustrates an example of generating a map of a movable object environment using an aerial vehicle and a ground vehicle, in accordance with various embodiments.

FIG. 8 illustrates an alternative example of generating a map of a movable object environment using an aerial vehicle and a ground vehicle, in accordance with various embodiments.

Fig. 9 illustrates an example of collaborative mapping by an aircraft mapping manager and a ground vehicle mapping manager, in accordance with various embodiments.

FIG. 10 illustrates a flow diagram of a method of collaborative mapping in a movable object environment, in accordance with various embodiments.

FIG. 11 illustrates an example of supporting a movable object interface in a software development environment, in accordance with various embodiments.

FIG. 12 illustrates an example of an unmanned aerial vehicle interface, in accordance with various embodiments.

Fig. 13 illustrates an example of components for an unmanned aerial vehicle in a Software Development Kit (SDK), in accordance with various embodiments.

Detailed Description

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals indicate similar elements. It should be noted that references in this disclosure to "an embodiment," "one embodiment," or "some embodiments" do not necessarily refer to the same embodiments, and such references mean at least one embodiment.

The following description of the invention describes target mapping using a movable object. For simplicity of illustration, an Unmanned Aerial Vehicle (UAV) is typically used as an example of a movable object. It will be apparent to those skilled in the art that other types of movable objects may be used without limitation.

Autonomous driving technology may involve various sensing, decision-making, and execution tasks, such as environmental sensing, path planning, behavior determination, and control execution. With respect to sensing, an autonomous vehicle may analyze its surroundings based on data collected by one or more sensors mounted on the vehicle, including, for example, vision sensors, LiDAR sensors, millimeter-wave radar sensors, ultrasonic sensors, and the like. The sensor data may be analyzed using image processing tools, machine learning techniques, and the like to determine depth information and semantic information that help the vehicle identify surrounding people and objects. Additionally, LiDAR sensors can provide accurate, longer-range distance measurements and positioning data for the vehicle.
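By way of example and not limitation, one way such depth and semantic information could be combined is to project LiDAR points into a camera image whose pixels have already been labeled by a machine learning model. The following Python sketch is illustrative only; the intrinsic matrix, extrinsic transform, and label image are assumed inputs, and the function name is hypothetical.

import numpy as np

def attach_depth_to_labels(points_lidar, K, T_cam_lidar, label_image):
    """Fuse LiDAR depth with per-pixel semantic labels from a camera image.

    points_lidar : (N, 3) LiDAR points in the LiDAR frame.
    K            : (3, 3) camera intrinsic matrix.
    T_cam_lidar  : (4, 4) extrinsic transform from the LiDAR to the camera frame.
    label_image  : (H, W) array of semantic class IDs (e.g., from a CNN).
    Returns (depth, label) pairs for every point that projects into the image.
    """
    # Express the points in the camera frame and keep those in front of it.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Project with the pinhole model.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)

    # Keep only projections that land inside the image.
    h, w = label_image.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    return pts_cam[valid, 2], label_image[v[valid], u[valid]]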

Because the sensors are mounted on the vehicle, their fields of view and viewing angles are limited. For example, in analyzing a roadway, image data may be captured by a front-facing camera, and the perspective of the image data may be transformed by projecting the front view onto a bird's-eye-view image (e.g., an overhead perspective). Such projections may introduce distortions, reducing the accuracy of the image data. The perspective effect causes lane lines and other objects represented in the image data to appear to converge as their distance from the imaging sensor increases. Thus, the extent of the environment in front of the vehicle that can be clearly resolved by the front-facing camera (or other imaging sensor) is limited, and lane markings and other distant objects are often relatively blurred after the perspective transformation. Depending on the type of projection used to obtain the bird's-eye view of the image data, portions of the image representing objects far from the imaging sensor may become more distorted, making it difficult to reliably apply image processing techniques such as Canny edge detection, binary image analysis, and other techniques to identify lane markers and other objects in the image data.
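By way of example and not limitation, the front-view-to-bird's-eye-view projection and edge detection discussed above could be approximated with OpenCV as sketched below. The source points, output size, and Canny thresholds are illustrative assumptions; in practice the source points depend on the camera mounting and calibration.

import cv2
import numpy as np

def front_to_birds_eye(image, src_pts, dst_size=(400, 600)):
    """Warp a front-facing camera image to an approximate bird's-eye view.

    src_pts  : four (x, y) corners of a road region in the source image,
               ordered top-left, top-right, bottom-right, bottom-left;
               these depend on the camera mounting and calibration.
    dst_size : (width, height) of the output top-down image.
    """
    w, h = dst_size
    dst_pts = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(image, H, dst_size)

def detect_lane_edges(birds_eye_image):
    """Edge map for lane-marking candidates. Rows far from the camera are
    typically blurrier after the warp, which is the limitation noted above."""
    gray = cv2.cvtColor(birds_eye_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    return cv2.Canny(blurred, 50, 150)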

In addition, the perspective transformation operation requires that a particular camera be in use, and even then the image must be prepared before it can be transformed. Furthermore, the way the camera is mounted on the vehicle and the current roadway conditions, such as the angle of the roadway, will have a significant impact on the reliability of the transformation and of any analysis based on the transformed image. Moreover, techniques such as those used to obtain binary images require gradient and color thresholds, which may not generalize to most roadway conditions (e.g., adverse weather, roads that have fallen into disrepair, and other conditions that reduce the visibility of roadway objects such as lane markers). All of these analyses also need to be performed quickly; however, conventional techniques can process approximately 4.5 frames per second (FPS), while onboard cameras may capture 30 FPS or more.
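By way of example and not limitation, a combined gradient-and-color threshold of the kind discussed above might look like the following Python sketch. The threshold values are illustrative assumptions and, as noted, fixed values of this kind tend not to hold across varying weather, lighting, and pavement conditions.

import cv2
import numpy as np

def binary_lane_image(bgr, grad_thresh=(30, 100), s_thresh=(120, 255)):
    """Combine a gradient threshold and a color threshold into a binary
    lane-candidate image. The thresholds are illustrative only."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)

    # Gradient threshold: a horizontal Sobel filter responds strongly to the
    # near-vertical lane lines in a front-facing view.
    sobel_x = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))
    sobel_x = np.uint8(255 * sobel_x / (sobel_x.max() + 1e-6))
    grad_bin = (sobel_x >= grad_thresh[0]) & (sobel_x <= grad_thresh[1])

    # Color threshold: the saturation channel of HLS is often (but not always)
    # robust for painted markings.
    s_channel = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)[:, :, 2]
    color_bin = (s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])

    return ((grad_bin | color_bin).astype(np.uint8)) * 255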

In some embodiments, as a car travels along a roadway, it may use a camera to capture images of the roadway environment. These images may include representations of nearby cars, trees, light poles, signs, and other objects. In existing systems, these images can be transformed from a front perspective view to a top view, which can then be used to generate a local map. However, the camera has a limited field of view due to the way it is mounted at the front of the car. When these images are transformed into a top view, the transformation may introduce inaccuracies, such as blurring or other distortions. Furthermore, as described above, the transformation itself requires additional time and processing resources. Due to inaccuracies in the transformed images, the resulting maps generated from them are also less accurate and less useful. This in turn limits the usefulness and/or reliability of features that rely on these maps, such as lane detection and other driving assistance functions. Instead of relying solely on transformed images, embodiments may use images captured by drones or other Unmanned Aerial Vehicles (UAVs), which can capture overhead images directly, without any transformation or the associated inaccuracies. The car may then use its own images and those acquired by the drone to generate a map, reducing or eliminating potential inaccuracies introduced by the transformation. Moreover, maps can be generated more efficiently, without spending time or resources transforming the images.
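By way of example and not limitation, pasting a drone-captured overhead tile into the car's local top-down map could be as simple as the following Python sketch, assuming both rasters share a resolution and the tile's origin in the car's local frame is known from the shared location information; the function and parameter names are hypothetical.

import numpy as np

def paste_overhead_tile(local_map, tile, tile_origin_xy, resolution):
    """Insert a drone-captured overhead tile into the car's local top-down map.

    local_map      : (H, W[, C]) array; the car-centric top-down map.
    tile           : overhead image captured by the drone (same channel layout).
    tile_origin_xy : (x, y) position, in metres, of the tile's top-left corner
                     in the car's local frame, assumed known from the shared
                     location information.
    resolution     : metres per cell, assumed identical for both rasters.
    """
    r0 = int(round(tile_origin_xy[1] / resolution))
    c0 = int(round(tile_origin_xy[0] / resolution))
    if r0 < 0 or c0 < 0:
        return local_map  # tile starts outside the local map; ignore it here

    # Clip the tile so that it stays inside the local map.
    r1 = min(r0 + tile.shape[0], local_map.shape[0])
    c1 = min(c0 + tile.shape[1], local_map.shape[1])
    if r0 >= r1 or c0 >= c1:
        return local_map

    local_map[r0:r1, c0:c1] = tile[:r1 - r0, :c1 - c0]
    return local_map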
