Techniques for sharing drawing data between an unmanned aerial vehicle and a ground vehicle

Document No.: 1358256    Publication date: 2020-07-24

Abstract: This technology, "Techniques for sharing drawing data between an unmanned aerial vehicle and a ground vehicle," was created by 王铭钰 on 2019-03-08. A system for sharing sensor information between a plurality of vehicles includes an aircraft (104) comprising a first computing device and a first scanning sensor (118, 124, 126), and a ground vehicle (110) comprising a second computing device and a second scanning sensor (130, 132, 134). The aircraft (104) may acquire first scan data using the first scanning sensor (118, 124, 126) and transmit the first scan data to the second computing device. The ground vehicle (110) may receive the first scan data from the first computing device; acquire second scan data from the second scanning sensor (130, 132, 134); identify an overlapping portion of the first scan data and the second scan data based on at least one reference object (202) in the scan data; and execute a navigation control command based on one or more road objects identified in the overlapping portion of the first scan data and the second scan data. A method for sharing sensor information, a non-transitory computer-readable storage medium, and a system for generating a map are also disclosed.

1. A system for sharing sensor information between a plurality of vehicles, comprising:

an aircraft comprising a first computing device;

a first scanning sensor coupled to the aerial vehicle;

a ground vehicle comprising a second computing device;

a second scanning sensor coupled to the ground vehicle;

the first computing device includes at least one processor and a scan manager, the scan manager including first instructions that, when executed by the processor, cause the scan manager to:

acquire first scan data from the first scanning sensor; and

send the first scan data to the second computing device; and

the second computing device includes at least one processor and a detection manager, the detection manager including second instructions that, when executed by the processor, cause the detection manager to:

receive the first scan data from the first computing device;

acquire second scan data from the second scanning sensor;

identify overlapping portions of the first scan data and the second scan data based on at least one reference object in the first scan data and the second scan data; and

execute a navigation control command based on one or more road objects identified in an overlapping portion of the first scan data and the second scan data.
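(Illustrative note, not part of the claims.) The overlap-identification step recited above can be sketched as follows: the UAV's scan data is aligned to the ground vehicle's frame using a single shared reference object, and the overlapping portion is taken as the intersection of the two bounding boxes. All function and variable names are hypothetical, and the rotation between frames is omitted for brevity.

import numpy as np

def overlap_region(uav_points, ground_points, ref_in_uav, ref_in_ground):
    # uav_points, ground_points: (N, 2) arrays of x/y positions in each
    # vehicle's local frame; ref_in_*: (2,) position of the same reference
    # object (e.g., a traffic sign) as observed in each frame.
    offset = ref_in_ground - ref_in_uav          # translate UAV frame onto the vehicle frame
    uav_in_ground = uav_points + offset

    # The overlapping portion is the intersection of the two bounding boxes.
    lo = np.maximum(uav_in_ground.min(axis=0), ground_points.min(axis=0))
    hi = np.minimum(uav_in_ground.max(axis=0), ground_points.max(axis=0))
    if np.any(hi <= lo):
        return None                              # the scans do not overlap
    inside = lambda pts: np.all((pts >= lo) & (pts <= hi), axis=1)
    return uav_in_ground[inside(uav_in_ground)], ground_points[inside(ground_points)]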

2. The system of claim 1, wherein the first instructions, when executed, further cause the scan manager to:

receive a movement command from the second computing device on the ground vehicle, the movement command comprising a location;

move the aerial vehicle to the location; and

acquire the first scan data at the location.

3. The system of claim 2, wherein the location comprises at least one of a location coordinate or a position relative to the ground vehicle or another object.

4. The system of claim 1, wherein the at least one reference object comprises a representation of at least one of the one or more road objects represented in the first scan data and the second scan data.

5. The system of claim 1, wherein the one or more road objects comprise lanes of travel represented in overlapping portions of the first scan data and the second scan data.

6. The system of claim 5, wherein the second instructions, when executed, further cause the detection manager to:

transform the first scan data and the second scan data into an intermediate format;

perform object recognition in the intermediate format using a machine learning model to identify the one or more road objects, wherein the machine learning model is trained to identify the one or more road objects; and

determine the navigation control command based on the object recognition.

7. The system of claim 6, wherein the input to the machine learning model includes a number of input patches from the intermediate format, a number of channels representing a color and a depth associated with each input patch, and a height and a width of the input patches, and wherein the output of the machine learning model includes one or more confidence scores for each input patch, the one or more confidence scores being associated with the one or more road objects.
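(Illustrative note, not part of the claims.) The input/output format recited in claim 7 can be pictured with a small, hypothetical patch classifier: the input tensor carries color and depth channels for each patch, and the output is one confidence score per road-object class per patch. The layer sizes, class list, and patch dimensions below are illustrative assumptions only.

import torch
import torch.nn as nn

NUM_CLASSES = 4            # e.g., lane line, crosswalk, vehicle, other (assumed classes)
CHANNELS = 4               # R, G, B color channels plus one depth channel
PATCH_H, PATCH_W = 64, 64  # height and width of each input patch

model = nn.Sequential(
    nn.Conv2d(CHANNELS, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_CLASSES), nn.Sigmoid(),    # one confidence score per class
)

patches = torch.rand(8, CHANNELS, PATCH_H, PATCH_W)   # 8 patches from the intermediate format
confidences = model(patches)                           # shape: (8, NUM_CLASSES)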

8. The system of claim 1, further comprising:

a plurality of ground vehicles in communication with the aerial vehicle, the plurality of ground vehicles each receiving the first scan data from the aerial vehicle.

9. The system of claim 1, further comprising:

a plurality of aerial vehicles in communication with the ground vehicle, wherein each aerial vehicle in the plurality of aerial vehicles acquires third scan data and sends the third scan data to the ground vehicle.

10. The system of claim 1, wherein the first scanning sensor comprises a first LiDAR sensor and the second scanning sensor comprises a second LiDAR sensor.

11. The system of claim 10, wherein the first scan data comprises first mapping data generated based on point cloud data collected by the first scan sensor, and wherein the second scan data comprises second mapping data generated based on point cloud data collected by the second scan sensor.

12. The system of claim 11, wherein the second instructions, when executed, further cause the detection manager to:

combine the first mapping data with the second mapping data to increase a coverage area of a local map maintained by the ground vehicle.
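(Illustrative note, not part of the claims.) A minimal sketch of the map-extension step of claim 12, assuming the rigid transform (R, t) between the UAV's frame and the ground vehicle's frame is already known (e.g., from GNSS/IMU data or from the shared reference object); the names are illustrative.

import numpy as np

def extend_local_map(local_map, uav_cloud, R, t):
    # local_map, uav_cloud: (N, 3) LiDAR point clouds; R: (3, 3) rotation; t: (3,) translation.
    uav_in_vehicle_frame = uav_cloud @ R.T + t   # express UAV points in the vehicle's frame
    # Concatenating the clouds enlarges the area covered by the local map;
    # a voxel downsample could follow to keep the merged map compact (omitted).
    return np.vstack([local_map, uav_in_vehicle_frame])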

13. The system of claim 11, wherein the second instructions, when executed, further cause the detection manager to:

detect a traffic condition affecting the ground vehicle; and

in response to the detected traffic condition, send a command to the first computing device on the aircraft to collect the first mapping data.

14. A method of sharing sensor information in an aircraft environment, comprising:

receiving, by a second computing device included in a ground vehicle, first scan data from a first computing device coupled with an aerial vehicle, wherein the first scan data is acquired using a first scan sensor coupled with the aerial vehicle;

obtaining second scan data from a second scan sensor coupled with the ground vehicle;

identifying overlapping portions of the first scan data and the second scan data based on at least one reference object in the first scan data and the second scan data; and

executing a navigation control command based on one or more road objects identified in an overlapping portion of the first scan data and the second scan data.

15. The method of claim 14, further comprising:

receiving a movement command from a second computing device on the ground vehicle, the movement command comprising a location;

moving the aerial vehicle to the location; and

acquiring the first scan data at the location.

16. The method of claim 15, wherein the location comprises at least one of a location coordinate or a position relative to the ground vehicle or another object.

17. The method of claim 14, wherein the at least one reference object comprises a representation of at least one of the one or more road objects represented in the first scan data and the second scan data.

18. The method of claim 14, wherein the one or more road objects comprise lanes of travel represented in overlapping portions of the first scan data and the second scan data.

19. The method of claim 18, further comprising:

transforming the first scan data and the second scan data into an intermediate format;

performing object recognition in the intermediate format using a machine learning model to identify the one or more road objects, wherein the machine learning model is trained to identify the one or more road objects; and

determining the navigation control command based on the object identification.

20. The method of claim 19, wherein the input to the machine learning model includes a number of input patches from the intermediate format, a number of channels representing a color and a depth associated with each input patch, and a height and a width of the input patches, and wherein the output of the machine learning model includes one or more confidence scores for each input patch, the one or more confidence scores being associated with the one or more road objects.

21. The method of claim 14, further comprising:

receiving, by a plurality of ground vehicles in communication with the aircraft, the first scan data.

22. The method of claim 14, further comprising:

obtaining third scan data by a plurality of aerial vehicles in communication with the ground vehicle; and

sending the third scan data to the ground vehicle.

23. The method of claim 14, wherein the first scanning sensor comprises a first LiDAR sensor and the second scanning sensor comprises a second LiDAR sensor.

24. The method of claim 23, wherein the first scan data comprises first mapping data generated based on point cloud data collected by the first scan sensor, and wherein the second scan data comprises second mapping data generated based on point cloud data collected by the second scan sensor.

25. The method of claim 24, further comprising:

combining the first mapping data with the second mapping data to increase a coverage area of a local map maintained by the ground vehicle.

26. The method of claim 24, further comprising:

detecting a traffic condition affecting the ground vehicle; and

in response to the detected traffic condition, sending a command to the first computing device on the aircraft to collect the first mapping data.

27. A non-transitory computer-readable storage medium comprising instructions stored thereon, which when executed by one or more processors, cause the one or more processors to:

receive, by a second computing device included in a ground vehicle, first scan data from a first computing device coupled with an aerial vehicle, wherein the first scan data is acquired using a first scan sensor coupled with the aerial vehicle;

obtain second scan data from a second scan sensor coupled with the ground vehicle;

identify overlapping portions of the first scan data and the second scan data based on at least one reference object in the first scan data and the second scan data; and

execute a navigation control command based on one or more road objects identified in an overlapping portion of the first scan data and the second scan data.

28. The non-transitory computer-readable storage medium of claim 27, wherein the instructions, when executed, further cause the one or more processors to:

receive a movement command from the second computing device on the ground vehicle, the movement command comprising a location;

move the aerial vehicle to the location; and

acquire the first scan data at the location.

29. The non-transitory computer-readable storage medium of claim 28, wherein the location comprises at least one of a location coordinate or a position relative to the ground vehicle or another object.

30. The non-transitory computer readable storage medium of claim 27, wherein the at least one reference object comprises a representation of at least one of the one or more road objects represented in the first scan data and the second scan data.

31. The non-transitory computer readable storage medium of claim 27, wherein the one or more road objects include a lane of travel represented in an overlapping portion of the first scan data and the second scan data.

32. The non-transitory computer-readable storage medium of claim 31, wherein the instructions, when executed, further cause the one or more processors to:

transform the first scan data and the second scan data into an intermediate format;

perform object recognition in the intermediate format using a machine learning model to identify the one or more road objects, wherein the machine learning model is trained to identify the one or more road objects; and

determine the navigation control command based on the object recognition.

33. The non-transitory computer-readable storage medium of claim 32, wherein the input to the machine learning model includes a number of input patches from the intermediate format, a number of channels, and a height and width of the input patches, the number of channels representing a color and depth associated with each input patch, and wherein the output of the machine learning model includes one or more confidence scores for each input patch, the one or more confidence scores associated with the one or more road objects.

34. The non-transitory computer-readable storage medium of claim 27, wherein the instructions, when executed, further cause the one or more processors to:

receive, by a plurality of ground vehicles in communication with the aircraft, the first scan data.

35. The non-transitory computer-readable storage medium of claim 27, wherein the instructions, when executed, further cause the one or more processors to:

obtain, by a plurality of aerial vehicles in communication with the ground vehicle, third scan data; and

send the third scan data to the ground vehicle.

36. The non-transitory computer-readable storage medium of claim 27, wherein the first scanning sensor comprises a first LiDAR sensor and the second scanning sensor comprises a second LiDAR sensor.

37. The non-transitory computer-readable storage medium of claim 36, wherein the first scan data comprises first mapping data generated based on point cloud data collected by the first scan sensor, and wherein the second scan data comprises second mapping data generated based on point cloud data collected by the second scan sensor.

38. The non-transitory computer-readable storage medium of claim 37, wherein the instructions, when executed, further cause the one or more processors to:

combine the first mapping data with the second mapping data to increase a coverage area of a local map maintained by the ground vehicle.

39. The non-transitory computer-readable storage medium of claim 37, wherein the instructions, when executed, further cause the one or more processors to:

detect a traffic condition affecting the ground vehicle; and

in response to the detected traffic condition, send a command to the first computing device on the aircraft to collect the first mapping data.

40. A system for generating a map based on sensor information from a plurality of vehicles, comprising:

an aircraft comprising a first computing device;

a first scanning sensor coupled to the aerial vehicle;

a ground vehicle comprising a second computing device;

a second scanning sensor coupled to the ground vehicle;

the first computing device includes at least one processor and a scan manager, the scan manager including first instructions that, when executed by the processor, cause the scan manager to:

acquire first scan data from the first scanning sensor at a top-down perspective; and

send the first scan data to the second computing device; and

the second computing device includes at least one processor and a detection manager, the detection manager including second instructions that, when executed by the processor, cause the detection manager to:

receive the first scan data from the first computing device;

identify a plurality of road objects in the first scan data;

map locations associated with the plurality of road objects in the first scan data to second scan data acquired from the second scanning sensor at a forward perspective; and

execute a navigation control command based on the plurality of road objects mapped to the second scan data.

41. The system of claim 40, wherein to map the locations associated with the plurality of road objects in the first scan data to the second scan data acquired from the second scanning sensor at the forward perspective, the second instructions, when executed, further cause the detection manager to:

transform the second scan data from the forward perspective to the top-down perspective to generate transformed second scan data;

calibrate the transformed second scan data based on a reference object represented in the transformed second scan data and the first scan data;

identify locations associated with the plurality of road objects in the transformed second scan data; and

convert the locations associated with the plurality of road objects in the transformed second scan data to the forward perspective by performing an inverse perspective transformation.
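(Illustrative note, not part of the claims.) The perspective mapping recited in claim 41 can be sketched with OpenCV homographies. The four ground-plane correspondences below are placeholder values that would, in practice, come from calibration against a reference object visible in both views.

import cv2
import numpy as np

# Placeholder correspondences: ground-plane points in the forward-view image
# and the same points in the UAV's top-down view (pixel coordinates).
forward_pts = np.float32([[420, 600], [860, 600], [1100, 950], [180, 950]])
topdown_pts = np.float32([[300, 200], [500, 200], [500, 700], [300, 700]])

H = cv2.getPerspectiveTransform(forward_pts, topdown_pts)   # forward -> top-down

def to_topdown(forward_image):
    # Transform the ground vehicle's forward-view image into the top-down view.
    return cv2.warpPerspective(forward_image, H, (800, 800))

def objects_back_to_forward(topdown_xy):
    # Convert road-object locations found in the top-down view back to the
    # forward perspective with the inverse homography (the final step of claim 41).
    pts = np.asarray(topdown_xy, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, np.linalg.inv(H)).reshape(-1, 2)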

42. The system of claim 40, wherein the first scanning sensor comprises an imaging sensor and the second scanning sensor comprises an imaging sensor.

43. The system of claim 42, wherein the first scan data comprises first mapping data generated based on image data collected by the first scan sensor, and wherein the second scan data comprises second mapping data generated based on image data collected by the second scan sensor.

44. The system of claim 43, wherein the second instructions, when executed, further cause the detection manager to:

combine the first mapping data with the second mapping data to increase a coverage area of a local map maintained by the ground vehicle.

45. The system of claim 40, wherein the first scanning sensor comprises a first LiDAR sensor and the second scanning sensor comprises a second LiDAR sensor.

46. The system of claim 45, wherein the first scan data comprises first mapping data generated based on point cloud data collected by the first scan sensor, and wherein the second scan data comprises second mapping data generated based on point cloud data collected by the second scan sensor.

47. The system of claim 46, wherein the second instructions, when executed, further cause the detection manager to:

combine the first mapping data with the second mapping data to increase a coverage area of a local map maintained by the ground vehicle.

48. A system for generating a map based on sensor information from a plurality of vehicles, comprising:

an aircraft comprising a first computing device;

a first scanning sensor coupled to the aerial vehicle;

a ground vehicle comprising a second computing device;

a second scanning sensor coupled to the ground vehicle;

the first computing device includes at least one processor and a scan manager, the scan manager including first instructions that, when executed by the processor, cause the scan manager to:

acquire first scan data from the first scanning sensor at a top-down perspective; and

send the first scan data to the second computing device; and

the second computing device includes at least one processor and a detection manager, the detection manager including second instructions that, when executed by the processor, cause the detection manager to:

receive the first scan data from the first computing device;

generate a local map based at least in part on the first scan data; and

execute a navigation control command based at least in part on the local map.

49. The system of claim 48, wherein the second instructions, when executed, further cause the detection manager to:

acquire second scan data from the second scanning sensor, wherein the second scan data is acquired at a forward perspective.

50. The system of claim 49, wherein to generate a local map based at least in part on the first scan data, the second instructions, when executed, further cause the detection manager to:

transform the second scan data from the forward perspective to the top-down perspective based at least in part on the first scan data to obtain transformed second scan data; and

generate the local map based on the transformed second scan data.

51. The system of claim 49, wherein to generate a local map based at least in part on the first scan data, the second instructions, when executed, further cause the detection manager to:

identify a portion of the first scan data corresponding to a portion of the second scan data, wherein the portion of the first scan data and the portion of the second scan data comprise representations of a same area of a road environment; and

generate the local map based on the identified portion of the first scan data.

Technical Field

The disclosed embodiments relate generally to techniques for mapping and object detection, and more particularly, but not exclusively, to techniques for sharing mapping data between an unmanned aerial vehicle and a ground vehicle.

Background

Movable objects, including aircraft such as Unmanned Aerial Vehicles (UAVs) and ground vehicles such as autonomous and piloted vehicles, may be used to perform surveillance, reconnaissance, and exploration tasks for various applications. The movable object may include a payload carrying various sensors that enable the movable object to capture sensor data as it moves. The captured sensor data may be viewed on a client device (e.g., a client device in communication with the movable object via a remote controller, remote server, or other computing device). Because of how the sensors are mounted to the movable object and how the movable object is positioned, the sensor data that can be captured may be limited (e.g., in field of view, viewing angle, etc.). Although perspective transformations may be used to change the perspective of the sensor data, such transformations may require intensive processing and introduce distortion into the transformed data, which limits the reliability of the transformed data in various applications.

Disclosure of Invention

Techniques for sharing sensor information between multiple vehicles are disclosed. A system for sharing sensor information between a plurality of vehicles may include: an aircraft including a first computing device and a first scanning sensor; and a ground vehicle including a second computing device and a second scanning sensor. The aircraft may acquire first scan data using the first scanning sensor and transmit the first scan data to the second computing device. The ground vehicle may receive the first scan data from the first computing device; acquire second scan data from the second scanning sensor; identify an overlapping portion of the first scan data and the second scan data based on at least one reference object in the scan data; and execute a navigation control command based on one or more road objects identified in the overlapping portion of the first scan data and the second scan data.

Drawings

FIG. 1 shows an example of an aircraft and a ground vehicle, in accordance with various embodiments.

FIGS. 2A-2C illustrate examples of scan data of a road environment acquired from an aircraft and a ground vehicle, in accordance with various embodiments.

FIG. 3 illustrates an example of a scan manager and a detection manager, in accordance with various embodiments.

FIG. 4 illustrates an example of a machine learning model for road object detection, in accordance with various embodiments.

FIG. 5 illustrates a flow diagram of a method of sharing sensor information between multiple vehicles in a movable object environment, in accordance with various embodiments.

FIG. 6 illustrates an example of an aircraft and a ground vehicle, in accordance with various embodiments.

FIG. 7 illustrates an example of generating a map of a movable object environment using an aircraft and a ground vehicle, in accordance with various embodiments.

FIG. 8 illustrates an alternative example of generating a map of a movable object environment using an aircraft and a ground vehicle, in accordance with various embodiments.

FIG. 9 illustrates an example of collaborative mapping by an aircraft mapping manager and a ground vehicle mapping manager, in accordance with various embodiments.

FIG. 10 illustrates a flow diagram of a method of collaborative mapping in a movable object environment, in accordance with various embodiments.

FIG. 11 illustrates an example of supporting a movable object interface in a software development environment, in accordance with various embodiments.

FIG. 12 illustrates an example of an unmanned aerial vehicle interface, in accordance with various embodiments.

FIG. 13 illustrates an example of components for an unmanned aerial vehicle in a Software Development Kit (SDK), in accordance with various embodiments.

Detailed Description

The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements. It should be noted that references in this disclosure to "an embodiment," "one embodiment," or "some embodiments" do not necessarily refer to the same embodiment, and such references mean at least one embodiment.

The following description describes techniques for sharing sensor data between movable objects. For simplicity of illustration, an Unmanned Aerial Vehicle (UAV) is typically used as an example of a movable object. It will be apparent to those skilled in the art that other types of movable objects may be used without limitation.

With respect to sensing, an autonomous vehicle may analyze its surroundings based on data collected by one or more sensors mounted on the vehicle, including, for example, vision sensors, LiDAR sensors, millimeter-wave radar sensors, and ultrasonic sensors. The sensor data may be analyzed using image processing tools, machine learning techniques, and the like to determine depth information and semantic information that help the vehicle identify surrounding people and objects.

Because the sensors are mounted to the vehicle, their field of view and viewing angle are limited. For example, when analyzing a driving lane, image data may be captured by a front-facing camera, and the perspective of the image data may be transformed by projecting the front view onto a bird's-eye view (i.e., a top-down perspective). Such projections introduce distortion, resulting in a loss of accuracy in the image data. Because of the perspective effect, lane lines and other objects represented in the image data appear to converge the farther they are from the imaging sensor. As a result, the distance ahead of the vehicle over which the front camera (or other imaging sensor) can clearly resolve the environment is limited, and lane markings and other distant objects are often relatively blurred after the perspective change. Depending on the type of projection used to acquire the bird's-eye view, the portions of the image representing objects farther from the imaging sensor may become more distorted, making it difficult to reliably apply image processing techniques such as Canny edge detection, binary image analysis, and other techniques to identify lane markings and other objects in the image data.
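For reference, a minimal sketch of the conventional single-camera pipeline discussed above: warp the forward view to a bird's-eye view with a homography H (assumed already computed) and run Canny edge detection to look for lane markings. The fixed thresholds and output size are placeholders, and the stretching of distant image regions is exactly the weakness described in this paragraph.

import cv2

def detect_lane_edges(forward_image, H, out_size=(600, 800)):
    # Inverse-perspective-map the forward camera image to a bird's-eye view.
    birdseye = cv2.warpPerspective(forward_image, H, out_size)
    gray = cv2.cvtColor(birdseye, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Fixed Canny thresholds rarely suit all weather and road-surface conditions.
    return cv2.Canny(blurred, 50, 150)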

Furthermore, the perspective transformation operation requires that a particular camera be active and, even so, the image must be prepared before it can be transformed. The way the camera is mounted on the vehicle and the current road conditions (e.g., the road angle) also have a significant impact on the reliability of the transformation and of any analysis of the transformed image. In addition, techniques such as those used to acquire binary images require gradient and color thresholds that may not be applicable to most road conditions (e.g., weather conditions, roads that have been repaired, and other conditions that reduce the visibility of certain road objects such as lane markings). All such analyses must also be performed quickly, yet conventional techniques can process approximately 4.5 frames per second (FPS), while onboard cameras may capture 30 FPS or more.

In some embodiments, the vehicle may use a camera to capture images of the road environment as it travels along the road. These images may include representations of other nearby vehicles, trees, light poles, signs, and other nearby objects. In existing systems, these images can be transformed from a forward perspective to an overhead view, which can then be used to generate a local map. However, because of the way the camera is mounted at the front of the vehicle, its field of view is limited. When these images are transformed into a top-down view, the transformation introduces inaccuracies such as blurring or other distortion, and, as described above, the transformation itself requires additional time and processing resources. Because of the inaccuracies in the transformed images, the maps generated from them are also less accurate and less usable, which limits the availability and reliability of functions that rely on those maps, such as lane detection and other driving-assistance functions. As an alternative to relying solely on transformed images, embodiments may use images captured by a drone or other Unmanned Aerial Vehicle (UAV), which can capture an overhead image directly, without any transformation or the associated inaccuracies. The vehicle may then generate a map using both its own images and the images collected by the drone, thereby reducing or eliminating the potential inaccuracies introduced by the transformation. The map can also be generated more efficiently, without spending time or resources on transforming the images.
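As a rough illustration of the fusion idea above, the sketch below pastes a distortion-free top-down tile captured by the UAV into the ground vehicle's local top-down map at an offset assumed to be known from the vehicles' relative positions; all names and coordinates are hypothetical.

import numpy as np

def fuse_uav_tile(local_map, uav_tile, row_off, col_off):
    # local_map: (H, W, 3) top-down map image; uav_tile: (h, w, 3) UAV overhead image.
    fused = local_map.copy()
    h = min(uav_tile.shape[0], fused.shape[0] - row_off)
    w = min(uav_tile.shape[1], fused.shape[1] - col_off)
    # Overwrite the corresponding region with directly captured overhead pixels
    # instead of a warped (and blurred) forward-camera view.
    fused[row_off:row_off + h, col_off:col_off + w] = uav_tile[:h, :w]
    return fused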
