System and method for enhanced collision avoidance on logistics ground support equipment using multi-sensor detection fusion
Reading note: This technology, "System and method for enhanced collision avoidance on logistics ground support equipment using multi-sensor detection fusion," was created on 2019-02-26 by J·E·巴尔, R·F·布希五世, L·D·卡格莱, C·S·达文波特, J·R·加福德, T·J·, and its main content includes: An enhanced system and method for collision avoidance on a mobile industrial vehicle, using multi-sensor data fusion, for high value assets having reflective beacons disposed around them. The system has: a sensing processing system with LiDAR and camera sensors; and a multi-processor module responsive to the different sensors. The sensing processing system fuses the different sensor data to locate the reflective beacons. A model predictive controller on the vehicle determines possible control solutions, wherein each control solution defines a threshold allowable speed of the vehicle at a discrete time based on an estimated path to a breakthrough point projected from the reflective beacons, and then identifies an optimal one of the control solutions, associated with an optimal threshold allowable speed, based on a performance cost function. The system has a vehicle actuator configured to respond when the vehicle exceeds the optimal threshold allowable speed and to alter vehicle movement to avoid a collision.
1. A method for enhanced collision avoidance for high value assets based on multi-sensor data fusion by a mobile industrial vehicle, the high value assets having one or more reflective beacons disposed relative to the high value assets, the method comprising the steps of:
(a) detecting one or more reflective beacons relative to a mobile industrial vehicle with a LiDAR sensor on the mobile industrial vehicle;
(b) detecting one or more objects relative to the mobile industrial vehicle with a camera sensor on the mobile industrial vehicle;
(c) fusing, by a sensor processing system on the mobile industrial vehicle, sensor data detected by each of the LiDAR sensor and the camera sensor to identify relative positions of the one or more reflective beacons based on a multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data;
(d) determining, by a model predictive controller on the mobile industrial vehicle, a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from a verified relative position of the one or more reflective beacons;
(e) identifying, by the model predictive controller, one of the control solutions as an optimal solution based on a performance cost function, wherein the optimal one of the control solutions is associated with an optimal threshold allowable speed; and
(f) when the mobile industrial vehicle exceeds the optimal threshold allowable speed, responsively actuating, by a vehicle actuation system on the mobile industrial vehicle, a vehicle speed control element to cause the mobile industrial vehicle to alter its movement operation within a time window and achieve a desired movement operation relative to a current speed of the mobile industrial vehicle.
2. The method of claim 1, wherein the mobile industrial vehicle comprises a powered vehicle and a plurality of towed vehicles continuously linked with the powered vehicle; and
wherein the step of determining the plurality of control solutions comprises: determining, by a model predictive controller on the mobile industrial vehicle, the plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time in time/space based on an estimated path of the powered vehicle and the towed vehicles to a breakthrough point projected radially from the verified relative position of the one or more reflective beacons.
3. The method of claim 2, wherein the paths of the powered vehicle and the towed vehicles are predicted by the model predictive controller without actively detecting the position of any towed vehicle that follows the powered vehicle.
4. The method of claim 1, wherein the fusing step (c) comprises:
determining one or more bounding boxes based on sensor data generated by the camera sensor when one or more objects are detected;
determining a mapping space based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;
projecting the determined one or more bounding boxes into the determined mapping space; and
comparing the determined one or more bounding boxes with objects detected in the mapping space to verify the relative position of the one or more reflective beacons.
5. The method of claim 4, wherein the step of projecting the determined one or more bounding boxes into the determined mapping space is performed by a sensor processing system using a convolutional neural network.
6. The method of claim 1, wherein the fusing step (c) comprises:
determining one or more bounding boxes and camera confidence scores based on sensor data generated by the camera sensor when one or more objects are detected;
determining a mapping space and a LiDAR confidence score based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;
projecting the determined one or more bounding boxes into the determined mapping space to identify relative positions of the one or more objects, and determining a final confidence score based on the camera confidence score and the LiDAR confidence score;
when the final confidence score for a particular one of the one or more objects is below a confidence threshold, disregarding the identified relative position of the particular one of the one or more objects; and
comparing the determined one or more bounding boxes with the objects detected in the mapping space to verify the relative positions of the one or more objects that are not disregarded based on their respective final confidence scores.
7. The method of claim 6, wherein the step of disregarding the identified relative position of the one or more objects at least when the final confidence score is below the confidence threshold is performed by fuzzy logic within the sensor processing system.
8. The method of claim 1, further comprising the step of: deploying one or more reflective beacons relative to the high value assets.
9. The method of claim 8, wherein deploying one or more reflective beacons with respect to a high value asset comprises: placing one or more reflective beacons adjacent to the high value asset.
10. The method of claim 8, wherein deploying one or more reflective beacons with respect to a high value asset comprises: actuating at least one of the one or more reflective beacons from a stowed position to a deployed active position.
11. The method of claim 8, wherein deploying one or more reflective beacons with respect to a high value asset comprises: actuating at least one of the one or more reflective beacons from a stowed position on the high value asset to a deployed active position on the high value asset.
12. The method of claim 1, wherein the step of responsively actuating a vehicle speed control element comprises: actuating a throttle as a vehicle speed control element on the mobile industrial vehicle.
13. The method of claim 1, wherein the step of responsively actuating a vehicle speed control element comprises: actuating a brake as a vehicle speed control element on the mobile industrial vehicle.
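Steps (d) through (f) of claim 1 — choosing the minimum-cost control solution and actuating a speed control element only when the vehicle exceeds its threshold allowable speed — can be sketched in a few lines of Python. This is an illustrative sketch only, not the claimed implementation: the `ControlSolution` container and the `"brake"`/`"maintain"` actions are assumptions introduced here.

```python
from dataclasses import dataclass

@dataclass
class ControlSolution:
    threshold_speed: float  # threshold allowable speed (m/s) at a discrete time
    cost: float             # value of the performance cost function

def optimal_solution(solutions):
    # Step (e): identify the control solution that minimizes the
    # performance cost function.
    return min(solutions, key=lambda s: s.cost)

def speed_control_action(current_speed, solutions):
    # Step (f): actuate a speed control element (throttle or brake, per
    # claims 12-13) only when the current speed exceeds the optimal
    # threshold allowable speed.
    best = optimal_solution(solutions)
    return "brake" if current_speed > best.threshold_speed else "maintain"
```

In practice the controller would re-run this selection at every discrete time step as new solutions are computed.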
14. An enhanced system for collision avoidance of high value assets based on multi-sensor data fusion by a mobile industrial vehicle, the high value assets having one or more reflective beacons disposed relative to the high value assets, the system comprising:
a sensing processing system disposed on a mobile industrial vehicle, the sensing processing system further comprising:
a LiDAR sensor mounted in a forward orientation to detect one or more reflective beacons in front of the mobile industrial vehicle,
a camera sensor mounted in a forward orientation to detect one or more objects in front of the mobile industrial vehicle,
a multi-processor module responsive to input from each of the LiDAR sensor and the camera sensor and operable to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of one or more reflective beacons based on a multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data;
a model predictive controller disposed on the mobile industrial vehicle, the model predictive controller configured by being programmatically operable to:
determine a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of the one or more reflective beacons;
identify one of the control solutions as an optimal solution based on a performance cost function, wherein the optimal one of the control solutions is associated with an optimal threshold allowable speed; and
a vehicle actuation system comprising at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds the optimal threshold allowable speed by: causing the mobile industrial vehicle to alter its movement operation to avoid a collision with the high value asset.
15. The system of claim 14, wherein the mobile industrial vehicle comprises a powered vehicle and a plurality of towed vehicles continuously linked with the powered vehicle; and
wherein the model predictive controller is further configured by being further programmatically operable to: determine the plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at discrete time instances in time/space based on an estimated path of the powered vehicle and the towed vehicles to a breakthrough point projected radially from the verified relative position of the one or more reflective beacons.
16. The system of claim 15, wherein the paths of the powered vehicle and the towed vehicles are predicted by the model predictive controller without actively detecting the position of any towed vehicle that follows the powered vehicle.
17. The system of claim 14, wherein the multi-processor module of the sensing processing system is operatively configured to fuse the sensor data detected by each of the LiDAR sensor and the camera sensor to identify the relative position of the one or more reflective beacons by being programmatically operable to:
determine one or more bounding boxes based on sensor data generated by the camera sensor when one or more objects are detected;
determine a mapping space based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;
project the determined one or more bounding boxes into the determined mapping space; and
compare the determined one or more bounding boxes with objects detected in the mapping space to verify the relative position of the one or more reflective beacons.
18. The system of claim 17, wherein the multi-processor module of the sensing processing system is operably configured to: project the determined one or more bounding boxes into the determined mapping space using a convolutional neural network.
19. The system of claim 14, wherein the multi-processor module of the sensing processing system is operatively configured to fuse the sensor data detected by each of the LiDAR sensor and the camera sensor to identify the relative position of the one or more reflective beacons by being programmatically operable to:
determine one or more bounding boxes and camera confidence scores based on sensor data generated by the camera sensor when one or more objects are detected;
determine a mapping space and a LiDAR confidence score based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;
project the determined one or more bounding boxes into the determined mapping space to identify relative positions of the one or more objects, and determine a final confidence score based on the camera confidence score and the LiDAR confidence score;
when the final confidence score for a particular one of the one or more objects is below a confidence threshold, disregard the identified relative position of the particular one of the one or more objects; and
compare the determined one or more bounding boxes with the objects detected in the mapping space to verify the relative positions of the one or more objects that are not disregarded based on their respective final confidence scores.
20. The system of claim 14, wherein the multi-processor module of the sensing processing system is operatively configured to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify the relative position of the one or more reflective beacons, the fusing by using fuzzy logic that is programmatically operable to:
determine one or more bounding boxes and camera confidence scores based on sensor data generated by the camera sensor when one or more objects are detected;
determine a mapping space and a LiDAR confidence score based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;
project the determined one or more bounding boxes into the determined mapping space to identify relative positions of the one or more objects, and determine a final confidence score based on the camera confidence score and the LiDAR confidence score;
when the final confidence score for a particular one of the one or more objects is below a confidence threshold, disregard the identified relative position of the particular one of the one or more objects; and
compare the determined one or more bounding boxes with the objects detected in the mapping space to verify the relative positions of the one or more objects that are not disregarded based on their respective final confidence scores.
21. The system of claim 14, wherein each of the one or more reflective beacons includes a base support and a vertical rod attached to the base support, wherein the vertical rod includes a reflective material disposed along a length of the vertical rod.
22. The system of claim 14, wherein at least one of the one or more reflective beacons includes an integrated reflective beacon as part of a high value asset, wherein the integrated reflective beacon is actuated from a stowed position in which the integrated reflective beacon is not visible to a deployed active position in which the integrated reflective beacon is visible.
23. The system of claim 14, wherein the vehicle actuator comprises a throttle disposed on the mobile industrial vehicle.
24. The system of claim 14, wherein the vehicle actuator comprises a brake disposed on the mobile industrial vehicle.
25. The system of claim 14, wherein the vehicle actuation system further comprises:
a vehicle monitor that monitors a current speed of the mobile industrial vehicle;
a feedback control system that responsively actuates the vehicle actuator to cause the mobile industrial vehicle to alter its movement operation within a predetermined time window when the monitored speed of the mobile industrial vehicle exceeds the optimal threshold allowable speed.
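The fusion recited in claims 4-7 and 17-20 can be illustrated with a small sketch. The claims project camera bounding boxes into the LiDAR mapping space; the sketch below performs the equivalent membership check in the opposite direction, projecting a LiDAR return into the image plane with a pinhole model and then combining the two confidence scores with a simple average. The intrinsics `fx`, `fy`, `cx`, `cy`, the averaging rule, and the 0.6 threshold are placeholder assumptions, not values from the disclosure.

```python
def project_to_image(x, y, z, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    # Pinhole projection of a LiDAR return, given in the vehicle frame
    # (x forward, y left, z up, assuming x > 0), into pixel coordinates.
    u = fx * (-y / x) + cx
    v = fy * (-z / x) + cy
    return u, v

def in_box(u, v, box):
    # box = (u_min, v_min, u_max, v_max), a camera-detector bounding box.
    u0, v0, u1, v1 = box
    return u0 <= u <= u1 and v0 <= v <= v1

def fuse_detection(beacon_xyz, box, camera_conf, lidar_conf, threshold=0.6):
    # Verify a beacon's relative position by checking that its projection
    # falls inside the camera bounding box, then combine the two confidence
    # scores; a detection whose final score is below the threshold is
    # disregarded (returns None), mirroring claims 19-20.
    u, v = project_to_image(*beacon_xyz)
    if not in_box(u, v, box):
        return None
    final_conf = 0.5 * (camera_conf + lidar_conf)
    return final_conf if final_conf >= threshold else None
```

A beacon 10 m straight ahead projects onto the principal point and is verified only when both the geometric check and the fused confidence check pass.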
26. An enhanced system for collision avoidance of high value assets based on multi-sensor data fusion by a mobile industrial vehicle, the system comprising:
a plurality of reflective beacons disposed relative to a pre-specified location on the high value asset;
a sensing processing system disposed on a mobile industrial vehicle, the sensing processing system further comprising:
a LiDAR sensor mounted in a forward orientation to detect one or more reflective beacons in front of the mobile industrial vehicle,
a camera sensor mounted in a forward orientation to detect one or more objects in front of the mobile industrial vehicle,
a multi-processor module responsive to input from each of the LiDAR sensor and the camera sensor and operable to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of the one or more reflective beacons based on a multi-sensor fusion data source using the detected LiDAR sensor data and the detected camera sensor data, the multi-processor module being programmatically operable to:
determine one or more bounding boxes based on sensor data generated by the camera sensor when one or more objects are detected;
determine a mapping space based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;
project the determined one or more bounding boxes into the determined mapping space; and
compare the determined one or more bounding boxes with objects detected in the mapping space to verify the relative position of the one or more reflective beacons;
a model predictive controller disposed on the mobile industrial vehicle, the model predictive controller configured by being programmatically operable to:
determine a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of the one or more reflective beacons;
identify one of the control solutions as an optimal solution based on a performance cost function, wherein the optimal one of the control solutions is associated with an optimal threshold allowable speed; and
a vehicle actuation system comprising at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds the optimal threshold allowable speed by: causing the mobile industrial vehicle to alter its movement operation to avoid a collision with the high value asset.
27. A system for enhancement of forward protection collision avoidance for objects in a direction of travel of a mobile industrial vehicle based on multi-sensor data fusion by the mobile industrial vehicle, the system comprising:
a sensing processing system disposed on a mobile industrial vehicle, the sensing processing system further comprising:
a LiDAR sensor mounted in a forward orientation to detect one or more objects in front of the mobile industrial vehicle,
a camera sensor mounted in a forward orientation to detect one or more objects in front of the mobile industrial vehicle,
a multi-processor module responsive to input from each of the LiDAR sensor and the camera sensor and operable to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of one or more of the objects based on a multi-sensor fusion data source using the detected LiDAR sensor data and the detected camera sensor data, the multi-processor module being programmatically operable to:
determine one or more bounding boxes based on sensor data generated by the camera sensor when one or more objects are detected;
determine a mapping space based on sensor data generated by the LiDAR sensor when one or more of the objects are detected;
project the determined one or more bounding boxes into the determined mapping space; and
compare the determined one or more bounding boxes with the objects detected in the mapping space to verify the relative position of one or more of the objects;
a model predictive controller disposed on the mobile industrial vehicle, the model predictive controller configured by being programmatically operable to:
determine a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of one or more of the objects;
identify one of the control solutions as an optimal solution based on a performance cost function, wherein the optimal one of the control solutions is associated with an optimal threshold allowable speed; and
a vehicle actuation system comprising at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds the optimal threshold allowable speed by: causing the mobile industrial vehicle to alter a movement operation of the mobile industrial vehicle to avoid a collision with the object.
28. The system of claim 27, wherein the multi-processor module is responsive to input from each of the LiDAR sensor and the camera sensor by being further programmably operable to: dynamically adjust an effective field of view of at least one of the LiDAR sensor and the camera sensor considered by the multi-processor module in response to altered movement operations of the mobile industrial vehicle.
29. The system of claim 27, wherein the multi-processor module is responsive to input from each of the LiDAR sensor and the camera sensor by being further programmably operable to: dynamically adjust an effective field of view of at least one of the LiDAR sensor and the camera sensor considered by the multi-processor module in response to detecting a change in direction of the mobile industrial vehicle.
30. The system of claim 27, wherein the multi-processor module is responsive to input from each of the LiDAR sensor and the camera sensor by being further programmably operable to:
detect an object identification marker using sensor data generated by at least one of the LiDAR sensor and the camera sensor;
identify the detected object identification marker as a boundary identifier between a first operating region and a second operating region; and
dynamically adjust an effective field of view of subsequent sensor data generated by at least one of the LiDAR sensor and the camera sensor in response to the identified boundary identifier.
31. The system of claim 28, wherein the multi-processor module is further programmably operable to dynamically adjust the effective field of view of at least one of the LiDAR sensor and the camera sensor by being further operable to: dynamically limit at least one of the detected LiDAR sensor data and the detected camera sensor data used in the multi-sensor fusion data source for identifying a relative position of one or more of the objects.
32. The system of claim 31, wherein the at least one of the detected LiDAR sensor data and the detected camera sensor data used in the multi-sensor fused data source is dynamically limited to effectively adjust where at least one of the LiDAR sensor and the camera sensor is focused.
33. The system of claim 31, wherein the at least one of the detected LiDAR sensor data and the detected camera sensor data used in the multi-sensor fused data source is dynamically limited to effectively adjust a receptive field width of at least one of the LiDAR sensor and the camera sensor.
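The dynamic limiting of the effective field of view recited in claims 28-33 can be sketched as an azimuth gate over LiDAR returns in the vehicle frame: `center_deg` shifts when a change of direction is detected (claim 29), and `half_width_deg` narrows or widens the receptive width (claim 33). Both parameter values below are illustrative assumptions, not values from the disclosure.

```python
import math

def limit_to_effective_fov(points, center_deg=0.0, half_width_deg=45.0):
    # Keep only the LiDAR returns (x, y in the vehicle frame, x forward,
    # y left) whose azimuth falls inside the current effective field of
    # view; downstream fusion then ignores everything else.
    kept = []
    for x, y in points:
        azimuth = math.degrees(math.atan2(y, x))
        if abs(azimuth - center_deg) <= half_width_deg:
            kept.append((x, y))
    return kept
```

Shifting `center_deg` toward the new direction of travel effectively re-aims the sensor's attention without moving the sensor itself.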
34. A method for enhanced collision avoidance by a mobile industrial vehicle using a multi-mode on-board collision avoidance system having a plurality of sensors, wherein the mobile industrial vehicle is operable in a plurality of different operating zones, the method comprising the steps of:
operating, by a multi-mode on-board collision avoidance system on a mobile industrial vehicle, in a first collision avoidance mode when the mobile industrial vehicle is operating in a first one of the different operating zones;
detecting, by one of the sensors of the multi-mode on-board collision avoidance system, a first object identification marker;
identifying, by the multi-mode on-board collision avoidance system, the detected first object identification marker as an operational boundary identification marker;
detecting, by one or more sensors of the multi-mode on-board collision avoidance system, when the mobile industrial vehicle passes a zone boundary associated with the operational boundary identification marker and enters a second one of the different operating zones; and
while in the second one of the different operating zones, changing operation from the first collision avoidance mode to a second collision avoidance mode by the multi-mode on-board collision avoidance system to govern operation of the multi-mode on-board collision avoidance system, wherein the second collision avoidance mode includes at least one operating parameter that is more restrictive than the at least one operating parameter in the first collision avoidance mode.
35. The method of claim 34, wherein the step of changing operation from the first collision avoidance mode to the second collision avoidance mode comprises: using a second set of operating parameters for the multi-mode on-board collision avoidance system when in the second collision avoidance mode instead of a first set of operating parameters used when in the first collision avoidance mode, wherein the at least one operating parameter has a more restrictive value as part of the second set of operating parameters than as part of the first set of operating parameters.
36. The method of claim 34, wherein the at least one operating parameter comprises a speed limit threshold for a mobile industrial vehicle.
37. The method of claim 34, wherein the at least one operating parameter comprises an ingress prevention distance for the mobile industrial vehicle.
38. The method of claim 37, wherein the ingress prevention distance for the mobile industrial vehicle comprises: a minimum radial distance between the mobile industrial vehicle and an object detected by the multi-mode on-board collision avoidance system.
39. The method of claim 37, wherein the ingress prevention distance for the mobile industrial vehicle comprises: a minimum radial distance between the mobile industrial vehicle and a reflective beacon detected by the multi-mode on-board collision avoidance system of the mobile industrial vehicle.
40. The method of claim 34, wherein the second collision avoidance mode includes at least one additional operational feature of the multi-mode collision avoidance system used in the second collision avoidance mode when compared to the operational features of the multi-mode collision avoidance system used in the first collision avoidance mode.
41. The method of claim 40, wherein the additional operational features of the multi-mode collision avoidance system include a minimum ingress prevention distance threshold feature for causing the mobile industrial vehicle not to move within a minimum ingress prevention distance threshold from the object detected by the sensor.
42. The method of claim 40, wherein the additional operational features of the multi-mode collision avoidance system include an object persistence feature for tracking a detected object after the detected object passes beyond a field of view of a sensor.
43. The method of claim 40, wherein the additional operational features of the multi-mode collision avoidance system include an altered field of view feature for changing a field of view of a sensor to enhance collision avoidance when operating in the second collision avoidance mode.
44. The method of claim 40, wherein the additional operational features of the multi-mode collision avoidance system include a special object detection feature for enabling detection of reflective beacons, distinctly from other objects, both alone and in addition to the other objects, when operating in the second collision avoidance mode.
45. The method of claim 40, wherein the first object identification marker comprises an ArUco marker encoded to correspond to a representation of the zone boundary and configured to indicate an orientation of the zone boundary.
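The zone-based mode switching of claims 34-45 can be sketched as a lookup from a detected boundary marker (for example, an ArUco marker id, per claim 45) to a more restrictive operating parameter set. The marker id, zone names, and parameter values below are placeholder assumptions introduced for illustration only.

```python
# Per-mode operating parameters; the second mode's speed limit is lower and
# its ingress prevention distance larger, i.e. more restrictive (claim 34).
MODE_PARAMETERS = {
    "first_zone":  {"speed_limit": 5.0, "ingress_prevention_distance": 1.0},
    "second_zone": {"speed_limit": 2.0, "ingress_prevention_distance": 3.0},
}

# Object identification markers recognized as operational boundary markers
# (marker id 17 is an arbitrary illustrative value).
BOUNDARY_MARKERS = {17: "second_zone"}

def update_mode(current_mode, detected_marker_id):
    # Change the governing collision avoidance mode when the vehicle passes
    # a zone boundary associated with a recognized boundary marker; an
    # unrecognized marker leaves the current mode in force.
    new_zone = BOUNDARY_MARKERS.get(detected_marker_id)
    mode = new_zone if new_zone is not None else current_mode
    return mode, MODE_PARAMETERS[mode]
```

Crossing back out of the restricted zone would be handled symmetrically, with a marker mapped to the first zone.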
Technical Field
The present disclosure relates generally to systems, apparatus, and methods in the field of collision avoidance systems, and more particularly to various aspects of systems, apparatus, and methods related to enhanced collision avoidance structures and techniques for use with mobile industrial vehicles, such as cargo tractors and associated dollies.
Background
Collision avoidance may be important in many applications, such as Advanced Driver Assistance Systems (ADAS), industrial automation, and robotics. It is well known that conventional collision avoidance systems can reduce the severity or occurrence of a collision or provide advance warning of a collision.
In an industrial automation environment, certain areas often prohibit vehicles (e.g., automated or non-automated vehicles) from entering, for the protection of personnel and of high value assets where damage is to be avoided. These areas may be isolated by mapping (e.g., GPS coordinates, geo-fencing, etc.) or defined by outlining the no-entry areas. A collision avoidance system may then be used to keep vehicles out of prohibited access areas or constrained spaces, which protects personnel and/or high value assets.
One common problem with conventional collision avoidance systems stems from the detection of, and reaction to, false positives. For example, a collision avoidance system may suffer false positives when its detections fail to distinguish the intended markers from unintended reflective surfaces (such as a worker's safety vest). False positive detections often result in poor performance, because the control system responds to every detection; responding to false detections can trigger unnecessary actions and reduce efficiency. The impact of false positive detection on an autonomous or semi-autonomous system is application specific, and a tolerance for false positive detection can be integrated into the system design. The applicability of a sensing platform can thus be characterized by its false positive detections as well as its missed detections (missed true detections). Other common problems encountered by collision avoidance systems using certain types of sensors include the inability to handle different levels of illumination and the inability to distinguish colors.
To address one or more of these types of issues, a technical solution is needed that can be deployed to enhance the manner in which damaging collisions involving logistics vehicles (such as cargo tractors and associated carts) are avoided, and to do so in a way that improves system performance and helps reduce false positives. In particular, described herein are various exemplary methods and systems in which an industrial vehicle may use light detection and ranging (LiDAR) sensors and multiple color cameras to detect beacons as a type of marker, and in which one or more model predictive control systems are deployed to keep vehicles out of constrained spaces, so as to avoid damage to or contact with high value assets and to provide an enhanced implementation of object detection and object avoidance.
Disclosure of Invention
Certain aspects and embodiments will become apparent in the following description. It should be understood that these aspects and embodiments, in their broadest sense, could be practiced without having one or more features of these aspects and embodiments. It should be understood that these aspects and embodiments are merely exemplary.
In general, aspects of the invention relate to improved collision avoidance systems, methods, devices, and techniques that help avoid false object detection and improve the ability to avoid collisions involving towed vehicles that do not follow the same path of travel as the towing vehicle, such as a mobile industrial vehicle (e.g., a cargo tractor that can pull multiple carts loaded with items being transported and moved as part of one or more logistics operations).
In one aspect of the disclosure, a method for enhanced collision avoidance for high value assets based on multi-sensor data fusion by a mobile industrial vehicle is described. In this aspect, the high value asset has one or more reflective beacons disposed relative to the high value asset. The method starts with: a LiDAR sensor on the mobile industrial vehicle detects one or more of the reflective beacons relative to the mobile industrial vehicle. Then, a camera sensor on the mobile industrial vehicle detects one or more objects relative to the mobile industrial vehicle. Then, the method has: a sensor processing system on the mobile industrial vehicle fuses the sensor data detected by each of the LiDAR and camera sensors to identify the relative positions of the reflective beacons based on a multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data. Then, the method has: a model predictive controller on the mobile industrial vehicle determines a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point (breaking point) projected radially from the verified relative position of the reflective beacon. The method continues with: the model predictive controller identifies one of the control solutions as an optimal solution having an optimal threshold allowable speed based on a performance cost function. Then, the method has: when the mobile industrial vehicle exceeds the optimal threshold allowable speed, a vehicle actuation system on the mobile industrial vehicle responsively actuates a vehicle speed control element to cause the mobile industrial vehicle to alter its movement operations within a time window and achieve a desired movement operation relative to the current speed of the mobile industrial vehicle.
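The threshold allowable speed described in this aspect can be illustrated with a simple stopping-distance calculation. The sketch below is illustrative only and not part of the disclosed method; the function name, braking deceleration, and reaction-time figures are assumptions:

```python
import math

def threshold_allowable_speed(path_distance_m, max_decel_mps2, reaction_time_s=0.5):
    """Largest speed from which the vehicle can still stop before the
    breakthrough point along its estimated path, given a maximum braking
    deceleration and a fixed actuation/reaction delay.

    Solves d = v*t_r + v**2 / (2*a) for v.
    """
    a, t = max_decel_mps2, reaction_time_s
    return -a * t + math.sqrt((a * t) ** 2 + 2.0 * a * path_distance_m)

# Example: 8 m to the breakthrough point, 4 m/s^2 braking, no reaction delay.
v_max = threshold_allowable_speed(8.0, 4.0, reaction_time_s=0.0)  # 8.0 m/s
```

A model predictive controller would evaluate such a threshold at each discrete time instant along the estimated path, and the vehicle actuation system would intervene whenever the vehicle's current speed exceeds it.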
In another aspect of the disclosure, an enhanced system for collision avoidance for high value assets based on multi-sensor data fusion by a mobile industrial vehicle is described. In this additional aspect, the high value asset is provided with one or more reflective beacons in its vicinity. In general, the system in this aspect includes a sensing processing system on the vehicle, LiDAR and camera sensors on the front of the vehicle, a multi-processor module that can fuse sensor data, a model predictive controller, and a vehicle actuation system. The multi-processor module is responsive to input from each of the LiDAR and camera sensors and advantageously fuses sensor data detected by each of these different sensors to identify the relative position of the reflective beacon based on the multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data. A model predictive controller on a mobile industrial vehicle is configured by being programmatically operable to: a plurality of control solutions is determined and one control solution is identified as the optimal control solution. Each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at discrete time instants based on an estimated path to a breakthrough point projected radially from the verified relative position of the reflective beacon. The model predictive controller identifies one of the control solutions as an optimal solution associated with an optimal threshold allowable speed based on a performance cost function. The vehicle actuation system (with vehicle actuators) is configured to respond when the vehicle exceeds the optimal threshold allowable speed by: the vehicle is caused to alter the moving operation of the vehicle to avoid a collision with the high value asset.
In yet another aspect, another enhanced system for collision avoidance of high value assets based on multi-sensor data fusion by a mobile industrial vehicle is described. In this further aspect, the enhanced system has a reflective beacon disposed relative to a pre-designated location on the high value asset, a sensing processing system on the vehicle, a model predictive controller on the vehicle, and a vehicle actuation system on the vehicle. The sensing processing system has: a LiDAR sensor mounted in a forward orientation to detect one or more reflective beacons in front of the mobile industrial vehicle; and a camera sensor mounted in a forward orientation to detect one or more objects in front of the mobile industrial vehicle. The sensing processing system further includes a multi-processor module responsive to input from the LiDAR and camera sensors and operable to fuse sensor data detected by each of the LiDAR and camera sensors to identify a relative position of the reflective beacon based on the multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data. To fuse sensor data detected by each of the LiDAR sensors and the camera sensors to identify the relative position of one or more reflective beacons, the multi-processor module of the sensing processing system is operatively configured and programmatically operable to: determine one or more bounding boxes based on sensor data generated by the camera sensor when one or more objects are detected; determine a mapping space based on sensor data generated by the LiDAR sensor when a reflective beacon is detected; project the determined bounding box into the determined mapping space; and compare the determined bounding box to objects detected in the mapping space to verify the relative position of the reflective beacon.
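The bounding-box-to-mapping-space comparison described above can be sketched as a simple azimuth-gating check: the horizontal extent of a camera bounding box is mapped to an angular interval, and a LiDAR beacon candidate is verified only if its azimuth falls inside some interval. This is a simplified illustration, not the patented implementation; the pinhole-camera model, image width, and field of view are assumptions:

```python
import math

def bbox_to_azimuth_range(x_min, x_max, image_width, hfov_deg):
    """Map a bounding box's horizontal pixel extent to an azimuth interval
    (degrees; 0 = camera optical axis) under a simple pinhole model."""
    f = (image_width / 2.0) / math.tan(math.radians(hfov_deg / 2.0))
    cx = image_width / 2.0
    def to_az(px):
        return math.degrees(math.atan((px - cx) / f))
    return to_az(x_min), to_az(x_max)

def verify_beacon(lidar_azimuth_deg, bboxes, image_width=1280, hfov_deg=90):
    """A LiDAR beacon candidate is 'verified' when its azimuth falls inside
    the projected azimuth interval of at least one camera bounding box."""
    for (x_min, x_max) in bboxes:
        lo, hi = bbox_to_azimuth_range(x_min, x_max, image_width, hfov_deg)
        if lo <= lidar_azimuth_deg <= hi:
            return True
    return False
```

In a full implementation the comparison would also use range and an extrinsic camera-to-LiDAR calibration, but the gating idea is the same: only detections consistent across both sensors count as verified beacons.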
A model predictive controller disposed on a mobile industrial vehicle is configured by being programmatically operable to: determining a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of the reflective beacon; and determining one of the control solutions as an optimal solution based on the performance cost function, wherein the optimal control solution is associated with an optimal threshold allowable speed. The vehicle actuation system has at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds an optimal threshold allowable speed by: causing the mobile industrial vehicle to alter the mobile operation of the mobile industrial vehicle to avoid a collision with the high value asset.
In yet another aspect of the present disclosure, an enhanced system for front guard collision avoidance of objects in a direction of travel of a mobile industrial vehicle based on multi-sensor data fusion by the mobile industrial vehicle is described. In this further aspect, the system includes a sensing processing system disposed on the mobile industrial vehicle, a model predictive controller on the vehicle, and a vehicle actuation system on the vehicle. The sensing processing system has: a LiDAR sensor mounted in a forward orientation to detect one or more objects in front of the mobile industrial vehicle; and a camera sensor mounted in a forward orientation to detect objects in front of the mobile industrial vehicle. The sensing processing system also includes a multi-processor module that is responsive to input from each of the LiDAR sensor and the camera sensor and is operable to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of an object based on the multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data. The multi-processor module of the sensing processing system is operatively configured to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of one or more of the objects by being programmatically operable to: determine one or more bounding boxes based on sensor data generated by the camera sensor when an object is detected; determine a mapping space based on sensor data generated by the LiDAR sensor when an object is detected; project the determined bounding box into the determined mapping space; and compare the determined bounding box with the objects detected in the mapping space to verify the relative positions of the objects.
The model predictive controller is configured by being programmatically operable to: determining different possible control solutions, wherein each of the possible control solutions defines a threshold allowable speed of the vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of the object; and identifying one of the control solutions as an optimal solution associated with an optimal threshold allowable speed based on a performance cost function. The vehicle actuation system has at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds an optimal threshold allowable speed by: the mobile industrial vehicle is caused to alter a movement operation of the mobile industrial vehicle to avoid a collision with an object.
In yet another aspect, a method for enhanced collision avoidance by a mobile industrial vehicle using a multi-mode on-board collision avoidance system is described, where the mobile industrial vehicle may operate in a plurality of different operating zones. In this aspect, the method starts with: the multi-mode on-board collision avoidance system on the mobile industrial vehicle operates in a first collision avoidance mode when the mobile industrial vehicle is operating in a first one of the different operating zones. Next, one of a plurality of sensors on the multi-mode on-board collision avoidance system detects an object identification marker (such as an ArUco marker) and identifies a first detected object identification marker as an operational boundary identification marker. The method then continues with: the sensor detects when the mobile industrial vehicle passes a zone boundary associated with the operational boundary identification marker and enters a second one of the different operating zones, whereupon the multi-mode on-board collision avoidance system automatically and autonomously changes operation from the first collision avoidance mode to a second collision avoidance mode that then governs operation of the multi-mode on-board collision avoidance system. In this case, the second collision avoidance mode has at least one operating parameter (e.g., a speed limit, etc.) that is more restrictive than the corresponding operating parameter in the first collision avoidance mode.
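A minimal sketch of this zone-based mode switch might look as follows. The mode names, marker ID, and speed limits below are hypothetical, chosen only to illustrate a more restrictive second mode; they are not values from the disclosure:

```python
class MultiModeAvoidance:
    """Hypothetical two-zone collision avoidance mode switch."""

    MODES = {
        "driving_lane": {"speed_limit_mps": 6.0},   # first operating zone
        "aircraft_gate": {"speed_limit_mps": 2.0},  # more restrictive zone
    }

    def __init__(self, boundary_marker_ids=frozenset({17})):
        self.mode = "driving_lane"
        self.boundary_marker_ids = boundary_marker_ids

    def on_marker_detected(self, marker_id):
        """Switch modes autonomously when a detected marker is identified
        as an operational boundary identification marker; returns the
        speed limit now governing the vehicle."""
        if marker_id in self.boundary_marker_ids:
            self.mode = "aircraft_gate"
        return self.MODES[self.mode]["speed_limit_mps"]
```

For example, an ordinary marker leaves the system in the first mode, while passing the designated boundary marker immediately tightens the governing speed limit.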
Additional advantages of these and other aspects of the disclosed embodiments and examples will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments in accordance with one or more principles of the invention and, together with the description, serve to explain one or more principles of the invention. In the drawings:
FIG. 1 is a diagram of an operational view of an exemplary tractor collision avoidance system deployed in a logistics environment, in accordance with an embodiment of the present invention;
FIG. 2 is an exemplary high-level functional block diagram of an exemplary collision avoidance system according to an embodiment of the present invention;
FIG. 3 is a more detailed diagram of an exemplary collision avoidance system according to an embodiment of the present invention, with logical segments of different elements and roles within the system shown;
FIG. 4 is a diagram of exemplary implementation details of portions of an exemplary collision avoidance system, according to an embodiment of the invention;
FIG. 5 is a diagram with details on an exemplary passive beacon for use with an exemplary collision avoidance system, according to an embodiment of the present invention;
FIG. 6 is an exemplary image illustrating an exemplary passive beacon as seen by a camera and LiDAR sensor and used as a training input for an exemplary collision avoidance system, in accordance with an embodiment of the present invention;
FIG. 7 is a diagram with further exemplary training inputs for an exemplary collision avoidance system, in accordance with an embodiment of the present invention;
FIG. 8 is a diagram illustrating exemplary training statistics for an exemplary collision avoidance system, according to an embodiment of the present invention;
FIG. 9 is a block diagram of exemplary general processing steps associated with enhanced collision avoidance using an exemplary collision avoidance system in accordance with an embodiment of the present invention;
FIG. 10 is a set of diagrams illustrating an exemplary kinematics model visualization in relation to estimated and predicted movements of an exemplary tractor (industrial vehicle) and a following vehicle (trailer) that may deploy an exemplary collision avoidance system according to an embodiment of the present invention;
FIG. 11 is an exemplary frame diagram of a dynamic modeling frame for determining transient states of a towing vehicle system in accordance with an embodiment of the present invention;
FIG. 12 is a diagram of an exemplary single rigid object model, according to an embodiment of the present invention;
FIG. 13 is a diagram of an exemplary mobile towing vehicle system having four towed units in accordance with an embodiment of the present invention;
FIG. 14 is a diagram of an exemplary geometric model of a towing vehicle system having an exemplary towing vehicle and two towed vehicle units, showing hitch points, in accordance with an embodiment of the present invention;
FIG. 15 is a diagram of an exemplary towed vehicle and its hitch points and associated vectors, in accordance with an embodiment of the present invention;
FIG. 16 is a pictorial view of an exemplary towing vehicle and one towed vehicle (trailer) illustrating a particular length in a tractor-trailer model, in accordance with an embodiment of the present invention;
FIG. 17 is a diagram of an exemplary scale model of an exemplary towing vehicle and two towed vehicles (trailers) illustrating a particular length and a particular radius defining a series of virtual triangles, in accordance with an embodiment of the present invention;
FIGS. 18A-18C are diagrams illustrating different configuration states of an exemplary towing vehicle and an exemplary towed vehicle, according to embodiments of the present invention;
FIG. 19 is a diagram illustrating a trigonometric relationship between the exemplary towing vehicle and the exemplary towed vehicle from FIGS. 18A-18C, in accordance with an embodiment of the present invention;
FIG. 20 is a diagram illustrating an exemplary system architecture according to an embodiment of the present invention;
FIG. 21 is a schematic diagram showing a vehicle that is positioned and moving relative to different beacons that are placed near a protected area in accordance with an embodiment of the invention;
FIG. 22 is an exemplary block diagram for data fusion, according to an embodiment of the present invention;
FIG. 23 is an exemplary high-level data flow diagram of a processing module for implementing a signal processing system according to an embodiment of the present invention;
FIG. 24 is a diagram of an exemplary passive beacon in accordance with an embodiment of the present invention;
FIG. 25 is a diagram of an exemplary LiDAR beam from a LiDAR sensor relative to an exemplary passive beacon disposed in front of the LiDAR sensor, according to an embodiment of the present invention;
FIG. 26 is an illustration of an exemplary scan of a LiDAR point cloud in accordance with an embodiment of the present invention;
FIG. 27 is a diagram of an exemplary beacon return, according to an embodiment of the present invention;
FIG. 28 is an exemplary table of features and information regarding such features used as part of the extraction of features of objects from LiDAR information in accordance with embodiments of the present invention;
FIG. 29 is a graph illustrating optimal feature weights selected by an SVM training optimizer according to an embodiment of the present invention;
FIG. 30 is an exemplary illustration of a two-dimensional SVM according to an embodiment of the present invention;
FIG. 31 is a diagram illustrating an exemplary Probability Distribution Function (PDF) for beacon and non-beacon LiDAR discrimination values, according to an embodiment of the invention;
FIG. 32 is a diagram of a data flow starting with a bounding box from a camera projected into range/distance and angle estimates in a LiDAR coordinate system in accordance with an embodiment of the present invention;
FIG. 33 is a more detailed diagram of a data flow starting with a bounding box from a camera projected into range/distance and angle estimates in a LiDAR coordinate system using an exemplary neural network structure for mapping such information, in accordance with an embodiment of the present invention;
FIG. 34, having parts (a) and (b), illustrates a diagram of two different exemplary data streams and a fusion process block, according to an embodiment of the present invention;
FIG. 35, having parts (a) - (d), illustrates various exemplary fuzzy membership functions and graphical representations of fuzzy logic outputs when performing data fusion using fuzzy logic, in accordance with embodiments of the present invention;
FIG. 36 is a diagram of the data flow and processing of LiDAR and camera information using different processing techniques and with fusion of confidence scores using hyper-parameters, according to an embodiment of the present invention;
FIG. 37 is a series of tables that illustrate LiDAR training and testing confusion matrix information, according to an embodiment of the present invention;
FIG. 38 is a flow diagram of an exemplary method for enhanced collision avoidance for high value assets based on multi-sensor data fusion by a mobile industrial vehicle, in accordance with an embodiment of the present invention;
FIG. 39 is a diagram of another exemplary tractor collision avoidance system operational view deployed in another exemplary logistics environment in accordance with an embodiment of the present invention;
FIG. 40 is a diagram of another exemplary tractor collision avoidance system operational diagram deployed in another exemplary logistics environment, wherein the exemplary tractor collision avoidance system operates in an exemplary driving lane mode, in accordance with an embodiment of the present invention;
FIG. 41 is a diagram of another exemplary tractor collision avoidance system operational diagram deployed in another exemplary logistics environment, wherein the exemplary tractor collision avoidance system operates in an exemplary aircraft gate area mode in accordance with an embodiment of the present invention; and
FIG. 42 is a flow diagram of an exemplary method for enhanced collision avoidance by a mobile industrial vehicle that may operate in a plurality of different operating regions using a multi-mode on-board collision avoidance system, in accordance with an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to various exemplary embodiments. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts. However, those skilled in the art will appreciate that the specific portions may be implemented in different ways for different embodiments depending on the anticipated deployment and operating environment needs of the respective embodiments.
The following describes various embodiments of different systems, devices, and methods deployed and used to improve how collisions with objects and personnel (e.g., high value assets) are prevented and avoided during operation of various mobile industrial vehicles, such as cargo tractors that pull one or more carts or trailers. Moreover, those skilled in the art will appreciate that additional embodiments may combine some of these otherwise independent solutions, as described in more detail below, to provide an even more robust system for avoiding collisions with high value assets by mobile industrial vehicles (such as cargo tractors and associated carts).
Those skilled in the art will appreciate that the following description includes detailed exemplary information about an exemplary dynamic path-following or kinematic model that may be deployed as part of an applied and enhanced system, apparatus, and method embodiment that relates to predicting movement and path of a multi-element mobile industrial vehicle (such as a cargo tractor with a towed vehicle or trailer) as part of avoiding collisions. The following description also includes detailed exemplary information regarding the use of multiple sensors to generate different types of data (e.g., camera data and LiDAR data) and deploy detailed embodiments of innovative, inventive, and advantageous processes that fuse such different types of data to improve detection of objects (e.g., physical structures, aircraft, personnel, etc.) as part of applied and enhanced system, apparatus, and method embodiments that improve how collisions are avoided.
In general, exemplary local systems having "front guard" features are described herein that provide collision avoidance through a novel fusion of available sensors, feasible real-time control, and actuators on mobile industrial vehicles, such as cargo tractors and associated carts/trailers that can transport items (e.g., unpackaged goods, packaged goods, and containers that can be used to transport goods). The general system may use passive-beacon-detection-based aircraft collision avoidance methods and general-object-detection-based front protection to better reduce the incidence of frontal collisions with any object. Further, such systems may use a warning cone as a platform for a passive "beacon" that the cargo tractor's sensors may use for local situational awareness and orientation with respect to the vulnerable aircraft portion to be protected. In more detail, such an exemplary system may integrate a sensing and sensor processing system on a cargo-tractor-type mobile industrial vehicle, a cargo tractor/cart model used by the system, a model predictive controller, and a vehicle actuation system to avoid high value assets. One or more beacons may be placed in strategic locations to allow highly robust detection and avoidance of high value assets. Further uses of such a system may be implemented to enable object detection and object avoidance that take advantage of data fusion from different sources of detected potential objects, and react in time using the vehicle actuation system.
FIG. 1 is a diagram of an operational view of an exemplary tractor collision avoidance system deployed in a logistics environment, in accordance with an embodiment of the present invention. As shown in FIG. 1, the logistics environment includes an
In fig. 1, an
Fig. 1 also illustrates a
In this type of logistics environment, several different embodiments incorporating novel and innovative aspects can be explained, which present technical solutions to the technical problem of avoiding collisions in logistics operations by the
In another example, an embodiment may include a passive beacon using retro-reflective surfaces (e.g., taped or painted surfaces of the reflective beacons 105a-d). Generally, such beacons may be used with enhanced collision avoidance systems on cargo tractors to improve the system's robustness to weather, reduce system complexity, and facilitate advantageous low-impact integration with existing standard logistics operational processes for protecting high value assets (such as aircraft).
In yet another example, embodiments may deploy novel kinematic models and predictive calculations with respect to a cargo tractor and its towed vehicle (e.g., a truck/trailer) to facilitate collision prevention by the cargo tractor and associated towed truck, even without active detection mechanisms deployed on the truck. Typically, the kinematic model is used to inform calculations performed on a processing system deployed on the cargo tractor of possible future states of the mobile vehicle system (i.e., the powered vehicle and the linked towed vehicle that follows). Thus, when using such models and calculations as part of an exemplary system,
In yet another example, a "front guard" type embodiment may use different sensors, such as a light detection and ranging (LiDAR) sensor, and one or more cameras (e.g., stereo cameras or two monocular cameras) to detect objects (including humans, skids (chocks), cones, boxes, etc., but not necessarily limited to reflective beacons) in the direction of
In examples where other types of sensors are utilized to find beacons, such as LiDAR and monocular camera sensors, the system may also fuse different types of sensor data via camera detection bounding boxes into the LiDAR space, utilize predictability of the state of the
In general, the exemplary mobile
Fig. 2 is an exemplary high-level functional block diagram of an exemplary collision avoidance system according to an embodiment of the present invention, illustrating the general operational flow of such a system 200. Referring now to FIG. 2, the sensing module 205 generally receives and detects information about the environment of the cargo tractor (e.g., camera images, LiDAR sensed input, etc.). From this, the system's sensor processing utilizes the different types of sensing information and is operable to identify objects and characterize the sensed scene (e.g., whether reflective passive beacons are detected based on the different sensor inputs). Next, in the predictive control block 210, a plurality of real-time "look-ahead" control solutions may be generated using the exemplary kinematic model and the state estimates 225. These solutions are then fed to a feedback control system comprising actuator control 215, vehicle actuation (via throttle and brake) 220, a vehicle dynamics database 230 (e.g., characteristics of various vehicle parameters such as vehicle mass, braking force, etc.), and a feedback compensator 235, such that the system responds to the identified objects and applies the optimal predictive control solution to improve and enhance how the system avoids collisions for the particular vehicle involved.
Figs. 3 and 4 illustrate further details of the different elements of such an exemplary enhanced collision avoidance system. In particular, FIG. 3 is a more detailed diagram of an exemplary collision avoidance system 300 according to an embodiment of the present invention, in which logical segments of the different elements and roles within the system are shown. Referring now to fig. 3, elements of the exemplary enhanced collision avoidance system 300 are shown classified into five different types of system segments: sensors 305, sensor processing 310, collision avoidance control 315, vehicle motion control 320, and actuators 325. In the exemplary embodiment, the sensor segment 305 portion of the exemplary system 300 includes: proprioceptive sensors, such as a brake pressure sensor 305a, ECU-related sensors for wheel speed and throttle percentage 305b, and position sensors 305c, such as inertial-measurement-based sensors (accelerometers, gyroscopes, magnetometers) and receiver-based positioning systems (GPS, wireless cellular telephone positioning circuitry, etc.); and exteroceptive sensors, such as cameras 305d, 305e and a LiDAR sensor 305f. The sensor processing segment 310 portion of the exemplary system includes software-based modules 310a, 310b running on a processing platform that performs signal processing 310a on the inputs from the exteroceptive sensors (e.g., convolutional neural network processing for camera data, and data clustering and Support Vector Machine (SVM) processing for LiDAR data) and uses a database of object and map information 310c to perform data fusion 310b on each processed sensor input.
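The SVM stage mentioned above can be illustrated with a linear decision function over features extracted from a LiDAR cluster. The feature set and weights below are placeholder assumptions (in practice they would come from training, cf. the optimizer-selected weights of FIG. 29):

```python
def svm_decision(features, weights, bias):
    """Linear SVM decision value for one LiDAR cluster's feature vector;
    a positive value classifies the cluster as a beacon, a negative value
    as a non-beacon."""
    return sum(w * f for w, f in zip(weights, features)) + bias

# Hypothetical 3-feature example: [mean intensity, point count, width in m].
weights = [0.8, 0.01, -0.5]
bias = -0.6
is_beacon = svm_decision([1.0, 40.0, 0.3], weights, bias) > 0
```

A bright, compact, point-dense cluster scores positive, while a dim, wide cluster (e.g., a vehicle side) scores negative; the decision value can also feed the probability-distribution discrimination discussed with FIG. 31.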
The collision avoidance control segment 315 portion of the exemplary system 300 includes software-based modules running on another processing platform (separate from the sensor processing segment 310) that implement a Model Predictive Controller (MPC) 315a. Generally, the MPC 315a determines a control solution that establishes the maximum allowable speed at discrete instants in time/space. More particularly, embodiments of the MPC 315a employ a look-ahead strategy and are applicable to discrete event management using supervisory control. In operation, the MPC 315a in the collision avoidance control segment 315 calculates the possible control outcomes for the set of control inputs within a limited prediction horizon. With a performance evaluation function (also referred to as a "cost" function related to a performance index), the MPC 315a predicts and evaluates all reachable system states within the prediction horizon so that an optimal result can be found, and the corresponding system control input can be selected and transmitted to the vehicle controller (i.e., the vehicle motion control segment 320). For example, in one embodiment, "optimal" may mean a predicted control solution along the most likely realized path that results in the least limitation on vehicle speed while still ensuring collision avoidance. This process is repeated until a predetermined target is reached, such as the system operating in a safe area away from passive beacons and other obstacles. As such, the MPC 315a may be used for collision avoidance, forward protection, spatial awareness, solutions focused on local options, system kinematics, vehicle dynamics, false positive mitigation, and beacon/object persistence deployed on mobile industrial vehicles, as well as solutions using such enhanced collision avoidance systems on cargo-tractor-type vehicles as mentioned herein.
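The finite-horizon search just described can be sketched as follows. This is a deliberately simplified, hypothetical version of one MPC step, not the patented controller: candidate constant speed commands are simulated over the horizon, candidates from which the vehicle could no longer stop before the breakthrough point receive infinite cost, and the cost function penalizes deviation from the current speed so the least-restrictive safe command wins:

```python
def mpc_select(v0, candidates, horizon_steps, dt, dist_to_breach, stop_decel):
    """Enumerate candidate speed commands over a finite prediction horizon
    and return the feasible command with the lowest cost (least limitation
    on vehicle speed while still ensuring collision avoidance)."""
    best_cmd, best_cost = 0.0, float("inf")
    for v_cmd in candidates:
        d, feasible = dist_to_breach, True
        for _ in range(horizon_steps):
            d -= v_cmd * dt                              # advance along estimated path
            if v_cmd * v_cmd / (2.0 * stop_decel) > d:   # stopping-distance check
                feasible = False
                break
        cost = (v0 - v_cmd) ** 2 if feasible else float("inf")
        if cost < best_cost:
            best_cmd, best_cost = v_cmd, cost
    return best_cmd
```

With the breakthrough point 10 m ahead, a 10-step horizon at 0.1 s, and 2 m/s² braking, a 5 m/s command becomes unstoppable inside the horizon, so the controller selects 4 m/s; far from any beacon, the current speed is left untouched.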
In addition, the MPC 315a has access to one or more databases 315b having stored thereon predictive kinematic model information as well as vehicle dynamics information. The vehicle motion control segment 320 portion of the exemplary system 300 includes a software module running on yet another processor (e.g., a microcontroller) that implements a vehicle brake feedback control system 320a, which accesses vehicle dynamics information from a database 320b and operates as a feedback compensator to provide input to vehicle actuators 325, such as the throttle and/or brake system of the cargo tractor, and/or a gear selector on the cargo tractor.
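The feedback compensator role of 320a might be realized, for example, as a simple PI speed-tracking loop that converts any excess over the allowable speed into a normalized brake command. The gains and sample time below are hypothetical illustration values, not figures from the disclosure:

```python
class BrakeFeedbackCompensator:
    """Hypothetical PI compensator: outputs a brake command in [0, 1]
    whenever measured speed exceeds the allowable speed."""

    def __init__(self, kp=0.4, ki=0.1, dt=0.05):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, measured_speed, allowable_speed):
        error = measured_speed - allowable_speed  # positive => too fast
        if error <= 0.0:
            self.integral = 0.0  # reset integrator while under the threshold
            return 0.0
        self.integral += error * self.dt
        return min(1.0, self.kp * error + self.ki * self.integral)
```

In a deployed system the gains would be derived from the vehicle dynamics information in database 320b (mass, braking force, etc.) rather than fixed constants.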
While fig. 3 provides implementation details related to embodiments more from a data processing and data flow perspective, fig. 4 is a diagram illustrating exemplary hardware implementation details of elements of such an exemplary enhanced collision avoidance system 400, according to an embodiment of the present invention. Referring now to FIG. 4, the exemplary hardware integration diagram illustrates three different core processing systems or controllers, namely a sensor data processor 405, a collision avoidance controller 410, and a vehicle feedback actuation controller 415. Those skilled in the art will appreciate that each of these processors/controllers may be implemented using one or more different processor-based systems (e.g., general-purpose graphics processing units (GP-GPUs), Central Processing Units (CPUs), microprocessors, microcontrollers, multiple processors, multi-core processors, systems on a chip (SoCs), or other discrete processing-based devices) that may include on-board communication interfaces, I/O interface circuitry, and associated peripheral circuitry for interfacing with cameras, LiDAR, network switching components, ECUs, IMUs, and vehicle actuator and vehicle sensor elements, as required by the described application.
For example, as shown in FIG. 4, the exemplary sensor data processor 405 receives input from different cameras (e.g., camera 1 305d and camera 2 305e, each of which may be implemented as a front view infrared camera sensor) using a USB 3.1 connection to embedded multiprocessors (e.g., two NVIDIA Jetson TX2 embedded AI computing devices, each essentially an AI supercomputer on a module for use in edge applications, with CPU and GPU architecture and various standard interfacing hardware) for faster and more robust transfer of information from the sensors to the sensor data processor 405. The sensor data processor 405 also receives LiDAR input from the LiDAR sensor 305f over an Ethernet connection (e.g., via the Ethernet switch 420). With these inputs, the sensor data processor 405 (which may be implemented with both a CPU and a GPU processor) is operable to detect beacons and objects using a novel fusion of camera and LiDAR data. More specifically, the LiDAR sensor 305f detects the beacon and distinguishes the beacon from other objects (such as people, vehicles, cargo tractors, etc.). The cameras 305d, 305e detect a plurality of objects (such as people, beacons, skids, vehicles, etc.) and provide the camera data to the sensor data processor 405, where such objects can be identified using a learning neural network (e.g., a convolutional neural network trained for such identification). The data fusion software module 310b running on the sensor data processor 405 then fuses these different types of data by projecting the camera detection bounding box into the LiDAR space. Fusing these two distinct and different data sources into a multi-sensor fused data source provides a level of enhancement and improved performance in avoiding collisions.
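The projection of a camera bounding box into LiDAR space can be sketched as an angular association step. The pinhole camera model, the 1280-pixel image width, and the 40-degree horizontal field of view below are assumptions for illustration; the patent does not specify the camera intrinsics or the exact association rule.

```python
# Hedged sketch of camera/LiDAR fusion: map a camera bounding box to an
# azimuth interval, then keep the LiDAR returns that fall inside it.

IMG_WIDTH_PX = 1280
H_FOV_DEG = 40.0   # assumed horizontal FOV (about +/-20 degrees)

def pixel_to_azimuth_deg(px):
    # Map an image column to azimuth, with 0 degrees at image center.
    return (px - IMG_WIDTH_PX / 2) * (H_FOV_DEG / IMG_WIDTH_PX)

def returns_in_bbox(bbox, lidar_returns):
    """bbox = (x_min, x_max) pixel columns; lidar_returns = [(az_deg, range_m)]."""
    az_lo = pixel_to_azimuth_deg(bbox[0])
    az_hi = pixel_to_azimuth_deg(bbox[1])
    return [(az, r) for az, r in lidar_returns if az_lo <= az <= az_hi]

# A beacon detected by the camera near image center ...
bbox = (600, 680)
# ... and three LiDAR returns: one matches, two are outside the box.
lidar = [(-15.0, 7.2), (0.5, 9.8), (12.0, 5.1)]
matched = returns_in_bbox(bbox, lidar)
```

The matched return carries the LiDAR range, so the fused detection gets the camera's class label and the LiDAR's distance, which is the benefit the fusion step is after.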
The exemplary collision avoidance controller 410 shown in FIG. 4 has an Ethernet connection to the sensor data processor 405 and may be implemented, for example, using an NVIDIA Jetson TX2 module having GP-GPU/CPU hardware and on-board interfacing hardware. The collision avoidance controller 410 runs a model predictive control software module 315a, which serves as a type of look-ahead controller that predicts the shortest possible path to a breakthrough point in space projected radially from the beacon location.
In one embodiment, Model Predictive Control (MPC) software 315a running on the collision avoidance controller 410 incorporates a system kinematics model of the tractor and the carts (exemplary embodiments of which are described in detail below) to predict potential collisions between any portion of the tractor/cart train and high value assets. As noted above, the MPC software 315a computes the control solution to determine the maximum allowable speed at any discrete time in time/space. The time-critical nature of the collision avoidance problem means that the calculation of the MPC solution is performed in real-time or substantially real-time. In particular, those skilled in the art will appreciate that the control solution determined by the MPC software 315a running on the collision avoidance controller 410 involves a large set of possible solutions, where the cost of each solution to prevent a collision is calculated. The cost function compares the cost of each possible solution determined by the MPC, and the MPC software 315a can select the optimal one of the possible solutions based on criteria defined by the cost function. Since each possible solution may be computed independently, embodiments of the MPC software 315a may compute such solutions in parallel using the real-time operating system used by the collision avoidance controller 410 and, in some embodiments, using the multi-core/multi-threading capabilities of the collision avoidance controller 410 itself (e.g., the 256-CUDA-core parallel computing platform and NVIDIA Pascal GP-GPU processing complex used in the NVIDIA Jetson TX2 computation module).
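Because each candidate solution is costed independently, the search is embarrassingly parallel. The sketch below uses a standard-library thread pool as a stand-in; on the Jetson TX2 the same map step could instead be dispatched across CUDA cores. The placeholder cost function and the candidate grid are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

# Hedged sketch: cost all candidate control solutions in parallel, then
# select the minimum-cost one, mirroring the independent-solution property.

def solution_cost(v):
    # Placeholder cost: prefer the fastest speed not exceeding 4.0 m/s.
    return float('inf') if v > 4.0 else (4.0 - v)

candidates = [0.5 * i for i in range(0, 17)]   # 0 .. 8 m/s

with ThreadPoolExecutor(max_workers=4) as pool:
    costs = list(pool.map(solution_cost, candidates))

best = candidates[costs.index(min(costs))]
```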
As described herein, the MPC software 315a running on the collision avoidance controller 410 is further used for speed governing (e.g., calculating a control solution to determine maximum allowable speeds at discrete times in time/space). The collision avoidance controller 410 may receive information from positioning circuitry and elements 305c, such as a GPS, an inertial measurement unit, or other position sensors on the cargo tractor (as shown in FIG. 3). As part of an embodiment of the enhanced collision avoidance system 400, the collision avoidance controller 410 provides an output on a Controller Area Network (CAN) bus to a vehicle feedback actuation controller 415. Such a CAN bus provides a standard mechanism for vehicle communication and for interfacing with portions of the cargo tractor, such as the brakes, the throttle, and the ECU 305b on the cargo tractor.
The example vehicle feedback actuation controller 415 shown in FIG. 4 has a CAN connection to the collision avoidance controller 410, as well as other interfacing circuitry (e.g., analog, pulse width modulation (PWM), or other parallel, serial, digital, or other sense/actuation line interfaces) to control portions of the cargo tractor (e.g., brakes and throttle). The exemplary vehicle feedback actuation controller 415 may be implemented, for example, using an Arduino Due single-board 32-bit ARM-core microcontroller module with onboard interfacing hardware. Vehicle actuation feedback control software 320a running on the example vehicle feedback actuation controller 415 typically calculates the deceleration rate to achieve the desired speed within a particular time window. A feedback control system implemented by the vehicle actuation feedback control software 320a running on the controller 415 actuates the brake and throttle controls 325 to achieve the calculated acceleration or deceleration. The feedback system can bring the vehicle to a complete stop if desired (e.g., if the mobile industrial vehicle is approaching a no
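The deceleration-rate calculation attributed to the vehicle actuation feedback control can be sketched in two steps: a feedforward rate that reaches the target speed within the window, and a saturated proportional feedback term driving the brake command. The gain and saturation limit are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of the feedback actuation step: compute the constant
# deceleration needed to hit v_target in window_s, and a proportional
# brake command saturated to [0, 1].

def required_decel(v_now, v_target, window_s):
    """Constant deceleration (m/s^2, positive = slowing) that reaches
    v_target in window_s seconds."""
    return max(0.0, (v_now - v_target) / window_s)

def brake_command(v_now, v_target, kp=0.5, max_cmd=1.0):
    # Proportional feedback on speed error, saturated to [0, max_cmd].
    err = v_now - v_target
    return min(max_cmd, max(0.0, kp * err))

decel = required_decel(v_now=6.0, v_target=2.0, window_s=2.0)   # 2.0 m/s^2
cmd = brake_command(v_now=6.0, v_target=2.0)                    # 1.0 (saturated)
```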
Accordingly, FIGS. 3 and 4 provide exemplary functional, software, and hardware implementation details related to embodiments of an enhanced collision avoidance system, and methods of operating such an enhanced collision avoidance system. Additionally, those skilled in the art will appreciate that while FIG. 4 illustrates three distinct and different processor/controller hardware devices, further embodiments of the exemplary enhanced collision avoidance system may implement these functions utilizing a single processor or other multi-processor or logic-based solutions having various software-based modules operative to perform the described enhanced collision avoidance functions.
FIG. 5 is a diagram illustrating details of an exemplary passive beacon 500 for use with an exemplary collision avoidance system according to an embodiment of the present invention. In general, such an exemplary passive beacon 500 may have a characteristic shape so as to produce a specifically identifiable return that may be more easily identified as a beacon, in contrast to other objects. For example, in a particular embodiment, the exemplary passive beacon 500 may be a tall and thin object with highly reflective material thereon so as to stand out as a tall and bright object, while other objects (e.g., cargo tractors, personnel, etc.) may have bright returns but are typically much wider in comparison.
Referring now to fig. 5, an example of such an exemplary passive beacon 500 is shown, the beacon 500 integrating a base support 505 (e.g., a traffic cone) with a
However, some sensors on the cargo tractor, such as monocular cameras, may use the color and/or shape of the base support (e.g., traffic cone) as a distinguishing feature for enhanced detection of the beacon in conjunction with the return captured by the LiDAR. In one embodiment, the beacon may be passive and unpowered. However, those skilled in the art will appreciate that other embodiments of the beacon may be powered to provide more visibility to the sensor suite of the cargo tractor (e.g., illuminated with lights or flashing lights that may be recognized, etc.).
Further, while the embodiment of beacon 500 shown in FIG. 5 is a passive beacon structure located separately from a high value asset (such as an aircraft), another embodiment of the beacon may be implemented with reflective symbols or materials affixed to, or integral with, the high value asset. In this way, such an embodiment of the beacon may be part of, for example, an edge of an aircraft wing, an engine, a nose cone, a tail structure, or another protruding portion of an aircraft (which may have more risk of collision than other portions of the aircraft). Further embodiments may be implemented with extendable structures on such high value assets that may be selectively deployed or actuated from a stowed position to a deployed active position in which they may be detected whenever a mobile industrial vehicle (such as a cargo tractor) may be present within the vicinity of the high value asset. For example, an exemplary high value asset (such as an airplane or tractor/trailer) may have an extendable reflective beacon that may be actuated to become visible in a deployed active position on the high value asset.
FIG. 6 is an
In more detail, for the example shown in FIG. 7, there are 1752 samples collected for the beacon and 1597 samples collected for the cone. In this particular example, the samples cover a camera field of view of about +/-20 degrees left to right, and the region of interest is about 5 to 20 meters ahead. Thus, the graph 700 shown in FIG. 7 illustrates sample points showing the range (meters, along the y-axis) and degrees left or right of center (along the x-axis) for each sample point.
FIG. 8 is a graphical diagram 800 illustrating exemplary training statistics for an exemplary collision avoidance system, according to an embodiment of the present invention. Referring now to FIG. 8, once the system is trained, an estimate of the relationship between the beacon's camera bounding box and the LiDAR range and angle measurements has been learned. For example, diagram 800 in FIG. 8 shows a predicted position (with an "o" symbol) versus a true position (with a "Δ" symbol), and in this example characterizes system error in terms of error rate, mean error, and error variance. These are standard metrics for evaluating error.
In view of the above description of an exemplary enhanced collision avoidance system for mobile industrial vehicles, such as powered cargo tractors and following linked/towed carts, FIG. 9 is a block diagram of exemplary general data fusion process steps 900 related to enhanced collision avoidance in accordance with an embodiment of the present invention. Referring now to FIG. 9, the sensor input flow begins on either side of the diagram, with the camera input on the left side and the LiDAR input on the right side. For example, on the left side of FIG. 9 is block 905, which represents operations performed on camera input data (e.g., camera images) captured by one of the cameras, such as the cameras shown in FIG. 3 or FIG. 4. The data fusion software module 310b of FIG. 3 operates to acquire camera input, recognize and detect objects (such as beacons/cones in an image), and create a bounding box representing the image coordinates of the beacon/cone. The camera object recognition deep learning
As discussed above, embodiments of the MPC software module 315a running on the collision avoidance controller 410 may utilize kinematic models for spatially sensing and estimating not only the position of the cargo tractor, but also the position of the cart without having sensors on the cart that provide real-time feedback. A more detailed description of such an exemplary kinematic model (also referred to as a dynamic path following model) appears below as part of this detailed description. When using kinematic models as part of the MPC software module 315a running on the collision avoidance controller 410, the controller 410 has access to Inertial Measurement Unit (IMU) 305c position information (such as heading and accelerometer data), as well as ECU 305b information from the cargo tractor (such as wheel speed data). In one embodiment (e.g., a local option), only the position of the cargo tractor relative to the detected object may be known. In some embodiments, the movement of the system may be interpolated or extrapolated from a combination of heading, acceleration, and wheel speed data.
Using this data, a kinematic model implementation run by the MPC software module 315a can estimate the position and orientation of the carts following the cargo tractor relative to the cargo tractor. FIG. 10 is a set of diagrams 1005 through 1020 illustrating exemplary kinematic model visualizations in connection with estimated and predicted movements of an exemplary tractor (industrial vehicle) and a following vehicle (trailer) in which an exemplary collision avoidance system may be deployed, according to embodiments of the present invention. Referring now to FIG. 10, the left-most visualization 1005 shows a tractor pulling two exemplary carts in a line. In the next visualization 1010-. Since the rotation of the cart lags behind the movement of the cargo tractor, the cart movement is calculated and estimated using a kinematic model. This is shown in the rightmost visualization 1020, where the red track of the cargo tractor differs from the blue track of the forwardmost cart and the yellow track of the next cart; none of the tracks remains in line as the cargo tractor makes a turn and the carts follow.
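The lag between tractor heading and cart heading follows from the classic trailer kinematic equation, which can be stepped numerically. The hitch length, speed, and time step below are assumed values for illustration; the patent's own model is described later in the specification.

```python
import math

# Hedged sketch of cart-following kinematics: each cart pivots about its
# hitch so its heading chases (and lags) the heading of the unit ahead.

def step_cart_heading(theta_cart, theta_lead, v, hitch_len, dt):
    """One Euler step of the trailer equation
    d(theta)/dt = (v / L) * sin(theta_lead - theta)."""
    return theta_cart + (v / hitch_len) * math.sin(theta_lead - theta_cart) * dt

# Tractor holds a 90-degree heading; watch the first cart's heading lag.
theta_tractor = math.pi / 2
theta_cart = 0.0
for _ in range(50):
    theta_cart = step_cart_heading(theta_cart, theta_tractor,
                                   v=2.0, hitch_len=3.0, dt=0.1)

lag = theta_tractor - theta_cart   # still positive: the cart has not caught up
```

Even after 5 simulated seconds the cart heading has not fully converged to the tractor heading, which is exactly the lag effect the rightmost visualization 1020 depicts.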
In view of the above description of an exemplary enhanced collision avoidance system and how embodiments of the system may be implemented using hardware and software elements, the following is a description of an exemplary method for enhanced collision avoidance that may utilize and use such a system in accordance with embodiments of the present invention that focuses on avoiding high value assets, such as a portion of an aircraft, particular equipment, or areas in which personnel or equipment may be located. For example, the system may be implemented in an embodiment that integrates: sensing and sensor processing systems (e.g., signal processing software and data fusion software running on a sensor data processor module) that detect and identify objects and beacons using distinct and different types of sensor data that are advantageously fused to improve detection; a model predictive controller (e.g., model predictive control software running on a collision avoidance controller module operating with real-time capabilities) that utilizes cargo tractor/trolley kinematics and vehicle dynamics models for collision avoidance and speed management; and a vehicle actuation system (e.g., vehicle actuation feedback control software running on a vehicle feedback actuation controller module) that interfaces with vehicle controls to help move the industrial vehicle and its towed vehicle away from high value assets. One or more beacons are placed in strategic locations to allow highly robust detection and avoidance of high value assets.
In general operation, an embodiment of the method begins with: a first sensor (LiDAR) detects the beacon and distinguishes the beacon from other objects, such as people, vehicles, cargo tractors, and the like. The method continues with: the second sensor (camera(s)) detects one or more objects (such as people, beacons, vehicles, etc.). Advantageously, these data may be fused by first determining a bounding box based on sensor data captured by the second sensor (camera) and determining a mapping space based on sensor data captured by the first sensor (LiDAR). The determined bounding box is then projected into the determined mapping space and then compared to improve how beacons that indicate position relative to the high value asset are identified and, in some cases, also to distinguish the identification of other objects that may pose a risk to the predicted movement of the cargo tractor relative to the high value asset. In this way, the method utilizes the fusion of two data sources to provide improved, enhanced, and more robust performance of the collision avoidance system.
Embodiments of the method next use the controller to estimate the shortest possible path to a breakthrough point in space projected radially from the beacon location. This may be accomplished, for example, by Model Predictive Control (MPC) software 315a running on a collision avoidance controller module 410 operating with real-time capabilities, where the MPC software 315a receives information (e.g., from the sensor data processor 405) about locating the beacon and may determine the cargo tractor trajectory relative to the beacon and tractor speed. The collision avoidance controller 410 with the MPC software 315a enabled operates as a type of limited-look-ahead controller. Thus, the MPC software 315a predicts the shortest possible path to a breakthrough point in space projected radially from the beacon location (and in the case of utilizing a determined cargo tractor trajectory relative to the beacon and tractor speeds from the IMU information), and the MPC software 315a, in the case of referencing and utilizing a system kinematics model of the tractor and the cart (such as the model described above and the models referenced in greater detail in the embodiments described below), can also predict potential collisions between any part of the
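The "breakthrough point projected radially from the beacon location" reduces, under a straight-line assumption, to simple vector geometry: project a point at the keep-out radius from the beacon along the beacon-to-vehicle direction, then take the range to it. The keep-out radius value below is an assumed operating parameter.

```python
import math

# Hedged sketch of the breakthrough-point geometry used by the
# look-ahead controller.

def breakthrough_point(vehicle_xy, beacon_xy, radius):
    vx, vy = vehicle_xy
    bx, by = beacon_xy
    d = math.hypot(vx - bx, vy - by)
    # Unit vector from beacon toward vehicle, scaled to the keep-out radius.
    return (bx + radius * (vx - bx) / d, by + radius * (vy - by) / d)

def shortest_path_to_breach(vehicle_xy, beacon_xy, radius):
    """Straight-line distance from the vehicle to the breakthrough point."""
    px, py = breakthrough_point(vehicle_xy, beacon_xy, radius)
    return math.hypot(vehicle_xy[0] - px, vehicle_xy[1] - py)

dist = shortest_path_to_breach(vehicle_xy=(0.0, 0.0),
                               beacon_xy=(10.0, 0.0), radius=3.0)  # 7.0 m
```

This shortest-path distance is the quantity the MPC compares against predicted travel when deciding the threshold allowable speed.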
The exemplary method continues with: the MPC software 315a generates a plurality of control solutions to determine the maximum allowable speed at discrete times in time/space. Those skilled in the art will appreciate the necessity of generating such control solutions in real-time or near real-time, given a large set of possible solutions and time constraints for enabling rapid decisions to be made based on such generated control solutions as the cargo tractor and its towed group of vehicles continue to move. In more detail, each of the control solutions generated by the MPC software 315a running on the collision avoidance controller module 410 may be implemented with a cost function that compares the cost of each control solution, wherein the optimal solution may be selected based on criteria defined by the cost function. For example, a control solution that decelerates the cargo tractor quickly and then travels the same distance may result in higher performance costs relative to another control solution that gradually decelerates the cargo tractor over a longer distance (which results in lower performance costs), while remaining within speed regulatory limits for the area near the beacon or within other speed limits to accommodate the particular items being transported, the number of carts towed, the mass of what is being transported on the carts, and so on.
In the event that the
Such general method embodiments are consistent with the exemplary embodiment of the method described in the flow chart of fig. 38, in accordance with an embodiment of the present invention. Referring now to FIG. 38, a
At
In yet another embodiment of
At
At
In a further embodiment of the
Those skilled in the art will appreciate that such method embodiments as disclosed and explained above may be implemented with devices or systems such as the exemplary enhanced collision avoidance system described at least with reference to fig. 2-4 (or embodiments of such systems as described in more detail below), and with the sensor suite described above, as well as with different processor/controller modules, and different software modules running on different processor/controller modules as described above. Such software modules may be stored on a non-transitory computer readable medium in each processor/controller module. Thus, when executing such software modules, the collective processor/controller module of the enhanced system for collision avoidance may be operable to perform operations or steps from the exemplary method disclosed above, including variations of the method.
In another embodiment, a further method for enhanced collision avoidance may utilize and use a similar system according to an embodiment of the invention that focuses on object detection and object avoidance. For example, such a system may be implemented in another embodiment that also integrates the following: sensing and sensor processing systems (e.g., signal processing software and data fusion software running on a sensor data processor module) that detect and identify objects and beacons using distinct and different types of sensor data, which are advantageously fused to improve detection; a model predictive controller (e.g., model predictive control software running on a collision avoidance controller module operating with real-time capabilities) that utilizes cargo tractor/trolley kinematics and vehicle dynamics models for collision avoidance and speed management; and a vehicle actuation system (e.g., vehicle actuation feedback control software running on a vehicle feedback actuation controller module) that interfaces with vehicle controls to assist in moving the industrial vehicle and its towed vehicle from collisions with the detected plurality of objects.
In general operation, this particular method embodiment begins with: a first sensor (LiDAR) detects any object in a geometrically defined area projected in the direction of travel of the cargo tractor vehicle as part of a mapping space in the direction of travel of the cargo tractor. The method continues with: the second sensor (camera (s)) detects one or more objects (such as people, beacons, vehicles, etc.). Advantageously, these data may be fused by first determining a bounding box based on sensor data captured by the second sensor (camera) and determining a mapping space based on sensor data captured by the first sensor (LiDAR). The determined bounding box is then projected into the determined mapping space and then compared to improve on how to identify objects in the path of the cargo tractor. In this manner, the method utilizes the fusion of the two data sources to provide improved, enhanced, and more robust performance of the collision avoidance system relative to objects detected in the path of the cargo tractor.
Similar to prior methods, this method embodiment also uses MPC software running on the collision avoidance controller to calculate the maximum vehicle speed, which will allow the system to stop before colliding with an object within the constrained space in the vehicle's direction of travel as detected by the sensor platform. In the event that the cargo tractor has exceeded the maximum allowable speed calculated by the MPC software running on the collision avoidance controller, a feedback control system embodied in vehicle actuation feedback control software operates by actuating the brake and/or throttle controls of the cargo tractor to achieve the calculated deceleration or acceleration. Those skilled in the art will further appreciate that as part of this further method embodiment, the vehicle feedback actuation controller can also bring the cargo tractor to a complete stop if desired.
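Under a constant-braking assumption, the "maximum vehicle speed that will allow the system to stop before colliding" has a closed form: the largest v satisfying v·t_reaction + v²/(2·a_brake) ≤ d. The braking rate and reaction time below are illustrative assumptions, not values from the patent.

```python
import math

# Hedged sketch of the forward-protection speed limit: solve the
# stopping-distance inequality for the maximum allowable speed.

def max_allowable_speed(dist_m, a_brake=2.5, reaction_s=0.3):
    """Largest speed v such that v*reaction_s + v^2/(2*a_brake) <= dist_m."""
    # Quadratic in v: (1/(2a)) v^2 + t_r v - d = 0; take the positive root.
    a = 1.0 / (2.0 * a_brake)
    disc = reaction_s ** 2 + 4.0 * a * dist_m
    return (-reaction_s + math.sqrt(disc)) / (2.0 * a)

v = max_allowable_speed(dist_m=12.0)   # about 7.03 m/s for a 12 m gap
```

If the tractor's current speed exceeds this value, the vehicle actuation feedback layer applies braking until the inequality holds again.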
Those skilled in the art will appreciate that such additional method embodiments as disclosed and explained above may be implemented with a device or system such as the exemplary enhanced collision avoidance system described with at least reference to fig. 2-4, and with the sensor suite described above, as well as with different processor/controller modules, and as described above, with different software modules running on different processor/controller modules. Such software modules may be stored on a non-transitory computer readable medium in each processor/controller module. Thus, when executing such software modules, the collective processor/controller module of the enhanced system for collision avoidance may be operable to perform operations or steps from the exemplary method disclosed above, including variations of the method.
New material: further enhancements
Object persistence
As noted above, the example model predictive control 315a may track the persistence of detected objects (such as reflective beacons) within the state model. Those skilled in the art will appreciate that embodiments may implement object persistence as a software function within the exemplary collision avoidance system 300 that tracks and updates the position of identified objects (such as reflective beacons) relative to the cargo tractor as the cargo tractor moves through space. This functionality enables improved, enhanced, and more accurate collision avoidance calculations for objects that may have moved beyond the current field of view (FOV) of the sensors on the cargo tractor or that have become occluded. In other words, embodiments may implement object persistence as part of the model predictive control 315a to enhance and improve how the example collision avoidance system 300 interprets and tracks detected objects (such as reflective beacons) and avoids collisions with detected objects after the sensor package in front of the mobile industrial vehicle (e.g., the cargo tractor 115) has moved past the detected object (e.g., a reflective beacon) and no longer has the detected object in the FOV of the sensor package.
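One way to realize object persistence is dead reckoning: once a beacon leaves the sensor FOV, keep moving its stored position through the vehicle's own ego motion (wheel speed plus heading rate from the ECU/IMU). The frame conventions here (x forward, y left, yaw rate in rad/s) are assumptions for illustration.

```python
import math

# Hedged sketch of object persistence: counter-transform a stored object
# position by one step of the vehicle's ego motion, so the object stays
# correctly placed in the vehicle frame after it leaves the sensor FOV.

def persist(obj_xy, v, yaw_rate, dt):
    """Move a stored object through one ego-motion step of the vehicle."""
    x, y = obj_xy
    # Undo the vehicle's forward translation along its x axis.
    x -= v * dt
    # Undo the vehicle's rotation about its origin.
    dpsi = -yaw_rate * dt
    return (x * math.cos(dpsi) - y * math.sin(dpsi),
            x * math.sin(dpsi) + y * math.cos(dpsi))

# Beacon 5 m ahead; drive straight past it at 2 m/s for 3 s.
obj = (5.0, 0.0)
for _ in range(30):
    obj = persist(obj, v=2.0, yaw_rate=0.0, dt=0.1)
# The beacon is now tracked 1 m behind the vehicle, outside any forward FOV.
```

The persisted position lets the MPC keep the trailing carts clear of a beacon that the forward-facing sensors can no longer see.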
Thus, in such embodiments, detected objects (such as detected reflective beacons) may persist within the system and be viewed by the model predictive control 315a as part of its functional collision avoidance and spatial awareness aspects, so the system may actually keep track of the vehicles (e.g., the
Boundary identification
Further embodiments may use boundary types to support various applications such as positioning, entry rejection, and automated mode selection as involved in exemplary collision avoidance systems and methods. In general, exemplary boundaries in the context of an exemplary collision avoidance system used on a mobile industrial vehicle (e.g., cargo tractor 115) may be identified by identifying markers placed in a physical environment. Virtual boundaries may be defined in software using geo-locating instrumentation included as part of an exemplary collision avoidance system (e.g., using the GPS location sensor 305c and location sensor data provided to the model predictive control software module 315a). Thus, a geo-reference (e.g., a geofence using GPS coordinate location data) can be used in mode selection and regional boundaries, as discussed in more detail below with respect to multi-mode operation of the system and triggers that can change operation between modes based on boundaries, as well as in denying entry to areas through geo-reference parameter selection.
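A software-defined virtual boundary reduces to a point-in-polygon test over locally projected GPS coordinates. The ray-casting rule, the rectangular gate-area polygon, and the flat-earth projection below are assumptions for illustration.

```python
# Hedged sketch of a geofence check: even-odd ray casting counts how many
# polygon edges a ray cast toward +x crosses; an odd count means inside.

def inside_geofence(pt, polygon):
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside
    return inside

gate_area = [(0.0, 0.0), (40.0, 0.0), (40.0, 30.0), (0.0, 30.0)]
in_gate = inside_geofence((10.0, 10.0), gate_area)    # True
out_gate = inside_geofence((50.0, 10.0), gate_area)   # False
```

Crossing from outside to inside such a polygon is one candidate trigger for the mode switching discussed below.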
Boundary and object identification using marker identification
In further embodiments, predetermined types/shapes of object identifiers (also referred to as markers) may be strategically placed outside of the vehicle in the environment to enable sensors on the vehicle to see, detect, and recognize them and generate sensor data that allows the exemplary collision avoidance system to react accordingly. In more detail, such object identifier markers may have a symbology of a shape and type as part of the marker to uniquely identify the object or bounding region (and orientation in some embodiments) by, for example, a predetermined code and/or size. An example of such object identifier marking may be implemented with ArUco markers, which allow a camera-based system to quickly and reliably identify unique markers and estimate range, angle, and orientation. Embodiments may use the ArUco marker to identify the boundaries of the moving
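With a marker of known physical size, a pinhole camera model gives range directly, and the marker's code indexes what it designates. The focal length, marker size, and ID-to-meaning table below are hypothetical values introduced only for this sketch.

```python
# Hedged sketch of marker-based ranging and lookup (ArUco-style markers):
# range = f_px * size_m / size_px under the pinhole model, and the decoded
# marker ID selects a meaning from a (hypothetical) table.

F_PX = 900.0            # camera focal length in pixels (assumed)
MARKER_SIZE_M = 0.20    # printed marker edge length in meters (assumed)

MARKER_TABLE = {        # hypothetical ID -> designation mapping
    7: "gate-area boundary",
    12: "driving-lane boundary",
    21: "keep-out object",
}

def marker_range_m(size_px):
    return F_PX * MARKER_SIZE_M / size_px

def identify(marker_id):
    return MARKER_TABLE.get(marker_id, "unknown")

rng = marker_range_m(size_px=45.0)     # 4.0 m
kind = identify(7)                     # "gate-area boundary"
```

In practice the marker detection itself would come from a library such as OpenCV's ArUco module; only the downstream range/lookup arithmetic is shown here.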
Exemplary operating modes-Driving Lane and aircraft boarding Port area
In still further embodiments, the example collision avoidance system may be programmatically configured to operate in different operating modes (e.g., using different operating parameters for vehicle operation, such as speed, what sensors to use, sensor settings, distance constraints, etc.) depending on the operating zone in which the vehicle having the collision avoidance system is operating. Additionally, embodiments may enable an exemplary collision avoidance system on a vehicle to independently switch between modes without communicating with a larger network.
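Mode-dependent operation can be sketched as a parameter table plus a marker-triggered switch: detecting a boundary marker swaps in that zone's operating parameters without any network round trip. The parameter values and the marker IDs that trigger each mode are assumptions for illustration.

```python
# Hedged sketch of independent multi-mode switching: each operating mode
# (DL = driving lane, AGA = aircraft gate area) carries its own parameters,
# and detected boundary-marker IDs trigger the transition locally.

MODE_PARAMS = {
    "DL":  {"speed_limit_mps": 8.0, "fov_deg": 60.0,  "min_kod_m": 4.0},
    "AGA": {"speed_limit_mps": 3.0, "fov_deg": 120.0, "min_kod_m": 2.0},
}

MODE_TRIGGERS = {7: "AGA", 12: "DL"}   # hypothetical marker-ID triggers

def switch_mode(current_mode, seen_marker_ids):
    for mid in seen_marker_ids:
        if mid in MODE_TRIGGERS:
            return MODE_TRIGGERS[mid]
    return current_mode

mode = switch_mode("DL", seen_marker_ids=[3, 7])    # crosses a gate marker
params = MODE_PARAMS[mode]                          # AGA parameters now apply
```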
In more detail, embodiments may have an airplane boarding gate area (AGA) mode and a Driving Lane (DL) mode. Each of the two exemplary operating modes AGA and DL functions within different operating parameters and uniquely utilizes system features during operation of a vehicle having the exemplary collision avoidance system 300 configured to operate in such different modes. For example, an exemplary DL mode is defined by the following boundary: this boundary separates the driving lane (e.g., the area where the
Multi-mode operation switching based on marker identification
Embodiments described herein may use the above-described exemplary object identifier tags (e.g., ArUco tags) to designate different regional boundaries for different operating modes of a collision avoidance system of a vehicle. The detection of these markers may provide input to an embodiment of the exemplary collision avoidance system 300 so that the system can detect such markers and identify such markers as related objects or boundaries (e.g., based on the encoding of a particular ArUco marker). The system may then responsively identify when to switch from a "forward protection detection mode" (e.g., DL mode), which is to be used outside of the restricted area (e.g., the boarding gate area associated with loading the aircraft), to a "boarding gate area type detection mode" (e.g., AGA mode) when entering the more restricted boarding gate area. In this manner, the example collision avoidance system 300 on the
In such embodiments, an exemplary object identifier marking (e.g., a specific ArUco marking) may be used to indicate where an object is located, such as where the
In these further embodiments, these system triggers may allow for further enhanced and improved collision avoidance responses in both environments, while minimizing situations where certain detections may unnecessarily stop the tractor/vehicle train. Preventing unnecessary system triggered stops/responses and allowing these changes in the operating mode (gate zone collision avoidance, which focuses primarily on beacons, and then outside gate zones, which focuses primarily on front protection collision avoidance) provides technical solutions and practical applications of the above-described system elements to even further improve collision avoidance and enhance safe logistics operations involving logistics vehicles, such as
Fig. 39 is a diagram of another exemplary tractor collision avoidance system operational view deployed in another exemplary logistics environment in accordance with an embodiment of the present invention. As shown in fig. 39, another embodiment having an
Dynamic field of view
In a front guard embodiment (e.g., when the exemplary collision avoidance system 300 is operating in a less restrictive DL mode), the collision avoidance portion of the overall system 300 may use sensors (e.g., sensors 305d, e, f) and the sensor data generated by such sensors in an improved and enhanced manner. In more detail, the example collision avoidance system 300 may dynamically adjust the field of view (FOV) of interest to the system 300, effectively changing where the sensors are focused and/or the effective width of the field the sensors receive. In such embodiments, this allows the exemplary collision avoidance system 300 to change, refine, and dynamically adjust for changes in the operating mode of the system. For example, the example collision avoidance system 300 (e.g., the multi-processor module 405 running the signal processing software module 310a) may make changes to prioritize a portion of the sensor data generated by the sensors, which effectively focuses the system 300 more on sensor data based on the direction of travel and/or the direction in which the
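Dynamic FOV adjustment can be sketched as computing an angular window of interest from speed and steering, then filtering sensor returns to that window. The width rule (wide when slow, narrow when fast) and the steering bias factor are illustrative assumptions.

```python
# Hedged sketch of dynamic field-of-view adjustment: derive the angular
# window of interest from vehicle state, then keep only returns inside it.

def fov_window_deg(speed_mps, steer_deg):
    # Faster travel -> narrower window; bias the center toward the turn.
    half_width = 60.0 if speed_mps < 2.0 else 25.0
    center = steer_deg * 0.5
    return (center - half_width, center + half_width)

def filter_returns(returns, window):
    """returns = [(az_deg, range_m)]; keep those inside the window."""
    lo, hi = window
    return [(az, r) for az, r in returns if lo <= az <= hi]

window = fov_window_deg(speed_mps=6.0, steer_deg=20.0)   # (-15.0, 35.0)
returns = [(-30.0, 12.0), (0.0, 9.0), (30.0, 6.0)]
kept = filter_returns(returns, window)
```

Prioritizing the kept returns lets downstream processing spend its budget on the sector the vehicle is actually heading into.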
Minimum keep-out distance lock (lockout)
As noted above, different operating modes of the example collision avoidance system 300 may have different operating parameters, and system features are utilized in different ways during operation of a vehicle whose example collision avoidance system 300 is configured to operate in such different modes. For example, in addition to the speed limiting operating parameters and field of view parameters for the particular sensors used in a given operating mode, a further exemplary operating parameter/characteristic that may be relevant to a particular operating mode of the exemplary collision avoidance system 300 is a minimum keep-out distance (KoD). Generally, the minimum KoD is the radial distance from an object at which the example collision avoidance system 300, using the vehicle actuation system, causes a complete and immediate vehicle stop. Thus, the exemplary minimum KoD lock allows for a full brake-to-stop response for objects that may enter the sensor FOV monitored by the exemplary collision avoidance system. Such an exemplary minimum KoD may differ between operating modes because the speeds involved in different operating modes may differ: a mode whose operating region permits higher speeds (e.g., under the speed limit parameters associated with that region's operating mode) may require a larger minimum KoD. Other areas, however, may have a desired minimum KoD that provides more distance to a detected object for reasons other than braking to a stop (e.g., an area containing objects of a hazardous nature may reasonably require a larger minimum KoD).
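A minimal sketch of the mode-dependent minimum KoD check follows, with hypothetical distances (the text specifies only that faster modes warrant larger KoDs, and that hazardous areas may warrant an additional margin):

```python
# Hypothetical minimum keep-out distances (KoD) per operating mode, in meters;
# the text does not give concrete values.
MIN_KOD_BY_MODE = {"DL": 6.0, "AGA": 3.0}

def should_full_stop(object_range_m, mode, hazard_margin_m=0.0):
    """Command an immediate full brake-to-stop when a detected object is within
    the mode's minimum keep-out distance, optionally enlarged for hazardous
    areas as the text suggests for objects of a dangerous nature."""
    return object_range_m <= MIN_KOD_BY_MODE[mode] + hazard_margin_m
```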
Local temporary system override (override)
A further feature of the example collision avoidance system 300 may be a local temporary system override. The local temporary system override feature of the exemplary system 300 allows a tractor operator to disable the system 300 on a time-limited basis. This may be accomplished by interacting with the gear selector 325 (i.e., one of the vehicle actuators controlled by the vehicle actuation feedback control 320a). For example, placing the gear selector 325 into a particular position may engage the time-limited override.
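The time-limited override can be sketched as a small state machine. The 30-second window and the `TemporaryOverride` name are assumptions; the text states only that the disablement is time limited:

```python
import time

class TemporaryOverride:
    """Minimal sketch of a local temporary system override: an operator action
    (e.g., a gear-selector interaction) arms the override, and collision
    avoidance re-enables itself automatically once the window expires."""
    def __init__(self, window_s=30.0, clock=time.monotonic):
        self._window = window_s          # hypothetical duration
        self._clock = clock
        self._armed_at = None

    def engage(self):
        """Operator engages the time-limited override."""
        self._armed_at = self._clock()

    def collision_avoidance_enabled(self):
        """True unless an unexpired override is in effect."""
        if self._armed_at is None:
            return True
        if self._clock() - self._armed_at >= self._window:
            self._armed_at = None  # override expired; system re-enables
            return True
        return False
```

Injecting the clock keeps the sketch testable without waiting on wall time.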
Further embodiments of the exemplary collision avoidance system 300 may include additional software-based modules in certain logical segments that serve further roles within the system and enhance the operation of the system 300, including a user interface with which to enter information (such as cart count information) and monitor the status of the system 300, in accordance with embodiments of the present invention. In more detail, further embodiments may include an exemplary sensor platform management software-based module as part of the sensor processing segment 310 shown in FIG. 3, while a system management software-based module may be part of the collision avoidance control segment 315 shown in FIG. 3. Embodiments of the exemplary sensor platform management and system management modules may provide system initiation functionality, user input for mode selection, communication features, and generation of different types of user interfaces for the exemplary collision avoidance system 300.
For example, such exemplary modules may include an auto-start feature for the exemplary system 300, wherein upon system power-up, the vehicle actuation feedback controller 415 initiates a start-up cycle for the remaining system components in the system 300, which results in the automatic enablement of the exemplary collision avoidance system 300 upon completion of system initialization.
In another example, such exemplary modules may include a software-based network connection for the exemplary system 300. While embodiments of the exemplary collision avoidance system 300 have numerous features and operational scenarios in which the system 300 operates in an autonomous or semi-autonomous mode that does not require connection to a larger network and to systems over such networks, the inclusion of network connections (e.g., over Wi-Fi, cellular, or other wireless technologies) includes the ability to allow remote system monitoring and manual commands to various system states and parameters of the system 300, as well as to receive updated information regarding a particular operating environment (e.g., identification information regarding particular object identification tags used within a particular aircraft environment, etc.).
From a user interface perspective, such exemplary modules may enable the system 300 to present one or more different graphical user interfaces, including remote visualizers and/or status indicators. An exemplary graphical user interface generated by such modules as part of the exemplary collision avoidance system 300 may provide an intuitive interface for user input of adjustable system parameters (such as cart count information). An exemplary remote system visualizer may provide a graphical representation of, for example, MPC calculations and control response solutions. Further, an exemplary status indicator module (implemented as part of such exemplary sensor platform management and system management modules) may communicate the current system status and high-level actions of the exemplary collision avoidance system 300 to a driver of the vehicle.
Further example embodiments are illustrated in FIGS. 40-41, in which different operating regions are shown and in which an example vehicle and its onboard collision avoidance system may be switched from DL mode to AGA mode, engaging different collision avoidance system operating parameters and functions in an autonomous and automatic manner that enhances the collision avoidance capabilities of such an example vehicle. FIG. 40 is a diagram of another exemplary tractor collision avoidance system operational view deployed in another exemplary logistics environment, wherein the exemplary tractor collision avoidance system operates in an exemplary Driving Lane (DL) mode in accordance with an embodiment of the present invention. Referring now to fig. 40, exemplary
Fig. 41 is a diagram of the exemplary logistics environment of fig. 40, but wherein the exemplary tractor collision avoidance system on
FIG. 42 is a flow diagram of an exemplary method for enhanced collision avoidance by a mobile industrial vehicle using a multi-mode on-board collision avoidance system, and which may operate in a plurality of different operating regions, in accordance with an embodiment of the present invention. Referring now to FIG. 42,
At
At
In more detail, the second collision avoidance mode and the first collision avoidance mode may differ with respect to which operational features of the multi-mode collision avoidance system are used in each mode. For example, at least one additional operational feature of the multi-mode collision avoidance system may be used in the second collision avoidance mode (e.g., the AGA mode) when compared to the operational features used in the first collision avoidance mode (e.g., the DL mode). Such additional (or different) operational features may include, for example: a minimum keep-out distance threshold feature for preventing the mobile industrial vehicle from moving within a minimum keep-out distance threshold of an object detected by the sensor; an object persistence feature for tracking a detected object after the detected object leaves the field of view of the sensor; an altered field of view feature for altering the field of view of the sensor to enhance collision avoidance when operating in the second collision avoidance mode; and/or a dedicated object detection feature for detecting reflective beacons separately from, and in addition to, other objects when operating in the second collision avoidance mode.
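The mode-dependent feature sets described above can be sketched as simple feature flags; the flag names are hypothetical labels for the features listed in the text:

```python
# Hypothetical feature flags per operating mode, reflecting the additional
# operational features the text associates with the second (AGA) mode.
MODE_FEATURES = {
    "DL": {"front_guard"},
    "AGA": {"front_guard", "min_keep_out_lock", "object_persistence",
            "altered_fov", "beacon_detection"},
}

def added_features(new_mode, old_mode):
    """Features that become active when the system switches modes."""
    return MODE_FEATURES[new_mode] - MODE_FEATURES[old_mode]
```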
End of new matter
Additional details regarding exemplary dynamic path following or kinematic models
As noted above, embodiments may use dynamic path following or kinematic models as part of the applied and enhanced system, apparatus and method embodiments that relate to predicting future states (e.g., of movement and path) of a multi-element mobile industrial vehicle, such as a cargo tractor with a towed vehicle or trailer, as part of an improved embodiment for avoiding collisions with high value assets by the mobile industrial vehicle.
In this particular description of embodiments of such exemplary dynamic path following or kinematic models that may be deployed as part of the applied and enhanced system, apparatus, and method embodiments, the following abbreviations are used:
t: current time; Δt: time step
u(0): initial displacement; u(t): displacement at time t
u(t+Δt): displacement at t+Δt; v(0): initial linear velocity
v(t): linear velocity at time t; v(t+Δt): linear velocity at t+Δt
a(0): initial linear acceleration; a(t): linear acceleration at time t
a(t+Δt): linear acceleration at t+Δt
θ(0): initial direction angle; θ(t): direction angle at time t
θ(t+Δt): direction angle at t+Δt; ω(0): initial angular velocity
ω(t): angular velocity at time t; ω(t+Δt): angular velocity at t+Δt
α(0): initial angular acceleration; α(t): angular acceleration at time t
α(t+Δt): angular acceleration at t+Δt
w: vehicle width; l: vehicle length
L_f: hitch length at the vehicle front
L_r: hitch length at the vehicle rear
β: steering angle; WB: wheelbase
L_a: distance from the towing vehicle's rear axle to its hitch point
L_b: distance from the previous hitch point to the towed vehicle's rear axle
L_c: distance from the towed vehicle's rear axle to the next hitch point
R_ra0: rear axle radius of the towing vehicle
R_rai: rear axle radius of the i-th towed vehicle
R_h0: hitch radius of the towing vehicle
R_hi: hitch radius of the i-th towed vehicle
Subscripts x, y: components in the X and Y directions
Subscript d: towed vehicle
Subscript i: the i-th vehicle; i = 0 is the towing vehicle, and i = 1 to 4 are the towed units.
In general, the embodiments described below of an exemplary dynamic path following or kinematic model (including fig. 11-19) that may be deployed as part of the applied and enhanced system, apparatus, and method embodiments predict continuous motion of a towing vehicle system and follow its trajectory. The exemplary model addresses off-track effects that occur when a towing vehicle system makes a turn. The framework of the exemplary model includes: (1) a state space model that describes the relationship between the moving elements (linear and angular position, velocity and acceleration) of a towing vehicle and its towed vehicle (e.g., a cart and/or trailer); (2) a geometric model that locates the instantaneous position of these vehicles (including the instantaneous position of the hitch point); (3) an Ackerman (Ackerman) steering model that outlines the shape of the entire towing vehicle system at any time by taking into account off-track effects; and (4) a hitch return (back) model that calculates a history of the towed vehicle's heading angle based on the towing vehicle's inputs, and thus captures continuous motion of the towing vehicle system.
In previous attempts to solve the problem of more accurately tracking the continuous motion of a towing vehicle system (a form of mobile industrial vehicle), considerable errors were found when comparing the path predicted by the model with the true path of the towing vehicle system. Previous attempts to model this behavior assumed that the following towed vehicles followed the same path as the towing vehicle and ignored off-track effects. While others have attempted to solve this problem using, for example, kingpin-slipping techniques and movable-junction techniques to eliminate the off-track deviation of vehicle trains, these techniques are too costly to implement, and most towed vehicle systems still suffer from off-track problems. Therefore, to improve prediction accuracy, an improved dynamic model that accounts for off-track effects was developed, as described in more detail below.
In vehicle systems like trains of towed carts, the off-track effect means that, compared to the towing vehicle, the towed vehicles always follow a tighter path around corners, and the more units (trailers) that are towed, the more each subsequent trailer follows a tighter path than the trailer before it. As shown in FIG. 11, the example dynamic modeling framework 1105 operates as follows: a state space model is employed to calculate the instantaneous position and speed of the towing vehicle based on Newton's second law, and the positions of the subsequent towed vehicles are then estimated by assuming that each towed unit follows the same path (sequence of heading angles) as the towing vehicle. Equation (1), shown below, lists the state space model, which calculates the towing vehicle's instantaneous position and velocity based on its initial conditions collected from the IMU.
Equation (1):

u_x(t+Δt) = u_x(t) + v_x(t)·Δt + ½·a_x(t)·Δt²
u_y(t+Δt) = u_y(t) + v_y(t)·Δt + ½·a_y(t)·Δt²
v_x(t+Δt) = v_x(t) + a_x(t)·Δt
v_y(t+Δt) = v_y(t) + a_y(t)·Δt
θ(t+Δt) = θ(t) + ω(t)·Δt + ½·α(t)·Δt²
ω(t+Δt) = ω(t) + α(t)·Δt
FIG. 12 is a diagram of an exemplary single rigid object model, according to an embodiment of the present invention. Assume that the towing vehicle and towed unit are rigid objects (such as object 1205) with three degrees of freedom: translating in the X and Y directions and rotating about Z (as shown in fig. 12), the state space model can be represented as equation (1). The position calculated according to equation (1) represents the position of a reference point at the towing vehicle and the real-time shape of the towing vehicle will be determined based on the coordinates of this point and its dimensions. The same method is then applied to determine the instantaneous shape of the following vehicle.
In equation (1), u_x and u_y respectively represent the X and Y positions of a reference point on the rigid object (e.g., the center of the front end of the towing vehicle or towed unit). Based on the reference point, the positions of other points within the rigid object can be easily determined from geometric relationships. As a rigid object, every point on the towing vehicle or on each towed unit has the same direction, speed, and acceleration. The linear velocity and acceleration in the X and Y directions are related to the direction angle θ, as expressed in equation (2) shown below:
Equation (2):

v_x(t) = v(t)·cos θ(t),  v_y(t) = v(t)·sin θ(t)
a_x(t) = a(t)·cos θ(t),  a_y(t) = a(t)·sin θ(t)
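Assuming the standard constant-acceleration kinematic form implied by the abbreviation list, one step of the state space update of equation (1) and the component resolution of equation (2) can be sketched as follows (the along-path scalar form is a simplification for illustration):

```python
import math

def propagate(state, dt):
    """One discrete step of the towing-vehicle state-space model: along-path
    displacement u, linear velocity v, direction angle theta, and angular
    velocity omega are advanced under (assumed) constant linear acceleration a
    and angular acceleration alpha over the time step dt."""
    u, v, a, theta, omega, alpha = state
    u_next = u + v * dt + 0.5 * a * dt**2
    v_next = v + a * dt
    theta_next = theta + omega * dt + 0.5 * alpha * dt**2
    omega_next = omega + alpha * dt
    return (u_next, v_next, a, theta_next, omega_next, alpha)

def xy_components(v, theta):
    """Equation (2): resolve a speed along the direction angle into X and Y
    components."""
    return v * math.cos(theta), v * math.sin(theta)
```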
The real-time heading angle of the towing vehicle calculated according to equation (1) is then used to predict the heading angle of the following towed unit to fully determine the shape of the entire towing vehicle system at any time. In estimating the angle of the towed vehicle, previous models assumed that the towed vehicle followed the same history of angular positions as the towing vehicle. In other words, the instantaneous heading angle of the towing vehicle is transmitted to the following vehicle with a suitable time delay that depends on the stiffness of the connection between two adjacent vehicles.
FIG. 13 is a diagram of an exemplary mobile towing vehicle system having four towed units in accordance with an embodiment of the present invention. Referring now to FIG. 13, an exemplary mobile towing vehicle system 1300 is shown as polygons representing a towing vehicle 1305 and a series of towed units (e.g., carts or trailers) 1310a-1310d linked by hitches 1315a-1315d. Based on the calculated or estimated positions of the towing vehicle 1305 and its sequence of towed units 1310a-1310d, a polygon model is developed to predict the instantaneous shape of the exemplary mobile towing vehicle system 1300 at any time. A beacon system is used to update the instantaneous position of a reference point on the towing vehicle (e.g., the middle of its front end), which is an input parameter to the model. In the developed polygon model, the exemplary towing vehicle and each exemplary towed unit are assumed to be rectangular with four vertices, and the shape of the entire towing vehicle system may then be represented as a polygon formed by line segments connecting all vertices. Equation (3) below explains how to calculate the global coordinates (relative to reference point O) of the four vertices of the i-th towed unit.
Equation (3)
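The rectangular-vehicle assumption of the polygon model can be sketched as follows; the corner ordering and the front-end-midpoint reference are assumptions consistent with the text:

```python
import math

def rectangle_vertices(ref_xy, heading, width, length):
    """Global coordinates of the four corners of a rectangular vehicle whose
    reference point is the middle of its front end (as the text assumes for
    the towing vehicle), rotated by the direction angle `heading`."""
    rx, ry = ref_xy
    c, s = math.cos(heading), math.sin(heading)
    # Local corner offsets: front-left, front-right, rear-right, rear-left
    local = [(0, width / 2), (0, -width / 2),
             (-length, -width / 2), (-length, width / 2)]
    # Rotate each local offset by the heading, then translate by the reference.
    return [(rx + c * lx - s * ly, ry + s * lx + c * ly) for lx, ly in local]
```

Connecting these per-vehicle rectangles in sequence yields the polygon outline of the whole towing vehicle system.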
However, by assuming that each towed unit follows the same path (sequence of heading angles) as the towing vehicle, the off-track effect is ignored. In fact, due to off-track effects, the towed vehicle follows a tighter path around the corner than the towing vehicle. When using previously known models to predict the shape of a mobile towing vehicle system having more than two towed units, the omission of this effect contributes to significant errors.
Embodiments of the improved model may track the path of a towing vehicle system. FIG. 14 is a diagram of an exemplary geometric model of a towing vehicle system having an exemplary towing vehicle 1405 and two towed vehicle units 1410a, 1410b, showing hitch points H_1 and H_2, according to an embodiment of the present invention. As part of such an embodiment, the geometric model determines the coordinates of all vertices of the mobile towing vehicle system at any time, based on which the instantaneous shape of the system can be easily mapped. The model allows the connection between the towing vehicle and the towed vehicle, and between any two adjacent towed vehicles, to be represented as: a rigid link (e.g., 1415a, 1415b) from the rear-end midpoint of the towing vehicle (or of the towed vehicle in front) to the hitch point; and another rigid link (e.g., links 1420a, 1420b) from the hitch point to the front-end midpoint of the towed vehicle (or of the towed vehicle behind). This modeling of the connection allows a steering model to be implemented that captures the vehicle system's off-track effects.
Referring to the exemplary geometric model shown in FIG. 14, various labels and abbreviations are used. For example, w_t denotes the width of the towing vehicle and l_t its length; w_d denotes the width of a towed unit and l_d its length; L_r denotes the length of the hitch attached to the rear end of the towing vehicle (from vertex 5 to H_1); and L_f denotes the length of the hitch attached to the front end of the first towed unit (from H_1 to O'). The coordinates of vertices 1-5 of the towing vehicle model and of hitch point H_1 can be calculated as:
equation (4)
The coordinates of 1' to 5' and of H_2 with respect to the local reference point O' can be represented in the same manner:
equation (5)
Similarly, for the i-th towed unit, the relative coordinates of its five vertices and of H_i with respect to its local reference point can be easily expressed as:
equation (6)
Next, the relative coordinates of the four vertices of the first towed unit (equation (4)) are mapped back to the global reference point O in order to obtain their global coordinates. This operation may be performed by transforming the reference point from O' to O. To find the mapping relationship, three vectors OH_1, H_1O', and OO' may be used to construct a triangle ΔOH_1O', as illustrated in FIG. 15 with an exemplary towed unit 1505. The two vectors OH_1 and H_1O' have lengths |OH_1| = l_t + L_r and |H_1O'| = L_f, and their orientations are indicated by the angles θ_0 and θ_1. According to the law of cosines and the law of sines, the triangle ΔOH_1O' can be completely solved, and the coordinates of O' can then be easily mapped to the coordinates of O. Thus, the global coordinates of the four vertices (1' to 4') of the first towed vehicle can be calculated as: Equation (7)
Examining equation (7) and combining it with equation (6), the global coordinates of the four vertices of the i-th towed unit (related to reference point O) can be obtained as:
equation (8)
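The mapping of a towed unit's local vertex coordinates back to the global reference point O is, at bottom, a rotate-and-translate step, which can be sketched as:

```python
import math

def local_to_global(local_xy, origin_xy, origin_heading):
    """Map a point given in a towed unit's local frame (origin O', x-axis along
    its heading) into the global frame. This is the rotate-and-translate step
    by which equations (7)-(8) chain each unit's vertices back to reference
    point O."""
    lx, ly = local_xy
    ox, oy = origin_xy
    c, s = math.cos(origin_heading), math.sin(origin_heading)
    return (ox + c * lx - s * ly, oy + s * lx + c * ly)
```

Applying this step once per towed unit, from front to rear, accumulates each local frame into global coordinates.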
In order to correctly calculate the turning radii of the towing and towed vehicles when the towing vehicle system makes a turn, an Ackermann steering model may be used. Those skilled in the art will appreciate that the Ackermann steering principle defines the geometry applied to all vehicles in a towing vehicle system with respect to the turning angle of the steering wheel, referred to as the steering angle β. With this principle, the radii of several key points of the vehicle system can be determined, on the basis of which the position of each towed unit relative to the towing vehicle can be determined and the path of the entire system can be simulated very well. By using the Ackermann steering principle as part of this embodiment of the new path-following model, an improved and enhanced description of the instantaneous position of the towing vehicle and each towed vehicle, which takes into account the maximum deviation trajectory, can be achieved. Embodiments of such a model are further explained below.
Fig. 16 illustrates a simplified exemplary vehicle system having one towing vehicle (tractor) 1600 and one towed vehicle (trailer) 1605 and various distance reference lengths in a tractor-trailer model according to embodiments of the present invention. Fig. 17 is a diagram of an exemplary scale model of an
Equation (9)
where γ_1 denotes the difference between the heading directions of the towing vehicle and the first towed vehicle, following the relation illustrated in FIG. 14. It should be mentioned that, with the presented steering model, the radial position of any point on the vehicle system can be calculated in a manner similar to equation (9); only the equations used to calculate the rear axle radius and hitch radius are shown here, for testing and verification purposes. The front axle radius of the towing vehicle and its position are completely determined by the kinematic model or state space model shown in equation (1) and need not be estimated from trigonometric relations. Furthermore, equation (9) can be easily modified, by simply replacing the steering angle and dimensions of the towing vehicle with those of the towed vehicle, and applied to calculate the radius of any towed vehicle. Equation (10) shows a general formula for calculating the rear axle and hitch radii of the i-th towed vehicle, assuming that subsequent towed vehicles are of the same size and have the same L_b and L_c.
Equation (10)
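Under the standard Ackermann geometry with rigid hitch links, the radii in equations (9)-(10) reduce to right-triangle relations about the common center of rotation. The formulas below are an assumed reconstruction, since the equation bodies are not reproduced in this text:

```python
import math

def towing_vehicle_radii(wheelbase, beta, La):
    """Rear-axle and hitch turning radii of the towing vehicle for steering
    angle beta, using the standard Ackermann relation R_ra0 = WB / tan(beta)
    and a hitch located La behind the rear axle."""
    R_ra0 = wheelbase / math.tan(beta)
    R_h0 = math.hypot(R_ra0, La)   # hitch sweeps a larger circle
    return R_ra0, R_h0

def towed_unit_radii(R_h_prev, Lb, Lc):
    """Rear-axle and hitch radii of the next towed unit, assuming a rigid link
    of length Lb from the previous hitch to its rear axle and Lc from its rear
    axle to the next hitch (the recursive form suggested by equation (10))."""
    R_ra = math.sqrt(R_h_prev**2 - Lb**2)  # axle is perpendicular to the radius
    R_h = math.hypot(R_ra, Lc)
    return R_ra, R_h
```

Iterating `towed_unit_radii` down the train shows the off-track effect directly: each successive rear-axle radius is smaller than the one before it.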
Hitch return method for path-following simulation
The Ackermann steering model helps predict the instantaneous shape of the towing vehicle system, but it lacks the ability to simulate the system's continuous motion, presenting only its intermittent steps in sequence. For example, if the towing vehicle system is traveling in a straight line and the steering wheel of the towing vehicle is quickly turned 10° away from the forward direction, the towing vehicle and all towed units would immediately adjust themselves into the appropriate radial positions calculated according to equations (9) and (10), rather than gradually moving into those positions.
To more accurately simulate the continuous motion of a towing vehicle system, a "hitch return method" was developed that employs the instantaneous shapes calculated from the Ackermann steering model as references while continuously following the path of the towing vehicle system with high accuracy. Referring now to FIGS. 18A-18C, the method begins with a simple model of one tractor 1800 (with rear hitch 1805) and one trailer 1815 (with front hitch 1810) in three states: (1) an initial state (FIG. 18A), when the model is traveling in a straight line; (2) an intermediate state (FIG. 18B), when the tractor begins to make a turn while the trailer is still traveling in that straight line; and (3) a final state (FIG. 18C), when the tractor's angle input has been transferred to the trailer.
Since the heading angle of the towing vehicle is fully determined from the IMU data, we need only develop the process of estimating the angular increment Δθ_d1 of the towed vehicle from the initial state to the final state. Following the trigonometric relationship illustrated in FIG. 19, the angular increment Δθ_d1 can be calculated from the X and Y offsets between the towed vehicle's forward hitch point (point 1) and its rear axle center point (point 2) as follows:
equation (12)
An exemplary program was generated based on the developed hitch return model (equations (11)-(12)) and implemented in a simulation software package using C++ programming. The continuous motion of an exemplary towing vehicle system having two towed units was successfully simulated using the simulation tool; the simulated path of the towing vehicle system model closely matches the real path measured from the scale model, and the off-track effects when the towing vehicle makes a turn are properly accounted for. It is worth mentioning that, in the simulation, the speed of the towing vehicle and its steering angle are the input variables, from which the angular speed of the towing vehicle can be calculated as ω = v / R_ra, where R_ra is the rear axle radius; the center point of the rear axle is assumed to be the center of rotation of the vehicle because the front wheels are free wheels that generate the steering angle for the vehicle body to follow. The kinematics of the towed vehicle, including its displacement, velocity, acceleration, rotation angle, and angular velocity and acceleration, follow the relationships described by Newton's second law and may be calculated using the state space model (equation (1)).
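The gradual transfer of the towing vehicle's heading to the trailer can be sketched with the standard kinematic trailer relation dθ_d/dt = (v / L_b)·sin(θ_t − θ_d). This is an assumed stand-in for the hitch return increment of equation (12), whose exact form is not reproduced in this text:

```python
import math

def trailer_heading_step(theta_towing, theta_towed, v, Lb, dt):
    """One time step of the towed unit's heading angle. The heading relaxes
    toward the towing vehicle's heading at a rate set by the towing speed v
    and the hitch-to-rear-axle length Lb (assumed standard trailer kinematics,
    not the verbatim equation (12))."""
    return theta_towed + (v / Lb) * math.sin(theta_towing - theta_towed) * dt
```

Iterating this step reproduces the behavior described for FIGS. 18A-18C: after a step steering input, the trailer heading converges gradually to its final radial position rather than jumping there.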
Additional details regarding multi-sensor detection system embodiments and operation thereof
Further exemplary embodiments may include systems, apparatuses, and methods in which an industrial vehicle (such as a freight tractor) utilizes LiDAR and monochrome cameras to detect passive beacons, and utilizes model predictive control to stop the vehicle from entering a constrained space. In such embodiments, a beacon may be implemented solely with a standard orange traffic cone (depending on the desired elevation), or may be deployed with a highly reflective vertical pole attached. LiDAR may detect these beacons, but may suffer from false positives due to other reflective surfaces, such as a worker's safety vest within the LiDAR's visual environment. As noted above, the embodiments described herein and below help reduce false-positive detections from LiDAR by detecting beacons in the camera image via a deep learning method and using the neural-network-learned projection from camera space to LiDAR space to verify the detections.
In more detail, further embodiments described below (and illustrated with reference to the diagrams in FIGS. 20-37) provide and utilize a substantially real-time industrial collision avoidance sensor system designed to avoid impacting obstacles or personnel and to protect high-value equipment. In general, such embodiments may utilize a scanning LiDAR and one or more RGB cameras. Passive beacons are used to mark isolated areas that industrial vehicles are not allowed to enter, thereby preventing collisions with high-value equipment. A forward guard processing mode prevents collisions with objects directly in front of the vehicle.
To provide a robust system, the sensing processing system of such embodiments may use a LiDAR sensor (e.g., a Quanergy eight-beam LiDAR) and a camera sensor (e.g., a single RGB camera). A LiDAR sensor is an active sensor that works regardless of natural lighting, and it can accurately locate objects via its 3D reflections. However, LiDAR is monochromatic and cannot distinguish objects based on color. Moreover, for distant objects, LiDAR may have only one or two beams that intersect the object, making reliable detection problematic. Unlike LiDAR, RGB cameras can make detection decisions based on texture, shape, and color. An RGB stereo camera may be used to detect objects and estimate 3D positions; however, although embodiments may use more than one camera, the use of stereo cameras typically requires a significant amount of additional processing and can make depth estimation difficult when objects lack texture cues. A single RGB camera, on the other hand, can accurately locate objects in the image itself (e.g., determine bounding boxes and classify objects), but the resulting localization when projected into 3D space is poor compared to LiDAR. Furthermore, cameras degrade in foggy or rainy environments, while LiDAR may still operate effectively.
In the description of further embodiments that follows (illustrated with reference to the diagrams in FIGS. 20-37), embodiments may use both LiDAR sensors and RGB camera sensors to accurately detect (e.g., identify) and locate objects using a data fusion process that allows both types of data to be used when detecting objects or identifying object locations. Such embodiments better address collision avoidance using, for example: a fast and efficient method of learning a projection from camera space to LiDAR space and providing camera output in the form of LiDAR detections (distance and angle); a multi-sensor detection system that fuses both camera and LiDAR detections to obtain more accurate and robust beacon detection; and/or a technical solution implemented using a single Jetson TX2 (dual CPU and GPU) board (a type of multi-processor module) to run sensor processing, with a separate second controller (TX2) for the model predictive control (MPC) system, to help achieve substantially near-real-time operation and avoid lag time (which may lead to collisions). In the context of the description of further embodiments below, contextual information regarding certain types of sensor detection (e.g., camera detection, LiDAR detection) and their use for subsequent object detection, as well as contextual information regarding fuzzy logic, may be applied in embodiments to combine data from different sensors and obtain detection scores for use in such embodiments.
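The fusion step can be sketched as matching each LiDAR detection against camera detections projected into LiDAR space (range, angle). The matching tolerances and the score combination below are hypothetical stand-ins for the fuzzy-logic scoring the text describes:

```python
def fuse_detections(lidar_dets, camera_dets, max_sep_deg=5.0, max_sep_m=1.0):
    """Minimal sketch of multi-sensor fusion: a LiDAR beacon detection
    (range_m, angle_deg, score) is confirmed only when a camera detection,
    already projected into LiDAR space by the learned projection, lands
    nearby. Confirmed detections get a combined score (simple average here,
    standing in for the fuzzy-logic combination described in the text)."""
    fused = []
    for l_rng, l_ang, l_score in lidar_dets:
        for c_rng, c_ang, c_score in camera_dets:
            if abs(l_ang - c_ang) <= max_sep_deg and abs(l_rng - c_rng) <= max_sep_m:
                fused.append((l_rng, l_ang, 0.5 * (l_score + c_score)))
                break  # confirmed by both sensors; keep the LiDAR position
    return fused
```

Unmatched LiDAR hits (e.g., a reflective safety vest with no camera-confirmed beacon nearby) are simply dropped, which is how the fusion suppresses LiDAR false positives.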
Those skilled in the art will appreciate that object detection from camera images may involve both classification and localization of each object of interest. Information about how many objects are expected to be found in each image may not be available or unknown, which means that there is a different number of outputs for each input image. Additionally, the locations in the image where these objects may appear, or how much of their size may be unavailable or unknown. Those skilled in the art will further appreciate that with the advent of deep learning (referred to herein as "DL" -also referred to as deep structured learning or layered machine learning), existing object detection methods using DL have outperformed many traditional methods in terms of both accuracy and speed. Those skilled in the art will further appreciate that there are systems that improve detection results in a computationally intelligent manner based on such existing DL detection methods.
Generally, image object detection using DL and camera images may follow one of two known approaches. One approach is based on region proposals; Faster R-CNN (a faster region-based convolutional neural network) is an example. This method first runs the entire input image through several convolutional layers to obtain a feature map. A separate region proposal network then uses these convolutional features to propose possible detection regions, and finally the rest of the network classifies the proposed regions. This kind of architecture may significantly reduce processing speed, since there are two parts in the network: one for predicting bounding boxes and another for classification. The other type of approach uses a single network both for predicting potential regions and for label classification, such as the "You Only Look Once" (YOLO) approach. Given an input image, the YOLO method first divides the image into a coarse grid. For each grid cell, there is a set of base bounding boxes. For each base bounding box, if YOLO considers an object to be present at that grid location, YOLO predicts an offset from the true location, a confidence score, and a classification score. YOLO is fast, but small objects in the image may sometimes go undetected.
As an alternative to object detection based on camera sensor data, a method may use LiDAR detection. For LiDAR-based detection approaches, one difficult part may involve classifying points based only on sparse 3D point clouds. One skilled in the art will appreciate that one approach may use eigen-feature analysis of a weighted covariance matrix with a Support Vector Machine (SVM) classifier. However, this method is directed to dense airborne LiDAR point clouds. In another known approach, one skilled in the art will appreciate that a feature vector is classified for each candidate object against a training set of manually labeled object positions.
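The eigen-feature analysis mentioned above can be illustrated with a minimal sketch. The features shown (linearity, planarity, scattering) are common eigenvalue-based shape descriptors used as inputs to an SVM-style classifier; the function name and feature set are assumptions for illustration, not the specific prior-art formulation.

```python
import numpy as np

def eigen_features(points):
    """Compute eigenvalue-based shape features from the covariance of a
    3D point neighbourhood (points: N x 3 array).  Linear structures have
    one dominant eigenvalue; planes have two; volumes have three."""
    cov = np.cov(points.T)                          # 3x3 covariance matrix
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
    l1, l2, l3 = evals
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    scattering = l3 / l1
    return linearity, planarity, scattering
```

For a neighbourhood sampled along a straight edge, linearity approaches 1 while scattering approaches 0, giving the classifier a compact, rotation-invariant description of local geometry.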
Those skilled in the art will further appreciate that DL has also been used for 3D object classification. Many existing DL-based 3D object classification approaches involve two steps: determining a data representation to be used for the 3D object, and training a Convolutional Neural Network (CNN) on that representation of the object. VoxNet is a 3D CNN architecture that can be used for efficient and accurate object detection from LiDAR and RGBD point clouds. An example of DL for volumetric shapes is the Princeton ModelNet dataset, which proposes a volumetric representation of a 3D model and a 3D volumetric CNN for classification. However, these solutions also rely on high-density (high beam-count) LiDAR, so they would not be suitable for systems with an eight-beam Quanergy M8 LiDAR sensor, which is an economically viable LiDAR sensor for deployment on mobile industrial vehicles such as cargo tractors.
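The volumetric representation consumed by a VoxNet-style 3D CNN can be sketched as follows. The 32-voxel grid and 2-meter extent are assumed parameters and the function name is hypothetical; the sketch only illustrates the first of the two steps described above (choosing a data representation).

```python
import numpy as np

def voxelize(points, grid=32, extent=2.0):
    """Convert a 3D point cloud (N x 3, coordinates centred on the
    object) into a fixed-size binary occupancy grid suitable as input
    to a 3D convolutional network."""
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    # Map each coordinate from [-extent/2, extent/2] to a voxel index.
    idx = np.floor((points / extent + 0.5) * grid).astype(int)
    idx = idx[((idx >= 0) & (idx < grid)).all(axis=1)]  # drop out-of-range
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox
```

With a sparse eight-beam sensor, most voxels remain empty, which is precisely why dense-LiDAR-oriented volumetric networks transfer poorly to sensors such as the Quanergy M8.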
In systems that fuse different data for object detection, as in the embodiments described herein, the different sensors used for object detection have their own advantages and disadvantages. Embodiments of sensor fusion may integrate different sensors for more accurate and robust detection. For example, in object detection, a camera may provide rich texture-based and color-based information that is typically lacking in LiDAR. On the other hand, LiDAR may operate in low visibility, such as at night or in fog or rain, whereas camera processing in severe weather conditions may degrade or even fail completely. Also, for detecting object position relative to the sensor, LiDAR may provide a much more accurate estimate of spatial coordinates than a camera. Since both cameras and LiDAR have their advantages and disadvantages, embodiments that improve and enhance object detection based on fused data may exploit their respective strengths and mitigate their weaknesses when they are fused together. One method for camera and LiDAR fusion uses external calibration (e.g., one method uses various checkerboard patterns, while another finds corresponding points or edges in both the LiDAR and camera images in order to perform the external calibration). However, this known approach requires an expensive LiDAR sensor with relatively high vertical resolution (e.g., based on 32 or 64 beams). Another method estimates a transformation matrix between the LiDAR and the camera. These methods are limited and only suitable for modeling indoor and short-range environments.
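Once the external calibration described above is known, LiDAR points can be projected into the camera image so that the two modalities refer to the same pixels. Below is a minimal sketch under the usual pinhole-camera model; the function and parameter names are hypothetical (R, t denote the assumed extrinsic rotation and translation from the LiDAR frame to the camera frame, and K the camera intrinsic matrix).

```python
import numpy as np

def project_lidar_to_image(pts_lidar, R, t, K):
    """Project 3D LiDAR points (N x 3) into camera pixel coordinates
    using extrinsics (R, t) and pinhole intrinsics K."""
    pts_cam = pts_lidar @ R.T + t   # LiDAR frame -> camera frame
    uvw = pts_cam @ K.T             # apply pinhole intrinsics
    return uvw[:, :2] / uvw[:, 2:3] # perspective divide -> (u, v) pixels
```

A point on the camera's optical axis projects to the principal point, which provides a quick sanity check on a candidate calibration.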
Another approach uses a similarity metric to automatically register LiDAR and optical images. However, this method also uses dense LiDAR measurements. A third approach fuses stereo cameras and LiDAR, combining a sparse 3D LiDAR point cloud with a dense stereo-image point cloud. However, matching corresponding points in a stereo image pair is computationally complex and prone to error when the image contains little texture. Both of these methods require dense point clouds and will not be effective with lower-resolution LiDAR such as the Quanergy M8. In contrast to previous approaches, the embodiments described herein differ in a unique and inventive manner, such as by using a single camera and a relatively inexpensive eight-beam LiDAR for an outdoor collision avoidance system, which avoids latency that may be intolerable for a substantially real-time collision avoidance system.