System and method for enhanced collision avoidance on logistics floor support equipment using multi-sensor detection fusion

Document No.: 1047599 · Publication date: 2020-10-09

Note: This technology, "System and method for enhanced collision avoidance on logistics floor support equipment using multi-sensor detection fusion," was designed and created on 2019-02-26 by J·E·巴尔, R·F·布希五世, L·D·卡格莱, C·S·达文波特, J·R·加福德, and T·J·. Abstract: An enhanced system and method for collision avoidance on a mobile industrial vehicle using multi-sensor data fusion for high value assets with reflective beacons in their surroundings. The system has: a sensing processing system with LiDAR and camera sensors; and a multi-processor module responsive to the different sensors. The sensing processing system fuses the different sensor data to locate the reflective beacons. A model predictive controller on the vehicle determines possible control solutions, wherein each control solution defines a threshold allowable speed of the vehicle at a discrete time based on an estimated path to a breakthrough point projected from the reflective beacon, and then identifies an optimal one of the control solutions based on a performance cost function; that optimal control solution is associated with an optimal threshold allowable speed. The system has a vehicle actuator configured to respond when the vehicle exceeds the optimal threshold allowable speed and alter vehicle movement to avoid a collision.

1. A method for enhanced collision avoidance for high value assets based on multi-sensor data fusion by a mobile industrial vehicle, the high value assets having one or more reflective beacons disposed relative to the high value assets, the method comprising the steps of:

(a) detecting one or more reflective beacons relative to a mobile industrial vehicle with a LiDAR sensor on the mobile industrial vehicle;

(b) detecting one or more objects relative to the mobile industrial vehicle with a camera sensor on the mobile industrial vehicle;

(c) fusing, by a sensor processing system on the mobile industrial vehicle, sensor data detected by each of the LiDAR sensor and the camera sensor to identify relative positions of the one or more reflective beacons based on a multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data;

(d) determining, by a model predictive controller on the mobile industrial vehicle, a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from a verified relative position of the one or more reflective beacons;

(e) identifying, by the model predictive controller, one of the control solutions as an optimal solution based on a performance cost function, wherein the optimal one of the control solutions is associated with an optimal threshold allowable speed; and

(f) when the mobile industrial vehicle exceeds the optimal threshold allowable speed, responsively actuating, by a vehicle actuation system on the mobile industrial vehicle, a vehicle speed control element to cause the mobile industrial vehicle to alter its movement operation within a time window and achieve a desired movement operation relative to the current speed of the mobile industrial vehicle.

2. The method of claim 1, wherein the mobile industrial vehicle comprises a powered vehicle and a plurality of towed vehicles continuously linked with the powered vehicle; and

wherein the step of determining the plurality of control solutions comprises: determining, by a model predictive controller on the mobile industrial vehicle, the plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time in time/space based on an estimated path of the powered vehicle and the towed vehicle to a breakthrough point projected radially from the verified relative position of the one or more reflective beacons.

3. The method of claim 2, wherein the paths of the powered vehicle and the towed vehicle are predicted by the model predictive controller without actively detecting the position of any towed vehicle that follows the powered vehicle.
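For illustration only, the towed-path prediction recited in claims 2 and 3 can be sketched with a standard on-hitch trailer kinematic model, in which each towed unit's heading relaxes toward the heading of the unit ahead of it as the train moves. This is a minimal sketch under assumed geometry, not the patent's actual model; the hitch-to-axle length, gains, and function names are illustrative assumptions.

```python
import math

# Minimal sketch (not the patent's model): predict cart headings from the
# tractor's own motion only, with no sensing of the carts themselves.
HITCH_TO_AXLE_M = 2.0   # assumed hitch-to-axle length of each towed cart

def propagate(tractor_heading: float, cart_headings: list, v: float, dt: float) -> list:
    """Advance each cart's heading toward the heading of the unit ahead of it."""
    new_headings = []
    lead = tractor_heading
    for theta in cart_headings:
        theta += (v / HITCH_TO_AXLE_M) * math.sin(lead - theta) * dt
        new_headings.append(theta)
        lead = theta          # this cart leads the next one in the train
    return new_headings

# Tractor holds a 30-degree heading; two initially straight carts swing after it.
headings = [0.0, 0.0]
for _ in range(50):
    headings = propagate(math.radians(30), headings, v=2.0, dt=0.1)
print([round(math.degrees(h), 1) for h in headings])  # both converge toward 30.0
```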

4. The method of claim 1, wherein the fusing step (c) comprises:

determining one or more bounding boxes based on sensor data generated by a camera sensor when one or more objects are detected;

determining a mapping space based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;

projecting the determined one or more bounding boxes into the determined mapping space; and

comparing the determined one or more bounding boxes to objects detected in the mapping space to verify the relative position of the one or more reflective beacons.

5. The method of claim 4, wherein the step of projecting the determined one or more bounding boxes into the determined mapping space is performed by a sensor processing system using a convolutional neural network.

6. The method of claim 1, wherein the fusing step (c) comprises:

determining one or more bounding boxes and camera confidence scores based on sensor data generated by a camera sensor when one or more objects are detected;

determining a mapping space and a LiDAR confidence score based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;

projecting the determined one or more bounding boxes into the determined mapping space to identify relative positions of the one or more objects, and determining a final confidence score based on the camera confidence score and the LiDAR confidence score;

when the final confidence score for a particular one of the one or more objects is below a confidence threshold, disregarding the identified relative position of the particular one of the one or more objects; and

comparing the determined one or more bounding boxes to the objects detected in the mapping space to verify the relative positions of the one or more objects that are not disregarded based on their respective final confidence scores.
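The confidence gating recited in claim 6 leaves the combination rule open (claim 7 below uses fuzzy logic). As a hedged illustration, a simple convex combination of the two scores with a fixed threshold might look like the following; the weight, threshold, and data layout are assumptions, not values from the patent.

```python
# Illustrative sketch of the confidence gating in claim 6 (weights and
# threshold are assumptions): fuse a camera score and a LiDAR score into a
# final score and disregard detections that fall below the threshold.
CONFIDENCE_THRESHOLD = 0.6

def final_confidence(camera_score: float, lidar_score: float, w_cam: float = 0.5) -> float:
    # Simple convex combination; the patent does not fix the rule here.
    return w_cam * camera_score + (1.0 - w_cam) * lidar_score

def keep_detections(detections):
    # detections: list of (relative_position, camera_score, lidar_score).
    return [pos for pos, cam, lid in detections
            if final_confidence(cam, lid) >= CONFIDENCE_THRESHOLD]

print(keep_detections([((12.0, 3.0), 0.9, 0.7),    # kept: final score 0.80
                       ((5.0, -1.0), 0.4, 0.3)]))  # disregarded: 0.35
```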

7. The method of claim 6, wherein the step of disregarding the identified relative position of the one or more objects at least when the final confidence score is below the confidence threshold is performed by fuzzy logic within the sensor processing system.

8. The method of claim 1, further comprising the step of: deploying one or more reflective beacons relative to the high value assets.

9. The method of claim 8, wherein deploying one or more reflective beacons with respect to a high value asset comprises: placing one or more reflective beacons next to the high value asset.

10. The method of claim 8, wherein deploying one or more reflective beacons with respect to a high value asset comprises: actuating at least one of the one or more reflective beacons from a stowed position to a deployed active position.

11. The method of claim 8, wherein deploying one or more reflective beacons with respect to a high value asset comprises: actuating at least one of the one or more reflective beacons from a stowed position on the high value asset to a deployed active position on the high value asset.

12. The method of claim 1, wherein the step of responsively actuating a vehicle speed control element comprises: actuating a throttle as a vehicle speed control element on the mobile industrial vehicle.

13. The method of claim 1, wherein the step of responsively actuating a vehicle speed control element comprises: actuating a brake as a vehicle speed control element on the mobile industrial vehicle.

14. An enhanced system for collision avoidance of high value assets based on multi-sensor data fusion by a mobile industrial vehicle, the high value assets having one or more reflective beacons disposed relative to the high value assets, the system comprising:

a sensing processing system disposed on a mobile industrial vehicle, the sensing processing system further comprising:

a LiDAR sensor mounted in a forward orientation to detect one or more reflective beacons in front of the mobile industrial vehicle,

a camera sensor mounted in a forward orientation to detect one or more objects in front of the mobile industrial vehicle,

a multi-processor module responsive to input from each of the LiDAR sensor and the camera sensor and operable to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of one or more reflective beacons based on a multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data;

a model predictive controller disposed on the mobile industrial vehicle, the model predictive controller configured by being programmatically operable to:

determining a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of one or more reflective beacons;

identifying one of the control solutions as an optimal solution based on a performance cost function, wherein one of the control solutions is associated with an optimal threshold allowable speed; and

a vehicle actuation system comprising at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds the optimal threshold allowable speed by: causing the mobile industrial vehicle to alter the movement operation of the mobile industrial vehicle to avoid a collision with the high value asset.

15. The system of claim 14, wherein the mobile industrial vehicle comprises a powered vehicle and a plurality of towed vehicles continuously linked with the powered vehicle; and

wherein the model predictive controller is further configured by being further programmatically operable to: determining the plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at discrete time instances in time/space based on an estimated path of the powered vehicle and the towed vehicle to a breakthrough point projected radially from the verified relative position of the one or more reflective beacons.

16. The system of claim 15, wherein the paths of the powered vehicle and the towed vehicle are predicted by the model predictive controller without actively detecting the position of any towed vehicle that follows the powered vehicle.

17. The system of claim 14, wherein the multi-processor module of the sensing processing system is operatively configured to fuse the sensor data detected by each of the LiDAR sensor and the camera sensor to identify the relative position of the one or more reflective beacons by being programmatically operable to:

determining one or more bounding boxes based on sensor data generated by a camera sensor when one or more objects are detected;

determining a mapping space based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;

projecting the determined one or more bounding boxes into the determined mapping space; and

comparing the determined one or more bounding boxes to objects detected in the mapping space to verify the relative position of the one or more reflective beacons.

18. The system of claim 17, wherein the multiprocessor module of the sensing processing system is operably configured to: projecting the determined one or more bounding boxes into the determined mapping space using a convolutional neural network.

19. The system of claim 14, wherein the multi-processor module of the sensing processing system is operatively configured to fuse the sensor data detected by each of the LiDAR sensor and the camera sensor to identify the relative position of the one or more reflective beacons by being programmatically operable to:

determining one or more bounding boxes and camera confidence scores based on sensor data generated by a camera sensor when one or more objects are detected;

determining a mapping space and a LiDAR confidence score based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;

projecting the determined one or more bounding boxes into the determined mapping space to identify relative positions of the one or more objects, and determining a final confidence score based on the camera confidence score and the LiDAR confidence score;

when the final confidence score for a particular one of the one or more objects is below a confidence threshold, disregarding the identified relative position of the particular one of the one or more objects; and

comparing the determined one or more bounding boxes to the objects detected in the mapping space to verify the relative positions of the one or more objects that are not disregarded based on their respective final confidence scores.

20. The system of claim 14, wherein the multi-processor module of the sensing processing system is operatively configured to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify the relative position of the one or more reflective beacons, the fusing by using fuzzy logic that is programmatically operable to:

determining one or more bounding boxes and camera confidence scores based on sensor data generated by a camera sensor when one or more objects are detected;

determining a mapping space and a LiDAR confidence score based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;

projecting the determined one or more bounding boxes into the determined mapping space to identify relative positions of the one or more objects, and determining a final confidence score based on the camera confidence score and the LiDAR confidence score;

when the final confidence score for a particular one of the one or more objects is below a confidence threshold, disregarding the identified relative position of the particular one of the one or more objects; and

comparing the determined one or more bounding boxes to the objects detected in the mapping space to verify the relative positions of the one or more objects that are not disregarded based on their respective final confidence scores.

21. The system of claim 14, wherein each of the one or more reflective beacons includes a base support and a vertical rod attached to the base support, wherein the vertical rod includes a reflective material disposed along a length of the vertical rod.

22. The system of claim 14, wherein at least one of the one or more reflective beacons includes an integrated reflective beacon as part of a high value asset, wherein the integrated reflective beacon is actuated from a stowed position in which the integrated reflective beacon is not visible to a deployed active position in which the integrated reflective beacon is visible.

23. The system of claim 14, wherein the vehicle actuator comprises a throttle disposed on the mobile industrial vehicle.

24. The system of claim 14, wherein the vehicle actuator comprises a brake disposed on the mobile industrial vehicle.

25. The system of claim 14, wherein the vehicle actuation system further comprises:

a vehicle monitor that monitors a current speed of the mobile industrial vehicle;

a feedback control system that responsively actuates the vehicle actuator to cause the mobile industrial vehicle to alter the movement operation of the mobile industrial vehicle within a predetermined time window when the monitored speed of the mobile industrial vehicle exceeds the optimal threshold allowable speed.

26. An enhanced system for collision avoidance of high value assets based on multi-sensor data fusion by a mobile industrial vehicle, the system comprising:

a plurality of reflective beacons disposed relative to a pre-specified location on the high value asset;

a sensing processing system disposed on a mobile industrial vehicle, the sensing processing system further comprising:

a LiDAR sensor mounted in a forward orientation to detect one or more reflective beacons in front of the mobile industrial vehicle,

a camera sensor mounted in a forward orientation to detect one or more objects in front of the mobile industrial vehicle,

a multi-processor module responsive to input from each of the LiDAR sensor and the camera sensor and operable to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of one or more reflective beacons based on a multi-sensor fusion data source using the detected LiDAR sensor data and the detected camera sensor data, wherein the multi-processor module of the sensing processing system is operably configured to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of one or more reflective beacons, the fusing by being programmatically operable to:

determining one or more bounding boxes based on sensor data generated by a camera sensor when one or more objects are detected;

determining a mapping space based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected;

projecting the determined one or more bounding boxes into the determined mapping space; and

comparing the determined one or more bounding boxes with objects detected in the mapping space to verify the relative position of the one or more reflective beacons;

a model predictive controller disposed on the mobile industrial vehicle, the model predictive controller configured by being programmatically operable to:

determining a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of one or more reflective beacons;

identifying one of the control solutions as an optimal solution based on a performance cost function, wherein one of the control solutions is associated with an optimal threshold allowable speed; and

a vehicle actuation system comprising at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds the optimal threshold allowable speed by: causing the mobile industrial vehicle to alter the movement operation of the mobile industrial vehicle to avoid a collision with the high value asset.

27. A system for enhancement of forward protection collision avoidance for objects in a direction of travel of a mobile industrial vehicle based on multi-sensor data fusion by the mobile industrial vehicle, the system comprising:

a sensing processing system disposed on a mobile industrial vehicle, the sensing processing system further comprising:

a LiDAR sensor mounted in a forward orientation to detect one or more of the objects in front of the mobile industrial vehicle,

a camera sensor mounted in a forward orientation to detect one or more objects in front of the mobile industrial vehicle,

a multi-processor module responsive to input from each of the LiDAR sensor and the camera sensor and operable to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of one or more of the objects based on a multi-sensor fusion data source using the detected LiDAR sensor data and the detected camera sensor data, wherein the multi-processor module of the sensing processing system is operably configured to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of one or more of the objects by being programmatically operable to:

determining one or more bounding boxes based on sensor data generated by a camera sensor when one or more objects are detected;

determining a mapping space based on sensor data generated by the LiDAR sensor when one or more of the objects are detected;

projecting the determined one or more bounding boxes into the determined mapping space; and

comparing the determined one or more bounding boxes with the objects detected in the mapping space to verify the relative position of one or more of the objects;

a model predictive controller disposed on the mobile industrial vehicle, the model predictive controller configured by being programmatically operable to:

determining a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of one or more of the objects;

identifying one of the control solutions as an optimal solution based on a performance cost function, wherein one of the control solutions is associated with an optimal threshold allowable speed; and

a vehicle actuation system comprising at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds the optimal threshold allowable speed by: causing the mobile industrial vehicle to alter a movement operation of the mobile industrial vehicle to avoid a collision with an object.

28. The system of claim 27, wherein the multi-processor module is responsive to input from each of the LiDAR sensor and the camera sensor by being further programmatically operable to: dynamically adjusting an effective field of view of at least one of the LiDAR sensor and the camera sensor considered by the multi-processor module in response to altered movement operations of the mobile industrial vehicle.

29. The system of claim 27, wherein the multi-processor module is responsive to input from each of the LiDAR sensor and the camera sensor by being further programmatically operable to: dynamically adjusting an effective field of view of at least one of the LiDAR sensor and the camera sensor considered by the multi-processor module in response to detecting a change in direction of the mobile industrial vehicle.

30. The system of claim 27, wherein the multi-processor module is responsive to input from each of the LiDAR sensor and the camera sensor by being further programmatically operable to:

detecting object identification markers using sensor data generated by at least one of a LiDAR sensor and a camera sensor;

identifying the detected object identification marker as a boundary identifier between the first operating region and the second operating region; and

dynamically adjusting an effective field of view of subsequent sensor data generated by at least one of the LiDAR sensor and the camera sensor in response to the identified boundary identifier.

31. The system of claim 28, wherein the multi-processor module is further programmatically operable to dynamically adjust the effective field of view of at least one of the LiDAR sensor and the camera sensor by being further operable to: dynamically limiting at least one of the detected LiDAR sensor data and the detected camera sensor data used in the multi-sensor fusion data source for identifying the relative position of one or more of the objects.

32. The system of claim 31, wherein the at least one of the detected LiDAR sensor data and the detected camera sensor data used in the multi-sensor fused data source is dynamically limited to effectively adjust where at least one of the LiDAR sensor and the camera sensor is focused.

33. The system of claim 31, wherein the at least one of the detected LiDAR sensor data and the detected camera sensor data used in the multi-sensor fused data source is dynamically limited to effectively adjust a degree of receive-field width of at least one of the LiDAR sensor and the camera sensor.
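As a rough illustration of the dynamic field-of-view limiting recited in claims 31-33, the fusion stage can restrict which LiDAR returns it considers to an angular wedge that is re-centered (e.g., into a turn) and narrowed or widened per operating zone. The wedge geometry and values below are illustrative assumptions only, not the patent's implementation.

```python
import math

# Illustrative sketch: keep only returns whose bearing lies inside a wedge
# [center - width/2, center + width/2]; re-centering shifts where the sensor
# is "focused" and changing the width adjusts the effective receive-field width.
def limit_fov(points, center_rad: float, width_rad: float):
    # points: list of (range_m, bearing_rad) from the LiDAR map.
    half = width_rad / 2.0
    return [(r, b) for r, b in points
            if abs(math.remainder(b - center_rad, 2.0 * math.pi)) <= half]

scan = [(5.0, math.radians(a)) for a in (-60, -20, 0, 15, 70)]
# Vehicle turning right: shift the wedge 20 degrees right and narrow it to 60.
print(limit_fov(scan, center_rad=math.radians(-20), width_rad=math.radians(60)))
```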

34. A method for enhanced collision avoidance by a mobile industrial vehicle using a multi-mode on-board collision avoidance system having a plurality of sensors, the mobile industrial vehicle being operable in a plurality of different operating zones, the method comprising the steps of:

operating, by a multi-mode on-board collision avoidance system on a mobile industrial vehicle, in a first collision avoidance mode when the mobile industrial vehicle is operating in a first one of the different operating zones;

detecting, by one of the sensors of the multi-mode on-board collision avoidance system, a first object identification marker;

identifying, by the multi-mode on-board collision avoidance system, the detected first object identification marker as an operational boundary identification marker;

detecting, by one or more sensors of the multi-mode on-board collision avoidance system, when the mobile industrial vehicle passes a regional boundary associated with the operational boundary identification marker and enters a second of the different operating zones; and

while in the second of the different operating zones, changing operation from the first collision avoidance mode to a second collision avoidance mode by the multi-mode on-board collision avoidance system to govern operation of the multi-mode on-board collision avoidance system, wherein the second collision avoidance mode includes at least one operating parameter that is more restrictive than the corresponding operating parameter in the first collision avoidance mode.

35. The method of claim 34, wherein the step of changing operation from the first collision avoidance mode to the second collision avoidance mode comprises: using a second set of operating parameters for the multi-mode on-board collision avoidance system when in the second collision avoidance mode instead of the first set of operating parameters used when in the first collision avoidance mode, wherein the at least one operating parameter has a more restrictive value as part of the second set of operating parameters than as part of the first set of operating parameters.

36. The method of claim 34, wherein the at least one operating parameter comprises a speed limit threshold for a mobile industrial vehicle.

37. The method of claim 34, wherein the at least one operating parameter comprises an ingress prevention distance for the mobile industrial vehicle.

38. The method of claim 37, wherein the ingress prevention distance for the mobile industrial vehicle comprises: a minimum radial distance from an object detected by the multi-mode on-board collision avoidance system of the mobile industrial vehicle.

39. The method of claim 37, wherein the ingress prevention distance for the mobile industrial vehicle comprises: a minimum radial distance from a reflective beacon detected by the multi-mode on-board collision avoidance system of the mobile industrial vehicle.

40. The method of claim 34, wherein the second collision avoidance mode includes at least one additional operational feature of the multi-mode on-board collision avoidance system used in the second collision avoidance mode when compared to the operational features of the multi-mode on-board collision avoidance system used in the first collision avoidance mode.

41. The method of claim 40, wherein the additional operational features of the multi-mode on-board collision avoidance system include a minimum ingress prevention distance threshold feature for causing the mobile industrial vehicle not to move within a minimum ingress prevention distance threshold from an object detected by the sensors.

42. The method of claim 40, wherein the additional operational features of the multi-mode on-board collision avoidance system include an object persistence feature for tracking a detected object after the detected object passes out of a sensor's field of view.

43. The method of claim 40, wherein the additional operational features of the multi-mode on-board collision avoidance system include an altered field of view feature for changing a field of view of a sensor to enhance collision avoidance when operating in the second collision avoidance mode.

44. The method of claim 40, wherein the additional operational features of the multi-mode on-board collision avoidance system include a special object detection feature for enabling detection of reflective beacons, as distinct from other objects, both alone and in addition to detection of other objects, when operating in the second collision avoidance mode.

45. The method of claim 40, wherein the first object identification marker comprises an ArUco marker encoded to correspond to the representation of the regional boundary and configured to indicate an orientation of the regional boundary.

Technical Field

The present disclosure relates generally to systems, apparatus, and methods in the field of collision avoidance systems, and more particularly to various aspects of systems, apparatus, and methods related to enhanced collision avoidance structures and techniques for use with mobile industrial vehicles, such as cargo tractors and associated dollies.

Background

Collision avoidance may be important in many applications, such as Advanced Driver Assistance Systems (ADAS), industrial automation, and robotics. It is well known that conventional collision avoidance systems can reduce the severity or occurrence of a collision or provide advance warning of a collision.

In an industrial automation environment, certain areas often prohibit vehicles (e.g., automated or non-automated vehicles) from entering, for the protection of personnel and high value assets where damage is to be avoided. These areas may be isolated by mapping (e.g., GPS coordinates, geo-fencing, etc.) or defined by outlining the no-entry areas. A collision avoidance system may then be used to avoid prohibited-access areas or constrained spaces, which protects personnel and/or high value assets.

One common problem with conventional collision avoidance systems results from the detection of, and reaction to, false positives. For example, collision avoidance systems may suffer from false positives when detections fail to distinguish intended markers from unintended reflective surfaces (such as a worker's safety vest). Detection of false positives often results in poor performance, as the control system responds to all detections; responding to false detections may result in unnecessary actions and reduced efficiency. The impact of false positive detection on an autonomous/semi-autonomous system is application specific, and a tolerance for false positive detection can be integrated into the system design. The suitability of a sensing platform for an application can be characterized by its false positive detections as well as its missed detections (missed true detections). Other common problems encountered with collision avoidance systems using certain types of sensors are an inability to handle different levels of illumination and an inability to distinguish colors.

To address one or more of these types of issues, a technical solution is needed that can be deployed to enhance the manner in which damaging collisions involving logistics vehicles (such as cargo tractors and associated carts) are avoided, and to do so in a manner that improves system performance and helps reduce false positives. In particular, described herein are various exemplary methods and systems in which an industrial vehicle may use light detection and ranging (LiDAR) sensors and multiple color cameras to detect beacons as a type of marker, and in which one or more model predictive control systems are deployed to block vehicles from entering constrained spaces, as a way to avoid damage or contact with high value assets and to provide enhanced object detection and object avoidance.

Disclosure of Invention

Certain aspects and embodiments will become apparent in the following description. It should be understood that these aspects and embodiments, in their broadest sense, could be practiced without having one or more features of these aspects and embodiments. It should be understood that these aspects and embodiments are merely exemplary.

In general, aspects of the invention relate to improved collision avoidance systems, methods, devices, and techniques that help avoid false object detection and improve the ability to avoid collisions involving towed vehicles that do not follow the same path of travel as the towing vehicle, such as a mobile industrial vehicle (e.g., a cargo tractor that can pull multiple carts loaded with items being transported and moved as part of one or more logistics operations).

In one aspect of the disclosure, a method for enhanced collision avoidance for high value assets based on multi-sensor data fusion by a mobile industrial vehicle is described. In this aspect, the high value asset has one or more reflective beacons disposed relative to the high value asset. The method begins with a LiDAR sensor on the mobile industrial vehicle detecting one or more reflective beacons relative to the mobile industrial vehicle, and a camera sensor on the mobile industrial vehicle detecting one or more objects relative to the mobile industrial vehicle. A sensor processing system on the mobile industrial vehicle then fuses the sensor data detected by each of the LiDAR and camera sensors to identify relative positions of the reflective beacons based on a multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data. Next, a model predictive controller on the mobile industrial vehicle determines a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point (breaking point) projected radially from the verified relative position of the reflective beacon. The model predictive controller then identifies one of the control solutions as an optimal solution having an optimal threshold allowable speed based on a performance cost function. Finally, when the mobile industrial vehicle exceeds the optimal threshold allowable speed, a vehicle actuation system on the mobile industrial vehicle responsively actuates a vehicle speed control element to cause the mobile industrial vehicle to alter its movement operation within a time window and achieve the desired movement operation relative to the current speed of the mobile industrial vehicle.
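As a hedged, minimal sketch of this control-solution selection (illustrative numbers, simplified physics, hypothetical names, not the patent's implementation): each candidate solution below is a constant-deceleration profile, its threshold allowable speed is the speed from which the vehicle can still stop before the breakthrough point, and the performance cost function trades braking harshness against lost speed.

```python
import math
from dataclasses import dataclass

@dataclass
class Solution:
    decel: float            # commanded deceleration, m/s^2 (illustrative)
    threshold_speed: float  # maximum allowable speed right now, m/s

def threshold_speed(distance_to_breach_m: float, decel: float) -> float:
    # Speed from which the vehicle can stop within the remaining distance
    # under constant deceleration: v = sqrt(2 * a * d).
    return math.sqrt(max(0.0, 2.0 * decel * distance_to_breach_m))

def performance_cost(sol: Solution, current_speed: float) -> float:
    # Illustrative cost: penalize harsh braking and large speed reductions.
    return sol.decel + 0.5 * max(0.0, current_speed - sol.threshold_speed)

def best_solution(distance_to_breach_m: float, current_speed: float) -> Solution:
    candidates = [Solution(a, threshold_speed(distance_to_breach_m, a))
                  for a in (0.5, 1.0, 2.0, 3.0)]       # candidate control solutions
    return min(candidates, key=lambda s: performance_cost(s, current_speed))

sol = best_solution(distance_to_breach_m=12.0, current_speed=6.0)
if 6.0 > sol.threshold_speed:   # vehicle exceeds the optimal threshold speed
    print(f"actuate brake at {sol.decel} m/s^2 "
          f"(allowable speed {sol.threshold_speed:.1f} m/s)")
```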

In another aspect of the disclosure, an enhanced system for collision avoidance for high value assets based on multi-sensor data fusion by a mobile industrial vehicle is described. In this additional aspect, the high value asset is provided with one or more reflective beacons in its vicinity. In general, the system in this aspect includes a sensing processing system on the vehicle, LiDAR and camera sensors on the front of the vehicle, a multi-processor module that can fuse sensor data, a model predictive controller, and a vehicle actuation system. The multi-processor module is responsive to input from each of the LiDAR and camera sensors and advantageously fuses sensor data detected by each of these different sensors to identify the relative position of the reflective beacon based on the multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data. A model predictive controller on a mobile industrial vehicle is configured by being programmatically operable to: a plurality of control solutions is determined and one control solution is identified as the optimal control solution. Each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at discrete time instants based on an estimated path to a breakthrough point projected radially from the verified relative position of the reflective beacon. The model predictive controller identifies one of the control solutions as an optimal solution associated with an optimal threshold allowable speed based on a performance cost function. The vehicle actuation system (with vehicle actuators) is configured to respond when the vehicle exceeds the optimal threshold allowable speed by: the vehicle is caused to alter the moving operation of the vehicle to avoid a collision with the high value asset.

In yet another aspect, another enhanced system for collision avoidance of high value assets based on multi-sensor data fusion by a mobile industrial vehicle is described. In this further aspect, the enhanced system has a reflective beacon disposed relative to a pre-designated location on the high value asset, a sensing processing system on the vehicle, a model predictive controller on the vehicle, and a vehicle actuation system on the vehicle. The sensing processing system has: a LiDAR sensor mounted in a forward orientation to detect one or more reflective beacons in front of a mobile industrial vehicle; and a camera sensor mounted in a forward orientation to detect one or more objects in front of the moving industrial vehicle. The sensing processing system further includes a multi-processor module responsive to input from the LiDAR and camera sensors and operable to fuse sensor data detected by each of the LiDAR and camera sensors to identify a relative position of the reflective beacon based on the multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data. To fuse sensor data detected by each of the LiDAR sensors and the camera sensors to identify the relative position of one or more reflective beacons, a multi-processor module of the sensing processing system is operatively configured and programmatically operable to: determining one or more bounding boxes based on sensor data generated by a camera sensor when one or more objects are detected; determining a mapping space based on sensor data generated by the LiDAR sensor when a reflective beacon is detected; projecting the determined bounding box into the determined mapping space; and comparing the determined bounding box to objects detected in the mapping space to verify the relative position of the reflective beacon. A model predictive controller disposed on a mobile industrial vehicle is configured by being programmatically operable to: determining a plurality of control solutions, wherein each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of the reflective beacon; and determining one of the control solutions as an optimal solution based on the performance cost function, wherein the optimal control solution is associated with an optimal threshold allowable speed. The vehicle actuation system has at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds an optimal threshold allowable speed by: causing the mobile industrial vehicle to alter the mobile operation of the mobile industrial vehicle to avoid a collision with the high value asset.
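One plausible reading of the projection-and-comparison steps above, sketched under assumed pinhole-camera geometry (this is not the patent's implementation; the convolutional-neural-network and fuzzy-logic variants recited in other claims are omitted): convert a bounding box's pixel location to a bearing, then verify a beacon when a LiDAR cluster agrees with that bearing within a tolerance. The resolution, field of view, and tolerance below are assumptions.

```python
import math

IMAGE_WIDTH_PX = 1280                        # assumed camera resolution
HORIZONTAL_FOV_RAD = math.radians(90.0)      # assumed horizontal field of view
FOCAL_PX = (IMAGE_WIDTH_PX / 2) / math.tan(HORIZONTAL_FOV_RAD / 2)

def bbox_bearing(x_min: float, x_max: float) -> float:
    # Bearing of the bounding-box center relative to the camera's optical axis.
    cx = (x_min + x_max) / 2 - IMAGE_WIDTH_PX / 2
    return math.atan2(cx, FOCAL_PX)

def verify_beacons(bboxes, lidar_clusters, tol_rad=math.radians(2.0)):
    # lidar_clusters: list of (range_m, bearing_rad) from the LiDAR mapping space.
    verified = []
    for x_min, x_max in bboxes:
        cam_bearing = bbox_bearing(x_min, x_max)
        for rng, brg in lidar_clusters:
            if abs(brg - cam_bearing) < tol_rad:
                verified.append((rng, brg))   # position confirmed by both sensors
    return verified

# One bounding box near image center matches the LiDAR cluster at ~1 degree.
print(verify_beacons([(600, 700)],
                     [(15.2, math.radians(1.0)), (8.0, math.radians(-30.0))]))
```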

In yet another aspect of the present disclosure, an enhanced system for front guard (front guard) collision avoidance of objects in a direction of travel of a mobile industrial vehicle based on multi-sensor data fusion by the mobile industrial vehicle is described. In this further aspect, the system includes a sensing processing system disposed on the mobile industrial vehicle, a model predictive controller on the vehicle, and a vehicle actuation system on the vehicle. The sensing processing system has: a LiDAR sensor mounted in a forward orientation to detect one or more of objects in front of a moving industrial vehicle; and a camera sensor mounted in a forward orientation to detect objects in front of the moving industrial vehicle. The sensing processing system also includes a multi-processor module that is responsive to input from each of the LiDAR sensor and the camera sensor and is operable to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of an object based on the multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data. A multi-processor module of the sensing processing system is operatively configured to fuse sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative position of one or more of the objects by being programmatically operable to: determining one or more bounding boxes based on sensor data generated by a camera sensor when an object is detected; determining a mapping space based on sensor data generated by the LiDAR sensor when an object is detected; projecting the determined bounding box into the determined mapping space; and comparing the determined bounding box with the objects detected in the mapping space to verify the relative positions of the objects. The model predictive controller is configured by being programmatically operable to: determining different possible control solutions, wherein each of the possible control solutions defines a threshold allowable speed of the vehicle at a discrete time based on an estimated path to a breakthrough point projected radially from the verified relative position of the object; and identifying one of the control solutions as an optimal solution associated with an optimal threshold allowable speed based on a performance cost function. The vehicle actuation system has at least a vehicle actuator configured to respond when the mobile industrial vehicle exceeds an optimal threshold allowable speed by: the mobile industrial vehicle is caused to alter a movement operation of the mobile industrial vehicle to avoid a collision with an object.

In yet another aspect, a method for enhanced collision avoidance by a mobile industrial vehicle using a multi-mode on-board collision avoidance system is described, where the mobile industrial vehicle may operate in a plurality of different operating areas. In this aspect, the method begins with the multi-mode on-board collision avoidance system on the mobile industrial vehicle operating in a first collision avoidance mode when the mobile industrial vehicle is operating in a first one of the different operating zones. Next, one of a plurality of sensors on the multi-mode on-board collision avoidance system detects an object identification marker (such as an ArUco marker), and the system identifies the first detected object identification marker as an operational boundary identification marker. The method then continues with the sensor detecting when the mobile industrial vehicle passes a regional boundary associated with the operational boundary identification marker and enters a second of the different operating zones, whereupon the multi-mode on-board collision avoidance system automatically and autonomously changes operation from the first collision avoidance mode to a second collision avoidance mode, thereby governing operation of the multi-mode on-board collision avoidance system. In this case, the second collision avoidance mode has at least one operating parameter (e.g., a speed limit, etc.) that is more restrictive than the corresponding operating parameter in the first collision avoidance mode.
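A toy sketch of this multi-mode behavior follows; the marker IDs, mode names, and limits are invented for illustration and are not from the patent. A detected boundary marker switches the on-board system to a more restrictive parameter set once the vehicle crosses the associated zone boundary.

```python
BOUNDARY_MARKER_IDS = {17, 23}   # hypothetical ArUco IDs reserved for boundaries

MODES = {                        # illustrative parameter sets per operating zone
    "driving_lane":  {"speed_limit_mps": 6.0, "ingress_distance_m": 1.0},
    "aircraft_gate": {"speed_limit_mps": 2.5, "ingress_distance_m": 3.0},
}

class MultiModeCollisionAvoidance:
    def __init__(self):
        self.mode = "driving_lane"

    def on_marker(self, marker_id: int, crossed_boundary: bool) -> None:
        # Identify the marker as an operational boundary identifier, then switch
        # modes once the vehicle passes the associated zone boundary.
        if marker_id in BOUNDARY_MARKER_IDS and crossed_boundary:
            self.mode = "aircraft_gate"

    @property
    def params(self) -> dict:
        return MODES[self.mode]

ca = MultiModeCollisionAvoidance()
ca.on_marker(17, crossed_boundary=True)
print(ca.mode, ca.params)   # aircraft_gate mode: more restrictive parameters
```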

Additional advantages of these and other aspects of the disclosed embodiments and examples will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

Drawings

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments in accordance with one or more principles of the invention and, together with the description, serve to explain one or more principles of the invention. In the drawings:

FIG. 1 is a diagram of an operational view of an exemplary tractor collision avoidance system deployed in a logistics environment, in accordance with an embodiment of the present invention;

FIG. 2 is an exemplary high-level functional block diagram of an exemplary collision avoidance system according to an embodiment of the present invention;

FIG. 3 is a more detailed diagram of an exemplary collision avoidance system according to an embodiment of the present invention, showing logical segments of the different elements and their roles within the system;

FIG. 4 is a diagram of exemplary implementation details of portions of an exemplary collision avoidance system, according to an embodiment of the invention;

FIG. 5 is a diagram with details on an exemplary passive beacon for use with an exemplary collision avoidance system, according to an embodiment of the present invention;

FIG. 6 is an exemplary image illustrating an exemplary passive beacon as seen by a camera and LiDAR sensor and used as a training input for an exemplary collision avoidance system, in accordance with an embodiment of the present invention;

FIG. 7 is a diagram with further exemplary training inputs for an exemplary collision avoidance system, in accordance with an embodiment of the present invention;

FIG. 8 is a diagram illustrating exemplary training statistics for an exemplary collision avoidance system, according to an embodiment of the present invention;

FIG. 9 is a block diagram of exemplary general processing steps associated with enhanced collision avoidance using an exemplary collision avoidance system in accordance with an embodiment of the present invention;

FIG. 10 is a set of diagrams illustrating an exemplary kinematics model visualization in relation to estimated and predicted movements of an exemplary tractor (industrial vehicle) and a following vehicle (trailer) that may deploy an exemplary collision avoidance system according to an embodiment of the present invention;

FIG. 11 is an exemplary frame diagram of a dynamic modeling frame for determining transient states of a towing vehicle system in accordance with an embodiment of the present invention;

FIG. 12 is a diagram of an exemplary single rigid object model, according to an embodiment of the present invention;

FIG. 13 is a diagram of an exemplary mobile towing vehicle system having four towed units in accordance with an embodiment of the present invention;

FIG. 14 is a diagram of an exemplary geometric model of a towing vehicle system having an exemplary towing vehicle and two towed vehicle units, showing hitch points, in accordance with an embodiment of the present invention;

FIG. 15 is a diagram of an exemplary towed vehicle and its hitch points and associated vectors, in accordance with an embodiment of the present invention;

FIG. 16 is a pictorial view of an exemplary towing vehicle and one towed vehicle (trailer) illustrating a particular length in a tractor-trailer model, in accordance with an embodiment of the present invention;

FIG. 17 is a diagram of an exemplary scale model of an exemplary towing vehicle and two towed vehicles (trailers) illustrating a particular length and a particular radius defining a series of virtual triangles, in accordance with an embodiment of the present invention;

FIGS. 18A-18C are diagrams illustrating different configuration states of an exemplary towing vehicle and an exemplary towed vehicle, according to embodiments of the present invention;

FIG. 19 is a diagram illustrating a trigonometric relationship between the exemplary towing vehicle and the exemplary towed vehicle from FIGS. 18A-18C, in accordance with an embodiment of the present invention;

FIG. 20 is a diagram illustrating an exemplary system architecture according to an embodiment of the present invention;

FIG. 21 is a schematic diagram showing a vehicle that is positioned and moving relative to different beacons that are placed near a protected area in accordance with an embodiment of the invention;

FIG. 22 is an exemplary block diagram for data fusion, according to an embodiment of the present invention;

FIG. 23 is an exemplary high-level data flow diagram of a processing module for implementing a signal processing system according to an embodiment of the present invention;

FIG. 24 is a diagram of an exemplary passive beacon in accordance with an embodiment of the present invention;

FIG. 25 is a diagram of an exemplary LiDAR beam from a LiDAR sensor relative to an exemplary passive beacon disposed in front of the LiDAR sensor, according to an embodiment of the present invention;

FIG. 26 is an illustration of an exemplary scan of a LiDAR point cloud in accordance with an embodiment of the present invention;

FIG. 27 is a diagram of an exemplary beacon return, according to an embodiment of the present invention;

FIG. 28 is an exemplary table of features and information regarding such features used as part of the extraction of features of objects from LiDAR information in accordance with embodiments of the present invention;

FIG. 29 is a graph illustrating optimal feature weights selected by an SVM training optimizer according to an embodiment of the present invention;

FIG. 30 is an exemplary illustration of a two-dimensional SVM according to an embodiment of the present invention;

FIG. 31 is a diagram illustrating an exemplary Probability Distribution Function (PDF) for beacon and non-beacon LiDAR discrimination values, according to an embodiment of the invention;

FIG. 32 is a diagram of a data flow starting with a bounding box from a camera projected into range/distance and angle estimates in a LiDAR coordinate system in accordance with an embodiment of the present invention;

FIG. 33 is a more detailed diagram of a data flow starting with a bounding box from a camera projected into range/distance and angle estimates in a LiDAR coordinate system using an exemplary neural network structure for mapping such information, in accordance with an embodiment of the present invention;

FIG. 34, having parts (a) and (b), illustrates a diagram of two different exemplary data streams and a fusion process block, according to an embodiment of the present invention;

FIG. 35, having parts (a) - (d), illustrates various exemplary fuzzy membership functions and graphical representations of fuzzy logic outputs when performing data fusion using fuzzy logic, in accordance with embodiments of the present invention;

FIG. 36 is a diagram of the data flow and processing of LiDAR and camera information using different processing techniques and with fusion of confidence scores using hyper-parameters, according to an embodiment of the present invention;

FIG. 37 is a series of tables that illustrate LiDAR training and testing confusion matrix information, according to an embodiment of the present invention;

FIG. 38 is a flow diagram of an exemplary method for enhanced collision avoidance for high value assets based on multi-sensor data fusion by a mobile industrial vehicle, in accordance with an embodiment of the present invention;

FIG. 39 is a diagram of another exemplary tractor collision avoidance system operational view deployed in another exemplary logistics environment in accordance with an embodiment of the present invention;

FIG. 40 is a diagram of another exemplary tractor collision avoidance system operational diagram deployed in another exemplary logistics environment, wherein the exemplary tractor collision avoidance system operates in an exemplary driving lane mode, in accordance with an embodiment of the present invention;

FIG. 41 is a diagram of another exemplary tractor collision avoidance system operational diagram deployed in another exemplary logistics environment, wherein the exemplary tractor collision avoidance system operates in an exemplary aircraft gate area mode in accordance with an embodiment of the present invention; and

FIG. 42 is a flow diagram of an exemplary method for enhanced collision avoidance by a mobile industrial vehicle that may operate in a plurality of different operating regions using a multi-mode on-board collision avoidance system, in accordance with an embodiment of the present invention.

Detailed Description

Reference will now be made in detail to various exemplary embodiments. Wherever possible, the same reference numbers will be used throughout the drawings and the description to refer to the same or like parts. However, those skilled in the art will appreciate that the specific portions may be implemented in different ways for different embodiments depending on the anticipated deployment and operating environment needs of the respective embodiments.

The following describes various embodiments of different systems, devices, and methods deployed and used to improve how collisions with objects and personnel (e.g., high value assets) are prevented and avoided during operation of various mobile industrial vehicles, such as cargo tractors that pull one or more carts or trailers. Moreover, those skilled in the art will appreciate that additional embodiments may combine some of these otherwise independent solutions to provide an even more robust system for avoiding collisions with high value assets by mobile industrial vehicles (such as cargo tractors and associated carts), as described in more detail below.

Those skilled in the art will appreciate that the following description includes detailed exemplary information about an exemplary dynamic path-following or kinematic model that may be deployed as part of an applied and enhanced system, apparatus, and method embodiment that relates to predicting movement and path of a multi-element mobile industrial vehicle (such as a cargo tractor with a towed vehicle or trailer) as part of avoiding collisions. The following description also includes detailed exemplary information regarding the use of multiple sensors to generate different types of data (e.g., camera data and LiDAR data) and deploy detailed embodiments of innovative, inventive, and advantageous processes that fuse such different types of data to improve detection of objects (e.g., physical structures, aircraft, personnel, etc.) as part of applied and enhanced system, apparatus, and method embodiments that improve how collisions are avoided.

In general, exemplary local systems having "front guard" features are described herein that accomplish collision avoidance through available sensors, feasible real-time control, and a novel fusion of actuators and sensors on mobile industrial vehicles, such as cargo tractors and associated carts/trailers that can transport items (e.g., unpackaged goods, packaged goods, and containers that can be used to transport goods). Such a system may use a passive-beacon-detection-based approach to aircraft collision avoidance and a general-object-detection-based front guard to better reduce the incidence of frontal collisions with any object. Further, such systems may use a warning cone as a platform for a passive "beacon" that the cargo tractor's sensors may use for local situational awareness and orientation with respect to the vulnerable aircraft portions to be protected. In more detail, such an exemplary system may integrate a sensing and sensor processing system on a cargo-tractor-type mobile industrial vehicle, a cargo tractor/cart model used by the system, a model predictive controller, and a vehicle actuation system to avoid high value assets. One or more beacons may be placed in strategic locations to allow highly robust detection and avoidance of high value assets. Such a system may further implement object detection and object avoidance that takes advantage of data fusion across different sources of detected potential objects and reacts in time using the vehicle actuation system.

FIG. 1 is a diagram of an operational view of an exemplary tractor collision avoidance system deployed in a logistics environment, in accordance with an embodiment of the present invention. As shown in FIG. 1, the logistics environment includes an exemplary aircraft 100 as one type of high value asset. In general, a high value asset may be considered to be equipment, a structure, and/or a person around which a mobile industrial vehicle (such as a cargo tractor 115 and its linked carts 120) is expected to be constrained when approaching or moving nearby. Indeed, certain areas may be considered high value assets because they may contain such equipment, structures, and/or personnel, even if not currently occupied. In various embodiments described herein, an out-of-range (or constrained movement) area associated with such a high value asset may be established or determined as a boundary for protecting the high value asset.

In fig. 1, the exemplary aircraft 100 shown from above may be one type of high value asset used to transport items as part of a logistics pick-up and/or delivery operation. As shown in FIG. 1, the exemplary aircraft 100 has a nose cone structure protruding from the forward end of the aircraft, a protruding engine on each wing, and a tail structure protruding from the aft end of the aircraft 100. Those skilled in the art will appreciate that such protrusions are examples of points on the aircraft that have a greater risk of collision with a mobile industrial vehicle operating in the vicinity of the aircraft. Thus, in the example shown in fig. 1, exemplary reflective beacons (e.g., 105a-105d) may be placed adjacent to each of such protrusions and used during operation of the exemplary collision avoidance systems described herein. Further information regarding such exemplary passive beacons is discussed below with respect to figs. 5, 6, and 25.

Fig. 1 also illustrates a cargo loading structure 110 shown alongside the fuselage of the exemplary aircraft 100, at which items (e.g., unpackaged cargo, packaged cargo, and containers that may be used to transport cargo) may be loaded into the aircraft 100 from a cargo tractor 115 and its associated carts 120, or at which such items may be unloaded from the aircraft 100 into the cargo tractor 115 and its associated carts 120 as part of different logistics operations. Generally, the cargo tractor 115 may move along a path 125 (for purposes of discussion, the path 125 is indicated with solid lines for the actual path in fig. 1 and dashed lines for the possible paths in fig. 1) about the aircraft 100 and its cargo loading structure 110 to facilitate picking up and/or delivering items between its associated carts 120 and the aircraft 100. The concentric rings around the beacons 105 in fig. 1 indicate sensed beacons 105a-d near high value assets (e.g., the protruding points on the aircraft) and a protected area around each beacon (e.g., area 106, which is constrained as a prohibited entry zone, and area 107, which is constrained for slowing down when entering such an area). Beacons outside the sensor scan range 108 of the sensor suite on the cargo tractor are not tracked by the sensor suite.

In this type of logistics environment, several different embodiments incorporating novel and innovative aspects can be explained, which present technical solutions to the technical problem of avoiding collisions during logistics operations by the mobile vehicle 115 and its towed vehicles (e.g., the carts and trailers 120). For example, a "local option" embodiment may utilize a self-contained autonomous (or semi-autonomous) control system, where sensing, calculations, decisions, and actions are local to the individual cargo tractor. In this manner, such an embodiment provides an enhanced collision avoidance system for the cargo tractor 115 that does not require communication with other vehicles or additional infrastructure.

In another example, an embodiment may include a passive beacon using retro-reflective surfaces (e.g., taped or painted surfaces of the reflective beacons 105a-d). Generally, such beacons may be used with enhanced collision avoidance systems on cargo tractors to improve the weather resistance of the system, reduce system complexity, and facilitate advantageous low-impact integration with existing standard logistics operational processes with respect to protecting high value assets (such as aircraft).

In yet another example, embodiments may deploy novel kinematic models and predictive calculations with respect to a cargo tractor and its towed vehicles (e.g., carts/trailers) to facilitate collision prevention by the cargo tractor and associated towed carts, even without active detection mechanisms deployed on the carts. Typically, the kinematic model is used to inform calculations, performed on a processing system deployed on the cargo tractor, of possible future states of the mobile vehicle system (i.e., the powered vehicle and the linked towed vehicles that follow). Thus, when using such models and calculations as part of an exemplary system, virtual perimeters 130 along the sides of the fleet of vehicles may effectively constrain the system to prevent collisions caused by off-tracking of an associated towed vehicle 120. As shown in fig. 1, the widened trajectory profile of the perimeter 130 indicates a probabilistically determined cart position that the exemplary collision avoidance system can use without deploying actual position sensors on the carts (e.g., via tracking and accounting for the persistence of detected objects, without continuing to detect such objects along the sides of the queue of towed carts 120 with additional sensors, as part of the exemplary model and calculations).

In yet another example, a "front guard" type embodiment may use different sensors, such as a light detection and ranging (LiDAR) sensor and one or more cameras (e.g., stereo cameras or two monocular cameras), to detect objects (including humans, chocks, cones, boxes, etc., and not necessarily limited to reflective beacons) in the direction of travel 135 of the mobile cargo tractor 115. This fusion-based embodiment enhances collision detection, where vehicle actuation control may be engaged and automatically applied to prevent a collision (e.g., speed control applied with throttle and/or braking). In some front guard embodiments, the sensor data may be filtered to only view the space defined by the path of the moving cargo tractor 115. As described in more detail below, further front guard embodiments may use a dynamically adjustable field of view (FOV) of sensor data from the LiDAR/camera(s), allowing the embodiments to adaptively respond to objects in the path of the moving cargo tractor 115 and/or changes in the movement of the moving cargo tractor 115 (e.g., responsively changing the FOV due to the angular velocity of the moving cargo tractor 115).

In examples where multiple types of sensors are utilized to find beacons, such as LiDAR and monocular camera sensors, the system may also fuse the different types of sensor data by projecting camera detection bounding boxes into the LiDAR space, utilize predictability of the state of the cargo tractor 115 and associated carts 120 in computing control solutions, and apply a cost function to each potential control solution to determine the maximum allowable speed at a particular moment in time/space in real-time or near real-time. With the determined speed limit, the system may monitor the actual speed and may use a feedback control system that actuates the brakes and/or throttle of the cargo tractor vehicle to effect a responsive deceleration action within a time window to achieve the desired speed. Thus, the system embodiments utilize model-based calculations to determine the shortest achievable path to a collision with detected beacons, and utilize a predictive controller portion of the system that can continuously or periodically update the maximum allowable speed along such a path. As shown in fig. 1, area 107 represents a speed regulation zone of the sensed beacon for any cargo tractor traversing the zone, while area 106 represents a virtual barrier defining a no-travel or no-entry zone.
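
By way of non-limiting illustration, the threshold allowable speed for a given estimated path can be related to the distance remaining to the breakthrough point through a stopping-distance calculation. The following minimal Python sketch assumes a constant achievable braking deceleration, a fixed actuation delay, and a safety margin; all function and parameter names (e.g., threshold_allowable_speed, braking_decel_mps2) are illustrative only and not part of any specific embodiment described above.

    import math

    def threshold_allowable_speed(dist_to_breakthrough_m, braking_decel_mps2=2.5,
                                  actuation_delay_s=0.3, margin_m=1.0):
        # Largest speed from which the vehicle can still stop short of the
        # breakthrough point, solving v*t_delay + v^2/(2a) <= usable distance.
        usable = dist_to_breakthrough_m - margin_m
        if usable <= 0.0:
            return 0.0  # already within the margin: no motion allowed
        a, t = braking_decel_mps2, actuation_delay_s
        return -a * t + math.sqrt((a * t) ** 2 + 2.0 * a * usable)

In such a sketch, the governed speed at each discrete instant would be the minimum of this value over all tracked beacons along the estimated path, with the feedback control system engaging the brakes whenever the measured speed exceeds it.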

In general, the exemplary mobile industrial vehicle 115, which may be deployed as part of an exemplary enhanced collision avoidance system and method of operation thereof, may be implemented using a cargo tractor type vehicle capable of pulling one or more carts or trailers 120 loaded with items to be transported (e.g., transported between locations, loaded onto a logistics transport, or unloaded from a logistics transport). Such an exemplary cargo tractor 115 may be enhanced to use various on-board sensing devices (e.g., LiDAR, stereo cameras, monocular cameras, ultrasonic sensors, laser rangefinders, radar, etc.). Such on-board sensing devices may be secured to the cargo tractor using one or more brackets. Sensor alignment tools may also be secured to the front grill plate of the cargo tractor to aid in the alignment of such on-board sensing devices. The cargo tractor may further include a weatherproof housing that protects the electrical and electronic package assemblies that make up the elements of such an exemplary enhanced collision avoidance system from unwanted water, chemicals, and other debris. In a general embodiment, such electrical and electronic components may include: a system interface printed circuit board that manages power and signal interfaces for a plurality of devices in the enhanced collision avoidance system, a sensor processing system, a model predictive controller that incorporates and utilizes a predictive kinematic model, and a vehicle actuation system, as described in more detail below.

Fig. 2 is an exemplary high-level functional block diagram of an exemplary collision avoidance system according to an embodiment of the present invention, illustrating the general operational flow of such a system 200. Referring now to FIG. 2, the sensing module 205 generally receives and detects information about the environment of the cargo tractor (e.g., camera images, LiDAR sensed input, etc.). Within the sensing module 205, the system's sensor processing system utilizes the different types of sensed information and is operable to identify objects and interpret the sensed scene (e.g., whether there are reflective passive beacons detected based on the different sensor inputs). Next, in the predictive control block 210, a plurality of real-time "look-ahead" control solutions may be generated using the exemplary kinematic model and the state estimates 225. These solutions are then fed to a feedback control system comprising actuator control 215, vehicle actuation (via throttle and brake) 220, a vehicle dynamics database 230 (e.g., characteristics of various vehicle parameters such as vehicle mass, braking force, etc.), and a feedback compensator 235, such that the system responds to the identified objects and applies the optimal one of the predictive control solutions to improve and enhance how the system avoids collisions for the particular vehicle involved.
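
For purposes of illustration only, the following Python sketch shows one possible realization of the sense-predict-actuate flow of FIG. 2 as a single periodic loop. The sensors, state_estimator, mpc, and actuators interfaces (and their method names such as fuse and brake_to) are hypothetical placeholders, not an actual API of the described system.

    import time

    def collision_avoidance_loop(sensors, state_estimator, mpc, actuators, dt=0.05):
        # One illustrative realization of the sense -> predict -> actuate flow.
        while True:
            scene = sensors.read()                  # camera frames and LiDAR returns
            detections = sensors.fuse(scene)        # fused beacon/object detections
            state = state_estimator.update(scene)   # speed, heading, accelerations
            v_allow = mpc.optimal_allowable_speed(state, detections)
            if state.speed > v_allow:
                actuators.brake_to(v_allow)         # feedback-compensated braking
            else:
                actuators.release()                 # throttle remains with operator
            time.sleep(dt)                          # fixed control period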

Fig. 3 and 4 illustrate further details of the different elements of such an exemplary enhanced collision avoidance system. In particular, FIG. 3 is a more detailed diagram of an exemplary collision avoidance system 300 according to an embodiment of the present invention, in which logical segments of the different elements and their roles within the system are shown. Referring now to fig. 3, elements of the exemplary enhanced collision avoidance system 300 are shown classified into five different types of system segments: sensors 305, sensor processing 310, collision avoidance control 315, vehicle motion control 320, and actuators 325. In the exemplary embodiment, the sensor segment 305 portion of the exemplary system 300 includes proprioceptive sensors, such as a brake pressure sensor 305a, ECU-related sensors for wheel speed and throttle percentage 305b, and position sensors 305c, such as inertial-measurement-based sensors (accelerometers, gyroscopes, magnetometers) and receiver-based positioning systems (GPS, wireless cellular telephone positioning circuitry, etc.), as well as exteroceptive sensors, such as cameras 305d, 305e and a LiDAR sensor 305f. The sensor processing segment 310 portion of the exemplary system includes software-based modules 310a, 310b running on a processing platform that perform signal processing 310a on the sensor inputs from the exteroceptive sensors (e.g., convolutional neural network processing for camera data, and data clustering and Support Vector Machine (SVM) processing for LiDAR data) and perform data fusion 310b of each processed sensor input using a database of object and map information 310c.

The collision avoidance control segment 315 portion of the exemplary system 300 includes software-based modules running on a further processing platform (separate from the sensor processing segment 310) that implement a Model Predictive Controller (MPC) 315a. Generally, the MPC 315a determines a control solution to determine the maximum allowable speed at discrete instants in time/space. More particularly, embodiments of MPC 315a employ a look-ahead strategy and are applicable to discrete event management using supervisory control. In operation, the MPC 315a in the collision avoidance control segment 315 calculates the possible control outcomes for a set of control inputs within a limited prediction horizon. With a performance evaluation function (also referred to as a "cost" function related to a performance index), the MPC 315a predicts and evaluates all reachable system states within the prediction horizon so that an optimal result can be found, and a corresponding system control input can be selected and transmitted to the vehicle controller (i.e., the vehicle motion control segment 320). For example, in one embodiment, "optimal" may mean a predicted control solution, along the most likely realized path, that results in the least limit on vehicle speed while still ensuring collision avoidance. This process is repeated until a predetermined target is reached, such as the system operating in a safe area away from passive beacons and other obstacles. As such, MPC 315a may be used for collision avoidance, forward protection, spatial awareness, solutions focused on local options, system kinematics, vehicle dynamics, false positive mitigation, and beacon/object persistence deployed on mobile industrial vehicles, as well as solutions using such enhanced collision avoidance systems on cargo tractor type vehicles as mentioned herein. In addition, MPC 315a has access to one or more databases 315b having stored thereon predictive kinematic model information as well as vehicle dynamics information. The vehicle motion control segment 320 portion of the exemplary system 300 includes a software module running on yet another processor (e.g., a microcontroller) that implements a vehicle brake feedback control system 320a, which accesses vehicle dynamics information from a database 320b and operates as a feedback compensator to provide input to vehicle actuators 325, such as the throttle and/or brake system of the cargo tractor, and/or a gear selector on the cargo tractor.
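
As a non-limiting sketch of the limited-prediction-horizon evaluation described above, the following Python fragment enumerates candidate control inputs, rolls each one forward through a supplied kinematic model, scores it with a supplied cost function, and returns the minimum-cost input. The simulate and cost callables are assumptions standing in for the kinematic model and performance index of the embodiment.

    def mpc_select(state, candidate_speeds, simulate, cost, horizon_s=3.0, dt=0.1):
        # Evaluate each candidate control input over a finite prediction horizon
        # and return the one minimizing the accumulated performance cost.
        best_input, best_cost = None, float("inf")
        steps = int(horizon_s / dt)
        for v_cmd in candidate_speeds:
            rollout_state, total = state, 0.0
            for _ in range(steps):
                rollout_state = simulate(rollout_state, v_cmd, dt)  # model rollout
                total += cost(rollout_state, v_cmd)  # e.g., collision + speed terms
                if total >= best_cost:               # prune dominated rollouts early
                    break
            if total < best_cost:
                best_input, best_cost = v_cmd, total
        return best_input

Because each candidate is evaluated independently, the outer loop parallelizes naturally across processor cores, consistent with the parallel computation discussed below.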

While fig. 3 provides implementation details related to embodiments more from a data processing and data flow perspective, fig. 4 is a diagram illustrating exemplary hardware implementation details of elements of such an exemplary enhanced collision avoidance system 400, according to an embodiment of the present invention. Referring now to FIG. 4, the exemplary hardware integration diagram illustrates three different core processing systems or controllers, namely a sensor data processor 405, a collision avoidance controller 410, and a vehicle feedback actuation controller 415. Those skilled in the art will appreciate that each of these processors/controllers may be implemented using one or more different processor-based systems (e.g., general purpose graphics processing units (GP-GPUs), central processing units (CPUs), microprocessors, microcontrollers, multiple processors, multi-core processors, systems on a chip (SoCs), or other discrete processing-based devices) that may include on-board communication interfaces, I/O interfacing circuitry, and associated peripheral circuitry for interfacing with cameras, LiDAR, network switching components, ECUs, IMUs, and vehicle actuator and vehicle sensor elements, as required by the described application.

For example, as shown in fig. 4, the exemplary sensor data processor 405 receives input from different cameras (e.g., camera 1 (305d) and camera 2 (305e), each of which may be implemented as a front view infrared camera sensor) using a USB 3.1 connection to embedded multiprocessors (e.g., two NVIDIA Jetson TX2 embedded AI computing devices, essentially AI supercomputers on a module for use in edge applications, each with a CPU and GPU architecture and various standard interfacing hardware) for faster and more robust transfer of information from the sensors to the sensor data processor 405. The sensor data processor 405 also receives LiDAR input from the LiDAR sensor 305f over an Ethernet connection (e.g., via the Ethernet switch 420). With these inputs, the sensor data processor 405 (which may be implemented with both a CPU and a GPU processor) is operable to detect beacons and objects using a novel fusion of camera and LiDAR data. More specifically, the LiDAR sensor 305f detects the beacon and distinguishes the beacon from other objects (such as people, vehicles, cargo tractors, etc.). The cameras 305d, 305e detect a plurality of objects (such as people, beacons, chocks, vehicles, etc.) and provide the camera data to the sensor data processor 405, where such objects can be identified using a learning neural network (e.g., a convolutional neural network trained for such identification). The data fusion software module 310b running on the sensor data processor 405 then fuses these different types of data by projecting the camera detection bounding box into the LiDAR space. Fusing these two distinct data sources into a multi-sensor fused data source provides a level of enhancement and improved performance in avoiding collisions.
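
One simple, hedged way to picture the projection step is to reduce each camera bounding box to a bearing angle (assuming an ideal pinhole camera) and pair it with the nearest LiDAR cluster in bearing. The Python sketch below is illustrative only; the learned neural network mapping described elsewhere in this document is a more capable replacement for the ideal-pinhole approximation used here.

    def bbox_bearing_deg(x_top, x_bottom, image_width_px, hfov_deg):
        # Approximate bearing of a detection from its bounding-box center,
        # assuming an ideal pinhole camera with horizontal field of view hfov_deg.
        center_px = 0.5 * (x_top + x_bottom)
        return ((center_px / image_width_px) - 0.5) * hfov_deg

    def associate(camera_dets, lidar_clusters, max_sep_deg=2.0):
        # Pair each camera detection (label, bearing_deg) with the closest
        # LiDAR cluster in bearing, yielding fused detections.
        fused = []
        for label, cam_deg in camera_dets:
            best = min(lidar_clusters, key=lambda c: abs(c["angle_deg"] - cam_deg))
            if abs(best["angle_deg"] - cam_deg) <= max_sep_deg:
                fused.append((label, best))
        return fused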

The exemplary collision avoidance controller 410 shown in fig. 4 has an Ethernet connection to the sensor data processor 405, and the collision avoidance controller 410 may be implemented, for example, using an NVIDIA Jetson TX2 module having GP-GPU/CPU hardware and on-board interfacing hardware. The collision avoidance controller 410 runs a model predictive control software module 315a, which is used as a type of look-ahead controller that predicts the shortest possible path to a breakthrough point in space projected radially from the beacon location.

In one embodiment, Model Predictive Control (MPC) software 315a running on the collision avoidance controller 410 incorporates a system kinematics model of the tractor and the carts (exemplary embodiments of which are described in detail below) to predict potential collisions between any portion of the tractor/cart fleet and high value assets. As noted above, the MPC software 315a computes control solutions to determine the maximum allowable speed at any discrete instant in time/space. The timeliness of the collision avoidance problem means that the calculation of the MPC solution is performed in real-time or substantially real-time. In particular, those skilled in the art will appreciate that the control solution determined by the MPC software 315a running on the collision avoidance controller 410 involves a large set of possible solutions, where the cost of each solution to prevent a collision is calculated. The cost function compares the cost of each possible solution determined by the MPC, and the MPC software 315a can select the optimal one of the possible solutions based on criteria defined by the cost function. Since each possible solution may be computed independently, embodiments of MPC software 315a may compute such solutions in parallel using the real-time operating system used by the collision avoidance controller 410, and in some embodiments using the multi-core/multi-threading capabilities of the collision avoidance controller 410 itself (e.g., the 256-core CUDA-enabled parallel computing platform of the NVIDIA Pascal GP-GPU processing complex used in the NVIDIA Jetson TX2 compute module).

As described herein, the MPC software 315a running on the collision avoidance controller 410 is further used for speed governing (e.g., calculating control solutions to determine maximum allowable speeds at discrete instants in time/space). The collision avoidance controller 410 may receive information from positioning circuitry and elements 305c, such as a GPS, an inertial measurement unit, or other position sensors on the cargo tractor (as shown in fig. 3). As part of an embodiment of the enhanced collision avoidance system 400, the collision avoidance controller 410 provides an output on a Controller Area Network (CAN) bus to the vehicle feedback actuation controller 415. Such a CAN bus provides a standard mechanism for vehicle communication and for interfacing with portions of the cargo tractor, such as the brakes, the throttle, and the ECU 305b on the cargo tractor.

The exemplary vehicle feedback actuation controller 415 shown in fig. 4 has a CAN connection to the collision avoidance controller 410, as well as other interfacing circuitry (e.g., analog, pulse width modulation (PWM), or other parallel, serial, digital, or other sense/actuation line interfaces) to control portions of the cargo tractor (e.g., brakes and throttle). The exemplary vehicle feedback actuation controller 415 may be implemented, for example, using an Arduino Due single-board 32-bit ARM core microcontroller module with on-board interfacing hardware. Vehicle actuation feedback control software 320a running on the exemplary vehicle feedback actuation controller 415 typically calculates the deceleration rate needed to achieve the desired speed within a particular time window. A feedback control system implemented by the vehicle actuation feedback control software 320a running on the controller 415 actuates the tractor brake and throttle controls 325 to achieve the calculated acceleration or deceleration. The feedback system can bring the vehicle to a complete stop if desired (e.g., if the mobile industrial vehicle is approaching a no-entry zone 106 around a reflective beacon 105).
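
By way of a non-limiting sketch, the deceleration calculation and feedback compensation described above might take the following form in Python. The gains and the PI structure are illustrative assumptions; an actual embodiment would tune these against the vehicle dynamics information in database 320b.

    def required_decel(v_now_mps, v_target_mps, window_s):
        # Constant deceleration that reaches the target speed within the window.
        return max(0.0, (v_now_mps - v_target_mps) / window_s)

    class BrakeController:
        # Minimal PI compensator mapping deceleration error to brake effort 0..1.
        def __init__(self, kp=0.4, ki=0.05):
            self.kp, self.ki, self.integral = kp, ki, 0.0

        def update(self, decel_cmd_mps2, decel_meas_mps2, dt_s):
            err = decel_cmd_mps2 - decel_meas_mps2
            self.integral += err * dt_s
            effort = self.kp * err + self.ki * self.integral
            return min(1.0, max(0.0, effort))  # saturate to the valid brake range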

Accordingly, fig. 3 and 4 provide exemplary functional, software, and hardware implementation details related to embodiments of an enhanced collision avoidance system and methods of operating such a system. Additionally, those skilled in the art will appreciate that while fig. 4 illustrates three distinct processor/controller hardware devices, further embodiments of the exemplary enhanced collision avoidance system may implement these functions utilizing a single processor or other multi-processor or logic-based solutions having various software-based modules operative to perform the described enhanced collision avoidance functions.

Fig. 5 is a diagram illustrating details of an exemplary passive beacon 500 for use with an exemplary collision avoidance system according to an embodiment of the present invention. In general, such an exemplary passive beacon 500 may have a characteristic shape so as to produce a specifically identifiable return that may be more easily identified as a beacon, in contrast to other objects. For example, in a particular embodiment, the exemplary passive beacon 500 may be a tall and thin object with highly reflective material thereon so as to stand out as a tall and bright object, while other objects (e.g., cargo tractors, personnel, etc.) may produce bright returns but are typically much wider in comparison.

Referring now to fig. 5, an example of such an exemplary passive beacon 500 is shown, the beacon 500 integrating a base support 505 (e.g., a traffic cone) with a rod 510 extending upwardly therefrom. The rod includes a reflective material disposed along its length. Such material may be retro-reflective tape effective in the near infrared and visible spectrum, but other embodiments may include other types of reflective material corresponding to material detectable by the sensors used on the cargo tractor, such as LiDAR sensors. By integrating the rod with the base support, the visibility of the beacon to the sensing cargo tractor and its sensors can be increased. As shown in FIG. 5, an exemplary LiDAR sensor 515 on a vehicle 115 may have a characteristic reception distance/angle, where the increased height of the beacon due to the use of the rod presents a better target for LiDAR detection. For example, the exemplary LiDAR sensor 515 shown in FIG. 5 is an 8-beam LiDAR unit whose beams are focused at different elevation and azimuth angles. The data detected by such an exemplary LiDAR sensor typically includes discrete points in space and intensity values.
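
For illustration, the tall/narrow/bright signature described above could be screened with a simple geometric heuristic over clustered LiDAR returns, as in the following Python sketch. The threshold values are purely illustrative assumptions and would depend on the particular LiDAR unit and reflective material used.

    def looks_like_beacon(cluster_points, min_height_m=0.8, max_width_m=0.3,
                          min_mean_intensity=180):
        # cluster_points: iterable of (x, y, z, intensity) in the sensor frame.
        xs = [p[0] for p in cluster_points]
        ys = [p[1] for p in cluster_points]
        zs = [p[2] for p in cluster_points]
        intensities = [p[3] for p in cluster_points]
        height = max(zs) - min(zs)                          # beacons are tall
        width = max(max(xs) - min(xs), max(ys) - min(ys))   # and narrow
        bright = sum(intensities) / len(intensities) >= min_mean_intensity
        return height >= min_height_m and width <= max_width_m and bright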

However, some sensors on the cargo tractor, such as monocular cameras, may use the color and/or shape of the base support (e.g., traffic cone) as a distinguishing feature for enhanced detection of the beacon in conjunction with the return captured by the LiDAR. In one embodiment, the beacon may be passive and unpowered. However, those skilled in the art will appreciate that other embodiments of the beacon may be powered to provide more visibility to the sensor suite of the cargo tractor (e.g., illuminated with lights or flashing lights that may be recognized, etc.).

Further, while the embodiment of beacon 500 shown in fig. 5 is a separately located passive beacon structure distinct from the high value asset (such as an aircraft), another embodiment of the beacon may be implemented with reflective symbols or materials affixed to, or forming integral parts of, the high value asset. In this way, such an embodiment of the beacon may be part of, for example, an edge of an aircraft wing, an engine, a nose cone, a tail structure, or another protruding portion of an aircraft (which may have more risk of collision than other portions of the aircraft). Further embodiments may be implemented with extendable structures on such high value assets that may be selectively deployed or actuated from a stowed or concealed position to a deployed active position, in which they may be detected whenever a mobile industrial vehicle (such as a cargo tractor) may be present in the vicinity of the high value asset. For example, an exemplary high value asset (such as an airplane or tractor/trailer) may have an extendable reflective beacon that may be actuated to become visible in a deployed active position on the high value asset.

FIG. 6 is an exemplary image 600 illustrating an exemplary passive beacon 505/510 as seen by an exemplary camera and LiDAR sensor and used as a training input for an exemplary collision avoidance system, in accordance with an embodiment of the present invention. Referring now to fig. 6, image 600 is an exemplary visual image captured by a camera on a cargo tractor. The camera provides the image as sensed data to the camera's signal processing software, which then provides the processed data to the data fusion software module running on the sensor data processor so that a bounding box 605 for the beacon/cone can be identified and associated with the coordinates x_top, y_top, x_bottom, and y_bottom. Thus, the bounding box 605 for the beacon/cone 505/510 shown in FIG. 6 (as represented by those coordinates) is one input to the data fusion software module. The LiDAR detects the same beacon/cone as another type of sensed data and provides the LiDAR data to the LiDAR sensor's signal processing software, which then provides the processed data to the data fusion software module running on the sensor data processor, so that information about the detected LiDAR space may be another input to the data fusion software module. An example of such detected LiDAR space is shown in FIG. 7, where an exemplary graph 700 shows the distribution of collected "training data." In FIG. 7, the training data shown in graph 700 is used for a neural network to learn the mapping between the LiDAR and the cameras. When a LiDAR beam intersects an object in the environment, the LiDAR returns point cloud data representing coordinates and intensity values. The camera generates an image. An object is detected in the image via a deep learning system, and the object is outlined with a bounding box. In the example shown in fig. 6, the bounding box has coordinates that outline the region: the x and y coordinates (x_top, y_top) of the upper left corner and the x and y coordinates (x_bottom, y_bottom) of the lower right corner. In operation, the LiDAR determines the center of each detected object and reports the distance and angle to that object. Thus, the neural network learns the mapping between LiDAR distances and angles and the bounding box data from the camera detection bounding boxes.

In more detail, for the example shown in fig. 7, there were 1752 samples collected for the beacon and 1597 samples collected for the cone. In this particular example, the samples cover a camera field of view of about +/-20 degrees from left to right, and the region of interest is about 5 to 20 meters ahead. Thus, the graph 700 shown in fig. 7 illustrates sample points by range (meters, along the y-axis) and angle from center (degrees left to right, along the x-axis).

FIG. 8 is a graphical diagram 800 illustrating exemplary training statistics for an exemplary collision avoidance system, according to an embodiment of the present invention. Referring now to FIG. 8, once the system is trained, an estimate of the relationship between the beacon's camera bounding box and the LiDAR range and angle measurements has been learned. For example, diagram 800 in FIG. 8 shows the predicted positions (with an "o" symbol) versus the true positions (with a "Δ" symbol), and in this example characterizes the system error in terms of error rate, mean of errors, and variance of errors, which are standard metrics for evaluating such errors.
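
As a hedged illustration of this training step, the following Python sketch fits a small multi-layer perceptron to map bounding-box pixel coordinates to LiDAR range and angle, in the spirit of the mapping learned in FIGS. 7 and 8. The file names and network size are hypothetical assumptions; the actual network architecture used in an embodiment may differ.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # X: one row per labeled sample, [x_top, y_top, x_bottom, y_bottom] in pixels.
    # Y: matching LiDAR ground truth per sample, [range_m, angle_deg].
    X = np.load("bbox_samples.npy")   # hypothetical training-data files
    Y = np.load("lidar_truth.npy")

    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
    model.fit(X, Y)

    # Predicted range/angle for a new camera bounding box:
    pred_range_m, pred_angle_deg = model.predict([[410, 122, 452, 310]])[0]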

In view of the above description of an exemplary enhanced collision avoidance system for mobile industrial vehicles, such as powered cargo tractors and following linked/towed carts, fig. 9 is a block diagram of exemplary general data fusion process steps 900 related to enhanced collision avoidance in accordance with an embodiment of the present invention. Referring now to FIG. 9, the sensor input flow begins on either side of the diagram, with the camera input on the left side and the LiDAR input on the right side. For example, on the left side of fig. 9 is block 905, representing operations performed on camera input data (e.g., camera images) captured by one of the cameras, such as the cameras shown in fig. 3 or fig. 4. The data fusion software module 310b of fig. 3 operates to acquire the camera input, recognize and detect objects (such as beacons/cones in an image), and create a bounding box that represents the image coordinates of the beacons/cones. The camera object recognition deep learning neural network 910, running as part of the data fusion software module 310b, then uses the camera detection bounding box to label the detected object (e.g., person, cargo tractor, beacon, etc.), assigns a confidence level to the detection, and then determines the output 915 as the distance and angle of the detected beacon/cone. Beginning on the right side of FIG. 9, the LiDAR input 920 is used by the data fusion software module to separately detect the distance and angle of beacons/cones based on such LiDAR data. Generally, the beacon/cone will appear bright (due to the reflective intensity of the pole) and relatively tall in the LiDAR data. The data fusion software module 310b can then compare the LiDAR-based determination of beacon/cone distance and angle to the camera-based determination of beacon/cone distance and angle as part of "fusing" the two types of data or data streams. In this manner, the system advantageously improves performance and accuracy by fusing such disparate data sources to better detect and identify objects relative to the cargo tractor vehicle platform. Additionally, the system improves performance by reducing false positive detections from LiDAR: beacons are projected into the camera image via a deep learning method, and detections are validated using the neural-network-learned projection from the camera to the LiDAR space.

As discussed above, embodiments of the MPC software module 315a running on the collision avoidance controller 410 may utilize kinematic models for spatially sensing and estimating not only the position of the cargo tractor, but also the position of the cart without having sensors on the cart that provide real-time feedback. A more detailed description of such an exemplary kinematic model (also referred to as a dynamic path following model) appears below as part of this detailed description. When using kinematic models as part of the MPC software module 315a running on the collision avoidance controller 410, the controller 410 has access to Inertial Measurement Unit (IMU) 305c position information (such as heading and accelerometer data), as well as ECU 305b information from the cargo tractor (such as wheel speed data). In one embodiment (e.g., a local option), only the position of the cargo tractor relative to the detected object may be known. In some embodiments, the movement of the system may be interpolated or extrapolated from a combination of heading, acceleration, and wheel speed data.

Using this data, a kinematic model implementation run by the MPC software module 315a can estimate the position and orientation of the carts following the cargo tractor relative to the cargo tractor. FIG. 10 is a set of diagrams 1005-1020 illustrating exemplary kinematic model visualizations in connection with estimated and predicted movements of an exemplary tractor (industrial vehicle) and following vehicles (trailers) in which an exemplary collision avoidance system may be deployed, according to embodiments of the present invention. Referring now to FIG. 10, the left-most visualization 1005 shows a tractor pulling two exemplary carts in a line. In the next visualizations 1010-1015, the cargo tractor begins to turn while the carts follow with a lag. Since the rotation of a cart lags behind the movement of the cargo tractor, the cart movement is calculated and estimated using the kinematic model. This is shown in the rightmost visualization 1020, where the red track of the cargo tractor is different from the blue track of the forwardmost cart and the yellow track of the next cart; none of the tracks remain in line as the cargo tractor makes a turn and the carts follow.
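
A minimal sketch of such a cart-following estimate, assuming a standard on-axle single-hitch trailer kinematic model and using the tractor speed as an approximation of each cart's speed, is shown below in Python. The hitch length and the Euler integration step are illustrative assumptions.

    import math

    def propagate_train(tractor_pose, cart_headings, v_mps, dt_s, hitch_len_m=2.5):
        # tractor_pose: (x, y, heading_rad). Each cart's heading relaxes toward
        # the heading of the unit towing it (heading lag grows with hitch length).
        headings = [tractor_pose[2]] + list(cart_headings)
        new_headings = []
        for i in range(1, len(headings)):
            dtheta = (v_mps / hitch_len_m) * math.sin(headings[i - 1] - headings[i])
            new_headings.append(headings[i] + dtheta * dt_s)
        # Estimated cart poses: each unit trails hitch_len_m behind the previous.
        x, y = tractor_pose[0], tractor_pose[1]
        cart_poses = []
        for th in new_headings:
            x -= hitch_len_m * math.cos(th)
            y -= hitch_len_m * math.sin(th)
            cart_poses.append((x, y, th))
        return cart_poses

Iterating this update at each control step yields the lagging, widened cart trajectories visualized in FIG. 10 without any sensors on the carts themselves.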

In view of the above description of an exemplary enhanced collision avoidance system and how embodiments of the system may be implemented using hardware and software elements, the following is a description of an exemplary method for enhanced collision avoidance that may utilize such a system in accordance with embodiments of the present invention, focusing on avoiding high value assets, such as a portion of an aircraft, particular equipment, or areas in which personnel or equipment may be located. For example, the system may be implemented in an embodiment that integrates: a sensing and sensor processing system (e.g., signal processing software and data fusion software running on a sensor data processor module) that detects and identifies objects and beacons using distinct types of sensor data that are advantageously fused to improve detection; a model predictive controller (e.g., model predictive control software running on a collision avoidance controller module operating with real-time capabilities) that utilizes cargo tractor/cart kinematic and vehicle dynamics models for collision avoidance and speed management; and a vehicle actuation system (e.g., vehicle actuation feedback control software running on a vehicle feedback actuation controller module) that interfaces with vehicle controls to help keep the mobile industrial vehicle and its towed vehicles away from high value assets. One or more beacons are placed in strategic locations to allow highly robust detection and avoidance of high value assets.

In general operation, an embodiment of the method begins with a first sensor (LiDAR) detecting the beacon and distinguishing the beacon from other objects, such as people, vehicles, cargo tractors, and the like. The method continues with a second sensor (camera(s)) detecting one or more objects (such as people, beacons, vehicles, etc.). Advantageously, these data may be fused by first determining a bounding box based on sensor data captured by the second sensor (camera) and determining a mapping space based on sensor data captured by the first sensor (LiDAR). The determined bounding box is then projected into the determined mapping space and compared to improve how beacons indicating position relative to the high value asset are identified and, in some cases, also to distinguish the identification of other objects that may pose a risk to the predicted movement of the cargo tractor relative to the high value asset. In this way, the method utilizes the fusion of two data sources to provide improved, enhanced, and more robust performance of the collision avoidance system.

Embodiments of the method next use the controller to estimate the shortest possible path to a breakthrough point in space projected radially from the beacon location. This may be accomplished, for example, by Model Predictive Control (MPC) software 315a running on a collision avoidance controller module 410 operating with real-time capabilities, where the MPC software 315a receives information (e.g., from the sensor data processor 405) locating the beacon and may determine the cargo tractor trajectory relative to the beacon as well as the tractor speed. The collision avoidance controller 410 with the MPC software 315a enabled operates as a type of limited look-ahead controller. Thus, the MPC software 315a predicts the shortest possible path to a breakthrough point in space projected radially from the beacon location (utilizing the determined cargo tractor trajectory relative to the beacon and the tractor speed from the IMU information), and, referencing a system kinematics model of the tractor and the carts (such as the model described above and the models referenced in greater detail in the embodiments described below), the MPC software 315a can also predict potential collisions between any part of the cargo tractor 115 and cart fleet 120 and high value assets (e.g., the aircraft 100) without actively detecting the position of any towed vehicle following the cargo tractor.

The exemplary method continues with the MPC software 315a generating a plurality of control solutions to determine the maximum allowable speed at discrete instants in time/space. Those skilled in the art will appreciate the necessity of generating such control solutions in real-time or near real-time, given the large set of possible solutions and the time constraints for making rapid decisions as the cargo tractor and its towed group of vehicles continue to move. In more detail, each of the control solutions generated by the MPC software 315a running on the collision avoidance controller module 410 may be evaluated with a cost function that compares the cost of each control solution, wherein the optimal solution may be selected based on criteria defined by the cost function. For example, a control solution that decelerates the cargo tractor quickly and then travels the same distance may result in a higher performance cost relative to another control solution that gradually decelerates the cargo tractor over a longer distance (which results in a lower performance cost), while remaining within speed regulation limits relative to the area near the beacon or within other speed limits that accommodate the particular items being transported, the number of carts towed, the mass of what is being transported on the carts, etc.

In the event that the cargo tractor 115 has exceeded the maximum allowable speed calculated by the collision avoidance controller 410 with the MPC software 315a enabled, the method may continue with the vehicle feedback actuation controller calculating a deceleration rate to achieve the desired speed within a fixed time window. As part of this step in the method, a feedback control system embodied in vehicle actuation feedback control software 320a operates by actuating the cargo tractor's brake and/or throttle controls to achieve the calculated deceleration or acceleration. Those skilled in the art will further appreciate that, as part of this exemplary method, the vehicle feedback actuation controller may also bring the cargo tractor to a complete stop if desired, such as when the cargo tractor and towed vehicles approach a loading/unloading area designated by one or more beacons, or when the cargo tractor and towed vehicles approach a beacon directly in front of the cargo tractor itself.

Such general method embodiments are consistent with the exemplary embodiment of the method described in the flow chart of fig. 38, in accordance with an embodiment of the present invention. Referring now to FIG. 38, method 3800 begins at step 3805, where a LiDAR sensor on a mobile industrial vehicle (e.g., a cargo tractor) detects one or more reflective beacons relative to the mobile industrial vehicle. Such a reflective beacon may already be in place, or alternatively, the method 3800 may include the step of placing or deploying one or more reflective beacons relative to the high value asset prior to the detection in step 3805. In this manner, an exemplary reflective beacon may be physically placed near or in close proximity to a pre-designated location on the high value asset, such as a protruding portion of the aircraft (e.g., the nose of the aircraft, a nacelle extending from a wing of the aircraft, a tip of a wing of the aircraft, etc.), an area where personnel are expected to traverse, or a fixed facility. Deployment of the reflective beacon may also be accomplished by activating a reflective beacon that is an integral part of the high value asset, or by actuating such an integral reflective beacon to become visible from a concealed or stowed position relative to the high value asset.

At step 3810, method 3800 proceeds with a camera sensor on the mobile industrial vehicle detecting one or more objects relative to the mobile industrial vehicle. At step 3815, the method 3800 uses an exemplary sensor processing system on the mobile industrial vehicle to fuse the sensor data detected by each of the LiDAR sensor and the camera sensor to identify a relative location of the one or more reflective beacons based on a multi-sensor fused data source using the detected LiDAR sensor data and the detected camera sensor data. In more detail, the fusion as part of step 3815 may be implemented, in particular, with the sensor processing system: determining one or more bounding boxes based on sensor data generated by the camera sensor when one or more objects are detected; determining a mapping space based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected; projecting the determined one or more bounding boxes into the determined mapping space (e.g., by using a convolutional neural network as described in more detail below); and comparing the determined one or more bounding boxes with the detected objects in the mapping space to verify the relative position of the one or more reflective beacons.

In yet another embodiment of step 3815, the fusing may be achieved when the sensor processing system deploys fuzzy logic and confidence scores in performing some of the sub-steps described above. For example, such a more detailed implementation of fusion may include the sensor processing system: determining one or more bounding boxes and camera confidence scores based on sensor data generated by a camera sensor when one or more objects are detected; determining a mapping space and a LiDAR confidence score based on sensor data generated by the LiDAR sensor when one or more reflective beacons are detected; projecting the determined one or more bounding boxes into the determined mapping space to identify relative positions of the one or more objects, and determining a final confidence score based on the camera confidence score and the LiDAR confidence score; when the final confidence score for a particular one of the one or more objects is below the confidence threshold, disregarding the identified relative position of the particular one of the one or more objects; and comparing the determined one or more bounding boxes with the objects detected in the mapping space to verify the relative positions of the one or more objects that are not ignored based on their respective final confidence scores.
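
As a non-limiting sketch of the confidence-combination step, the following Python fragment uses a geometric mean of the two per-sensor confidences as a simple stand-in for the fuzzy-logic combination described above; the threshold value and the combination rule itself are illustrative assumptions.

    def fused_confidence(cam_conf, lidar_conf, threshold=0.6):
        # Both inputs are assumed normalized to [0, 1]; the geometric mean
        # penalizes detections that only one sensor supports strongly.
        final = (cam_conf * lidar_conf) ** 0.5
        return final, final >= threshold

    # A confident camera detection unsupported by LiDAR is disregarded:
    conf, keep = fused_confidence(0.90, 0.20)   # conf ~ 0.42, keep == False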

At step 3820, method 3800 has: a model predictive controller on the mobile industrial vehicle determines a plurality of control solutions, where each of the control solutions defines a threshold allowable speed of the mobile industrial vehicle at a discrete time instant (a time instant in time/space) based on an estimated path to a breakthrough point projected radially from the verified relative position of the one or more reflective beacons. The model predictive controller proceeds at step 3825 to identify one of the control solutions as an optimal solution based on the performance cost function, wherein the one of the control solutions is associated with an optimal threshold allowable speed.

At step 3830, method 3800 proceeds to: when the mobile industrial vehicle exceeds the optimal threshold allowable speed, a vehicle actuation system on the mobile industrial vehicle responsively actuates vehicle speed control elements (e.g., throttle, brake) to cause the mobile industrial vehicle to alter the mobile operation within a desired time window (so as not to lag in responsiveness) and achieve the desired mobile operation relative to a current speed of the mobile industrial vehicle.

In a further embodiment of the method 3800, the mobile industrial vehicle may be implemented by a number of wheeled vehicles, for example a powered vehicle (such as a cargo tractor) and a plurality of towed vehicles (such as carts linked by hitches to the respective vehicle in front of each cart) serially linked to the powered vehicle. Thus, the step of determining the different control solutions may be accomplished by determining such control solutions with the model predictive controller on the mobile industrial vehicle, wherein each of the control solutions defines a threshold allowable speed at discrete instants in time/space for a collective vehicle comprising the mobile industrial vehicle based on a predicted path of the powered vehicle and the towed vehicles to a breakthrough point projected radially from the verified relative positions of the one or more reflective beacons, and wherein such predicted path is determined by the model predictive controller without actively detecting the position of any towed vehicle following the powered vehicle.

Those skilled in the art will appreciate that such method embodiments as disclosed and explained above may be implemented with devices or systems such as the exemplary enhanced collision avoidance system described at least with reference to fig. 2-4 (or embodiments of such systems as described in more detail below), and with the sensor suite described above, as well as with different processor/controller modules, and different software modules running on different processor/controller modules as described above. Such software modules may be stored on a non-transitory computer readable medium in each processor/controller module. Thus, when executing such software modules, the collective processor/controller module of the enhanced system for collision avoidance may be operable to perform operations or steps from the exemplary method disclosed above, including variations of the method.

In another embodiment, a further method for enhanced collision avoidance may utilize a similar system according to an embodiment of the invention that focuses on object detection and object avoidance. For example, such a system may be implemented in another embodiment that also integrates: a sensing and sensor processing system (e.g., signal processing software and data fusion software running on a sensor data processor module) that detects and identifies objects and beacons using distinct types of sensor data, which are advantageously fused to improve detection; a model predictive controller (e.g., model predictive control software running on a collision avoidance controller module operating with real-time capabilities) that utilizes cargo tractor/cart kinematic and vehicle dynamics models for collision avoidance and speed management; and a vehicle actuation system (e.g., vehicle actuation feedback control software running on a vehicle feedback actuation controller module) that interfaces with vehicle controls to assist in keeping the mobile industrial vehicle and its towed vehicles from colliding with the plurality of detected objects.

In general operation, this particular method embodiment begins with a first sensor (LiDAR) detecting any object in a geometrically defined area projected in the direction of travel of the cargo tractor vehicle as part of a mapping space in that direction of travel. The method continues with a second sensor (camera(s)) detecting one or more objects (such as people, beacons, vehicles, etc.). Advantageously, these data may be fused by first determining a bounding box based on sensor data captured by the second sensor (camera) and determining a mapping space based on sensor data captured by the first sensor (LiDAR). The determined bounding box is then projected into the determined mapping space and compared to improve how objects in the path of the cargo tractor are identified. In this manner, the method utilizes the fusion of the two data sources to provide improved, enhanced, and more robust performance of the collision avoidance system relative to objects detected in the path of the cargo tractor.

Similar to the prior method, this method embodiment also uses MPC software running on the collision avoidance controller to calculate the maximum vehicle speed that will allow the system to stop before colliding with an object within the constrained space in the vehicle's direction of travel, as detected by the sensor platform. In the event that the cargo tractor has exceeded the maximum allowable speed calculated by the MPC software running on the collision avoidance controller, a feedback control system embodied in the vehicle actuation feedback control software operates by actuating the brake and/or throttle controls of the cargo tractor to achieve the calculated deceleration or acceleration. Those skilled in the art will further appreciate that, as part of this further method embodiment, the vehicle feedback actuation controller can also bring the cargo tractor to a complete stop if desired.

Those skilled in the art will appreciate that such additional method embodiments as disclosed and explained above may be implemented with a device or system such as the exemplary enhanced collision avoidance system described at least with reference to fig. 2-4, with the sensor suite described above, and with the different processor/controller modules and the different software modules running on those processor/controller modules as described above. Such software modules may be stored on a non-transitory computer readable medium in each processor/controller module. Thus, when executing such software modules, the collective processor/controller modules of the enhanced system for collision avoidance may be operable to perform operations or steps from the exemplary method disclosed above, including variations of the method.

Further enhancements

Object persistence

As noted above, the example model predictive control 315a may track the persistence of detected objects (such as reflective beacons) within the state model. Those skilled in the art will appreciate that embodiments may implement object persistence as a software function within the exemplary collision avoidance system 300 that tracks and updates the position of identified objects (such as reflective beacons) relative to the cargo tractor as the cargo tractor moves through space. This functionality enables improved, enhanced and more accurate collision avoidance calculations for: these objects may have moved beyond the current field of view (FOV) of the sensors on the cargo tractor or have become occluded. In other words, embodiments may implement object persistence as part of the model predictive control 315a to enhance and improve how the example collision avoidance system 300 interprets and tracks detected objects (such as reflective beacons) and avoids collisions with detected objects after the sensor package in front of the mobile industrial vehicle (e.g., the cargo tractor 115) has moved past the detected object (e.g., a reflective beacon) and no longer has the detected object in the FOV of the sensor package.

Thus, in such embodiments, detected objects (such as detected reflective beacons) may persist within the system and be used by the model predictive control 315a as part of its collision avoidance and spatial awareness functions, so the system may keep track of such objects in space relative to the vehicles (e.g., the cargo tractor 115 and the cart fleet 120) to ensure that the fleet of carts 120 will track past an object without contacting it, or to determine whether the vehicle fleet path 125 will come into contact with the object such that the vehicle 115 and its fleet 120 will need to stop. In such an embodiment, the system may keep track of changes in the trajectory of the queue, and therefore will stop the queue only if the queue would otherwise come into contact with the detected object and the operator does not make the required change of direction to avoid contact.
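
One simple way to realize such persistence, offered here only as a first-order sketch, is to dead-reckon the stored object positions (held in the vehicle frame) through the vehicle's own measured motion between updates, so that objects leaving the sensor FOV retain a usable relative position. The function below assumes small per-step motion as reported by the IMU/wheel-speed sensors.

    import math

    def persist_objects(tracked_xy, dx_m, dy_m, dyaw_rad):
        # Shift each stored (x, y) into the vehicle's new origin, then undo
        # the vehicle's yaw change, keeping all objects in the vehicle frame.
        c, s = math.cos(-dyaw_rad), math.sin(-dyaw_rad)
        updated = []
        for (x, y) in tracked_xy:
            xt, yt = x - dx_m, y - dy_m
            updated.append((c * xt - s * yt, s * xt + c * yt))
        return updated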

Boundary identification

Further embodiments may use boundary types to support various applications, such as positioning, entry denial, and automated mode selection, as involved in exemplary collision avoidance systems and methods. In general, exemplary boundaries in the context of an exemplary collision avoidance system used on a mobile industrial vehicle (e.g., cargo tractor 115) may be identified by identifying markers placed in the physical environment. Virtual boundaries may also be defined in software that uses geo-location instrumentation as part of an exemplary collision avoidance system (e.g., using GPS location sensor 305c and location sensor data provided to the model predictive control software module 315a). Thus, a geo-reference (e.g., a geofence using GPS coordinate location data) can be used for mode selection and regional boundaries, as discussed in more detail below with respect to multi-mode operation of the system and the triggers that can change operation between modes based on boundaries, as well as for denying entry to areas through geo-reference parameter selection.
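
A minimal sketch of such a geofence test, assuming the boundary is supplied as a polygon of GPS vertices, is the classic ray-casting point-in-polygon check shown below in Python; the vertex list itself would come from the geo-reference parameter selection described above.

    def inside_geofence(lon, lat, polygon):
        # Ray-casting point-in-polygon test; polygon is a list of (lon, lat)
        # vertices defining, e.g., an aircraft gate area boundary.
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > lat) != (y2 > lat):               # edge crosses the latitude
                x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
                if lon < x_cross:
                    inside = not inside
        return inside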

Boundary and object identification using marker identification

In further embodiments, object identifiers (also referred to as markers) of predetermined types/shapes may be strategically placed in the environment outside of the vehicle to enable sensors on the vehicle to see, detect, and recognize them and to generate sensor data that allows the exemplary collision avoidance system to react accordingly. In more detail, such object identifier markers may carry a symbology of a shape and type that uniquely identifies the object or bounding region (and, in some embodiments, its orientation) by, for example, a predetermined code and/or size. An example of such object identifier marking may be implemented with ArUco markers, which allow a camera-based system to quickly and reliably identify unique markers and estimate range, angle, and orientation. Embodiments may use ArUco markers to identify boundaries for the moving cargo tractor 115 and to identify objects such as a cargo loader (such as loader 110) and its orientation. The identification of a unique object identifier marker allows the collision avoidance controller 410 to track these points in space without continuously observing the marker. The exemplary collision avoidance system 300 may respond to the identified objects and boundaries in a manner similar to object persistence to change the operation of the mobile industrial vehicle (e.g., the cargo tractor 115) and/or change the operating mode of the collision avoidance system 300, as explained in more detail below. The unique identification and orientation estimation allows the collision avoidance controller 410 within the system 300 to define these objects as geometric shapes and respond accordingly. Thus, object identification using cameras and ArUco markers allows the exemplary collision avoidance system 300 to provide a more accurate localized response to mission critical objects (e.g., the loading gate 110 for loading into the aircraft 100), given the ability to distinguish such markers from reflective beacons, which lack the further contextual information (e.g., unique identity, range, angle, and orientation information) provided by such object identifier markers.
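
For illustration only, detecting such markers and estimating their pose could be sketched with OpenCV's aruco contrib module as below; the exact function names changed in OpenCV 4.7+, and the calibration files, image name, and 0.20 m marker size are hypothetical assumptions.

    import cv2
    import numpy as np

    # Calibration data assumed to exist from a prior camera calibration step.
    camera_matrix = np.load("camera_matrix.npy")   # hypothetical file names
    dist_coeffs = np.load("dist_coeffs.npy")

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    frame = cv2.imread("gate_area.jpg")            # hypothetical input image
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)

    if ids is not None:
        # Pose estimation yields range, angle, and orientation per marker.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, 0.20, camera_matrix, dist_coeffs)
        for marker_id, tvec in zip(ids.flatten(), tvecs):
            print(marker_id, tvec)                 # id encodes object/boundary type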

Exemplary operating modes - Driving Lane and aircraft boarding gate area

In still further embodiments, the example collision avoidance system may be programmatically configured to operate in different operating modes (e.g., using different operating parameters for vehicle operation, such as speed, which sensors to use, sensor settings, distance constraints, etc.) depending on the operating zone in which the vehicle having the collision avoidance system is operating. Additionally, embodiments may enable an exemplary collision avoidance system on a vehicle to independently switch between modes without communicating with a larger network.

In more detail, embodiments may have an aircraft boarding gate area (AGA) mode and a Driving Lane (DL) mode. The two exemplary operating modes, AGA and DL, each function within different operating parameters and utilize system features in distinct ways during operation of a vehicle having the exemplary collision avoidance system 300 configured to operate in such different modes. For example, an exemplary DL mode is defined by a boundary that separates the driving lane (e.g., the area where the vehicle 115 travels in a lane that does not come close to the aircraft) from the aircraft gate area (e.g., the area where the vehicle 115 may be in close proximity to the aircraft 100 when transporting baggage in the carts 120 to the loader 110 next to the aircraft 100). For example, the maximum operating speed in the physical driving lane area (i.e., the area where the vehicle 115 may operate in DL mode) may be twice the maximum speed of the aircraft boarding gate area (where the vehicle 115 may autonomously and automatically switch to operating in the more constrained AGA mode). In such an example, the beacon detection capability used in some collision avoidance embodiments may not apply when operating in the less restrictive DL mode; the exemplary DL operating mode may instead rely on a forward protection system to prevent collisions with objects in the vehicle path when operating at speeds increased relative to the aircraft gate area. Exemplary collision avoidance system functions available in these two exemplary modes are described in more detail below with reference to FIGS. 40 and 41. In further embodiments, additional operating modes may be enabled by defining new operating parameters and identifying different regional boundaries for the different modes.

Multi-mode operation switching based on tag identification

Embodiments described herein may use the above-described exemplary object identifier markers (e.g., ArUco markers) to designate different regional boundaries for the different operating modes of a vehicle's collision avoidance system. The detection of these markers provides input to an embodiment of the exemplary collision avoidance system 300, so that the system can detect such markers and identify them as the related objects or boundaries (e.g., based on the encoding of a particular ArUco marker). The system may then responsively identify when to switch from a "forward protection detection mode" (e.g., DL mode), used outside of a restricted area (e.g., the boarding gate area associated with loading the aircraft), to a "boarding gate area type detection mode" (e.g., AGA mode) when entering the more restricted boarding gate area. In this manner, the example collision avoidance system 300 on the vehicle 115 may then operate in the more restrictive gate mode (e.g., AGA mode), which focuses primarily on beacon detection and also reduces the maximum speed of the tractor. The example collision avoidance system 300 may also utilize location information (e.g., GPS data), further marker detections, or other geographic references to know when to switch back to the forward protection detection mode upon exiting the gate area. In the forward protection detection mode (outside the gate area), the allowable speed of the cargo tractor 115 may be greater than when operating in the gate area type detection mode.
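A minimal sketch of such marker-triggered switching (the class shape and method names are assumptions for illustration, not the disclosed design) might be:

    enum class Mode { DrivingLane, GateArea };  // the DL and AGA modes described above

    class ModeSelector {
    public:
        // Called when a detected marker decodes as a boundary marker and the
        // vehicle is observed crossing the regional boundary it represents.
        void onBoundaryCrossed() {
            mode_ = (mode_ == Mode::DrivingLane) ? Mode::GateArea : Mode::DrivingLane;
        }
        // Called when location data (e.g., GPS) shows the vehicle exiting the gate area.
        void onGateAreaExit() { mode_ = Mode::DrivingLane; }
        Mode mode() const { return mode_; }
    private:
        Mode mode_ = Mode::DrivingLane;  // default to the less restrictive mode
    };

Because the switch is driven entirely by locally sensed markers and location data, no communication with a larger network is required, consistent with the independent mode switching described above.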

In such embodiments, an exemplary object identifier marker (e.g., a specific ArUco marker) may be used to indicate where an object is located, such as where the aircraft loader 110 will be in the gate area near the aircraft 100. Detection of such a dedicated object identifier marker may thus allow the exemplary collision avoidance system to enter a further mode of operation and to de-assert (disengage) the in-gate-area collision avoidance response when the tractor/cart queue 115/120 is within a threshold distance of the aircraft loader (i.e., of the detected object identifier marker). This is because there may be operating periods during which the train of carts behind the cargo tractor 115 needs or is desired to make light contact with the aircraft loader (or to come closer to the loader than the collision avoidance system would normally allow) in order to load/unload containers between the carts 120 and the aircraft loader platform.

In these further embodiments, these system triggers may allow further enhanced and improved collision avoidance responses in both environments, while minimizing situations in which certain detections would unnecessarily stop the tractor/cart train. Preventing unnecessary system-triggered stops/responses, and allowing these changes of operating mode (gate area collision avoidance, which focuses primarily on beacons, and, outside gate areas, collision avoidance that focuses primarily on forward protection), provides technical solutions and practical applications of the above-described system elements that even further improve collision avoidance and enhance safe logistics operations involving logistics vehicles such as the cargo tractor 115 and towed carts 120.

FIG. 39 is a diagram of another exemplary tractor collision avoidance system operational view deployed in another exemplary logistics environment, in accordance with an embodiment of the present invention. As shown in FIG. 39, another embodiment having an aircraft 100 is shown with exemplary object identification markers 3900, 3905. Marker 3900 is deployed in this embodiment encoded as an exemplary boundary-identifying marker, identifying an exemplary regional boundary such that the operation of system 300 on the cargo tractor 115 may differ on either side of that boundary. Marker 3905 is deployed in this embodiment as an object identifier marker encoded to be associated with exemplary loader 110 (and sensor analysis of marker 3905 may also yield the orientation of loader 110). As shown in FIG. 39, the example collision avoidance system 300 on the vehicle 115 may responsively and automatically switch from DL mode to the more restrictive AGA mode when the cargo tractor 115 has moved out of the DL operational area below the boundary identification marker 3900 (where the system 300 operates in the less restrictive DL mode) and into the AGA operational area above the boundary identification marker 3900.

Dynamic field of view

In a front guard embodiment (e.g., when the exemplary collision avoidance system 300 is operating in the less restrictive DL mode), the collision avoidance portion of the overall system 300 may use sensors (e.g., sensors 305d, e, f) and the sensor data they generate in an improved and enhanced manner. In more detail, the example collision avoidance system 300 may dynamically adjust the field of view (FOV) of interest to the system 300, effectively changing where the sensors are focused and/or the width of the field the sensors receive. This allows the exemplary collision avoidance system 300 to change, refine, and dynamically adjust for changes in the operating mode of the system. For example, the example collision avoidance system 300 (e.g., the multi-processor module 405 running the signal processing software module 310a) may prioritize a portion of the sensor data generated by the sensors, effectively focusing the system 300 on sensor data selected according to the direction of travel and/or the direction in which the vehicle 115 is turning. Embodiments may prioritize sensor data in this way to adapt the effective FOV of the sensors, achieving a dynamic FOV response by programmatically adjusting which portion of the sensor data from the sensors' actual fields of view is processed and considered for collision avoidance purposes. This effectively filters out extraneous sensor data from portions of a sensor's actual FOV and causes the system 300 to focus on a subset of the sensor data. Such a subset may, for example, yield an adjusted extent of the sensor's effective field of reception (e.g., where the effective FOV narrows on both sides about the longitudinal axis of the vehicle), or an adjusted focus of the sensor (e.g., where the effective FOV changes more on one side of the longitudinal axis of the vehicle than on the other). Such adaptive and dynamic changes to the sensor data considered valid by the system 300 may, for example, be made in response to changes in the direction of the vehicle 115 (e.g., changes in the angular velocity of the cargo tractor 115's trajectory), and/or in response to identifying object identification markers that indicate a change in the operating region within which the exemplary collision avoidance system 300 (and its vehicle 115) is operating, to help prevent unnecessary or unwanted system-initiated vehicle stops. In such embodiments, the exemplary collision avoidance system 300 may better determine, for example, whether a detected object will be avoided if the travel path of the vehicle is adjusted within an acceptable distance of where a collision would occur had the path not been changed. Thus, the dynamic FOV may be deployed as a situation-adaptive FOV within an exemplary collision avoidance system operating in the forward protection detection mode, allowing the dynamic-FOV-enabled exemplary collision avoidance system 300 to respond to objects in the vehicle path.
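A minimal sketch of this dynamic-FOV idea, assuming a single planar scan, follows; the linear yaw-rate bias and all names are illustrative assumptions rather than the disclosed implementation:

    #include <cmath>
    #include <vector>

    struct Return { double angleRad; double rangeM; };  // detection in the vehicle frame

    // Keep only the returns inside an effective field of view whose center
    // shifts toward the direction of the turn (positive yaw rate = left turn).
    std::vector<Return> effectiveFov(const std::vector<Return>& all,
                                     double halfWidthRad, double yawRateRadPerS) {
        const double biasGainS = 0.5;                      // assumed tuning constant
        const double center = biasGainS * yawRateRadPerS;  // shift FOV toward the turn
        std::vector<Return> kept;
        for (const Return& r : all) {
            if (std::fabs(r.angleRad - center) <= halfWidthRad) {
                kept.push_back(r);  // inside the effective FOV; processed downstream
            }
        }
        return kept;
    }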

Minimum keep-out distance lock (lockout)

As noted above, different operating modes of the example collision avoidance system 300 may have different operating parameters, and system features may be utilized in different ways during operation of a vehicle having the example collision avoidance system 300 configured to operate in such different modes. For example, in addition to speed-limit operating parameters and field of view parameters for the particular sensors used in a given operating mode, a further exemplary operating parameter/characteristic that may be tied to a particular operating mode of the exemplary collision avoidance system 300 is a minimum keep-out distance (KoD). Generally, the minimum KoD is the radial distance from an object at which the example collision avoidance system 300 may engage the vehicle actuation system and cause a complete, immediate vehicle stop. Thus, the exemplary minimum KoD lock allows a full brake-to-stop response for objects that may enter the sensor FOV monitored by the exemplary collision avoidance system. Such an exemplary minimum KoD may differ between operating modes, as the speeds involved may differ; a higher-speed operating region may require a larger minimum KoD to account for the higher potential speeds allowed there (e.g., under the speed limit parameter associated with that region's operating mode). Other areas, however, may have a desired minimum KoD that provides more distance to the detected object for reasons other than braking to a stop (e.g., an area containing objects of a dangerous nature may reasonably require a larger minimum KoD).
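A minimal sketch of the KoD lock follows, assuming detections are expressed as radial ranges in meters; the emergency-stop hookup is only indicated in a comment:

    #include <vector>

    struct Detection { double rangeMeters; };

    // True when any detection falls inside the mode's minimum keep-out radius.
    bool withinKeepOut(const std::vector<Detection>& detections, double minKoDMeters) {
        for (const Detection& d : detections) {
            if (d.rangeMeters < minKoDMeters) {
                return true;  // something is inside the keep-out radius
            }
        }
        return false;
    }

    // Example use: if (withinKeepOut(dets, modeMinObjectKoD)) { /* command full stop */ }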

Local temporary system override (override)

A further feature of the example collision avoidance system 300 may be a local temporary system override. The local temporary system override feature of the exemplary system 300 allows a tractor operator to disable the system 300 on a time-limited basis. This may be accomplished by interacting with the gear selector 325 (i.e., one of the vehicle actuators controlled by the vehicle actuation feedback control 320a). For example, placing the cargo tractor 115 in park using this feature, implemented as part of the exemplary collision avoidance system 300, would disable the system 300, while placing the cargo tractor 115 in drive or reverse from the park position could begin a countdown timer that re-engages the system 300. In an embodiment, such an exemplary countdown timer may depend on the cart selection. Thus, increasing the cart count may extend the countdown period, and information regarding the cart count may be maintained by the example collision avoidance system 300 that utilizes this local system override feature. Further embodiments may allow the cart count to be entered as a selection through a user interface to the exemplary collision avoidance system 300, which may also be used to enter information regarding, for example, the number of carts in the queue 120, as well as the length and mass of the particular queue 120. Such cart count information may then be used by the example collision avoidance system 300 when determining the potential control solutions calculated by the system's example model predictive controller.
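A sketch of this override logic follows; the base period and per-cart extension are assumed values chosen only to illustrate the cart-count dependence described above:

    #include <chrono>

    class OverrideTimer {
        using Clock = std::chrono::steady_clock;
    public:
        // Shifting to park disables collision avoidance on a time-limited basis.
        void onShiftToPark() { engaged_ = false; counting_ = false; }
        // Shifting out of park starts a countdown scaled by the cart count.
        void onShiftOutOfPark(int cartCount) {
            const auto base = std::chrono::seconds(30);      // assumed base countdown
            const auto perCart = std::chrono::seconds(10);   // assumed per-cart extension
            reengageAt_ = Clock::now() + base + perCart * cartCount;
            counting_ = true;
        }
        // The system re-engages automatically once the countdown expires.
        bool collisionAvoidanceActive() {
            if (!engaged_ && counting_ && Clock::now() >= reengageAt_) engaged_ = true;
            return engaged_;
        }
    private:
        bool engaged_ = true;
        bool counting_ = false;
        Clock::time_point reengageAt_{};
    };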

In accordance with embodiments of the present invention, further embodiments of the exemplary collision avoidance system 300 may include additional software-based modules, in certain logical segments of the system, that take on further roles enhancing the operation of the system 300, including a user interface with which to enter information (such as cart count information) and monitor the status of the system 300. In more detail, further embodiments may include an exemplary sensor platform management software-based module as part of the sensor processing segment 310 shown in FIG. 3, while a system management software-based module may be part of the collision avoidance control segment 315 shown in FIG. 3. Embodiments of the exemplary sensor platform management and system management modules may provide system initiation functionality, user input for mode selection, communication features, and the generation of different types of user interfaces for the exemplary collision avoidance system 300.

For example, such exemplary modules may include an auto-start feature for the exemplary system 300, wherein upon system power-up, the vehicle actuation feedback controller 415 initiates a start-up cycle for the remaining system components in the system 300, which results in the automatic enablement of the exemplary collision avoidance system 300 upon completion of system initialization.

In another example, such exemplary modules may include a software-based network connection for the exemplary system 300. While embodiments of the exemplary collision avoidance system 300 have numerous features and operational scenarios in which the system 300 operates in an autonomous or semi-autonomous mode that does not require connection to a larger network or to systems over such networks, the inclusion of a network connection (e.g., over Wi-Fi, cellular, or other wireless technologies) allows remote system monitoring, manual commands to various system states and parameters of the system 300, and receipt of updated information regarding a particular operating environment (e.g., identification information for the particular object identification markers used within a particular aircraft environment, etc.).

From a user interface perspective, such exemplary modules may enable the system 300 to present one or more different graphical user interfaces, including remote visualizers and/or status indicators. An exemplary graphical user interface generated by such modules as part of the exemplary collision avoidance system 300 may provide an intuitive interface for user input of adjustable system parameters (such as cart count information). An exemplary remote system visualizer may provide a graphical representation of, for example, the MPC calculations and control response solutions. Further, an exemplary status indicator module (implemented as part of such exemplary sensor platform management and system management modules) may communicate the current system status and high-level actions of the exemplary collision avoidance system 300 to a driver of the vehicle 115, a local observer on the vehicle 115, a local observer outside the vehicle 115 but in the same operating region, and/or a remote observer outside the operating region of the vehicle 115.

Further example embodiments are illustrated in FIGS. 40-41, which show different operating regions and an example vehicle whose onboard collision avoidance system may switch from DL mode to AGA mode, engaging different collision avoidance system operating parameters and functions, in an autonomous and automatic manner that enhances the collision avoidance capabilities of such an example vehicle. FIG. 40 is a diagram of another exemplary tractor collision avoidance system deployed in another exemplary logistics environment, wherein the exemplary tractor collision avoidance system operates in an exemplary Driving Lane (DL) mode in accordance with an embodiment of the present invention. Referring now to FIG. 40, exemplary cargo tractor vehicles 115a-115d are illustrated operating in exemplary DL operating area 4005, while exemplary aircraft 100 and exemplary loader 110 (disposed alongside aircraft 100) are within exemplary AGA operating area 4000. As shown in FIG. 40, in this example, the exemplary DL operating region 4005 comprises a driving lane that keeps vehicles from coming so close to the aircraft 100 as to pose an inherent collision risk to such a high value asset, while forward protection monitoring helps the vehicles 115a-115d in the DL operating region 4005 avoid collisions with other objects detected in the driving lane. While in the example DL operating area 4005, embodiments of the example collision avoidance system 300 may be used on vehicles 115a-115d in DL mode, with operating parameters and system features that are less constrained (e.g., higher parameter limits and a reduced set of deployed system features) than when operating within the AGA operating area 4000 in AGA mode. For example, an exemplary DL mode of the system 300 within the DL operating region 4005 may include a speed limit of 10 mph, a sensor FOV constrained to frontal protection detection mode operation, and a minimum object KoD set to 4 m. In DL mode, the system 300 on the exemplary cargo tractor vehicle 115a may use dynamically adjusted sensors that responsively adjust the sensor FOV based on changes in the movement of the vehicle 115a (such as when the vehicle 115a is turning). As shown in FIG. 40, when the vehicle 115a turns, the sensors of the example collision avoidance system 300 on the vehicle 115a may adjust their FOV to look in the direction of the turn, to better view objects in the path of the vehicle 115a. Further, as shown in FIG. 40, the collision avoidance system 300 on the vehicle 115d may detect objects in its travel path (e.g., the turning vehicle 115a and its towed cart fleet 120a) and, as a result, cause the vehicle 115d to automatically slow down or stop.

FIG. 41 is a diagram of the exemplary logistics environment of FIG. 40, but wherein the exemplary tractor collision avoidance system on vehicle 115a has been automatically switched to operate in an exemplary aircraft boarding gate area mode, in accordance with an embodiment of the present invention. As shown in FIG. 41, the vehicle 115a has made the turn shown in FIG. 40, but has continued from the DL operating area 4005 into the AGA operating area 4000. The example collision avoidance system 300 on the vehicle 115a detects the example object identification marker 4010 (similar to marker 3900 shown in FIG. 39), identifies the marker 4010 as a boundary identification marker based on the information encoded on it, and initiates a change from DL mode to the more restrictive AGA mode for the example collision avoidance system 300 operating on the vehicle 115a, without instruction from or communication with any larger network. For example, an exemplary AGA mode (e.g., the mode used when operating within the AGA operating region 4000) for the system 300 of the vehicle 115a may: include a reduced speed limit of 5 mph; cause the LiDAR sensor FOV to be dynamically adjusted to extend to 270 degrees for enhanced collision avoidance within the AGA operating area 4000; engage beacon identification (not just object identification) and object persistence as part of collision avoidance; enable loader identification and tracking (a specific type of object identified using, for example, an exemplary ArUco marker whose code uniquely identifies the loader and its orientation); reduce the minimum object KoD to 2 m; and set the minimum beacon KoD to 1 m. In this manner, the vehicle 115a and its onboard collision avoidance system 300 may better operate in an environment with high value assets, and operate in an autonomously adaptive manner that further enhances collision avoidance with objects in the vehicle path, including high value assets and objects that are no longer within the sensor FOV (due to object persistence).
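For illustration only, the exemplary parameter values recited for the DL and AGA modes above might be grouped into per-mode parameter sets along the following lines; the field names, struct layout, and the DL FOV value are assumptions (the text specifies only frontal coverage for DL):

    struct CollisionAvoidanceParams {
        double speedLimitMph;
        double lidarFovDegrees;
        double minObjectKoDMeters;
        double minBeaconKoDMeters;  // only meaningful when beacon detection is enabled
        bool   beaconDetection;
        bool   objectPersistence;
        bool   loaderIdentification;
    };

    // DL values per the FIG. 40 example; AGA values per the FIG. 41 example.
    constexpr CollisionAvoidanceParams kDrivingLaneMode{10.0, 120.0, 4.0, 0.0, false, false, false};
    constexpr CollisionAvoidanceParams kGateAreaMode   { 5.0, 270.0, 2.0, 1.0, true,  true,  true };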

FIG. 42 is a flow diagram of an exemplary method for enhanced collision avoidance by a mobile industrial vehicle using a multi-mode on-board collision avoidance system, which may operate in a plurality of different operating regions, in accordance with an embodiment of the present invention. Referring now to FIG. 42, exemplary method 4200 begins with the multi-mode on-board collision avoidance system on a mobile industrial vehicle operating in a first collision avoidance mode (e.g., DL mode) while the mobile industrial vehicle operates in a first of the different operating regions (e.g., the driving lane region 4005 shown in FIGS. 40 and 41). At step 4210, method 4200 continues with one of the sensors of the multi-mode on-board collision avoidance system detecting an object identification marker (e.g., marker 3900 or marker 4010, such as an ArUco marker, encoded to correspond to a marker representing a regional boundary and configured to indicate an orientation of the regional boundary).

At step 4215, method 4200 continues with the multi-mode on-board collision avoidance system identifying the detected first object identification marker as an operational boundary identification marker. For example, the collision avoidance system 300 on the vehicle 115a shown in FIG. 41 may detect the marker 4010 and identify the marker 4010 as an operational boundary identification marker that represents the regional boundary between the DL operating area 4005 and the AGA operating area 4000. At step 4220, method 4200 continues with one or more sensors of the multi-mode on-board collision avoidance system detecting when the mobile industrial vehicle passes the regional boundary associated with the operational boundary identification marker and enters a second of the different operating regions.

At step 4225, method 4200 continues with: when in the second of the different operating zones, the multi-mode onboard collision avoidance system changes from the first collision avoidance mode to the second collision avoidance mode, which then governs operation of the multi-mode onboard collision avoidance system. The second collision avoidance mode (e.g., the AGA mode) thus has at least one operating parameter that is more restrictive than the corresponding operating parameter in the first collision avoidance mode (e.g., the DL mode). In more detail, the change from the first collision avoidance mode to the second collision avoidance mode in step 4225 may be implemented by using, when in the second collision avoidance mode, a second set of operating parameters for the multi-mode on-board collision avoidance system rather than the first set of operating parameters used when in the first collision avoidance mode, wherein at least one operating parameter common to both sets has a more restrictive value as part of the second set than as part of the first set. Such an operating parameter may be, for example, a speed limit threshold for the mobile industrial vehicle, or a keep-out distance for the mobile industrial vehicle (e.g., a minimum radial distance from the mobile industrial vehicle to an object detected by the multi-mode on-board collision avoidance system, or a minimum radial distance from the mobile industrial vehicle to a reflective beacon detected by the multi-mode on-board collision avoidance system).

In more detail, the second collision avoidance mode and the first collision avoidance mode may also differ with respect to which operational features of the multi-mode collision avoidance system are used in each mode. For example, at least one additional operational feature of the multi-mode collision avoidance system may be used in the second collision avoidance mode (e.g., the AGA mode) when compared to the operational features used in the first collision avoidance mode (e.g., the DL mode). Such additional (or different) operational features may include, for example: a minimum keep-out distance threshold feature for preventing the mobile industrial vehicle from moving within a minimum keep-out distance threshold of an object detected by the sensors; an object persistence feature for tracking a detected object after the detected object leaves the field of view of the sensors; an altered field of view feature for altering the field of view of the sensors to enhance collision avoidance when operating in the second collision avoidance mode; and/or a dedicated object detection feature for enabling detection of reflective beacons separately from, and in addition to, the detection of other objects when operating in the second collision avoidance mode.

Additional details regarding exemplary dynamic path following or kinematic models

As noted above, embodiments may use dynamic path following or kinematic models as part of the applied and enhanced system, apparatus, and method embodiments that relate to predicting future states (e.g., of movement and path) of a multi-element mobile industrial vehicle, such as a cargo tractor with towed carts or trailers, as part of improved embodiments for avoiding collisions between the mobile industrial vehicle and high value assets.

In this particular description of embodiments of such exemplary dynamic path following or kinematic models that may be deployed as part of the applied and enhanced system, apparatus, and method embodiments, the following abbreviations are used:

t: the current time; Δ t: step of time

u (0): initial displacement; u (t): current displacement at time t

u (t + Δ t): the next displacement at t + Δ t; v (0): initial linear velocity

v (t): the current linear velocity at time t; v (t + Δ t): linear velocity at t + Δ t

a (0): an initial linear acceleration; a (t): current linear acceleration at t

a (t + Δ t): linear acceleration at t + Δ t;

θ (0): an initial orientation angle; θ (t): current direction angle at time t

θ (t + Δ t): the next bearing angle at t + Δ t; ω (0): initial angular velocity

ω (t): angular velocity at time t; ω (t + Δ t): angular velocity at t + Δ t

α (0): an initial angular acceleration; α (t): angular acceleration at time t

α (t + Δ t): angular acceleration at t + Δ t;

w: a vehicle width; l: length of vehicle

Lf: hitch length on vehicle front

Lr: hitch length on rear of vehicle

Beta: a steering angle; WB: wheelbase

La: distance from rear axle of towing vehicle to its hitch point

Lb: distance from previous hitch point to rear axle of towed vehicle

Lc: distance from rear axle of towed vehicle to next hitch point

Rra0: rear axle radius of towing vehicle

Rrai: rear axle radius of ith towed vehicle

Rh0: hitch radius of towing vehicle

Rhi: hitch radius of i-th towed vehicle

Subscripts x, y: in the X and Y directions

Subscript d: towed vehicle

Subscript i: the ith vehicle, i = 0 is the towing vehicle, i = 1 to 4 represents the unit being towed.

In general, the embodiments described below of an exemplary dynamic path following or kinematic model (including FIGS. 11-19) that may be deployed as part of the applied and enhanced system, apparatus, and method embodiments predict the continuous motion of a towing vehicle system and follow its trajectory. The exemplary model addresses the off-track effects that occur when a towing vehicle system makes a turn. The framework of the exemplary model includes: (1) a state space model that describes the relationship among the motion elements (linear and angular position, velocity, and acceleration) of a towing vehicle and its towed vehicles (e.g., carts and/or trailers); (2) a geometric model that locates the instantaneous positions of these vehicles (including the instantaneous positions of the hitch points); (3) an Ackermann steering model that outlines the shape of the entire towing vehicle system at any time by taking off-track effects into account; and (4) a hitch return (back) model that calculates the history of each towed vehicle's heading angle based on the towing vehicle's inputs, and thereby captures the continuous motion of the towing vehicle system.

In previous attempts to track the continuous motion of a towing vehicle system (one form of multi-element mobile industrial vehicle) more accurately, considerable errors were found when comparing the path predicted by the model with the true path of the towing vehicle system. Previous attempts to model this behavior assumed that a following towed vehicle followed the same path as the towing vehicle, and thus ignored off-track effects. While others have attempted to solve this problem using, for example, kingpin slipping techniques and movable junction techniques to eliminate the off-track deviation of vehicles such as queues, implementing those techniques is too costly, and most towed vehicle systems still suffer from off-track problems. Therefore, to improve prediction accuracy, an improved dynamic model that accounts for off-track effects was developed, as described in more detail below.

In vehicle systems like queues, the off-track effect means that, compared to the towing vehicle, the towed vehicles always follow a tighter path around corners; and the more units (trailers) are towed, the more each subsequent trailer follows a tighter path than the trailer before it. As shown in FIG. 11, the example dynamic modeling framework 1105- employs a state space model to calculate the instantaneous position and speed of the towing vehicle based on Newton's second law, and thereby estimates the positions of the subsequent towed vehicles by assuming that each towed unit follows the same path (sequence of heading angles) as the towing vehicle. Equation (1), shown below, lists the state space model that calculates instantaneous position and velocity based on the initial conditions of the towing vehicle collected from the IMU.

Equation (1) (reconstructed from the definitions above; the original was rendered as an image):

u(t + Δt) = u(t) + v(t)Δt + (1/2)a(t)Δt²,  v(t + Δt) = v(t) + a(t)Δt

θ(t + Δt) = θ(t) + ω(t)Δt + (1/2)α(t)Δt²,  ω(t + Δt) = ω(t) + α(t)Δt

FIG. 12 is a diagram of an exemplary single rigid object model, according to an embodiment of the present invention. Assuming that the towing vehicle and each towed unit are rigid objects (such as object 1205) with three degrees of freedom (translation in the X and Y directions and rotation about Z, as shown in FIG. 12), the state space model can be represented as equation (1). The position calculated according to equation (1) represents the position of a reference point on the towing vehicle, and the real-time shape of the towing vehicle is determined from the coordinates of this point and the vehicle's dimensions. The same method is then applied to determine the instantaneous shape of each following vehicle.

In equation (1), u_x and u_y respectively represent the X and Y positions of a reference point of the rigid object (e.g., the center of the front end of the towing vehicle or towed unit). From the reference point, the positions of other points within the rigid object can easily be determined by geometric relationships. As a rigid object, every point on the towing vehicle or on each towed unit has the same heading, speed, and acceleration. The linear velocity and acceleration in the X and Y directions are related to the heading angle θ, as expressed in equation (2) shown below:

Equation (2) (reconstructed from the surrounding text; the original was rendered as an image):

v_x(t) = v(t) cos θ(t),  v_y(t) = v(t) sin θ(t);  a_x(t) = a(t) cos θ(t),  a_y(t) = a(t) sin θ(t)

The real-time heading angle of the towing vehicle calculated according to equation (1) is then used to predict the heading angles of the following towed units, to fully determine the shape of the entire towing vehicle system at any time. In estimating the angle of a towed vehicle, previous models assumed that the towed vehicle followed the same history of angular positions as the towing vehicle. In other words, the instantaneous heading angle of the towing vehicle was transmitted to the following vehicle with a suitable time delay that depends on the stiffness of the connection between two adjacent vehicles.

FIG. 13 is a diagram of an exemplary mobile towing vehicle system having four towed units, in accordance with an embodiment of the present invention. Referring now to FIG. 13, an exemplary mobile towing vehicle system 1300 is shown as polygons representing a towing vehicle 1305 and a series of towed units (e.g., carts or trailers) 1310a-1310d linked with hitches 1315a-1315d. Based on the calculated or estimated positions of the towing vehicle 1305 and its sequence of towed units 1310a-1310d, a polygon model is developed to predict the instantaneous shape of the exemplary mobile towing vehicle system 1300 at any time. A beacon system is used to update the instantaneous position of a reference point on the towing vehicle (e.g., the middle of its front end), which is an input parameter to the model. In the developed polygon model, the exemplary towing vehicle and each exemplary towed unit are assumed to be rectangles with four vertices, and the shape of the entire towing vehicle system may then be represented as a polygon formed by line segments connecting all the vertices. Equation (3) below explains how to calculate the global coordinates (relative to reference point O) of the four vertices of the i-th towed unit.

Equation (3): [rendered as an image in the original; it gives the global coordinates, relative to reference point O, of the four vertices of the i-th towed unit]
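As a purely illustrative aid (not the patent's equation (3)), mapping one unit's rectangle into the global frame from a reference point and heading can be sketched as follows; the frame conventions (reference point at the mid-front, x lateral, y forward) are assumptions:

    #include <array>
    #include <cmath>
    #include <cstddef>

    struct Pt { double x, y; };

    // Four global-frame vertices of a rectangular unit of width w and length l,
    // given its mid-front reference point and heading theta.
    std::array<Pt, 4> rectVertices(Pt ref, double theta, double w, double l) {
        const std::array<Pt, 4> local{{{-w / 2, 0.0}, {w / 2, 0.0},
                                       {w / 2, -l}, {-w / 2, -l}}};
        const double c = std::cos(theta), s = std::sin(theta);
        std::array<Pt, 4> out{};
        for (std::size_t i = 0; i < 4; ++i) {  // rotate by heading, translate to ref
            out[i] = {ref.x + c * local[i].x - s * local[i].y,
                      ref.y + s * local[i].x + c * local[i].y};
        }
        return out;
    }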

However, by assuming that each towed unit follows the same path (sequence of heading angles) as the towing vehicle, the off-track effect is ignored. In fact, due to off-track effects, a towed vehicle follows a tighter path around a corner than the towing vehicle. When previously known models are used to predict the shape of a mobile towing vehicle system having more than two towed units, omitting this effect contributes significant error.

Embodiments of the improved model may track the path of a towing vehicle system. FIG. 14 is a diagram of an exemplary geometric model of a towing vehicle system having an exemplary towing vehicle 1405 and two towed units 1410a, 1410b, showing hitch points H1 and H2, according to an embodiment of the present invention. As part of such an embodiment, the geometric model determines the coordinates of all vertices of the mobile towing vehicle system at any time, from which the instantaneous shape of the system can easily be mapped. The model represents the connection between the towing vehicle and a towed vehicle, and between any two adjacent towed vehicles, as: a rigid link (e.g., 1415a, 1415b) from the rear-end midpoint of the towing vehicle (or of the towed vehicle in front) to the hitch point; and another rigid link (e.g., links 1420a, 1420b) from the hitch point to the front-end midpoint of the towed vehicle (or of the towed vehicle behind). This modeling of the connection allows a steering model to be implemented so that the off-track effects of the vehicle system are predicted and captured.

Referring to the exemplary geometric model shown in FIG. 14, various labels and abbreviations are used. For example, w_t denotes the width of the towing vehicle and l_t its length; w_d denotes the width of a towed unit and l_d its length; L_r denotes the length of the hitch attached to the rear end of the towing vehicle (from vertex 5 to H1); and L_f denotes the length of the hitch attached to the front end of the first towed unit (from H1 to O′). The coordinates of vertices 1-5 of the towing vehicle model and of the hitch point H1 can be calculated as:

Equation (4): [rendered as an image in the original]

The coordinates of vertices 1′ to 5′ and of H2, with respect to the local reference point O′, can be represented in the same manner:

Equation (5): [rendered as an image in the original]

Similarly, for the i-th towed unit, the relative coordinates of its five vertices and of H_i with respect to its local reference point can easily be expressed as:

Equation (6): [rendered as an image in the original]

Next, the relative coordinates of the four vertices of the first towed unit (equation (5)) are mapped back to the global reference point O in order to obtain their global coordinates. This operation may be performed by transforming the reference point from O′ to O. To find the mapping relationship, the three vectors OH1, H1O′, and OO′ may be used to construct a triangle ΔOH1O′, as illustrated in FIG. 15 with an exemplary towed unit 1505. The two vectors OH1 and H1O′ have lengths OH1 = l_t + L_r and H1O′ = L_f, and their orientations are indicated by the angles θ0 and θ1, respectively. According to the laws of cosines and sines, the triangle ΔOH1O′ can be completely solved, and the coordinates of O′ can then easily be mapped to the coordinates of O. Thus, the global coordinates of the four vertices (1′ to 4′) of the first towed vehicle can be calculated as:

Equation (7): [rendered as an image in the original]

Examining equation (7) and combining it with equation (6), the global coordinates of the four vertices of the i-th towed unit (relative to reference point O) can be obtained as:

Equation (8): [rendered as an image in the original]

In order to correctly calculate the turning radii of the towing and towed vehicles when the towing vehicle system makes a turn, an Ackermann steering model may be used. Those skilled in the art will appreciate that the Ackermann steering principle defines the geometry applied to all vehicles in a towing vehicle system with respect to the turning angle of the steering wheel, referred to as the steering angle β. By this principle, the radii of several key points of the vehicle system can be determined, on the basis of which the position of each towed unit relative to the towing vehicle can be determined and the path of the entire system can be simulated very well. By using the Ackermann steering principle as part of this embodiment of the new path-following model, an improved and enhanced description of the instantaneous position of the towing vehicle and each towed vehicle, one that accounts for the off-track deviation of the trajectory, can be achieved. Embodiments of such a model are further explained below.

FIG. 16 illustrates a simplified exemplary vehicle system having one towing vehicle (tractor) 1600 and one towed vehicle (trailer) 1605, together with various distance reference lengths in a tractor-trailer model, according to embodiments of the present invention. FIG. 17 is a diagram of an exemplary scale model of an exemplary towing vehicle 1700 and two towed vehicles (trailers) 1705, 1710, illustrating particular lengths and particular radii from a reference point 1715 that define a series of virtual triangles, in accordance with an embodiment of the present invention. To implement the Ackermann steering principle, WB is used to represent the wheelbase of the tractor, L_a the length from the rear axle of the tractor to the hitch point, L_b the length from the hitch point to the rear axle of the trailer, and L_c the distance from the rear axle of the trailer to the next hitch point (FIG. 16). In one embodiment, for a vehicle system having multiple towed vehicles, L_b and L_c are expected to be the same if the towed vehicles are the same size. The radii calculated for this model include: the rear axle radius of tractor 1700 (R_ra0), the rear axle radius of trailer 1705, the first towed unit (R_ra1), the hitch radius of tractor 1700 (R_h0), and the hitch radius of trailer 1705 (R_h1), as depicted in FIG. 17. As shown in that figure, a series of virtual triangles may be constructed based on the steering angle and the defined radii and lengths, from which the radii are calculated according to the following trigonometric relationships.

Equation (9) (a plausible reconstruction consistent with the right-triangle construction described above; the original equations were rendered as images):

R_ra0 = WB / tan β;  R_h0 = √(R_ra0² + L_a²);  R_ra1 = √(R_h0² − L_b²);  R_h1 = √(R_ra1² + L_c²)

where γ1 denotes the difference between the headings of the towing vehicle and the first towed vehicle, following the relation γ1 = θ0 − θ1 (FIG. 14). It should be mentioned that, with the steering model presented, the radial position of any point on the vehicle system can be calculated in a manner similar to equation (9); only the equations for the rear axle radii and hitch radii are shown here, for testing and verification purposes. The front axle radius of the towing vehicle and its position are completely determined by the kinematic (state space) model shown in equation (1) and need not be estimated from trigonometric relations.

Furthermore, equation (9) can easily be modified, by simply replacing the steering angle and dimensions of the towing vehicle with those of a towed vehicle, and applied to calculate the radii of any towed vehicle. Equation (10) gives a general formula for the rear axle and hitch radii of the i-th towed vehicle, assuming that the subsequent towed vehicles are of the same size and have the same L_b and L_c.

Equation (10) (reconstructed in the same way as equation (9)):

R_rai = √(R_h(i−1)² − L_b²);  R_hi = √(R_rai² + L_c²)
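A short sketch of this radius chain in C++ follows; the closed forms mirror the plausible reconstruction of equations (9)-(10) above (the originals were images), not a verbatim reproduction:

    #include <cmath>

    struct Radii { double rearAxle, hitch; };

    // Radii of the towing vehicle from its wheelbase, steering angle, and L_a.
    Radii tractorRadii(double wheelbaseWB, double steerBeta, double La) {
        const double rRa0 = wheelbaseWB / std::tan(steerBeta);  // rear axle radius
        const double rH0  = std::hypot(rRa0, La);               // hitch radius
        return {rRa0, rH0};
    }

    // Radii of the i-th towed unit from the hitch radius of the unit ahead of it;
    // assumes the steady-state geometry in which each axle line passes through
    // the common turn center (so prevHitchRadius >= Lb).
    Radii towedRadii(double prevHitchRadius, double Lb, double Lc) {
        const double rRa = std::sqrt(prevHitchRadius * prevHitchRadius - Lb * Lb);
        const double rH  = std::hypot(rRa, Lc);
        return {rRa, rH};
    }

Chaining towedRadii down the queue reproduces the progressively tighter paths (off-track effect) of successive towed units.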

Hitch return method for path-following simulation

The Ackermann steering model helps predict the instantaneous shape of the towing vehicle system, but it cannot simulate the continuous motion of the system merely by presenting its intermittent steps in sequence. For example, if the towing vehicle system is traveling in a straight line and the steering wheel of the towing vehicle is quickly turned 10° away from the forward direction, the towing vehicle and all towed units would immediately jump to the radial positions calculated according to equations (9) and (10) rather than gradually moving into those positions.

To more accurately simulate the continuous motion of a towed vehicle system, a "hitch return method" was developed that employs the instantaneous shapes calculated from the Ackermann steering model as references while continuously following the path of the towed vehicle system with high accuracy. Referring now to FIGS. 18A-18C, the method begins with a simple model of one tractor 1800 (with rear hitch 1805) and one trailer 1815 (with front hitch 1810) in three states: (1) an initial state (FIG. 18A), when the model is traveling in a straight line; (2) an intermediate state (FIG. 18B), when the tractor begins to make a turn while the trailer is still traveling in the straight line; and (3) a final state (FIG. 18C), when the tractor's angle input has been transferred to the trailer.

Since the heading angle of the towing vehicle is fully determined from the IMU data, only the process of estimating the angular increment Δθ_d1 of the towed vehicle from the initial state to the final state needs to be developed. Following the trigonometric relationship illustrated in FIG. 19, the angular increment Δθ_d1 can be calculated from the X and Y offsets between the towed vehicle's forward hitch point (Point 1) and its rear axle center point (Point 2) as:

Equation (12): [rendered as an image in the original]

An exemplary procedure was generated based on the developed hitch return model (equations (11)-(12)) and implemented in a simulation software package using C++ programming. The continuous motion of an exemplary towing vehicle system having two towed units was successfully simulated using this tool; the simulated path of the towing vehicle system model closely matched the real path measured from the scale model, and the off-track effects when the towing vehicle makes a turn were properly accounted for. It is worth mentioning that, in the simulation, the speed of the towing vehicle and its steering angle are the input variables, from which the angular speed of the towing vehicle can be calculated as ω = v / R_ra0; the rear axle radius R_ra0 is used because the center of rotation of the towing vehicle is located at the center point of the rear axle (the front wheels are free wheels that generate the steering angle the vehicle body follows). The kinematics of each towed vehicle, including its displacement, velocity, acceleration, rotation angle, and angular velocity and acceleration, follow the relationships described by Newton's second law and may be calculated using the state space model (equation (1)).
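A minimal sketch of one time step in the spirit of the hitch return method follows; the update form (a rear-axle bicycle model for the tractor, with each towed unit's heading relaxed toward the hitch point ahead of it) is an assumption for illustration, not the patent's equations (11)-(12):

    #include <cmath>
    #include <vector>

    struct Unit { double x, y, theta; };  // rear-axle center and heading

    void step(Unit& tractor, std::vector<Unit>& towed, double v, double beta,
              double WB, double La, double Lb, double Lc, double dt) {
        // Tractor: rear-axle bicycle model; omega = v / R_ra0 = v * tan(beta) / WB.
        tractor.x += v * std::cos(tractor.theta) * dt;
        tractor.y += v * std::sin(tractor.theta) * dt;
        tractor.theta += v * std::tan(beta) / WB * dt;

        const Unit* lead = &tractor;
        double rearToHitch = La;  // tractor rear axle to its hitch point
        for (Unit& t : towed) {
            const double hx = lead->x - rearToHitch * std::cos(lead->theta);
            const double hy = lead->y - rearToHitch * std::sin(lead->theta);
            t.theta = std::atan2(hy - t.y, hx - t.x);  // heading toward the hitch
            t.x = hx - Lb * std::cos(t.theta);         // keep drawbar length Lb
            t.y = hy - Lb * std::sin(t.theta);
            lead = &t;
            rearToHitch = Lc;  // towed units: rear axle to the next hitch point
        }
    }

Iterating this step with small dt produces the gradual heading transfer from tractor to trailers that the hitch return method is designed to capture.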

Additional details regarding multi-sensor detection system embodiments and operation thereof

Further exemplary embodiments may include systems, apparatuses, and methods in which an industrial vehicle (such as a cargo tractor) utilizes LiDAR and a monocular camera to detect passive beacons, and utilizes model predictive control to stop the vehicle from entering a constrained space. In such embodiments, a beacon may be implemented simply as a standard orange traffic cone (depending on the desired elevation), or may be deployed with a highly reflective vertical pole attached. LiDAR can detect these beacons, but may suffer from false positives due to other reflective surfaces, such as a worker's safety vest within the LiDAR's visual environment. As noted above, the embodiments described herein and below help reduce false-positive detections from LiDAR as follows: beacons are detected in the camera image via a deep learning method, and the projection from camera space to LiDAR space, learned by a neural network, is used to verify the detections.

In more detail, the further embodiments described below (and illustrated with reference to the diagrams in FIGS. 20-37) provide and utilize a substantially real-time industrial collision avoidance sensor system designed to avoid impacting obstacles or personnel and to protect high value equipment. In general, such embodiments may utilize a scanning LiDAR and one or more RGB cameras. Passive beacons are used to mark isolated areas that industrial vehicles are not allowed to enter, thereby preventing collisions with high value devices. A forward guard processing mode prevents collisions with objects directly in front of the vehicle.

To provide a robust system, the sensing processing system of such embodiments may use a LiDAR sensor (e.g., a Quanergy eight-beam LiDAR) and a camera sensor (e.g., a single RGB camera). A LiDAR sensor is an active sensor that can work regardless of natural lighting, and it can accurately locate objects via its 3D returns. However, LiDAR is monochromatic and cannot distinguish objects based on color; moreover, for distant objects, LiDAR may have only one or two beams that intersect the object, making reliable detection problematic. Unlike LiDAR, an RGB camera can make detection decisions based on texture, shape, and color. An RGB stereo camera may be used to detect objects and estimate 3D positions. Although embodiments may use more than one camera, the use of stereo cameras typically requires a significant amount of additional processing and may have difficulty estimating depth when objects lack texture cues. On the other hand, a single RGB camera can accurately locate objects in the image itself (e.g., determine bounding boxes and classify objects), but the resulting localization when projected into 3D space is poor compared to LiDAR. Furthermore, cameras degrade in foggy or rainy environments, while LiDAR may still operate effectively.

In the description of further embodiments that follows, embodiments may use both LiDAR sensors and RGB camera sensors to accurately detect (e.g., identify) and locate objects using a data fusion process that allows both types of data to be used when detecting or identifying object locations. Such embodiments better address collision avoidance using, for example: a fast and efficient method of learning the projection from camera space to LiDAR space and providing camera output in the form of LiDAR detections (distance and angle); a multi-sensor detection system that fuses camera and LiDAR detections to obtain more accurate and robust beacon detection; and/or a technical solution implemented using a single Jetson TX2 board (a type of multi-processor module combining CPUs and a GPU) to run sensor processing, together with a separate second controller (also a TX2) for the Model Predictive Control (MPC) system, to help achieve substantially near-real-time operation and avoid lag time (which could lead to collisions). In the context of the further embodiments described below, contextual information regarding certain types of sensor detections (e.g., camera detections, LiDAR detections) and their use for subsequent object detection, as well as fuzzy logic, may be applied to combine data from different sensors and obtain detection scores for use in such embodiments.
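As an illustrative sketch of such a fusion step (not the disclosed algorithm): a camera detection, already mapped into LiDAR coordinates (range, angle) by the learned projection, confirms a LiDAR beacon candidate when the two agree within gates, and the fused confidence is a fuzzy-style minimum. The gate sizes and the scoring rule are assumptions:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Det { double rangeM, angleRad, score; };

    std::vector<Det> fuseDetections(const std::vector<Det>& lidar,
                                    const std::vector<Det>& camera) {
        const double rangeGateM = 1.0, angleGateRad = 0.1;  // assumed association gates
        std::vector<Det> fused;
        for (const Det& l : lidar) {
            for (const Det& c : camera) {
                if (std::fabs(l.rangeM - c.rangeM) < rangeGateM &&
                    std::fabs(l.angleRad - c.angleRad) < angleGateRad) {
                    // LiDAR supplies the position estimate; confidence is the
                    // weaker of the two single-sensor scores (fuzzy AND).
                    fused.push_back({l.rangeM, l.angleRad, std::min(l.score, c.score)});
                }
            }
        }
        return fused;
    }

Requiring camera confirmation in this way is one simple means of suppressing LiDAR false positives (e.g., reflective safety vests) of the kind discussed above.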

Those skilled in the art will appreciate that object detection from camera images may involve both classification and localization of each object of interest. Information about how many objects are expected in each image may be unavailable or unknown, which means there is a different number of outputs for each input image; the locations in the image where objects may appear, and their sizes, may likewise be unavailable or unknown. Those skilled in the art will further appreciate that, with the advent of deep learning (referred to herein as "DL", and also referred to as deep structured learning or hierarchical machine learning), existing object detection methods using DL have outperformed many traditional methods in both accuracy and speed. Those skilled in the art will further appreciate that there are systems that improve detection results in a computationally intelligent manner based on such existing DL detection methods.

Generally, image object detection using DL and camera images may use two known approaches. One approach is based on region proposals; Faster R-CNN (a faster region-based convolutional neural network) is an example. This approach first runs the entire input image through several convolutional layers to obtain a feature map; a separate region proposal network then uses these convolutional features to propose possible detection regions, and finally the rest of the network classifies these proposed regions. This kind of architecture can significantly reduce processing speed, since there are two parts in the network: one for predicting bounding boxes and the other for classification. Another type of approach uses one network both for predicting potential regions and for label classification, such as the "You Only Look Once" (YOLO) approach. Given an input image, the YOLO method first divides the image into a coarse grid. For each grid cell there is a set of base bounding boxes; for each base bounding box, if YOLO considers an object to be present at that grid location, it predicts an offset from the true location, a confidence score, and a classification score. YOLO is fast, but small objects in the image sometimes go undetected.

In contrast to object detection based on camera sensor data, methods may instead use LiDAR detection. For LiDAR-based detection, one difficult part involves classifying points based only on a sparse 3D point cloud. Those skilled in the art will appreciate that one approach uses eigen-feature analysis of weighted covariance matrices with a Support Vector Machine (SVM) classifier; however, that method is directed to dense airborne LiDAR point clouds. In another known approach, a feature vector is classified for each candidate object against a training set of manually labeled object positions.

Those skilled in the art will further appreciate that DL has also been used for 3D object classification. Many existing DL-based 3D object classification approaches involve two steps: determining a data representation for the 3D object, and training a convolutional neural network (CNN) on that representation. VoxNet is a 3D CNN architecture that can be used for efficient and accurate object detection from LiDAR and RGBD point clouds. An example of DL applied to volumetric shapes is the Princeton ModelNet dataset, which proposes a volumetric representation of 3D models and a 3D volumetric CNN for classification. However, these solutions rely on high-density (high beam-count) LiDAR, so they are not suitable for systems with an eight-beam Quanergy M8 LiDAR sensor, which is an economically viable LiDAR sensor for deployment on mobile industrial vehicles such as cargo tractors.

In systems that fuse different data for object detection, as in the embodiments described herein, the different sensors each have advantages and disadvantages, and sensor fusion may integrate them for more accurate and robust detection. For example, in object detection a camera provides rich texture-based and color-based information that LiDAR typically lacks; on the other hand, LiDAR operates in low visibility, such as at night or in fog or rain, where camera processing may degrade or even fail completely. Also, for detecting object position relative to the sensor, LiDAR provides a much more accurate estimate of spatial coordinates than a camera. Since both cameras and LiDAR have their advantages and disadvantages, embodiments that improve and enhance object detection based on fused data may exploit the advantages and offset the disadvantages of each when they are fused together. One method for camera and LiDAR fusion uses extrinsic calibration (e.g., another method uses various checkerboard patterns, or finds corresponding points or edges in both the LiDAR and camera images, to perform the extrinsic calibration). However, this known approach requires an expensive LiDAR sensor with relatively high vertical resolution (e.g., based on 32 or 64 beams). Another method estimates a transformation matrix between the LiDAR and the camera; such methods are limited and only suitable for modeling indoor and short-range environments.

Another approach uses a similarity metric to automatically register LiDAR and optical images; however, this method also uses dense LiDAR measurements. A third approach fuses stereo cameras with LiDAR, combining sparse 3D LiDAR point clouds with dense stereo-image point clouds; however, matching corresponding points in a stereo image is computationally complex and error-prone when the image has little texture. Both of the latter approaches require dense point clouds and would not be effective with a smaller LiDAR such as the Quanergy M8. In contrast to these previous approaches, the embodiments described herein differ in a unique and inventive manner, for example by using a single camera and a relatively inexpensive eight-beam LiDAR for an outdoor collision avoidance system, which avoids lag that could be intolerable for a substantially real-time collision avoidance system.
