System and method for fusing sensor data in a vehicle

Document No.: 799588    Publication date: 2021-04-13

Note: This technology, "System and method for fusing sensor data in a vehicle", was created by D·古奈尔 on 2020-10-10. Abstract: Systems and methods for fusing sensor data in a vehicle are disclosed. The system includes an image processor formed as a first system on a chip (SoC) for processing images obtained by a camera from outside the vehicle to classify and identify objects. A surround view processor formed as a second SoC processes close-range images obtained by surround view cameras from outside the vehicle to classify and identify obstacles within a specified distance of the vehicle. The close-range images are closer to the vehicle than the images obtained by the camera. An ultrasonic processor obtains distances to one or more of the obstacles, and a fusion processor formed as a microcontroller fuses information from the surround view processor and the ultrasonic processor based on the speed of the vehicle being below a threshold.

1. A system configured to fuse sensor data in a vehicle, the system comprising:

an image processor formed as a first system on a chip (SoC) and configured to process images obtained by a camera from outside the vehicle to classify and identify objects;

a surround view processor formed as a second SoC and configured to process a close-range image obtained by a surround view camera from outside the vehicle to classify and identify obstacles within a specified distance of the vehicle, wherein the close-range image is closer to the vehicle than an image obtained by the camera;

an ultrasonic processor configured to obtain distances to one or more of the obstacles; and

a fusion processor formed as a microcontroller and configured to fuse information from the surround view processor and the ultrasonic processor based on a speed of the vehicle being below a threshold.

2. The system of claim 1, wherein the surround view processor is further configured to display the obstacles identified and classified by the surround view processor on a rear view mirror of the vehicle.

3. The system of claim 1, further comprising a deserializer configured to provide the images obtained by the camera from outside the vehicle to the image processor and to provide the close-range image obtained by the surround view camera to the surround view processor.

4. The system of claim 3, further comprising an interior camera configured to obtain an image of a driver of the vehicle, wherein the deserializer provides the image of the driver to the image processor or the surround view processor to determine a driver state, the driver state indicating fatigue, alertness, or distraction.

5. The system of claim 1, further comprising a communication port configured to obtain data from additional sensors and provide data from the additional sensors to the fusion processor, the additional sensors comprising a radar system or a lidar system, and the data from the additional sensors comprising ranges or angles to one or more of the objects.

6. The system of claim 5, wherein the fusion processor is configured to fuse information from the image processor and the additional sensors based on a speed of the vehicle being above a second threshold.

7. The system of claim 1, further comprising a power monitoring module configured to supply power to and monitor components of the system, the components including the image processor, the ultrasonic processor, and the fusion processor.

8. The system of claim 1, wherein the fusion processor is further configured to obtain map information and provide an output to a display that combines the fused result with the map information, and the fusion processor is further configured to generate a haptic output based on the fused result.

9. The system of claim 1, wherein the fusion processor is configured to provide information to an advanced driver assistance system.

10. The system of claim 9, wherein the advanced driver assistance system uses information from the fusion processor to control operation of the vehicle.

11. A method for fusing sensor data in a vehicle, the method comprising:

obtaining an image from outside the vehicle with a camera;

processing the image from outside the vehicle using an image processor formed as a first system on a chip (SoC) to classify and identify objects;

obtaining a close-range image from outside the vehicle using a surround view camera;

processing the close-range image using a surround view processor formed as a second SoC to identify and classify obstacles within a specified distance of the vehicle, wherein the close-range image is closer to the vehicle than the image obtained by the camera;

transmitting an ultrasonic signal from an ultrasonic sensor and receiving a reflection;

processing the reflection using an ultrasonic processor to obtain distances to one or more of the obstacles; and

fusing information from the surround view processor and the ultrasonic processor using a fusion processor formed as a microcontroller based on the speed of the vehicle being below a threshold.

12. The method of claim 11, further comprising displaying the obstacles identified and classified by the surround view processor on a rear view mirror of the vehicle.

13. The method of claim 11, further comprising providing the image obtained by the camera from outside the vehicle and the close-range image obtained by the surround view camera to a deserializer, wherein an output of the deserializer is provided to the image processor or the surround view processor.

14. The method of claim 13, further comprising providing an image of a driver of the vehicle, obtained from within the vehicle using an interior camera, to the deserializer and providing an output of the deserializer to the image processor or the surround view processor to determine a driver state, the driver state indicating fatigue, alertness, or distraction.

15. The method of claim 11, further comprising obtaining data from additional sensors using a communication port and providing the data from the additional sensors to the fusion processor, wherein the additional sensors comprise a radar system or a lidar system and the data from the additional sensors comprises a range or angle to one or more of the objects.

16. The method of claim 15, further comprising the fusion processor fusing information from the image processor and the additional sensor based on the speed of the vehicle being above a second threshold.

17. The method of claim 11, further comprising using a power monitoring module to supply power to and monitor components of the system, wherein the components include the image processor, the ultrasonic processor, and the fusion processor.

18. The method of claim 11, further comprising the fusion processor obtaining map information and providing the fused result to a display in combination with the map information, and the fusion processor generating a haptic output based on the fused result.

19. The method of claim 11, further comprising the fusion processor providing the fused results to an advanced driver assistance system.

20. The method of claim 19, further comprising the advanced driver assistance system using the fused results from the fusion processor to control operation of the vehicle.

Technical Field

The invention relates to automotive sensor fusion.

Background

Vehicles (e.g., automobiles, trucks, construction equipment, agricultural equipment, automated factory equipment) may include a number of sensors for providing information about the vehicle and the environment inside and outside the vehicle. For example, a radar system or a lidar system may provide information about objects around the vehicle. As another example, a camera may be used to track the eye movements of the driver to determine if drowsiness is a potential safety risk. Each sensor individually may be limited in terms of providing a comprehensive assessment of current safety risks. Thus, automotive sensor fusion may be desirable.

Disclosure of Invention

According to a first aspect, the present invention provides a system for fusing sensor data in a vehicle, the system comprising: an image processor formed as a first system on a chip (SoC) and configured to process an image obtained by a camera from outside the vehicle to classify and identify objects; a surround view processor formed as a second SoC and configured to process close-range images obtained by surround view cameras from outside the vehicle to classify and identify obstacles within a specified distance of the vehicle, wherein the close-range images are closer to the vehicle than images obtained by the camera; an ultrasonic processor configured to obtain distances to one or more of the obstacles; and a fusion processor formed as a microcontroller and configured to fuse information from the surround view processor and the ultrasonic processor based on a speed of the vehicle being below a threshold.

The surround view processor also displays the obstacles identified and classified by the surround view processor on a rear view mirror of the vehicle.

A deserializer supplies the image obtained by the camera from outside the vehicle to the image processor, and supplies the close-range image obtained by the surround view camera to the surround view processor.

An interior camera obtains an image of a driver of the vehicle, and the deserializer provides the image of the driver to the image processor or the surround view processor to determine a driver state, the driver state indicating fatigue, alertness, or distraction.

A communication port obtains data from additional sensors and provides data from the additional sensors to the fusion processor. The additional sensors include a radar system or a lidar system, and the data from the additional sensors includes a range or angle to one or more of the objects.

The fusion processor fuses information from the image processor and the additional sensors based on the speed of the vehicle being above a second threshold.

A power monitoring module supplies power to components of the system and monitors the power. These components include the image processor, the ultrasonic processor, and the fusion processor.

The fusion processor obtains map information and provides an output to a display that combines the result of the fusion with the map information. The fusion processor generates a haptic output based on a result of the fusion.

The fusion processor provides information to an advanced driver assistance system.

The advanced driver assistance system uses information from the fusion processor to control operation of the vehicle.

According to a second aspect, the invention provides a method for fusing sensor data in a vehicle, the method comprising: obtaining an image from outside the vehicle with a camera; processing the image from outside the vehicle using an image processor formed as a first system on a chip (SoC) to classify and identify objects; obtaining a close-range image from outside the vehicle using a surround view camera; processing the close-range image using a surround view processor formed as a second SoC to identify and classify obstacles within a specified distance of the vehicle, the close-range image being closer to the vehicle than the image obtained by the camera; transmitting an ultrasonic signal from an ultrasonic sensor and receiving a reflection; processing the reflection using an ultrasonic processor to obtain distances to one or more of the obstacles; and fusing information from the surround view processor and the ultrasonic processor using a fusion processor formed as a microcontroller based on the speed of the vehicle being below a threshold.

The method may further include displaying the obstacles identified and classified by the surround view processor on a rear view mirror of the vehicle.

The method may also include providing the image obtained by the camera from outside the vehicle and the close-range image obtained by the surround view camera to a deserializer. The output of the deserializer is provided to the image processor or the surround view processor.

The method also includes providing an image of a driver of the vehicle, obtained from within the vehicle using an interior camera, to the deserializer and providing an output of the deserializer to the image processor or the surround view processor to determine a driver state. The driver state indicates fatigue, alertness, or distraction.

The method also includes obtaining data from additional sensors using the communication port and providing the data from the additional sensors to the fusion processor. The additional sensors include a radar system or a lidar system, and the data from the additional sensors includes a range or angle to one or more of the objects.

The method also includes the fusion processor fusing information from the image processor and the additional sensors based on the speed of the vehicle being above a second threshold.

The method also includes supplying power to components of the system using a power monitoring module and monitoring the power. These components include the image processor, the ultrasonic processor, and the fusion processor.

The method also includes the fusion processor obtaining map information and providing the fused result in combination with the map information to a display, and the fusion processor generating a haptic output based on the fused result.

The method also includes the fusion processor providing the result of the fusion to an advanced driver assistance system.

The method also includes the advanced driver assistance system using the fused results from the fusion processor to control operation of the vehicle.

The objects and advantages of the present invention, as well as a more complete understanding of the present invention, will be obtained by reference to the following detailed description and drawings.

Drawings

For a better understanding, reference may be made to the drawings. The components in the drawings are not necessarily to scale. Like reference numerals designate corresponding parts throughout the different views.

FIG. 1 is a block diagram of an exemplary vehicle implementing automotive sensor fusion in accordance with one or more embodiments of the present invention;

FIG. 2 is a block diagram of an exemplary controller implementing automotive sensor fusion in accordance with one or more embodiments of the invention; and

FIG. 3 is a process flow of a method of implementing automotive sensor fusion in accordance with one or more embodiments.

Detailed Description

As previously mentioned, sensors may be used to provide information about the vehicle and the environment inside and outside the vehicle. Different types of sensors may be relied upon to provide different types of information for autonomous or semi-autonomous vehicle operation. For example, radar or lidar systems may be used for object detection to identify, track, and avoid obstacles in the path of the vehicle. A camera positioned to obtain images within the passenger compartment of the vehicle may be used to determine the number of passengers and driver behavior. A camera positioned to obtain an image of the exterior of the vehicle may be used to identify the lane markings. Different types of information may be used to perform automated operations (e.g., collision avoidance, autobraking) or to provide driver warnings.

Embodiments of the present systems and methods described in detail herein relate to automotive sensor fusion. The information from the various sensors is processed and combined on-chip to obtain a comprehensive assessment of all conditions that may affect vehicle operation. That is, a situation that may not present itself as a hazard (e.g., a vehicle approaching a detected road edge marking) may be considered hazardous when linked with other information (e.g., driver distraction). The action to be taken (e.g., driver warning, autonomous or semi-autonomous operation) is selected based on the composite assessment.
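As a minimal sketch of this composite assessment idea, the following illustrates how a condition that is not hazardous on its own can be escalated when combined with information about the driver. The enum, function name, and severity levels are assumptions made for illustration only, not the disclosed logic.

// Illustrative sketch of a composite assessment: approaching a road edge alone
// warrants caution, but combined with driver distraction it is treated as a hazard.
enum class Severity { None, Caution, Hazard };

Severity AssessSituation(bool approachingRoadEdge, bool driverDistracted) {
    if (approachingRoadEdge && driverDistracted) {
        return Severity::Hazard;
    }
    if (approachingRoadEdge) {
        return Severity::Caution;
    }
    return Severity::None;
}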

FIG. 1 is a block diagram of an exemplary vehicle 100 implementing automotive sensor fusion in accordance with one or more embodiments of the present invention. The vehicle 100 includes a controller 110 for implementing sensor fusion in accordance with one or more embodiments. The controller 110 may be referred to as an Electronic Control Unit (ECU) in the automotive field. The components of the controller 110 involved in sensor fusion are described in further detail with reference to FIG. 2. The controller 110 obtains data from several exemplary sensors. The controller 110 includes processing circuitry that may include an Application Specific Integrated Circuit (ASIC), an electronic circuit, one or more processors and one or more memory devices that execute one or more software or firmware programs, a combinational logic circuit, or other suitable components that provide the described functionality. The components of the controller 110 involved in sensor fusion may be considered a multi-chip module, as described in further detail below.

The exemplary sensors shown for the vehicle 100 include a camera 120, a surround view camera 130, an interior camera 140, an ultrasonic sensor 150, a radar system 160, and a lidar system 170. The exemplary sensors and components illustrated in FIG. 1 are not intended to limit the number or locations of sensors that may be included within or on the vehicle 100. For example, although the exemplary interior camera 140 is shown with a field of view FOV3 directed toward the driver in the left-hand-drive vehicle 100, additional interior cameras 140 may be directed toward the driver or one or more passengers. The one or more interior cameras 140 may include infrared (IR) light emitting diodes (LEDs).

As another example, there may be up to three cameras 120 and up to twelve ultrasonic sensors 150. The ultrasonic sensor 150 transmits an ultrasonic signal to the exterior of the vehicle 100 and determines the distance to the object 101 based on the time of flight of the transmission and any reflections from the object 101. A comparison of the field of view FOV1 of the exemplary forward-facing camera 120 and the field of view FOV2 of the exemplary surround view camera 130 shown under the side view mirror indicates that the FOV2 associated with the surround view camera 130 is closer to the vehicle 100 than the FOV1.
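As a rough illustration of the time-of-flight principle described above, the following sketch converts a measured round-trip time into a distance. The function and constant names, and the assumed speed of sound, are illustrative only and are not part of the described system.

constexpr float kSpeedOfSoundMps = 343.0f;  // assumed speed of sound in air at roughly 20 C

// Converts an ultrasonic round-trip time (seconds) into a distance (meters).
// The signal travels to the object 101 and back, so the path length is halved.
float DistanceFromTimeOfFlight(float roundTripSeconds) {
    return kSpeedOfSoundMps * roundTripSeconds / 2.0f;
}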

FIG. 2 is a block diagram of an exemplary controller 110 implementing automotive sensor fusion in accordance with one or more embodiments of the invention. In describing aspects of the controller 110 in detail, further reference is made to FIG. 1. The fusion processor 200 obtains and fuses information from other components. These components include an image processor 210, a surround view processor 220, an ultrasonic processor 230, and a communication port 240. Each of these components is described in further detail below. The fusion processor 200 may be a microcontroller.

The image processor 210 and the surround view processor 220 obtain deserialized data from a deserializer 250. The deserialized data provided to the image processor 210 comes from the one or more cameras 120 and, optionally, the one or more interior cameras 140. The image processor 210 may be implemented as a system on a chip (SoC) and may execute machine learning algorithms to recognize patterns in images from the one or more cameras 120 and, optionally, the one or more interior cameras 140. The image processor 210 detects and identifies objects 101 near the vehicle 100 based on the deserialized data from the one or more cameras 120. Exemplary objects 101 include lane markings, traffic signs, road markings, pedestrians, and other vehicles. Based on the deserialized data obtained from the one or more interior cameras 140, the image processor 210 may detect the driver state. That is, the deserialized data may be facial image data from the driver of the vehicle 100. Based on this data, the image processor 210 may detect fatigue, drowsiness, or distraction. When the vehicle 100 is traveling at a speed that exceeds a threshold (e.g., 30 kilometers per hour (kph)), the fusion processor 200 may weight the information from the image processor 210 more heavily than the information from other components.

The deserialized data provided to the surround view processor 220 comes from the one or more surround view cameras 130 and, optionally, the one or more interior cameras 140. Like the image processor 210, the surround view processor 220 may be implemented as a SoC and may execute machine learning algorithms to recognize and report patterns. The surround view processor 220 may stitch together the images from each of the surround view cameras 130 to provide a surround view (e.g., 360-degree) image. In addition to providing this image to the fusion processor 200, the surround view processor 220 may also provide this image to a rearview mirror display 260. As previously described with reference to the image processor 210, when images from the one or more interior cameras 140 are provided to the surround view processor 220, the surround view processor 220 may detect a driver state (e.g., fatigue, drowsiness, or distraction). When the vehicle 100 is traveling below a threshold speed (e.g., 10 kph), the fusion processor 200 may weight the information from the surround view processor 220 more heavily than the information from other components. For example, information from the surround view processor 220 may be used during parking.
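One way to picture the speed-dependent weighting described in the last two paragraphs is a simple gating function: below a low-speed threshold, the close-range sources dominate; above a high-speed threshold, the long-range sources dominate. The thresholds follow the examples in the text (10 kph and 30 kph); the weight values, names, and structure below are assumptions made for illustration, not the actual fusion algorithm.

// Minimal sketch of speed-gated sensor weighting, assuming the example
// thresholds from the text (10 kph and 30 kph). All weight values are
// illustrative assumptions.
struct SensorWeights {
    float camera;        // forward-facing camera 120
    float surroundView;  // surround view cameras 130
    float ultrasonic;    // ultrasonic sensors 150
    float radarLidar;    // radar system 160 and lidar system 170
};

SensorWeights WeightsForSpeed(float speedKph) {
    if (speedKph < 10.0f) {   // low-speed scene such as parking
        return {0.1f, 0.5f, 0.3f, 0.1f};
    }
    if (speedKph > 30.0f) {   // higher-speed scene such as highway driving
        return {0.4f, 0.05f, 0.05f, 0.5f};
    }
    return {0.25f, 0.25f, 0.25f, 0.25f};  // transition region: no source dominates
}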

The ultrasonic processor 230 obtains the distance to an object 101 near the vehicle 100 based on the time-of-flight information obtained by the ultrasonic sensor 150. For example, during a low-speed scene such as parking, the fusion processor 200 may associate objects 101 whose distances are obtained by the ultrasonic processor 230 with objects 101 identified by the surround view processor 220. Noise and other objects 101 that are not of interest may be filtered out based on the identification by the image processor 210 or the surround view processor 220. The communication port 240 obtains data from the radar system 160, the lidar system 170, and any other sensors. Based on the data from these sensors, the communication port 240 may transmit range and angle information, relative velocity, lidar images, and other information about the object 101 to the fusion processor 200.
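A minimal sketch of the association step mentioned above, under assumed data types and an assumed matching tolerance, could pair each ultrasonic distance reading with the closest-matching obstacle reported by the surround view processor 220 and discard readings that match nothing, treating them as noise.

#include <cmath>
#include <optional>
#include <vector>

// Illustrative data type; the real interface between the processors is not
// specified in the disclosure.
struct Obstacle {
    int id;
    float estimatedRangeMeters;  // rough range estimated by the surround view processor
};

// Associates one ultrasonic distance reading with the closest obstacle, or
// returns an empty result (treated as noise) if no obstacle lies within the
// assumed tolerance of 0.5 m.
std::optional<int> AssociateUltrasonicReading(float measuredMeters,
                                              const std::vector<Obstacle>& obstacles,
                                              float toleranceMeters = 0.5f) {
    std::optional<int> bestId;
    float bestError = toleranceMeters;
    for (const auto& obstacle : obstacles) {
        float error = std::fabs(obstacle.estimatedRangeMeters - measuredMeters);
        if (error < bestError) {
            bestError = error;
            bestId = obstacle.id;
        }
    }
    return bestId;
}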

In addition to the information from the processors of the controller 110, the fusion processor 200 also obtains map information 205 for the vehicle 100. According to an exemplary embodiment, the fusion processor 200 may provide all of the fused information (i.e., the fusion-based integrated information) to an Advanced Driver Assistance System (ADAS) 275. This integrated information includes: objects 101 identified based on detection by the camera 120 and the surround view camera 130, distances to these objects based on the ultrasonic sensor 150, driver states identified based on processing of images obtained by the interior camera 140, information from sensors such as the radar system 160 and the lidar system 170, and the map information 205. As previously mentioned, the most relevant information may depend on the speed of the vehicle 100. Generally, at higher speeds, information from the exterior camera 120, the radar system 160, and the lidar system 170 may be most useful, while at lower speeds, information from the surround view cameras 130 and the ultrasonic sensors 150 may be most useful. Regardless of the speed of the vehicle 100, the interior camera 140 and information about the driver's state may be relevant.
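The integrated information handed to the ADAS 275 can be pictured as a single record that collects the items listed above. The field names and types below are assumptions made for illustration; no particular data layout is defined by the disclosure.

#include <string>
#include <vector>

// Sketch of the kind of record the fusion processor 200 might provide to the
// ADAS 275; all fields are illustrative assumptions.
struct FusedObject {
    int id;
    std::string classification;  // e.g., "pedestrian", "vehicle", "lane marking"
    float rangeMeters;           // from ultrasonic, radar, or lidar data
    float relativeSpeedMps;      // from radar data, when available
};

enum class DriverState { Alert, Fatigued, Drowsy, Distracted };

struct IntegratedInformation {
    std::vector<FusedObject> objects;  // detections fused across the sensors
    DriverState driverState;           // derived from interior camera 140 images
    double mapLatitude;                // position from the map information 205
    double mapLongitude;
    float vehicleSpeedKph;             // drives the weighting described above
};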

Based on the integrated information, the ADAS 275 may provide an audio or visual output 270 (e.g., through an infotainment screen of the vehicle 100) indicating the object 101 on a map. For example, the position of a detected object 101 relative to the vehicle 100 may be indicated on the map. The ADAS 275 may also provide a haptic output 280. For example, the driver's seat may be vibrated to alert the driver when the image processor 210 determines that images from the one or more interior cameras 140 indicate driver inattention and that images from the one or more cameras 120 indicate an upcoming hazard (e.g., an object 101 in the path of the vehicle 100). The ADAS 275, which may be part of the controller 110, may additionally facilitate autonomous or semi-autonomous operation of the vehicle 100.

According to alternative embodiments, the fusion processor 200 itself may perform the functions discussed with respect to the ADAS 275. Accordingly, the fusion processor 200 may provide the audio or visual output 270 directly or may control the haptic output 280. The fusion processor 200 may implement machine learning to weight and fuse information from the image processor 210, the surround view processor 220, the ultrasonic processor 230, and the communication port 240. The controller 110 also includes a power monitor 201. The power monitor 201 supplies power to the other components of the controller 110 and monitors that the correct power level is supplied to each component.

FIG. 3 is a process flow of a method 300 for implementing automotive sensor fusion using the controller 110 (i.e., the ECU of the vehicle 100) in accordance with one or more embodiments of the present invention. The processes are discussed with continued reference to FIGS. 1 and 2. At block 310, data is obtained from a plurality of sources, including all of the sources indicated in FIG. 3 and described in detail with reference to FIG. 1. Images from outside the vehicle 100 are obtained by the one or more cameras 120. Close-range images are obtained by the surround view cameras 130. Images of the driver or passengers within the vehicle are obtained by the interior camera 140. The ultrasonic sensor 150 transmits ultrasonic energy and receives reflections from the object 101 so that the time of flight of the ultrasonic energy can be recorded. The radar system 160 indicates a range, relative velocity, and relative angle to the object 101. The lidar system 170 may also indicate range. The map information 205 indicates the location of the vehicle 100 using a global reference. As previously mentioned, not all sources are equally relevant in all scenarios. For example, in a low-speed scene such as parking, the surround view cameras 130 and the ultrasonic sensors 150 may be more relevant than the camera 120, whose field of view extends farther from the vehicle 100. In higher-speed scenes, such as highway driving, the camera 120, the radar system 160, and the lidar system 170 may be more relevant.

At block 320, processing and fusing the data to obtain integrated information involves using the various processors of the controller 110 (as discussed with reference to FIG. 2). The image processor 210 and the surround view processor 220 process the images to identify objects 101 and determine the driver state. These processors 210, 220 obtain images through the deserializer 250. The ultrasonic processor 230 uses the time-of-flight information from the ultrasonic sensor 150 to determine the distance to the object 101. The communication port 240 obtains data from sensors such as the radar system 160 and the lidar system 170. The fusion processor 200 weights and fuses the processed data to obtain the integrated information. As previously described, the weighting may be based on the speed of the vehicle 100.

As indicated in FIG. 3, the process at block 330 may be optional. This process includes providing the integrated information from the fusion processor 200 to the ADAS 275. Providing an output or vehicle control at block 340 may be performed either directly by the fusion processor 200 or through the ADAS 275. The output may be in the form of the audio or visual output 270 or the haptic output 280. Vehicle control may be autonomous or semi-autonomous operation of the vehicle 100.

What has been described above is an example of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art may recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.
