Method, controller and computer program product for determining head orientation/positioning

Publication date: 2020-08-28

Note: The technique "Method, controller and computer program product for determining head orientation/positioning" was created by 约尔格·安格迈尔 on 2020-02-18. Abstract: The present invention relates to a method, a controller and a computer program product for determining the head orientation and/or positioning of a vehicle occupant, the method comprising the steps of: determining a first detection region for the head orientation and/or positioning of a first imaging sensor arranged in the interior space of the vehicle in dependence on different head orientations and/or positionings relative to the arrangement of the first sensor; determining a second detection region for the head orientation and/or positioning of at least one second imaging sensor arranged in the interior space of the vehicle in dependence on different head orientations and/or positionings relative to the arrangement of the second sensor; and, depending on the head orientation and/or positioning of the vehicle occupant, determining the head orientation and/or positioning with that sensor in whose detection region it can be determined better than in the detection region of the other sensor.

1. Method for determining a head orientation (K1, K2) and/or positioning of a vehicle occupant (1), the method comprising the steps of:

determining (V1) a first detection region (E1) for the head orientation (K1, K2) and/or positioning of a first imaging sensor (S1) arranged in the interior space (2) of the vehicle, in dependence on different head orientations (K1, K2) and/or positionings relative to the arrangement of the first sensor (S1),

determining (V2) a second detection region (E2) for the head orientation (K1, K2) and/or positioning of at least one second imaging sensor (S2) arranged in the interior space (2) of the vehicle, in dependence on different head orientations (K1, K2) and/or positionings relative to the arrangement of the second sensor (S2), and

determining (V3), depending on the head orientation (K1, K2) and/or positioning of the vehicle occupant (1), the head orientation (K1, K2) and/or positioning using that sensor (S1, S2) in whose detection region (E1, E2) the head orientation (K1, K2) and/or positioning can be determined better than in the detection region (E1, E2) of the other sensor (S1, S2).

2. The method according to claim 1, using a plurality of imaging sensors (S1, S2), wherein, for determining the head orientation (K1, K2) and/or positioning of the vehicle occupant (1), only the data of those sensors (S1, S2) in whose respective detection regions (E1, E2) the head orientation (K1, K2) and/or positioning can be determined accurately are fused.

3. The method of claim 1 or 2, wherein the method is computer-implemented.

4. Controller (10) for automated driving functions, comprising

at least one first interface (11) for obtaining data of the first imaging sensor and of the at least one second imaging sensor (S1, S2),

a computer (12) comprising a memory (13) in which the results of method steps V1 and V2 of the method according to claim 1 are stored, wherein the computer is configured to carry out method step V3 of claim 1 or to carry out the method according to claim 2 in order to determine a signal (S) in dependence on the determined head orientation (K1, K2) and/or positioning of the vehicle occupant (1), and

a second interface (14) for outputting said signal.

5. The controller (10) of claim 4, wherein the computer (12) is configured to determine a confidence for the data of the imaging sensors (S1, S2).

6. An interior monitoring system (20) of a vehicle, comprising a first and at least one second imaging sensor (S1, S2) and a controller (10) according to claim 4 or 5, wherein data is exchanged between the sensors (S1, S2) and the controller.

7. The interior space monitoring system (20) of claim 6, wherein the first sensor (S1) is a sensor of a 2D camera and the second sensor (S2) is a sensor of a 3D camera.

8. Computer program product for determining a head orientation (K1, K2) and/or a positioning of a vehicle occupant (1), the computer program product comprising software code sections which, when the computer program is run on a controller (10) according to claim 4 or 5 of an interior space monitoring system (20) according to claim 6 or 7, cause the interior space monitoring system (20) to carry out the method according to any one of claims 1 to 3.

9. The computer program product of claim 8, comprising software code sections that cause the controller (10) to determine a confidence for the data of an imaging sensor (S1, S2) of the interior space monitoring system (20).

10. Computer-readable data carrier on which a computer program product according to claim 8 or 9 is stored.

Technical Field

The present invention relates to a method, a controller and a computer program product for determining the head orientation and/or positioning of a vehicle occupant. The invention further relates to an interior space monitoring system of a vehicle having a controller according to the invention. The invention further relates to a computer-readable data carrier on which a computer program product according to the invention is stored.

Background

Devices for controlling safety systems in motor vehicles are known. For example, DE19932520A1 discloses a device for controlling at least one safety system in a motor vehicle in dependence on output signals of a sensor for detecting the positioning of objects and/or persons on the seat of the motor vehicle, wherein at least two cameras directed at the seat are provided as sensors, wherein in an evaluation unit a three-dimensional image of the objects and/or persons is obtained from the two-dimensional pictures recorded by the cameras and an output signal is derived from this three-dimensional image, and wherein the size and/or shape of the head of a person positioned on the seat can be output as the output signal by the evaluation unit.

When detection is carried out by means of a camera system, the position and orientation of the head can be determined. Achieving high accuracy for all head poses, however, is a major challenge.

Disclosure of Invention

This is the starting point of the invention. The object of the invention is to optimize the head positioning data.

The invention and embodiments of the invention are set forth in the description and drawings.

The method according to the invention serves to determine the head orientation and/or positioning of a vehicle occupant. In a first step, a first detection region for the head orientation and/or positioning of a first imaging sensor arranged in the interior space of the vehicle is determined in dependence on different head orientations and/or positionings relative to the arrangement of the first sensor. Furthermore, a second detection region for the head orientation and/or positioning of at least one second imaging sensor arranged in the interior space of the vehicle is determined in dependence on different head orientations and/or positionings relative to the arrangement of the second sensor. Depending on the head orientation and/or positioning of the vehicle occupant, the head orientation and/or positioning is then determined using that sensor in whose detection region it can be determined better than in the detection region of the other sensor.

When a plurality of imaging sensors is used, different accuracies are obtained for different head poses, depending on where the sensors are positioned in the interior space. According to the invention, the accuracy of the two sensors is determined beforehand in the first two method steps. For example, the first imaging sensor is arranged on the windshield behind the steering wheel of the vehicle, and the second imaging sensor is arranged, for example, on the vehicle roof to the right of the vehicle occupant. When the vehicle occupant looks toward the steering wheel, the first imaging sensor has an accuracy of ±0.5 cm and ±1°, while the second imaging sensor has an accuracy of only ±1.0 cm and ±1°. When the vehicle occupant looks up and to the right, the first imaging sensor has an accuracy of only ±3.5 cm and ±10°, while the second imaging sensor has a higher accuracy of ±0.5 cm and ±1°. These accuracies are known in advance, for example from tests in which the head orientation and/or positioning is determined with a reference measuring system. In active use, only the data of those sensors in whose respective detection regions the head orientation and/or positioning can be determined accurately are used, so that a higher accuracy for determining the head orientation and/or positioning is achieved. Since the detection regions are known, it is possible to switch automatically to the respective detection region. For example, when the head is rotated from a first orientation, in which the first sensor lies directly in the line of sight of the vehicle occupant, to a second orientation, in which the line of sight is directed at the second sensor, the second sensor can determine the head orientation better than the first sensor: the head then lies substantially in the center of the second detection region. Because the head orientation and/or positioning is determined more accurately according to the invention, safety-relevant functions that rely on accurate knowledge of the head orientation and/or positioning, such as triggering an airbag, are determined and triggered more precisely. Overall, the invention thus increases safety during driving. The vehicle occupant is, for example, the driver, the co-driver or a passenger. The invention thus improves the accuracy with which the head orientation and/or positioning is determined. With a more accurate determination of the head orientation and/or positioning, safety systems, in particular the airbag or the belt tensioner, can be actuated more effectively.
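As an illustration of this selection logic (a minimal sketch, not taken from the patent), the detection regions and accuracies determined beforehand in the first two method steps can be thought of as a lookup table keyed by a head-yaw interval per sensor; all names, intervals and accuracy values below are invented for the example.

```python
# Minimal sketch (illustrative, not from the patent): selecting the sensor whose
# detection region covers the current head pose, based on accuracies determined
# beforehand. Names, yaw intervals and accuracy values are invented examples.
from dataclasses import dataclass


@dataclass
class DetectionRegion:
    yaw_min_deg: float        # head yaw range covered by this sensor
    yaw_max_deg: float
    pos_accuracy_cm: float    # positional accuracy within this region
    ang_accuracy_deg: float   # angular accuracy within this region


# Hypothetical results of the first two method steps (V1/V2), e.g. obtained by
# prior tests against a reference measuring system.
REGIONS = {
    "S1": DetectionRegion(-30.0, 30.0, 0.5, 1.0),  # e.g. camera behind the steering wheel
    "S2": DetectionRegion(20.0, 90.0, 0.5, 1.0),   # e.g. camera on the roof, right of the occupant
}


def select_sensor(head_yaw_deg: float) -> str:
    """Return the sensor in whose detection region the current head yaw lies;
    if several qualify, take the one with the best (smallest) accuracy figures."""
    candidates = [(name, r) for name, r in REGIONS.items()
                  if r.yaw_min_deg <= head_yaw_deg <= r.yaw_max_deg]
    if not candidates:
        # Fallback: the sensor whose region boundary is closest to the current yaw.
        return min(REGIONS, key=lambda n: min(abs(head_yaw_deg - REGIONS[n].yaw_min_deg),
                                              abs(head_yaw_deg - REGIONS[n].yaw_max_deg)))
    return min(candidates, key=lambda c: (c[1].pos_accuracy_cm, c[1].ang_accuracy_deg))[0]


print(select_sensor(0.0))   # looking straight ahead -> "S1"
print(select_sensor(60.0))  # looking to the right   -> "S2"
```

In practice, the stored regions would come from the prior tests against a reference measuring system mentioned above.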

According to a further aspect of the invention, a plurality of imaging sensors is used. For determining the head orientation and/or positioning of the vehicle occupant, only the data of those sensors in whose respective detection regions the head orientation and/or positioning can be determined accurately are fused. The data fusion is thereby optimized, and the determination of the head orientation and/or positioning is improved with regard to safety and required computing power.
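One conceivable form of such a data fusion, sketched here under the assumption that each eligible sensor delivers a pose estimate together with a known accuracy figure; the weighting scheme, pose-vector layout and sensor names are illustrative, not prescribed by the patent:

```python
# Minimal sketch (illustrative, not from the patent): fusing head-pose estimates
# only from sensors whose detection regions cover the current pose, weighted by
# their known accuracy (smaller accuracy figure = larger weight).
import numpy as np


def fuse_poses(estimates: dict, accuracies: dict, eligible: set) -> np.ndarray:
    """estimates:  sensor name -> pose vector (x, y, z, yaw, pitch, roll)
    accuracies: sensor name -> accuracy figure (smaller is better)
    eligible:   sensors whose detection region covers the current head pose"""
    names = [n for n in estimates if n in eligible]
    if not names:
        raise ValueError("no sensor covers the current head pose")
    weights = np.array([1.0 / accuracies[n] for n in names])
    weights /= weights.sum()
    stacked = np.stack([estimates[n] for n in names])
    return weights @ stacked  # weighted average over the eligible sensors only


pose = fuse_poses(
    estimates={"S1": np.array([0.10, 0.00, 0.60, 2.0, -1.0, 0.0]),
               "S2": np.array([0.12, 0.01, 0.61, 3.0, -1.2, 0.1])},
    accuracies={"S1": 0.5, "S2": 0.5},
    eligible={"S1", "S2"},
)
print(pose)  # equal weights here -> element-wise mean of the two estimates
```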

Preferably, the method is computer implemented.

The controller according to the invention is configured for automated driving functions. The controller comprises at least one first interface for obtaining data of the first imaging sensor and of the at least one second imaging sensor. The controller also comprises a computer. The computer comprises a memory in which the results of the first two method steps of the method according to the invention are stored. The computer is configured to carry out the last method step of the method according to the invention, optionally also using a plurality of sensors with data fusion, in order to determine a signal as a function of the determined head orientation and/or positioning of the vehicle occupant. In addition, the controller comprises a second interface for outputting the signal.

The controller receives the data of the sensors as input signals, processes them by means of a computer, for example a computing platform, and provides logic and/or power levels as regulating or control signals. The determined signal is such a regulating or control signal. Via the second interface, the determined signals are used to regulate and control vehicle actuators, in particular actuators for longitudinal and/or lateral control and/or safety systems, in order to enable automated or autonomous driving operation. The controller is connected to the sensors via the first interface. The first interface can be a single component or comprise a plurality of components, i.e. one component per sensor. The data exchange takes place either by wire or wirelessly, for example via radio. The controller is integrated into the on-board electrical system of the road vehicle. The controller is in particular an electronic controller for automated driving functions, referred to in English as a domain ECU, in particular an ADAS/AD domain ECU.

The computer of the controller is implemented, for example, as a system-on-a-chip with a modular hardware concept, i.e. all or at least most of the functions are integrated on one chip and can be expanded in a modular manner. The chip can be integrated into the controller. The computer comprises, for example, a multi-core processor and a memory module. The multi-core processor is configured for signal/data exchange with the memory module. For example, the multi-core processor comprises a bus system. The memory module forms a main memory and is, for example, RAM, DRAM, SDRAM or SRAM. In a multi-core processor, a plurality of cores are arranged on a single chip, that is to say on a single semiconductor component. Compared with multiprocessor systems, in which each individual core is arranged in its own processor socket and the individual processor sockets are arranged on a motherboard, multi-core processors achieve higher computing power and can be implemented more cost-effectively on one chip. The computer preferably comprises at least one central processing unit (CPU). The computer also comprises at least one graphics processing unit (GPU). Graphics processors have a special microarchitecture for processing operations in parallel. In particular, the graphics processor comprises at least one processing unit that is specifically designed for carrying out tensor and/or matrix multiplications, which are central computational operations for deep learning. The computer preferably also comprises a hardware accelerator for artificial intelligence, in particular a so-called deep learning accelerator. Furthermore, the computer or the controller is configured for modular expansion with a plurality, preferably at least four, of such chips. The computer as a whole is thus optimized and scalable for machine learning, that is to say it can be adapted to different SAE J3016 levels.

In a preferred refinement of the invention, the computer is configured to determine a confidence for the data of the imaging sensors. This means that the computer assigns a confidence value to the data of each imaging sensor, that is to say a measure of how reliable those data are for recognizing the object. For example, if an imaging sensor is occluded, the computer sets the confidence of that sensor to 0, and the data of another imaging sensor are used instead. If the occupant looks to the right, the confidence of a camera arranged behind the steering wheel is, for example, 50%, while the confidence of a camera arranged in the line of sight of the vehicle occupant is, for example, 100%. It can thus be ensured that only those imaging sensors which provide the best data, that is to say the data with the greatest confidence, are used for object recognition. If two imaging sensors provide data with the same confidence, their data can be fused.
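A minimal sketch of this confidence-based selection, assuming the pose-estimation software supplies one confidence value per sensor and that 0 marks an occluded sensor; the function name and the example values are illustrative only:

```python
# Minimal sketch (illustrative, not from the patent): choosing sensor data by
# confidence. A confidence of 0.0 marks an occluded sensor; if several sensors
# share the highest confidence, all of them are returned so their data can be fused.
def sensors_to_use(confidences: dict) -> list:
    usable = {name: c for name, c in confidences.items() if c > 0.0}
    if not usable:
        return []
    best = max(usable.values())
    return [name for name, c in usable.items() if c == best]


print(sensors_to_use({"S1": 0.5, "S2": 1.0}))  # occupant looks right -> ['S2']
print(sensors_to_use({"S1": 0.8, "S2": 0.8}))  # equal confidence     -> ['S1', 'S2'] (fuse)
print(sensors_to_use({"S1": 0.0, "S2": 0.7}))  # S1 occluded          -> ['S2']
```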

The interior monitoring system of a vehicle according to the invention comprises a first imaging sensor and at least one second imaging sensor. The interior space monitoring system further comprises a controller according to the invention. Data are exchanged between the sensors and the controller.

The interior space monitoring system is a sensor system for locating and/or classifying, preferably for three-dimensionally detecting, the vehicle occupant. The interior space monitoring system provides, in particular, data for safety-relevant aspects, for example the force with which the airbag and/or the belt tensioner is activated depending on the position of the vehicle occupant or the interior temperature. The interior space monitoring system also determines the use of additional adaptive safety functions. With the controller according to the invention, the interior space monitoring system can determine the orientation and/or position of the head particularly accurately. The interior space monitoring system can thus, for example, determine particularly accurately whether the driver is looking at the road, in order to decide whether the driver can take over control of the vehicle (known in English as a take-over) if the vehicle was previously driven automatically.

Preferably, the first sensor is a sensor of a 2D camera and the second sensor is a sensor of a 3D camera, in particular a time-of-flight camera. The 2D camera is inexpensive; it is, for example, a 2D monocular camera. Alternatively, the second sensor is a sensor of a 2D stereoscopic camera system; a 2D stereoscopic camera system corresponds to a 3D camera. The 3D camera provides depth information and thus improves the determination of the head orientation and/or positioning. By fusing the data of the 2D camera and the 3D camera, the determination of the head orientation and/or positioning can be optimized further. With a time-of-flight camera (ToF), depth images of the head are obtained via pixel-by-pixel measurement of the optical time of flight. The second sensor is in particular a LIDAR sensor. With a LIDAR sensor, the environment is scanned by means of light pulses in order to obtain a 2D or 3D representation of the environment.
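For background only (not a formula from the patent), the pixel-wise depth of a time-of-flight camera follows from the measured round-trip time of the emitted light:

```latex
% Illustration: pixel depth d from the round-trip time \Delta t of the light
% pulse, with c the speed of light.
d = \frac{c \, \Delta t}{2}
```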

The computer program product according to the invention is used for determining the head orientation and/or positioning of a vehicle occupant. The computer program product comprises software code sections. The software code sections cause the interior space monitoring system according to the invention to carry out the method according to the invention when the computer program is run on the controller of the interior space monitoring system according to the invention.

The software code sections of the computer program product are a sequence of commands by means of which the controller, once the computer program is loaded, determines the head orientation and/or positioning of the vehicle occupant and outputs a signal in dependence on the result of this determination, in particular in order to control a safety-relevant vehicle function. The computer program product thus brings about a technical effect.

Preferably, the computer program product comprises software code sections that cause the controller to determine a confidence for the data of the imaging sensors of the interior space monitoring system.

A further aspect of the invention is a computer-readable data carrier on which a computer program product according to the invention is stored. For example, the controller loads the computer program product from the data carrier into a memory module of the computer and executes it. The data carrier is, for example, a USB stick, an SD card, advantageously an SD card with integrated WLAN functionality, a CD, a DVD or a Blu-ray Disc.

Drawings

Embodiments are illustrated in the drawings, in which:

Fig. 1 shows a first embodiment of an interior space monitoring system according to the invention;

Fig. 2 shows a second embodiment of an interior space monitoring system according to the invention; and

Fig. 3 shows an embodiment of the controller according to the invention with a schematic representation of the method according to the invention.

In the figures, the same reference signs denote identical or functionally similar elements. For the sake of clarity, only the relevant reference signs are shown in the respective figures in order to avoid repetition.

Detailed Description

Figs. 1 and 2 show a vehicle occupant 1 in an interior space 2 of a passenger vehicle. The vehicle occupant 1 is a driver seated in the driver's seat behind the steering wheel 3.

The interior space 2 is equipped with an interior space monitoring system 20. Fig. 1 shows the first imaging sensor S1 and the second imaging sensor S2 of the interior space monitoring system 20. The first imaging sensor S1 is a 2D monocular camera mounted on the windshield behind the steering wheel 3. In the first detection region E1 of the first imaging sensor S1, the first head orientation K1 and the positioning when the occupant looks straight ahead can be determined particularly accurately. The second imaging sensor S2 is a 3D camera mounted on the vehicle roof above and to the right of the vehicle occupant. In the second detection region E2 of the second imaging sensor S2, the second head orientation K2 and the positioning when the occupant looks to the right can be determined particularly accurately. The detection regions E1 and E2 have been determined beforehand in dependence on different head orientations and positionings. If the vehicle occupant 1 looks straight ahead, the data of the first imaging sensor S1 are used to determine the head orientation K1 and the positioning. If the vehicle occupant 1 looks to the right, the data of the second imaging sensor S2 are used to determine the head orientation K2 and the positioning; in this case, the head orientation K2 and the positioning are determined more accurately with the second imaging sensor S2. If the line of sight of the vehicle occupant 1 is directed to the region between the first imaging sensor S1 and the second imaging sensor S2, the data of the two sensors S1 and S2 are fused.

Fig. 3 shows a controller 10 according to the invention in an interior space monitoring system 20.

The controller 10 comprises a first interface 11. Via the first interface 11, the controller 10 is connected for signal exchange to the imaging sensors S1 and S2 of the interior space monitoring system 20. The data of the sensors S1 and S2 are provided to the computer 12 of the controller via the first interface 11. The computer 12 comprises a memory 13. Before the active use of the interior space monitoring system 20, the following method steps have already been carried out:

V1: determining a first detection region E1 for the head orientation K1, K2 and/or positioning of the first imaging sensor S1 in dependence on different head orientations K1, K2 and/or positionings relative to the arrangement of the first sensor S1,

and

V2: determining a second detection region E2 for the head orientation K1, K2 and/or positioning of the at least one second imaging sensor S2 in dependence on different head orientations K1, K2 and/or positionings relative to the arrangement of the second sensor S2.

The results of method steps V1 and V2 are stored in the memory 13. The computer 12 carries out method step V3, in which, depending on the head orientation K1, K2 and/or positioning of the vehicle occupant 1, the head orientation K1, K2 and/or positioning is determined using that sensor S1, S2 in whose detection region E1, E2 it can be determined better than in the detection region E1, E2 of the other sensor S1, S2. During the execution of method step V3, the computer 12 accesses the results stored in the memory 13. As a result, the computer 12 obtains a signal S for actuating a vehicle actuator in dependence on the determined head orientation K1, K2 and/or positioning. The signal S is provided via the second interface 14 of the controller 10.
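To make the interplay of the memory 13, method step V3 and the second interface 14 concrete, the following minimal sketch shows one possible controller cycle; the callbacks, the representation of the detection regions E1, E2 as yaw intervals and the content of the signal S are assumptions made for illustration, not the implementation of the patent.

```python
# Minimal sketch (illustrative, not the patent's implementation) of one cycle of
# the controller 10: the detection regions from steps V1/V2 are assumed to be
# stored in the memory 13 as yaw intervals per sensor, step V3 selects the sensor
# covering the current head orientation, and a signal S is emitted via the second
# interface 14.
from typing import Callable, Dict, Tuple

Frame = object                                    # placeholder for raw image data
Pose = Tuple[float, Tuple[float, float, float]]   # (head yaw in deg, head position x/y/z)


def controller_cycle(
    read_frames: Callable[[], Dict[str, Frame]],       # first interface 11
    regions: Dict[str, Tuple[float, float]],           # memory 13: sensor -> (yaw_min, yaw_max)
    estimate_pose: Callable[[Frame], Pose],            # per-sensor pose estimation
    emit_signal: Callable[[dict], None],               # second interface 14
) -> None:
    frames = read_frames()

    # Rough estimate from an arbitrary sensor, used only to decide which
    # detection region the head currently lies in.
    rough_yaw, _ = estimate_pose(next(iter(frames.values())))

    # Step V3: the sensor whose stored detection region covers the current yaw.
    best = next(
        (name for name, (lo, hi) in regions.items() if lo <= rough_yaw <= hi),
        next(iter(regions)),  # fallback: first sensor
    )
    yaw, position = estimate_pose(frames[best])

    # Signal S for actuating a vehicle actuator, e.g. an airbag or belt tensioner.
    emit_signal({"sensor": best, "head_yaw_deg": yaw, "head_position": position})
```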

List of reference numerals

1 vehicle occupant

2 interior space

3 steering wheel

E1 first detection region

S1 first imaging sensor

K1 first head orientation

E2 second detection region

S2 second imaging sensor

K2 second head orientation

10 controller

11 first interface

12 computer

13 memory

14 second interface

20 interior space monitoring system
