Method and device for evaluating the state of a driver, and vehicle

Document No.: 1631346 · Published: 2020-01-14

Note: This technique, "Method and device for evaluating the state of a driver, and vehicle", was created by F. Bade, M. Buchner and J. Niemann on 2018-07-19. Abstract: The invention relates to a method and a device for assessing the state of a driver, and to a vehicle, wherein, in a detection step, the driver's gaze direction within a field of view defined relative to the vehicle is detected with sensor assistance, and a solid angle oriented around the gaze direction is ascertained as a function of at least one parameter influencing the field of view; and wherein, in an evaluation step, at least one target point of the driver's three-dimensional environment is evaluated by means of its position relative to the ascertained solid angle, and an attention-related driver state is determined and output on the basis of this evaluation.

1. Method (1) for assessing a driver state of a driver (9) in relation to a vehicle, in particular a motor vehicle, wherein the method comprises the following steps:

a detection step (S1) in which the driver's (9) gaze direction (8) in a field of view defined relative to the vehicle is detected with sensor assistance, and a solid angle (10) oriented around the gaze direction (8) is ascertained as a function of at least one parameter influencing the field of view; and

an evaluation step (S4) in which at least one target point (2) of the driver's (9) three-dimensional environment is evaluated by means of its position with respect to the ascertained solid angle (10), and an attention-related driver state is determined and output in accordance with this evaluation.

2. The method (1) as claimed in claim 1, wherein the solid angle (10) is ascertained in the detection step such that the line of sight direction (8) extends through the center of the solid angle (10).

3. Method (1) according to one of the preceding claims, wherein the at least one parameter characterizes a sitting position and/or a physical body characteristic of the driver (9) and is ascertained sensorially or provided via an interface.

4. Method (1) according to one of the preceding claims, wherein the at least one parameter is ascertained dynamically on the basis of the vehicle state.

5. The method (1) according to claim 4, wherein the at least one parameter characterizes a speed of the vehicle.

6. Method (1) according to one of the preceding claims, wherein the at least one parameter is defined by an interior space component (6) of the vehicle.

7. Method (1) according to one of the preceding claims, wherein the at least one target point (2) of the three-dimensional environment of the driver (9) is given by a predetermined point on the vehicle itself.

8. The method (1) as claimed in claim 1, wherein one or more targets outside the vehicle are detected by a sensor, and each of said detected targets defines at least one target point (2) of the driver's (9) three-dimensional environment.

9. Method (1) according to one of the preceding claims, wherein a virtual cross-section (7) through the solid angle (10) is defined by means of the ascertained solid angle (10), and the lateral boundaries of the cross-section (7) are defined as a function of the distance (d) of the virtual cross-section from the driver (9); and

the evaluation of the at least one target point (2) is carried out, by means of the position of the target point with respect to the ascertained solid angle (10), on the basis of a mathematical projection of the target point (2) onto the virtual cross-section (7).

10. The method (1) according to claim 9, wherein the at least one target point (2) is assessed as not being noticed by the driver (9) if the at least one target point (2) when projected onto the virtual cross-section (7) is not located within the lateral boundaries of the virtual cross-section (7).

11. The method (1) according to claim 9 or 10, wherein the at least one target point (2) is assessed as being noticed by the driver (9) if the at least one target point (2) when projected onto the virtual cross-section (7) is located within the lateral boundaries of the virtual cross-section (7).

12. The method (1) according to claim 11, wherein the evaluation of the at least one target point (2) characterizes a probability of the driver (9) noticing the at least one target point (2).

13. Method (1) according to one of the preceding claims, wherein a function of the vehicle is controlled on the basis of the output attention-related driver state.

14. Device for ascertaining an attention-related driver state of a driver (9) of a vehicle, in particular of a motor vehicle, which device is provided for carrying out the method (1) according to one of claims 1 to 13.

15. Vehicle, in particular motor vehicle, having a device according to claim 14.

Technical Field

The invention relates to a method and a device for evaluating a driver state of a driver of a vehicle, in particular a motor vehicle, and to a motor vehicle having such a device.

Background

In some cases, controlling vehicle functions, in particular assistance functions, requires knowledge of the driver's attention with respect to an object relevant to the function. For example, a lateral-traffic assistant may counteract a lane change if the driver does not, or only inadequately, direct his attention to the so-called "blind spot". Similarly, a traffic-jam assistant may automatically initiate braking if the driver, while driving in congested traffic, does not direct his attention to the vehicle stopped in front of him.

In order to assess the driver's attention with respect to such targets, it is known from the prior art to monitor the driver's visual focus. For this purpose, the head orientation, i.e. the inclination of the head with respect to the longitudinal, transverse and vertical axes of the vehicle, can be detected, a viewing direction can be derived from it, and from this it can be ascertained whether a target lies in the driver's field of view.

Disclosure of Invention

The object of the invention is to improve the evaluation of the driver state of a driver of a vehicle, in particular to evaluate the attention of the driver with respect to points or objects more accurately and/or more reliably.

This object is achieved by a method according to claim 1, an apparatus according to claim 14 and a vehicle having such an apparatus according to claim 15.

A first aspect of the invention relates to a method for assessing a driver state of a driver in relation to a vehicle, in particular a motor vehicle, wherein the method comprises the following steps: (i) a detection step in which the driver's gaze direction in a field of view defined relative to the vehicle is detected with sensor assistance, and a solid angle oriented around the gaze direction is ascertained as a function of at least one parameter influencing the field of view; and (ii) an evaluation step in which at least one target point of the driver's three-dimensional environment is evaluated by means of its position with respect to the ascertained solid angle, and an attention-related driver state is determined and output in accordance with this evaluation.

In the sense of the present invention, the driver's field of view is understood to be the region of the driver's three-dimensional environment in which the driver can visually perceive targets given his head orientation. In particular, the region may be limited by other objects, for example by occlusions. In this sense, the field of view can be physically measured or at least estimated, and the parameters influencing it can relate, for example, to the arrangement of such objects (for example vehicle components) that obstruct the driver's line of sight out of the vehicle. In particular, the driver's field of view is the region of the driver's three-dimensional environment toward which the driver actively turns his face and thus his attention. In this sense, the parameters influencing the field of view may also relate to stimuli, in particular visual stimuli, which cause a change in the orientation of the head and thus of the field of view.

In the sense of the present invention, a target point of the driver's three-dimensional environment is understood to be a point which represents a target, in particular with respect to the driver's position. The target point may represent, for example, the position of a traffic participant or of an operating element in the vehicle interior, in each case in particular relative to the driver. A plurality of target points may thus also represent lines and/or surfaces, for example road regions (in particular driving lanes) or dashboard regions (in particular a speedometer or other display devices).

By taking into account at least one parameter influencing the field of view, it is possible to ascertain, or at least estimate, which region within the ascertained solid angle in the driver's three-dimensional environment is always, or at least particularly clearly and/or reliably, visually perceived by the driver. Determining the solid angle from the driver's gaze direction makes it possible to distinguish with high spatial resolution between target points located inside and outside the solid angle, and thus allows a reliable assessment of the driver's attention with respect to the at least one target point.

In particular, dynamic monitoring of the driver's attention with respect to the at least one target point can be achieved. If the driver, in particular the driver's head, moves and the gaze direction changes accordingly, the solid angle is redirected correspondingly by the method according to the invention. Even if the orientation of the solid angle in the driver's three-dimensional environment is only slightly corrected, further target points may thereby enter or leave the ascertained solid angle. The driver's attention with respect to the at least one target point can thus be monitored in an essentially spatially continuous manner in the driver's three-dimensional environment.

Overall, the invention enables a reliable assessment of the attention of the vehicle driver with respect to certain targets in the driver's surroundings.

In a preferred embodiment, the solid angle is determined in the detection step in such a way that the viewing direction extends through the center of the solid angle, in particular the geometric center of gravity. The center, in particular the geometric center of gravity, defines a region in which the driver's attention is assessed to be particularly high. The solid angle can thus reliably give the driver an area of his attention.

The position of the at least one target point is preferably evaluated with respect to the center of the solid angle, in particular the geometric center of gravity. The position of the at least one target point in the region of the center, in particular in the vicinity of the center, is preferably defined as a high driver attention with respect to the at least one target point; whereas the orientation of the at least one target point in the edge region of the solid angle, in particular at a larger distance from the center of the solid angle, is defined as a low attention of the driver. This enables dynamic and differentiated ascertainment of the driver's attention with respect to the target point.

In a further preferred embodiment, the at least one parameter characterizes the sitting position and/or the anatomical characteristics of the driver. Preferably, the at least one parameter is ascertained by means of a sensor or is provided via an interface.

For this purpose, the driver, in particular the driver's head, can be detected by a sensor device, for example a camera. In addition to the head orientation derived from the sensor data generated in this way, the position of the head in the vehicle, the relative eye position (i.e. the direction of the eyes relative to a gaze direction running perpendicular to the plane of the driver's face) and/or the posture of the driver are preferably also derived. Alternatively or additionally, the at least one parameter may be input by the driver via a user interface (Man-Machine Interface, MMI).

The at least one parameter may in particular characterize the vision of the driver, for example whether the driver is wearing a vision aid and to what extent the vision aid restricts the driver's field of view, or to what extent the eye position on the driver's anatomy (i.e. the orientation of the eyes on the driver's head) influences the field of view.

By means of each of the embodiments mentioned above, the solid angle can be ascertained particularly reliably and precisely, as a result of which a correspondingly accurate evaluation of the target point with respect to its position with reference to the solid angle can be achieved.

In a further preferred embodiment, the at least one parameter is ascertained dynamically as a function of the vehicle state, in particular the driving situation. In this way, it is possible to take into account that the driver's field of view is restricted, in particular physically and/or psychologically, in a specific vehicle state or in a specific driving situation.

If the vehicle is, for example, in a traffic jam, the driver's field of view is essentially limited to the immediate surroundings of the vehicle. In other words, the driver's field of view is limited by the other vehicles surrounding his own, so that, for example, a motorcyclist passing between the stopped vehicles cannot be perceived visually, or only to a limited extent. Similar considerations apply, for example, to severe weather conditions such as snow and/or fog. Accordingly, in the case of a limited field of view, the solid angle can preferably also be correspondingly limited, so that it covers only a correspondingly reduced part of the field of view.

In a further preferred embodiment, the at least one parameter characterizes the speed of the vehicle. With increasing speed, the driver's perception typically focuses more on the road section ahead (so-called "tunnel vision"), so that the driver's field of view may become more restricted as speed increases. The solid angle is therefore preferably narrowed as the speed increases.
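The speed-dependent narrowing of the viewing cone described above can be sketched as follows; the decay profile and all constants are illustrative assumptions, not values from the source:

```python
import math

def opening_angle_deg(speed_kmh, base_deg=60.0, min_deg=20.0, k=0.004):
    """Illustrative "tunnel vision" model: the opening angle of the viewing
    cone decays from base_deg toward min_deg as vehicle speed increases.
    All constants are assumptions chosen for illustration only."""
    return min_deg + (base_deg - min_deg) * math.exp(-k * speed_kmh)
```

At standstill the full base angle applies, while at highway speeds the cone is noticeably narrower; any monotonically decreasing profile would serve the same purpose.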

In a further preferred embodiment, the at least one parameter is defined by an interior component of the vehicle. The at least one parameter preferably characterizes the position of the driver's head relative to interior components that may obstruct the driver's line of sight (e.g. rear-view mirrors, the A-, B- and/or C-pillars, the dashboard, the steering wheel, a navigation display and/or the like). The solid angle can then be ascertained such that a target point located behind one of these interior components, as seen from the driver's head, is reliably evaluated as unnoticed by the driver.

In a further preferred embodiment, the at least one target point of the three-dimensional environment of the driver is specified by a predetermined point on the vehicle itself. In particular, the at least one target point is a static, i.e. fixed-position target point and is provided by an operating element of the vehicle function and/or by the geometry of the vehicle interior.

The position of the at least one target point is preferably derived from a model of the vehicle, for example a CAD model. In one embodiment, the at least one target point may be formed by at least one node of a mesh model of the vehicle. Here, the spatial density of the grid points of the grid model of the vehicle may preferably be adapted according to a predetermined accuracy of the driver state assessment.
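A minimal sketch of sampling interior target points as grid nodes, where the grid spacing trades evaluation accuracy against computation; the planar surface and its coordinates are hypothetical stand-ins for a real mesh model:

```python
def grid_target_points(x_min, x_max, y_min, y_max, z, spacing):
    """Generate target points as nodes of a regular grid on a planar
    interior surface at depth z (hypothetical stand-in for a CAD/mesh
    model). A smaller spacing yields a denser grid and a more accurate
    driver-state evaluation, at higher computational cost."""
    nx = int(round((x_max - x_min) / spacing)) + 1
    ny = int(round((y_max - y_min) / spacing)) + 1
    return [(x_min + i * spacing, y_min + j * spacing, z)
            for i in range(nx) for j in range(ny)]
```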

By configuring the target points as predetermined points on the vehicle itself, it can be detected precisely and reliably whether the driver has focused his attention on an individual target point of the vehicle (for example an operating element) or on an operating surface formed by a plurality of target points (for example a display).

In a further preferred embodiment, the sensor detects one or more objects outside the vehicle. In particular, each of the detected targets external to the vehicle may define at least one target point of the three-dimensional environment of the driver. Thereby, the driver's attention with respect to the target outside the vehicle can also be reliably evaluated.

In a further preferred embodiment, a virtual cross-section through the solid angle is defined by means of the ascertained solid angle, and the lateral boundaries of the cross-section are defined as a function of the distance of the virtual cross-section from the driver. Preferably, the evaluation of the at least one target point by means of its position with respect to the ascertained solid angle is additionally carried out on the basis of a mathematical projection of the target point onto the virtual cross-section. The virtual cross-section is preferably defined such that it is oriented substantially perpendicular to the ascertained gaze direction.

Preferably, the virtual cross-section through the solid angle substantially comprises an area A, by which the solid angle Θ can be defined according to the formula Θ = A/r², where the radius r is preferably the distance to the driver. The lateral boundary of the cross-section can be determined by the intersection of the ascertained solid angle with a plane perpendicular to the ascertained gaze direction. Depending on the solid angle, the virtual cross-section can in particular take the form of a circle, an ellipse, a rectangle, a trapezoid, a parallelogram or an irregular shape.
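The relation Θ = A/r² can be sketched directly; the numbers in the usage example are arbitrary:

```python
def solid_angle(area, r):
    """Solid angle in steradians subtended by a cross-section of area A
    at distance r from the observer: theta = A / r**2."""
    return area / r ** 2

def cross_section_area(theta, r):
    """Inverse relation: for a fixed solid angle, the cross-section area
    grows with the square of the distance r to the driver."""
    return theta * r ** 2
```

For example, a fixed viewing cone of 0.5 sr cuts a 2 m² cross-section at 2 m distance but an 8 m² cross-section at 4 m, which is why the cross-section's lateral boundaries must depend on its distance from the driver.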

By mathematically projecting the at least one target point onto the virtual cross-section, the attention of the driver with respect to the at least one target point can be reliably and accurately ascertained.

In a further preferred embodiment, the at least one target point is evaluated as unnoticed by the driver if it is not located within the lateral boundaries of the virtual cross-section when projected onto the virtual cross-section. This enables a simple and fast evaluation of the target point relating to the attention of the driver.

In a further preferred embodiment, the at least one target point is evaluated as noticed by the driver if it is located within the lateral boundaries of the virtual cross-section when projected onto the virtual cross-section. The target points can thus be divided by means of the lateral boundaries of the cross-section into: target points that the driver cannot notice at all due to external conditions; and a target point that can be at least potentially noticed by the driver.

In a further preferred embodiment, the evaluation of the at least one target point characterizes the probability of the driver noticing it. The probability may serve as a measure of the driver's attention to the target point; in other words, the driver's attention to the target point can be evaluated by means of the probability. It is preferably ascertained from the distance of the target point's projection onto the virtual cross-section from the center of the solid angle (e.g. the geometric center) and is in particular inversely related to this distance: the smaller the distance of the projection from the center, the higher the ascertained probability that the driver perceives the target point. The driver's field of view can thus be divided into a central field of view around the gaze direction, on which the driver is visually focused, and a peripheral field of view at the edges of the solid angle. The evaluation of the at least one target point with respect to the driver's attention can thus be performed dynamically and in a differentiated manner.

In a further preferred embodiment, a function of the vehicle is controlled on the basis of the output attention-related driver state. Preferably, one or more driver assistance systems are controlled on the basis of this state, so that in a predetermined driving situation they can react differently depending on the respective driver state. This enables the functions of the vehicle to be controlled in a differentiated manner.

A second aspect of the invention relates to a device for ascertaining an attention-related driver state of a driver of a vehicle, in particular of a motor vehicle, which is provided for carrying out the method according to the first aspect of the invention.

A third aspect of the invention relates to a vehicle, in particular a motor vehicle, having an apparatus according to the second aspect of the invention.

The features and advantages described in the context of the first aspect of the invention and its advantageous embodiments also apply to the other aspects of the invention mentioned and its corresponding advantageous embodiments and vice versa, where technically expedient.

Drawings

Further features, advantages and possibilities of use of the invention emerge from the following description in conjunction with the drawings, in which the same reference numerals are used throughout for identical or mutually corresponding elements of the invention. In the figures (at least partially schematically):

FIG. 1 illustrates an embodiment of a method for assessing a driver state;

FIG. 2 shows an example for a target point; and

fig. 3 shows an example for a virtual cross-section.

Detailed Description

Fig. 1 shows an exemplary embodiment of a method 1 for evaluating a driver state of a driver in relation to a vehicle, in particular a motor vehicle.

In a detection step S1, parameters are detected on the basis of which the driver state is evaluated. By means of a sensor device, for example a camera mounted in the steering wheel of the vehicle, the head pose, the sitting position, the driver's anatomy and/or the like can be detected. First, the driver's gaze direction is derived from these parameters.

The driver's direction of sight is preferably defined by a starting point (e.g. the centre of the driver's head or a point between the driver's eyes, e.g. in the region of the nasal root) and a vector. The starting point and the vector may be given in a coordinate system, e.g. a vehicle coordinate system. The viewing direction may be understood in particular as a ray having a defined direction and a defined starting point. The driver's gaze direction may provide a preliminary rough estimate of the driver's field of view.
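The gaze direction as a ray (start point plus unit direction vector in the vehicle coordinate system) might be represented as follows; class and attribute names, as well as the example coordinates, are illustrative:

```python
import math

class GazeRay:
    """Gaze direction as a ray: a start point (e.g. between the driver's
    eyes) and a unit direction vector, both in vehicle coordinates."""

    def __init__(self, origin, direction):
        n = math.sqrt(sum(c * c for c in direction))
        self.origin = tuple(origin)
        self.direction = tuple(c / n for c in direction)  # normalise

    def point_at(self, t):
        """Point on the ray at (non-negative) distance t from the origin."""
        return tuple(o + t * d for o, d in zip(self.origin, self.direction))
```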

In sub-step S1a of the detection step S1, the determination of the driver' S field of view is further refined by ascertaining the solid angle oriented in close proximity to the gaze direction by means of said parameters. The solid angle may also be understood as a viewing cone, wherein objects located within the solid angle may be visually perceived by the driver. The solid angle or viewing cone is preferably ascertained such that its origin is located at the starting point of the viewing direction.

By means of these parameters, for example, the opening angle of the solid angle, i.e. the angular range around the gaze direction, is ascertained.

In order to be able to determine the position of the target point (with respect to which the driver' S attention should be assessed) mathematically with reference to a solid angle, in a second sub-step S1b of the detection step S1, a virtual cross-section through the solid angle is preferably defined by means of the ascertained solid angle. In this case, the virtual cross section is preferably perpendicular to the viewing direction and has a limited extent both in the horizontal direction (i.e. along the transverse axis of the vehicle) and in the vertical direction (i.e. along the vertical axis of the vehicle).

The horizontal and vertical boundaries can be related to the distance of the virtual cross-section from the driver, in particular from the starting point of the driver's gaze direction, for example from the center of his head. If the virtual cross-section is defined at a small distance from the driver, it has only a small area. If, by contrast, it is defined at a greater distance from the driver, it has a larger area.
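The distance dependence of the lateral boundaries can be sketched for a pyramidal viewing cone; the half-angles are assumed inputs, not values from the source:

```python
import math

def half_extents(distance, horiz_half_angle_deg, vert_half_angle_deg):
    """Half-width and half-height of the virtual cross-section at the given
    distance from the gaze origin, assuming a pyramidal viewing cone with
    the stated horizontal and vertical half-angles."""
    return (distance * math.tan(math.radians(horiz_half_angle_deg)),
            distance * math.tan(math.radians(vert_half_angle_deg)))
```

The extents grow linearly with distance, matching the text: a nearby cross-section is small, a distant one large.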

In this case, the size of the virtual cross section, in particular the spatial boundary of the virtual cross section, preferably gives a measure for the size of the driver's field of view.

Like the solid angle, the determination of the virtual cross-section, in particular of its spatial boundary, depends on the sensorially detected parameters. The parameters involved in ascertaining the solid angle or in defining the virtual cross-section are also referred to as parameters influencing the driver's field of view.

From these parameters which influence the driver's field of view, for example the driver's anatomy and/or sitting position, the limits of the field of view, in particular the horizontal and/or vertical boundaries of the virtual cross-section, are derived. Alternatively or additionally, the distance of the virtual cross-section from the driver is also determined by means of parameters that influence the driver's field of view.

The parameters influencing the driver's field of vision may relate to the arrangement of objects in the interior of the vehicle, in particular relative to the driver's head, which objects obstruct the driver's line of sight out of the vehicle.

Examples of such targets are the A-, B- and/or C-pillars, rear-view mirrors, the dashboard or parts of the dashboard and/or the like. In particular, the dashboard or parts of it form a horizontal line below which the driver cannot visually perceive objects outside the vehicle.

Another parameter affecting the driver's field of vision may relate to the driver's vision, in particular if the field of vision is limited horizontally or vertically by a vision aid, or if the eye position on the anatomy obstructs the field of vision.

Another parameter influencing the driver's field of view can also be given by external conditions, i.e. situations in which the field of view is restricted psychologically. At high vehicle speeds, for example, so-called tunnel vision may occur on the part of the driver, which reduces the driver's field of view. At higher speeds, the virtual cross-section may therefore be defined closer to the driver's head.

In the target detection step S2, a target point of a target is detected, which should be evaluated in relation to the attention of the driver. In this case, the target points define the position of the respective target relative to the vehicle and/or relative to the driver, in particular relative to the head of the driver. The target point may be, for example, a point in the vehicle coordinate system.

The target point of a target outside the vehicle (e.g. other traffic participants) is ascertained by detecting the target outside the vehicle, for example by means of a camera, a radar sensor, a lidar sensor, an ultrasonic sensor and/or the like.

Target points of targets inside the vehicle (for example components of the vehicle interior, such as the steering wheel, vehicle status displays and/or operating elements of vehicle functions) are preferably stored in a database and read out in the target detection step S2.

In a mapping step S3, the orientation of the detected target point with respect to the solid angle is ascertained. For this purpose, the target point can be projected onto a virtual cross-section, the distance of the projection of the target point from the spatial boundary of the cross-section and/or from the center of the cross-section (for example the intersection of the cross-section with the viewing direction) being ascertained. The center of the cross section may in particular be the geometric center of gravity of a solid angle.

The target point is preferably projected onto the cross-section by means of a mapping matrix which, applied to the target point, yields a perspective mapping of it. This may require converting the target point from one coordinate system (for example the vehicle coordinate system) into another (for example that of the driver's head or of the virtual cross-section). Different target points may have to be converted differently: for example, a target point of a target outside the vehicle from an external coordinate system of the vehicle surroundings, and a target point of a target inside the vehicle from an internal coordinate system of the vehicle interior.
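A perspective projection of a target point onto the virtual cross-section might be sketched as below, assuming the gaze direction is a unit vector that is not vertical; all function names and coordinates are illustrative, not from the source:

```python
import math

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _normalize(a):
    n = math.sqrt(sum(c * c for c in a))
    return tuple(c / n for c in a)

def project_onto_cross_section(target, origin, gaze_dir, d):
    """Perspective projection of a 3-D target point onto the plane
    perpendicular to the unit gaze direction at distance d from the gaze
    origin. Returns in-plane coordinates (u, v), or None if the target
    lies behind the observer. A sketch under simplifying assumptions."""
    rel = tuple(t - o for t, o in zip(target, origin))
    depth = sum(r * g for r, g in zip(rel, gaze_dir))  # along-gaze component
    if depth <= 0:
        return None  # not in front of the driver
    scale = d / depth
    # In-plane axes: u roughly horizontal, v roughly vertical
    u_axis = _normalize(_cross((0.0, 0.0, 1.0), gaze_dir))
    v_axis = _cross(gaze_dir, u_axis)
    proj = tuple(scale * r for r in rel)
    return (sum(p * u for p, u in zip(proj, u_axis)),
            sum(p * v for p, v in zip(proj, v_axis)))
```

The (u, v) result can then be compared against the cross-section's lateral boundaries and its distance from the center evaluated, as described in the evaluation step.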

In an evaluation step S4, the ascertained orientation of the detected target point with respect to the solid angle, in particular on the virtual cross section, is evaluated. Target points which are located outside the solid angle or whose projection is located outside the spatial boundary of the cross section are located outside the driver's field of view and are therefore not visually perceptible by the driver. These target points are evaluated as unnoticed.

A target point located within the solid angle or whose projection lies within the spatial boundary of the cross-section is evaluated as being located within the driver's field of view and is therefore evaluated as being at least potentially noticeable. These target points can be at least potentially visually perceived by the driver.

The assessment of the driver's attention with respect to a target point can be made as a function of the distance of the target point's projection from the center of the solid angle or of the virtual cross-section, i.e. from the gaze direction. A target point whose projection lies near the center is perceived by the driver with a higher probability than one whose projection lies in the edge region of the virtual cross-section, i.e. near its spatial boundary and thus at a greater distance from the center.
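The distance-dependent perception probability can be sketched with a simple linear profile; the linear shape is an assumption — the source only requires the probability to decrease with distance from the center:

```python
def perception_probability(dist_from_center, boundary_dist):
    """Illustrative attention measure: 1.0 at the gaze center, falling
    linearly to 0.0 at the lateral boundary of the virtual cross-section;
    points at or beyond the boundary count as unnoticed."""
    if dist_from_center >= boundary_dist:
        return 0.0
    return 1.0 - dist_from_center / boundary_dist
```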

In this case, the lines and/or planes defined by the target points can also be evaluated in relation to the attention of the driver. To this end, the length or area of the intersection of the line and/or plane defined by the plurality of target points with the virtual cross-section is ascertained. The probability of the driver perceiving a line or a plane defined by a plurality of target points can then be ascertained from the size of the intersection length or area.

Fig. 2 shows an example of a target point 2 in the cab of a vehicle, which defines the position of the target in a coordinate system, for example a vehicle coordinate system.

The target point 2 may be divided into two groups: target points 2 which are given by targets in or on the vehicle itself and are also referred to below as vehicle interior target points; and a target point 2 which is given by a target outside the vehicle and is also referred to as vehicle-outside target point in the following.

A target point 2 given by a target in or on the vehicle itself can define the position of an operating element 3 of the vehicle (for example on the dashboard 4 of the vehicle) or of a structural component 6 of the vehicle (for example the A-pillar). A plurality of target points 2 can also define a surface 5 on which, for example, information relating to the vehicle state is displayed to the driver.

The vehicle-interior target points are preferably formed by nodes of a wireframe model of the vehicle, in particular of the vehicle interior. The wireframe model can describe the contour of the vehicle interior, wherein target points 2 corresponding to the operating elements are arranged on this contour.
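Such a wireframe model can be represented, for example, as a set of nodes tagged with the interior element they belong to. The node names, coordinates and element labels below are purely hypothetical.

```python
# Hypothetical excerpt of a wireframe model of the cabin: each node is a
# target point 2 in vehicle coordinates (x forward, y left, z up is an
# assumed convention), tagged with the interior element it represents.
interior_targets = {
    "a_pillar_top": {"pos": (1.10, -0.80, 1.30), "element": "structural component"},
    "volume_knob":  {"pos": (1.45, -0.20, 0.95), "element": "operating element"},
    "speedometer":  {"pos": (1.50, -0.45, 1.00), "element": "display surface"},
}

# Each node can then be fed to the field-of-view evaluation as a target point.
for name, node in interior_targets.items():
    print(name, node["pos"])
```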

Vehicle-exterior target points are ascertained by sensor-based detection of the vehicle surroundings, for example by monitoring the surroundings by means of one or more cameras, radar sensors, lidar sensors, ultrasonic sensors and/or the like.

By means of the ascertained viewing direction and the solid angle based on it, or by means of the virtual cross section 7 defined by the solid angle, it can be evaluated whether a given target point 2 lies in the driver's field of view and, in particular, with what probability it is perceived by the driver.

In this case, the position of the target point 2 is preferably evaluated relative to the virtual cross section 7, in particular relative to its center 7a, which is defined, for example, by the intersection of the virtual cross section 7 with the driver's line-of-sight direction.

In the example shown, the vehicle-exterior target point 2a, which gives the position of the other vehicle in the vehicle coordinate system, is closer to the center 7a of the virtual cross-section 7 than the vehicle-interior target point 2b, which gives the position of the operating element of the vehicle in the vehicle coordinate system. Accordingly, the driver perceives the other vehicle with a higher probability than the operating element.

Fig. 3 shows an example of a virtual cross section 7, which is ascertained on the basis of a viewing direction 8 of a driver 9 of the vehicle and a solid angle 10 ascertained with the aid of the viewing direction 8. The virtual cross section 7 is understood to be the field of view of the driver 9.

To assess the attention associated with a target, the target point 2a, which gives the position of the target, is projected onto the virtual cross section 7. The position of the projected target point 2a' on the virtual cross section 7 gives the probability with which the target is perceived by the driver 9.

In this case, targets whose projected target point 2a' lies in the region of the center 7a of the virtual cross section 7, in particular near the intersection of the virtual cross section 7 with the viewing direction 8, are perceived with a higher probability than targets whose projected target point 2b' (with corresponding target point 2b) lies in the edge region of the virtual cross section 7, in particular near the spatial boundary of the virtual cross section 7 formed by the solid angle 10.

The viewing direction 8, the solid angle 10 and/or the virtual cross section 7 are ascertained with the aid of at least one sensorially detected parameter that influences the field of view of the driver 9. This at least one parameter can define the dimensions of the solid angle 10 or the spatial boundary of the virtual cross section 7.

If the vehicle is driven at increased speed, for example, the field of view of the driver 9 narrows; this is colloquially referred to as tunnel vision. In the example shown, this is represented by a further virtual cross section 7', which lies at a greater distance d' from the head of the driver 9 than the distance d of the virtual cross section 7. At increased speed, the further virtual cross section 7', which can likewise be understood as the field of view of the driver 9, is spatially limited by a smaller solid angle 10'. As a result, at increased speed the driver 9 no longer perceives the target with target point 2b.
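A speed-dependent narrowing of the solid angle can be sketched, for example, as a half-opening angle that shrinks with vehicle speed. The linear shrink rate and the lower floor below are illustrative assumptions; the method only requires that the solid angle depend on at least one field-of-view-influencing parameter such as speed.

```python
def effective_half_angle(base_half_angle_deg, speed_kmh, k=0.2, min_deg=10.0):
    """Model the 'tunnel vision' effect: the half-opening angle of the
    viewing cone (solid angle 10) shrinks with vehicle speed. The linear
    rate `k` (degrees per km/h) and the floor `min_deg` are illustrative
    assumptions, not values from the method."""
    return max(min_deg, base_half_angle_deg - k * speed_kmh)

print(effective_half_angle(45.0, 0))    # full field of view when stationary
print(effective_half_angle(45.0, 120))  # narrowed cone at motorway speed
```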

List of reference numerals

1 method for evaluating the state of a driver

2. 2a, 2b target points

2a ', 2b' projected target points

3 operating element

4 instrument panel

5 surface

6 structural component of the vehicle

7. 7' virtual Cross section

7a center of the virtual cross section

8 direction of sight

9 driver

10 solid angle

Method steps S1-S4
