Method, motor vehicle and system for assisting a driver of a motor vehicle during overtaking

Document No.: 1047598  Publication date: 2020-10-09

Note: This technology, "Method, motor vehicle and system for assisting a driver of a motor vehicle during overtaking", was created by P·拉赫尔 and T·安布鲁斯特 on 2018-11-14. Its main content is as follows: The invention relates to a method for assisting a driver of a motor vehicle (1) during overtaking. In order to achieve a high level of safety during overtaking, the following steps are provided: determining vehicle data relating to the travel of the motor vehicle (1); determining from the vehicle data an intention of the driver to overtake an external vehicle (3); determining the position of the motor vehicle (1); providing surroundings data relating to the position of the motor vehicle (1), wherein the surroundings data relate to stationary objects (5) in the surroundings (U) of the motor vehicle (1); deriving a viewing distance from the surroundings data; and, only if the driver's intention to overtake is determined: depending on the viewing distance, either projecting a warning indication (31) onto a traffic lane region in the surroundings or illuminating the overtaking route (7) by means of the headlights (15) of the motor vehicle (1).

1. A method for assisting a driver of a motor vehicle (1) during overtaking, the method comprising the steps of:

-determining vehicle data related to the driving of the motor vehicle (1),

-determining from the vehicle data an intention of the driver to overtake an external vehicle (3),

-determining the position of the motor vehicle (1),

-providing surroundings data about the position of the motor vehicle (1), wherein the surroundings data relate to position-fixed objects (5) in the surroundings (U) of the motor vehicle (1),

-deriving a viewing distance from the surroundings data,

and only when the driver's intention to overtake is determined:

depending on the viewing distance, either a warning indication (31) is projected onto a traffic lane region in the surroundings or the overtaking route (7) is illuminated by means of the headlights (15) of the motor vehicle (1).

2. The method according to claim 1,

characterized in that

the steps of providing the surroundings data and deriving the viewing distance are carried out in a server device (2) outside the vehicle, and the viewing distance is received by the motor vehicle (1) from the server device (2).

3. The method according to claim 2,

characterized in that

ambient data is collected at least partially by the server device (2) from a plurality of other vehicles.

4. The method according to claim 3,

characterized in that

the ambient data comprise camera images of the respective front cameras (16) of the local motor vehicle (1) and/or of a plurality of other motor vehicles.

5. The method according to claim 4,

characterized in that

during the derivation, the viewing distance is derived from the camera images by means of a machine learning device.

6. The method according to any one of the preceding claims,

characterized in that

depending on the viewing distance, a warning indication is projected onto the traffic lane region when the viewing distance is less than a viewing distance limit, which is determined or predefined, and the overtaking route (7) is illuminated when the viewing distance is greater than the viewing distance limit.

7. The method according to claim 6,

characterized in that

the viewing distance limit is determined on the basis of the speed of the motor vehicle (1) and/or the speed of the external vehicle (3).

8. The method according to any one of the preceding claims,

characterized in that

the viewing distance is derived at least in part from map data that are part of the surroundings data.

9. The method according to claim 8,

characterized in that

the viewing distance is derived at least partially from altitude data and/or gradient data that are part of the map data, wherein a limitation of the viewing distance by the vertical course of the road (6) on which the motor vehicle is driving is determined.

10. The method according to any one of the preceding claims,

characterized in that

the intention of the driver to overtake an external vehicle (3) is determined on the basis of an actuation of the turn signal of the motor vehicle (1) and/or on the basis of an interior camera, in particular by recognizing a shoulder check.

11. A motor vehicle (1) having a driver assistance system for assisting during overtaking, having:

first determination means (10) for determining vehicle data relating to the travel of the motor vehicle (1),

-second determination means (11) for determining an intention of a driver of the motor vehicle (1) to overtake the external vehicle (3) on the basis of the vehicle data,

-a positioning device (13) for determining the position of the motor vehicle (1),

-a computation unit (12) for providing surroundings data relating to the position of the motor vehicle (1) and for deriving a viewing distance from the surroundings data, wherein the surroundings data relate to objects (5) that are fixed in position in the surroundings (U) of the motor vehicle (1), and

-a headlight (15) for projecting a warning indication (31) by means of the headlight (15) of the motor vehicle (1) onto a traffic lane area in the surroundings (U) and for illuminating a passing route (7), wherein,

the computation unit (12) is designed to provide, depending on the viewing distance and only if the driver's intention to overtake is determined, either for projecting the warning indication (31) or for illuminating the overtaking route (7).

12. A system (9) for assisting during overtaking, the system having:

-a motor vehicle (1),

-a server device (2) external to the vehicle,

a first determination device (10) of the motor vehicle (1) for determining vehicle data relating to the travel of the motor vehicle (1),

-second determination means (11) of the motor vehicle (1) for determining an intention of a driver of the motor vehicle (1) to overtake the external vehicle (3) on the basis of the vehicle data,

a positioning unit (13) of the motor vehicle (1) for determining the position of the motor vehicle (1),

-a computing unit (12, 22) of the motor vehicle (1) and/or of the server device (2) for providing surroundings data for the position of the motor vehicle (1) and for deriving a viewing distance from the surroundings data, wherein the surroundings data relate to a stationary object (5) in the surroundings (U) of the motor vehicle (1), and

a headlight (15) of the motor vehicle (1) for projecting a warning indication (31) onto a traffic lane region in the surroundings and for illuminating a passing route (7), wherein

The computation unit (12, 22) is designed to provide for either a projection of a warning indication (31) or an illumination of the overtaking route (7) depending on the viewing distance and only if the driver's overtaking intention is determined.

Technical Field

The invention relates to a method for assisting a driver of a motor vehicle during overtaking. Another aspect of the invention relates to a motor vehicle having a driver assistance system. The invention further relates to a system for assisting during overtaking, comprising a motor vehicle and a server device outside the vehicle.

Background

It is known from the prior art that the luminous region of a headlight can be adjusted according to the traffic situation. For this purpose, for example, headlight systems with an adjustable luminous region are used. The higher the resolution of the headlight, the more precisely its luminous region can be adapted to the traffic situation. In this context, different systems for high-resolution headlights are known. A high-resolution headlight may have, for example, a matrix light source, a micromirror device, a liquid crystal device or a laser scanner.

For example, document DE 102014214649 A1 discloses a method for positioning a luminous region of a headlight of a vehicle on the basis of the surroundings of the vehicle. In this case, the luminous region of the headlight can be positioned optimally in such a way that a shadow region surrounds an oncoming vehicle, while the vehicle-free surrounding region next to the shadow region is illuminated by the headlight to the greatest possible extent.

Furthermore, light signals can be projected onto the traffic lane by means of a high-resolution headlight. This is known, for example, from document DE 102009009473 A1, which provides a method for supporting the driver of a vehicle and other traffic participants. In this case, it is evaluated whether another traffic participant poses a collision risk for the vehicle and, if necessary, a light signal is projected from the vehicle onto the traffic lane, which warns the other traffic participant of the collision risk. If, for example, during a passing maneuver it is determined that no collision risk is imminent, no light signal is projected onto the traffic lane for the time being. If it is determined that the vehicle has started the passing maneuver at too short a distance while the speeds of all vehicles involved remain constant, the light signal is projected onto the traffic lane in front of the other vehicles in order to cause the other traffic participants to decelerate.

Document DE 102014009253 A1 relates to a method for controlling the light distribution of a headlight of a motor vehicle. In this case, the light distribution of the headlight can be adjusted according to the driving route of the vehicle. The driving route may be established, for example, taking into account calculated probabilities of driving maneuvers of the vehicle. For example, the driving route of the vehicle is predicted only for the most likely maneuver. The light distribution of the headlights can then be adjusted on the basis of the driving route determined in this way. For example, overtaking or a lane change may be identified as such a maneuver.

Disclosure of Invention

The aim of the invention is to achieve a higher level of safety during overtaking by a motor vehicle.

According to the invention, this object is achieved by the subject matter of the independent claims. Advantageous embodiments with suitable developments are the subject matter of the dependent claims.

A first aspect of the invention relates to a method for assisting a driver of a motor vehicle during overtaking. In this case, vehicle data relating to the travel of the motor vehicle are first determined. The vehicle data may, for example, describe the speed, the acceleration, an actuation of the turn signal, the relative speed to an external vehicle, or a recognized shoulder check (Schulterblick) of the driver of the motor vehicle. From these vehicle data, the driver's intention to overtake the external vehicle is determined. In particular, a probability value can be determined from the vehicle data according to predetermined, adjustable or learnable rules, which probability value indicates how likely it is that the driver will overtake the external vehicle in the near future. If the probability value exceeds a predetermined probability limit, the driver can be deemed to have an intention to overtake the external vehicle.
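The threshold logic described above can be sketched as follows. This is a minimal illustration only: the signal names, the weights and the probability limit are assumptions for the example and are not specified in the document.

```python
# Hypothetical sketch of the overtaking-intent check: several vehicle-data
# signals are combined into a probability value, which is compared against
# a predetermined probability limit. All numeric values are illustrative.

def overtake_probability(turn_signal_on: bool,
                         closing_speed_mps: float,
                         shoulder_check_seen: bool) -> float:
    """Combine vehicle-data signals into a rough probability estimate."""
    p = 0.0
    if turn_signal_on:            # driver indicates a lane change
        p += 0.5
    if shoulder_check_seen:       # shoulder check recognized by interior camera
        p += 0.3
    if closing_speed_mps > 2.0:   # closing in on the vehicle ahead
        p += 0.2
    return min(p, 1.0)

PROBABILITY_LIMIT = 0.6  # assumed threshold

def has_overtaking_intention(turn_signal_on: bool,
                             closing_speed_mps: float,
                             shoulder_check_seen: bool) -> bool:
    """Intent is assumed once the probability exceeds the limit."""
    return overtake_probability(turn_signal_on, closing_speed_mps,
                                shoulder_check_seen) > PROBABILITY_LIMIT
```

In practice the rules could also be learned, as the text notes; the fixed weights here merely stand in for such rules.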

The position of the motor vehicle is then determined. This takes place, for example, by means of a receiver for navigation satellite systems (in particular GPS, Galileo, BeiDou or Glonass) or via a mobile communication network. Some embodiments of the invention provide that the position is determined only when the driver's intention to overtake an external vehicle has been determined.

Surroundings data relating to the position of the motor vehicle are then provided, wherein the surroundings data relate to stationary objects in the surroundings of the motor vehicle. In other words, the surroundings data can be matched to the previously determined position of the motor vehicle. A stationary object in the surroundings of the motor vehicle may be, for example, a wall, a tree, a bush, a forest, a hillock, a building or another object which blocks the driver's view of the road section ahead of the motor vehicle. The surroundings data can be provided, for example, by a receiving unit which is designed to receive the surroundings data from a server device outside the vehicle, for example via a mobile communication network. The viewing distance is derived from the surroundings data, for example by a computing unit of the motor vehicle. In some embodiments of the invention, the steps of providing the surroundings data and/or deriving the viewing distance may be performed only when an intention of the driver to overtake the external vehicle has been recognized.

In a further step, only when the driver's intention to overtake has been determined, either a warning indication is projected onto a traffic lane region in the surroundings or the overtaking route is illuminated by the headlights of the motor vehicle, depending on the viewing distance. In other words, when the driver's intention to overtake is determined, either a warning indication is projected or the overtaking route is illuminated by the headlights of the motor vehicle. In the above example, the projection or illumination is performed only when the probability value for an upcoming overtaking maneuver is greater than the predetermined probability limit.

According to one embodiment, the above-mentioned steps of the method can be carried out completely in a motor vehicle or by means of a motor vehicle. The motor vehicle in this case has corresponding devices for carrying out the individual steps.

In a further embodiment, it is provided that the steps of providing the surroundings data and deriving the viewing distance are carried out in a server device outside the vehicle, and the viewing distance is received by the motor vehicle from the server device. For example, the surroundings data may be provided by a storage unit in the server device outside the vehicle. The viewing distance derived from the surroundings data is then transmitted by the server device to the motor vehicle, where it can be received, for example, by a receiving unit of the motor vehicle. Preferably, this transmission takes place via a mobile communication network.

One refinement provides that the server device collects the surroundings data at least in part from a plurality of other motor vehicles. In other words, surroundings data are received and collected from a vehicle fleet, i.e. a plurality of other motor vehicles. The viewing distance can be derived directly from the respective surroundings data as they are received from each of the plurality of other motor vehicles. Alternatively, the respective surroundings data of the plurality of other motor vehicles may first be collected and then analyzed jointly with respect to the viewing distance. In particular, a plurality of viewing-distance values are derived from the respective surroundings data of the respective motor vehicles. The plurality of viewing-distance values can then be combined into the viewing distance that is transmitted to the motor vehicle. This aggregation is carried out in particular by averaging, regression or statistical analysis methods. This makes it possible to reduce errors in determining the viewing distance.
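The aggregation step can be sketched as below. The document names averaging, regression or statistical methods without fixing one; the choice of the median here is an assumption made for the example.

```python
# Sketch of combining per-vehicle viewing-distance estimates into a single
# value, as described above. Using the median is an illustrative choice:
# it suppresses a single outlier (e.g. one badly exposed camera image)
# better than a plain mean would.
from statistics import median

def aggregate_viewing_distance(values_m: list) -> float:
    """Combine viewing-distance estimates (in meters) from fleet vehicles."""
    if not values_m:
        raise ValueError("no viewing-distance estimates available")
    return median(values_m)
```

For example, one grossly wrong estimate of 400 m among values around 125 m would not shift the aggregated result.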

According to one refinement, the surroundings data comprise camera images of the respective front cameras of the local motor vehicle and/or of a plurality of other motor vehicles. Thus, for example, respective camera images from a plurality of other motor vehicles are collected. From these camera images, the viewing distance can be derived by means of predetermined rules. By deriving the viewing distance from the camera images of a plurality of other motor vehicles, errors in the image analysis of the camera images of the local motor vehicle can be avoided. Furthermore, camera images of the plurality of motor vehicles taken during the daytime may preferably be taken into account for determining the viewing distance. For this purpose, the camera images can be selected according to their image brightness. For example, only those camera images of the plurality of other motor vehicles whose average brightness exceeds a predetermined brightness value are used for deriving the viewing distance. In this way, the viewing distance for the position of the motor vehicle can also be determined at night. Furthermore, this embodiment ensures a higher reliability in the derivation of the viewing distance than a derivation solely from the camera image of the front camera of the local motor vehicle. This addresses the problem that the driver of the motor vehicle has difficulty estimating the viewing distance himself, in particular at night. For example, stationary objects in the surroundings of the motor vehicle may conceal oncoming vehicles. This can lead the driver to judge, erroneously, that overtaking is not dangerous, especially at night when the driver cannot yet see the headlights of an oncoming vehicle.
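The brightness-based image selection described above can be sketched as follows. The threshold value and the grayscale image representation are assumptions for the example; the document only states that images are selected by average brightness.

```python
# Sketch of the daytime-image selection: only camera images whose mean
# brightness exceeds a predetermined value are used for deriving the
# viewing distance. Images are modeled as 2D lists of gray values (0-255).

BRIGHTNESS_LIMIT = 90.0  # assumed mean gray value separating day from night

def mean_brightness(gray_image) -> float:
    """Average gray value over all pixels of the image."""
    pixels = [p for row in gray_image for p in row]
    return sum(pixels) / len(pixels)

def select_daytime_images(images):
    """Keep only images bright enough to be daytime shots."""
    return [img for img in images if mean_brightness(img) > BRIGHTNESS_LIMIT]
```

A real system would operate on camera frames (e.g. NumPy arrays) rather than nested lists, but the selection criterion is the same.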

The camera images can show stationary objects in the surroundings of the motor vehicle. In other words, stationary objects in the surroundings can be recognized and/or localized from the camera images. In particular, a predefined algorithm can determine from a camera image up to which distance the road section ahead of the motor vehicle is visible, and output this distance as the viewing distance. In other words, a point is sought in the camera image up to which the road section lying ahead of the motor vehicle can be seen. This point may be referred to as the "line-of-sight end". The distance to this point may be taken as the viewing distance.

In a further development, the viewing distance is derived from the camera images by means of a machine learning device. For example, the machine learning device may be designed to identify the line-of-sight end, having been trained for this purpose on the basis of test images. During training, the machine learning device can determine rules for identifying the line-of-sight end. This allows a particularly reliable determination of the viewing distance.
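Once the line-of-sight end has been located in the image (by whatever means, e.g. a trained model), its image row still has to be converted into a distance. The following sketch shows one simple geometric baseline for that conversion, which is not specified in the document: a flat-road pinhole-camera assumption, with made-up values for camera height, focal length and horizon row.

```python
# Illustrative sketch only: mapping the image row of the line-of-sight end
# to a distance, assuming a flat road and a simple pinhole camera. Under
# this assumption, a road point imaged N pixels below the horizon lies at
# distance d = f * h / N in front of the camera.

CAM_HEIGHT_M = 1.3   # assumed mounting height of the front camera
FOCAL_PX = 1000.0    # assumed focal length in pixels
HORIZON_ROW = 540    # assumed image row of the horizon

def row_to_distance(sight_end_row: int) -> float:
    """Convert the image row where the visible road ends into meters."""
    pixels_below_horizon = sight_end_row - HORIZON_ROW
    if pixels_below_horizon <= 0:
        return float("inf")  # road visible all the way to the horizon
    return FOCAL_PX * CAM_HEIGHT_M / pixels_below_horizon
```

A learned model as described in the text can subsume this geometry; the formula merely makes the row-to-distance relationship concrete.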

One refinement provides that, depending on the viewing distance, a warning indication is projected onto the traffic lane region when the viewing distance is less than a viewing distance limit, and the overtaking route is illuminated when the viewing distance is greater than the viewing distance limit, wherein the viewing distance limit is determined or predefined. In particular, the viewing distance limit is determined or predefined in such a way that overtaking can be carried out without danger if the viewing distance is greater than the limit. In this way, the driver can be assisted during overtaking by being informed when the viewing distance is insufficient for safe overtaking. The driver can additionally be assisted in that illuminating the overtaking route, on the one hand, informs him that the viewing distance is sufficient for safe overtaking and, on the other hand, gives him a good view of the overtaking route.

Preferably, the viewing distance limit is determined on the basis of the speed of the motor vehicle and/or the speed of the external vehicle. For example, the higher the speed of the motor vehicle and/or of the external vehicle, the larger the viewing distance limit is chosen. This ensures that the limit is sufficiently large at high speeds and, at the same time, not unnecessarily large at low speeds.
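The speed-dependent limit and the resulting headlight decision can be sketched as below. The document does not give a formula for the limit; the one used here (roughly the distance closed during an assumed maneuver duration) and the maneuver time are assumptions for illustration.

```python
# Sketch of the decision logic: the viewing-distance limit grows with
# speed, and the headlight either projects a warning (viewing distance
# below the limit) or illuminates the overtaking route (above the limit),
# and only if an overtaking intention was determined.

ASSUMED_MANEUVER_TIME_S = 10.0  # illustrative overtaking duration

def viewing_distance_limit(ego_speed_mps: float,
                           other_speed_mps: float) -> float:
    """Higher speeds require a longer clear stretch of road."""
    # The ego vehicle and a possible oncoming vehicle close the gap together.
    closing_speed = ego_speed_mps + other_speed_mps
    return closing_speed * ASSUMED_MANEUVER_TIME_S

def headlight_action(viewing_distance_m: float,
                     ego_speed_mps: float,
                     other_speed_mps: float,
                     overtaking_intended: bool) -> str:
    if not overtaking_intended:
        return "none"
    limit = viewing_distance_limit(ego_speed_mps, other_speed_mps)
    if viewing_distance_m > limit:
        return "illuminate_overtaking_route"   # safe: visibility sufficient
    return "project_warning"                   # unsafe: visibility too short
```

At 20 m/s for both vehicles the assumed limit is 400 m, so a 500 m viewing distance leads to illumination and a 200 m viewing distance to a warning.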

According to a refinement, the viewing distance can be derived at least in part from map data, which are part of the surroundings data. Thus, for example, the viewing distance may be derived from one or more camera images, from map data, or from both. The map data may relate, for example, to permanently stationary objects in the surroundings of the motor vehicle. Permanently stationary objects are, for example, buildings, walls and houses. Objects which are not permanently stationary are, for example, bushes and crops in fields, in particular maize or hops. The stationary objects can thus be divided into permanently stationary and non-permanently stationary objects. A forest or individual trees may be assigned to either category, depending on the individual case. Objects that are not permanently stationary in the surroundings of the motor vehicle can be determined from the camera images; in addition, however, permanently stationary objects can also be recognized from the camera images. The accuracy can be further improved by determining the viewing distance from the map data, and improved once again when the viewing distance is determined both from the map data and from the camera images.

In a further embodiment, the viewing distance can be derived at least in part from altitude data and/or gradient data that are part of the map data, wherein a limitation of the viewing distance by the vertical course of the road on which the motor vehicle is driving is determined. For example, the viewing distance may be limited by a hill or hillock located in front of the motor vehicle. In this case, the driver of the motor vehicle cannot see an oncoming vehicle behind the hill or hillock. The driver can thus be informed, by projecting a warning indication, that the viewing distance is insufficient for overtaking because of the hill or hillock. This increases the safety of the motor vehicle.
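Deriving a viewing distance from an elevation profile can be sketched as follows: the continuous view of the road ends at the first point hidden behind an intermediate crest. The eye height and the sampled profile format are assumptions for the example; the document only states that altitude/gradient data limit the viewing distance.

```python
# Sketch of a viewing distance derived from altitude data along the road:
# walk along the elevation profile ahead and stop at the first road point
# whose sight line from the driver's eye dips below an earlier crest.

EYE_HEIGHT_M = 1.2  # assumed driver eye height above the road surface

def viewing_distance_from_profile(profile) -> float:
    """profile: list of (distance_m, elevation_m) samples along the road,
    starting at the vehicle. Returns the distance up to which the road
    surface is continuously visible."""
    x0 = profile[0][0]
    z0 = profile[0][1] + EYE_HEIGHT_M   # eye position above the first sample
    max_slope = float("-inf")
    last_visible = x0
    for x, z in profile[1:]:
        slope = (z - z0) / (x - x0)     # elevation angle to this road point
        if slope < max_slope:           # hidden behind a nearer crest
            break
        max_slope = slope
        last_visible = x
    return last_visible - x0
```

On a flat profile the whole sampled stretch is visible; a 5 m rise at 50 m hides everything beyond it, so the viewing distance collapses to 50 m.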

The driver's intention to overtake an external vehicle can be determined from an actuation of the turn signal of the motor vehicle. Alternatively or additionally, the intention can be determined from an interior camera of the motor vehicle, in particular by recognizing a shoulder check. In other words, this intention can be determined, for example, when the driver indicates, by operating the turn signal, that he wants to leave the original lane behind the external vehicle. In order to determine the intention, further variables can additionally be taken into account, for example the speed of the motor vehicle and/or the relative speed between the motor vehicle and the external vehicle.

A second aspect of the invention relates to a motor vehicle having a driver assistance system for assisting during overtaking, having: first determination means for determining vehicle data relating to the travel of the motor vehicle; second determination means for determining from the vehicle data an intention of the driver of the motor vehicle to overtake an external vehicle; a positioning device for determining the position of the motor vehicle; and a computing unit for providing surroundings data relating to the position of the motor vehicle, wherein the surroundings data relate to stationary objects in the surroundings of the motor vehicle. Furthermore, the computing unit is designed to derive the viewing distance from the surroundings data. The motor vehicle further has a headlight for projecting a warning indication onto a traffic lane region in the surroundings and for illuminating the overtaking route, wherein the computing unit is designed to provide for either projecting the warning indication or illuminating the overtaking route, depending on the viewing distance and only if the driver's intention to overtake is determined. In other words, the computing unit provides for the projection or illumination only when the driver's intention to overtake is determined.

Preferably, the motor vehicle is a road vehicle, for example a passenger car or a truck. The motor vehicle may comprise an internal combustion engine and/or an electric machine as its drive.

The positioning device may be designed, for example, as a receiver for a navigation satellite system, for example as a GPS receiver. In order to provide the surroundings data, the computing unit may be designed to obtain them from a receiving unit of the motor vehicle. In other words, the surroundings data may be received by the receiving unit from the server device and subsequently provided by the computing unit for further processing.

In particular, the motor vehicle is provided for carrying out the method described above.

Another aspect of the invention relates to a system for assisting during overtaking, the system having a motor vehicle and a server device external to the vehicle. The server device outside the vehicle is in particular stationary.

The motor vehicle of the system comprises: first determination means for determining vehicle data relating to the travel of the motor vehicle; second determination means for determining, from the vehicle data, an intention of the driver of the motor vehicle to overtake an external vehicle; and a positioning unit for determining the position of the motor vehicle. Furthermore, the motor vehicle comprises headlights for projecting a warning indication onto the traffic lane region in the surroundings and for illuminating the overtaking route.

The system further comprises a computing unit for providing surroundings data relating to the position of the motor vehicle and for deriving a viewing distance from the surroundings data, wherein the surroundings data relate to stationary objects in the surroundings of the motor vehicle. Furthermore, the computing unit is designed to provide for either a projection of a warning indication or an illumination of the overtaking route, depending on the viewing distance and only if the driver's intention to overtake is determined. The computing unit can be located completely in the motor vehicle or completely in the server device. Preferably, however, the computing unit is arranged partially in the motor vehicle and partially in the server device. For example, the computing unit consists of a first computing section and a second computing section, wherein the first computing section is arranged in the motor vehicle and the second computing section in the server device. The system may be arranged for carrying out the method according to one or more of the above embodiments. Additionally, the motor vehicle may have a receiving unit, for example in order to receive surroundings data and/or a viewing distance from the server device. Preferably, the second computing section is designed to provide the surroundings data and to derive the viewing distance from them. The first computing section is preferably designed to receive the viewing distance derived from the surroundings data and to provide, depending on the viewing distance, for either the projection of a warning indication or the illumination of the overtaking route.

The invention also comprises combinations of the embodiments described.

The invention also includes a development of the method according to the invention, which has the features as already described in connection with the development of the motor vehicle according to the invention. For this reason, corresponding modifications of the method according to the invention are not described here.

Drawings

Embodiments of the present invention are described below. For this purpose, it is shown that:

Fig. 1 schematically shows a system for assisting during overtaking, having a motor vehicle and a server device outside the vehicle; and

Fig. 2 shows an example of the present method on the basis of a potential overtaking maneuver.

The examples described below are preferred embodiments of the invention. In the exemplary embodiments, the described components of the embodiments each constitute individual features of the invention that are to be considered independently of one another, each of which also develops the invention independently of the others and is therefore also to be regarded as part of the invention, individually or in a combination other than that shown. Furthermore, the described embodiments can also be supplemented by further features of the invention already described.

In the figures, elements having the same function have the same reference numerals, respectively.

Detailed Description

Fig. 1 shows, very schematically, a motor vehicle 1 and a server device 2 outside the vehicle, which are each part of a system 9 for assisting a driver during overtaking. The server device 2 is in particular stationary. In other words, the server device 2 is not part of a vehicle. The server device 2 and the motor vehicle 1 can be designed to communicate with each other. The motor vehicle 1 is provided for receiving data from the server device 2. For example, the communication or data reception takes place via a mobile communication network, preferably via LTE, UMTS, GPRS or GSM. However, other, in particular wireless, communication connections between the server device 2 outside the vehicle and the motor vehicle 1 are also possible. According to Fig. 1, the motor vehicle has a receiving unit 14 which is designed to receive data from the server device 2. The receiving unit 14 is, for example, a mobile communication device.

The motor vehicle 1 has a first determination device 10 for determining vehicle data. The vehicle data relate to the driving of the motor vehicle 1. For example, the vehicle data may be the speed of the motor vehicle 1, the relative speed between the motor vehicle 1 and an external vehicle 3, the acceleration of the motor vehicle 1, an actuation of the turn signal of the motor vehicle 1 and/or data relating to the driver of the motor vehicle 1. The data relating to the driver can in particular specify in which direction the driver is looking and/or whether the driver is looking to the side or to the rear.

The vehicle data are transmitted from the first determination device 10 to a second determination device 11, which is designed to determine, on the basis of the vehicle data, an intention of the driver of the motor vehicle 1 to overtake the external vehicle 3. In particular, the second determination device 11 determines a probability value that indicates how likely it is that the driver wants to overtake the external vehicle 3. If the probability value is greater than a predetermined probability limit, it can be assumed that the driver has an intention to overtake the external vehicle 3. In other words, the existence of the intention is determined in particular when the probability value is greater than the predetermined probability limit.

The positioning device 13 of the motor vehicle 1 is designed to determine the position of the motor vehicle 1. The positioning device 13 is a receiver for signals of a navigation satellite system. For example, the positioning device 13 can be designed to receive signals of the GPS, GLONASS, BeiDou and/or Galileo satellite systems. From the satellite signals, the positioning device 13 can determine or ascertain the position of the motor vehicle 1.

In the present exemplary embodiment, the motor vehicle 1 then transmits its position to the vehicle-external server device 2 by means of the receiving unit 14. Surroundings data relating to the position of the motor vehicle 1 are then provided in the vehicle-external server device 2, wherein the surroundings data relate to position-fixed objects 5 in the surroundings U of the motor vehicle 1. Such position-fixed objects 5 are, for example, hills or hillocks 50, trees or forests, bushes or hedges, crops such as maize (maize field 51) or rapeseed, buildings, walls or houses. The surroundings data may comprise different types of data, as will be described in more detail below. The computing unit 22 derives the visibility range from the surroundings data. The visibility range is specific to the previously determined position of the motor vehicle 1. The visibility range indicates how far ahead the driver of the motor vehicle 1 has a clear view along the road 6 traveled by the motor vehicle 1. In other words, the visibility range indicates how far the driver's line of sight along the road 6 in the direction of travel is not obscured by a position-fixed object 5.

This visibility range is in turn transmitted by the server device 2 to the motor vehicle 1. In the motor vehicle 1, the computing unit 12 determines, depending on the visibility range, whether a warning indication 31 should be projected onto a traffic lane region of the road 6 or whether the overtaking route 7 of the road 6 should be illuminated. In the case of a two-lane road 6, the overtaking route 7 may in particular be the oncoming lane. The computing unit 12 can transmit the result of this determination to the headlights 15 of the motor vehicle 1. In other words, the headlight 15 is actuated accordingly by the computing unit 12, so that it either projects the warning indication 31 onto the traffic lane region of the road 6 or illuminates the overtaking route 7.

The headlight 15 is preferably a high-resolution headlight, which can in particular resolve at least 200, 500, 1000 or 2000 pixels. The headlight 15 comprises, for example, a matrix-type light emitter, a micromirror device, a liquid-crystal device or a laser scanner.

The provision of the surroundings data by the computing unit 22 of the server device 2 will now be explained in more detail. According to the present exemplary embodiment, the server device 2 comprises a map database 23 and an image database 21. Map data are stored or can be stored in the map database 23. The map data may include, for example, the respective positions of the permanent ones of the position-fixed objects 5. Permanent position-fixed objects are, for example, buildings, walls, hills and hillocks 50. Camera images are or can be stored in the image database 21. The camera images can be captured by a plurality of motor vehicles and transmitted to the server device 2. The server device 2 thus receives respective camera images from a plurality of motor vehicles. For this purpose, the server device 2 comprises, for example, the receiving unit 20. The camera images can be collected and sorted by the receiving unit 20 and assigned to the respective recording positions. The recording position describes at which position the respective camera image was captured. In some embodiments of the invention, the receiving unit 20 can also be designed to extract the end point of the line of sight from the respective camera image. This end point of the line of sight may also be referred to as the "end of sight". In this case, it is extracted from the respective camera image, by means of predetermined image-analysis rules, what the visibility range is at the respective recording position in the direction of travel. The end of sight can be stored in the image database 21 in association with the different recording positions and the different camera images from the plurality of motor vehicles. From each of the camera images, a respective value for the end of sight is derived. In other embodiments, each respective camera image of the plurality of motor vehicles may be stored in the image database 21. In yet other embodiments, not only the camera images but also the different values for the end of sight are stored.

The computing unit 22 can provide map data from the map database 23 and/or camera images from the image database 21 and/or the different values relating to the end of sight from the image database 21 as surroundings data. In particular, only those values relating to the end of sight, or those camera images, are provided from the image database 21 whose recording position corresponds to the position of the motor vehicle 1.
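The selection of database entries whose recording position corresponds to the position of the motor vehicle 1 could, purely illustratively, be sketched as follows; the planar coordinates, the entry fields and the 25 m matching radius are assumptions, not part of the embodiment:

```python
import math

def entries_for_position(image_db, vehicle_pos, radius_m=25.0):
    """Return only those database entries whose recording position lies
    within radius_m of the vehicle position.

    image_db    : list of dicts with assumed keys "x", "y" (planar
                  coordinates in metres) and "end_of_sight_m"
    vehicle_pos : (x, y) position of the motor vehicle in the same frame
    """
    x0, y0 = vehicle_pos
    return [entry for entry in image_db
            if math.hypot(entry["x"] - x0, entry["y"] - y0) <= radius_m]
```

A real system would match positions on geodetic coordinates and road topology rather than a flat-earth radius; the sketch only shows the filtering principle.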

The visibility range can therefore be derived by the computing unit 22 from map data, from camera images and/or from values relating to the end of sight. If the visibility range is derived at least partly from camera images, this can be done by extracting the end of sight and then determining the visibility range therefrom.

The extraction of the end of sight is preferably performed by a machine-learning device. The computing unit 22 and/or the receiving unit 20 can optionally be provided with such a machine-learning device. The machine-learning device can be trained, for example, by providing a plurality of test images for which the end of sight is known. In this way, the machine-learning device can derive or refine the rules for extracting the end of sight. The machine-learning device can be improved even further in continuous operation, so that the determination of the end of sight becomes ever more accurate.

Each of the plurality of motor vehicles can have a front camera 16. Fig. 1 shows such a front camera 16 for the motor vehicle 1. In particular, only those camera images from the plurality of motor vehicles are received that were captured during the day. The camera images are selected, for example, on the basis of the clock time or of the brightness of the camera image.

Of course, the motor vehicle 1 can likewise be part of the plurality of motor vehicles. In this case, the motor vehicle 1 can transmit the camera images acquired by the front camera 16 to the server device 2 or the receiving unit 20. In particular, in addition to the camera images, the respective recording position of each camera image is also transmitted.

When determining the end of sight, averaging or regression can be performed. For example, the end of sight is extracted from a plurality of camera images received from different vehicles of the plurality of motor vehicles. A statistical value for the end of sight can then be determined therefrom by means of averaging or error-calculation methods. From this statistical value, the visibility range can be derived particularly accurately.
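One possible way to form such a statistical value is sketched below. The use of the median instead of the arithmetic mean is an assumption chosen here for robustness, so that a single faulty extraction does not dominate the result:

```python
import statistics

def aggregate_end_of_sight(values_m):
    """Aggregate per-image end-of-sight distances (in metres) that were
    extracted from camera images recorded at the same position.

    The median is used as the statistical value; a single outlier
    extraction (e.g. 900 m among ~150 m values) is thereby ignored.
    """
    if not values_m:
        raise ValueError("no end-of-sight values available")
    return statistics.median(values_m)
```

Averaging with outlier rejection or a regression over recording positions, as mentioned above, would be equally possible aggregation strategies.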

Finally, the method is illustrated by a specific example. According to fig. 2, the motor vehicle 1 is located on a road 6. As part of the vehicle data, it is determined that the motor vehicle 1 has a positive relative speed with respect to the external vehicle 3, i.e. is faster than the external vehicle 3. Furthermore, as part of the vehicle data, it is determined that the motor vehicle 1 is accelerating, i.e. increasing its speed. From these vehicle data, a probability value is calculated which indicates how likely it is that the driver of the motor vehicle 1 wants to overtake the external vehicle 3. The probability value is greater than the predetermined probability limit. It is therefore assumed that the driver of the motor vehicle 1 has the intention to overtake the external vehicle 3.

The motor vehicle 1 then determines its position. This position is transmitted by the motor vehicle 1 to the server device 2. In the server device 2, the visibility range for the position of the motor vehicle 1 is derived from the surroundings data by the computing unit 22. In the present example, the surroundings U are dark (night), so the maize field 51 and the hillock 50 are not visible to the driver of the motor vehicle 1. An oncoming vehicle 4 is concealed by position-fixed objects 5, here the hillock 50 and the maize field 51. The driver of the motor vehicle 1 may therefore erroneously believe that there is no oncoming traffic on the road 6 and that he can overtake without danger.

On the basis of the respective ends of sight extracted from the different camera images of the other motor vehicles for the position of the motor vehicle 1, the computing unit 22 derives a visibility range that is limited by the position-fixed objects 5, namely the maize field 51 and the hillock 50. These different camera images were captured during the day by the respective front cameras 16 of the other motor vehicles, so the maize field 51 and the hillock 50 can be recognized without problems. Furthermore, the computing unit 22 derives from the map data, which contain height information, a visibility range that is limited by the hillock 50. In other words, the computing unit 22 recognizes from the height data or map data that the visibility range at the position of the motor vehicle 1 is limited by the hillock 50. The two visibility-range values determined from the map data and from the camera images are combined into a common visibility-range value and transmitted to the motor vehicle 1.
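How the two visibility-range values are combined into a common value is not specified in detail in the embodiment. One conservative possibility, in which the shorter estimate governs, might look as follows:

```python
def combined_visibility_range(map_based_m, camera_based_m):
    """Combine the map-derived and the camera-derived visibility ranges
    (in metres) into one common value.

    Conservative assumption: the shorter of the available estimates
    governs, since either obstruction alone already limits the view.
    Either argument may be None if that source is unavailable.
    """
    estimates = [v for v in (map_based_m, camera_based_m) if v is not None]
    if not estimates:
        raise ValueError("no visibility estimate available")
    return min(estimates)
```

A weighted fusion that accounts for the confidence of each source would be an alternative; taking the minimum simply errs on the side of safety.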

In the motor vehicle 1, the computing unit 12 recognizes that the received visibility-range value is less than a predetermined visibility limit. In an alternative embodiment, the computing unit 12 in the motor vehicle 1 recognizes that the visibility range is less than a visibility limit determined beforehand as a function of the speed of the motor vehicle 1. In other words, the visibility limit can either be fixedly defined or be determined in the course of the method. The visibility limit is determined in particular as a function of the speed of the motor vehicle 1, the speed of the external vehicle 3 or any other data.
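A speed-dependent visibility limit could, purely illustratively, be derived from a simple overtaking-time consideration. The assumed oncoming speed of 27.8 m/s (about 100 km/h) and the required gain of 40 m on the external vehicle are assumptions for illustration, not values from the embodiment:

```python
def visibility_limit_m(ego_speed_mps, lead_speed_mps,
                       assumed_oncoming_mps=27.8, required_gain_m=40.0):
    """Rough speed-dependent visibility limit in metres.

    The ego vehicle must gain required_gain_m on the external vehicle;
    during that time the visible gap is closed by the ego vehicle and a
    potential oncoming vehicle together.
    """
    relative_speed = ego_speed_mps - lead_speed_mps
    if relative_speed <= 0.0:
        return float("inf")  # no overtaking possible at all
    overtaking_time_s = required_gain_m / relative_speed
    closing_speed_mps = ego_speed_mps + assumed_oncoming_mps
    return closing_speed_mps * overtaking_time_s
```

For example, at 30 m/s behind a vehicle driving 25 m/s, the overtaking maneuver takes 8 s and the limit comes out at roughly 460 m, which shows why a small relative speed demands a very long clear view.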

Since the visibility range is less than the visibility limit, the computing unit 12 determines that overtaking the external vehicle 3 is unsafe. The headlights 15 of the motor vehicle 1 are therefore controlled by the computing unit 12 in such a way that a warning indication 31 is projected onto the road 6, in this case onto the overtaking route 7. The warning indication 31 may, for example, comprise a symbol, in particular an exclamation mark. In the present case, the light-emitting region 30 of the headlight 15 remains unchanged compared with normal operation, since the external vehicle 3 cannot be overtaken without risk, the visibility range being less than the visibility limit.

If the visibility range is greater than the visibility limit, it can be determined that, at least with regard to the visibility range, there is no objection to overtaking the external vehicle 3. In this case, the light-emitting region 30 of the headlight 15 can be shifted such that the overtaking route 7 is illuminated. In other words, the light-emitting region 30 is shifted in the direction of the overtaking route 7 relative to normal operation. For this purpose, at least one headlight 15 of the motor vehicle 1 is actuated in such a way that the shift of the light-emitting region is achieved. If the visibility range is greater than the visibility limit, no warning indication 31 is projected.
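The decision logic of the computing unit 12 described above can be summarized in a short sketch; the enumeration and its member names are illustrative only:

```python
from enum import Enum

class HeadlightAction(Enum):
    """Possible actuations of the headlight 15 (illustrative names)."""
    NORMAL = "normal operation, light-emitting region 30 unchanged"
    PROJECT_WARNING = "project warning indication 31 onto the road 6"
    ILLUMINATE_OVERTAKING_ROUTE = "shift light-emitting region 30 to route 7"

def decide_headlight_action(overtaking_intention: bool,
                            visibility_range_m: float,
                            visibility_limit_m: float) -> HeadlightAction:
    """Decide the headlight actuation from intention and visibility.

    Without an overtaking intention the headlight is left unchanged;
    otherwise either the warning indication is projected (visibility
    range below the limit) or the overtaking route is illuminated.
    """
    if not overtaking_intention:
        return HeadlightAction.NORMAL
    if visibility_range_m < visibility_limit_m:
        return HeadlightAction.PROJECT_WARNING
    return HeadlightAction.ILLUMINATE_OVERTAKING_ROUTE
```

This mirrors the branch structure of the example: the warning and the shifted light-emitting region are mutually exclusive, and both require that the overtaking intention was determined first.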

In summary, the exemplary embodiments show how the safety level during overtaking can be increased. In particular, it is shown how the driver can be informed at night whether the visibility range is sufficient for overtaking. For this purpose, the visibility range can be determined from camera images captured by other motor vehicles during the day.
