Vehicle control device and vehicle control method

Document No.: 125289  Publication date: 2021-10-22

Note: This technology, "Vehicle control device and vehicle control method", was designed and created by 村桥善光 and 吉田贵裕 on 2021-03-26. Its main content is as follows.

The invention provides a vehicle control device and a vehicle control method that can achieve both safety and comfort. A vehicle control device (100) includes an action plan generating unit (200) that generates an action plan for automatic driving of the host vehicle (1), and a distance detection unit (a detection device (DD), a vehicle sensor (60), and an automatic driving control unit (120)) that outputs detection information relating to the detection of an object. The action plan generating unit (200) includes a collision rate map setting unit (250) that, when the distance detection unit detects an obstacle, determines a collision rate map, which is a two-dimensional map representing, in a two-dimensional space of position and speed, the distribution of a collision rate indicating the possibility of collision between the host vehicle (1) and the obstacle. The action plan generating unit (200) generates a current action plan based on the collision rate map, a predetermined target collision rate, and the current position and speed of the host vehicle (1).

1. A vehicle control device that controls a vehicle, characterized by comprising:

an action plan generating unit that generates an action plan for automatic driving of the vehicle;

a vehicle behavior control unit that controls at least a speed of the vehicle based on the action plan; and

a distance detection unit for detecting an object and outputting detection information related to detection of the object,

the action plan generating unit sets a maximum deceleration of the vehicle in automatic driving,

the action plan generating unit includes a collision rate map setting unit that determines a collision rate map, which is a two-dimensional map representing a distribution of collision rates in a two-dimensional space of position and speed when the distance detection unit detects an obstacle, the collision rates indicating a possibility of collision between the vehicle and the obstacle,

the collision rate map is created on the premise that a target stop position is set based on a combination of a predetermined target collision rate, the maximum deceleration, and the detection information,

the action plan generating unit generates a current action plan based on the collision rate map, the predetermined target collision rate, and the current position and speed of the vehicle.

2. The vehicle control apparatus according to claim 1,

the collision rate map defines a plurality of grid points within the two-dimensional space,

the collision rate is determined separately for each grid point,

the collision rate at each grid point represents the probability that the vehicle collides with the obstacle when the vehicle behavior control unit instructs the vehicle to decelerate, with the maximum deceleration as an upper limit, so that the vehicle decelerates from the position and speed corresponding to that grid point and stops at the target stop position.
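The grid structure described in claim 2 can be sketched as follows. This is a minimal illustration only: the grid ranges, resolutions, and all names are assumptions for the sketch and are not specified in the claim.

```python
import numpy as np

# Assumed position [m] and speed [m/s] ranges and grid resolutions.
X_MAX, V_MAX = 100.0, 30.0
NX, NV = 101, 31

positions = np.linspace(0.0, X_MAX, NX)   # distance from the obstacle
speeds = np.linspace(0.0, V_MAX, NV)      # own-vehicle speed

# collision_rate[i, j] = probability of colliding with the obstacle when
# braking (maximum deceleration as upper limit) starts from the state
# (positions[i], speeds[j]) toward the target stop position.
# Initialized to zero here; the claims describe how it is filled.
collision_rate = np.zeros((NX, NV))

def lookup(x, v):
    """Return the collision rate at the grid point nearest to state (x, v)."""
    i = int(round(x / X_MAX * (NX - 1)))
    j = int(round(v / V_MAX * (NV - 1)))
    return collision_rate[min(i, NX - 1), min(j, NV - 1)]
```

The action plan generating unit would then read the rate at the vehicle's current (position, speed) state with a lookup of this kind.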

3. The vehicle control apparatus according to claim 1,

the action plan generating unit generates, as the current action plan, an action plan that maintains the speed of autonomous driving while still allowing sudden braking, when the current position and speed of the vehicle are within a low collision rate region of the collision rate map in which the collision rate is lower than a 1st threshold value that is lower than the predetermined target collision rate,

and generates, as the current action plan, an action plan that executes preliminary braking, in which short braking is repeated so as to avoid sudden braking, when the current position and speed of the vehicle are within a high collision rate region of the collision rate map, the high collision rate region being a region in which the collision rate is lower than the predetermined target collision rate and equal to or higher than a 2nd threshold value that is equal to or higher than the 1st threshold value.
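The region-based selection in claim 3 amounts to comparing the collision rate at the current state against the two thresholds and the target rate. The sketch below is a hedged reading of the claim: the behavior for rates between the 1st and 2nd thresholds, and at or above the target rate, is not specified in the claim, so the choices made there are labeled assumptions.

```python
def select_action(rate, target_rate, th1, th2):
    """Pick the current action plan from the collision rate at the vehicle's
    current (position, speed) grid point. Requires th1 < target_rate and
    th1 <= th2 < target_rate, as in the claim."""
    if rate < th1:
        # Low collision-rate region: keep cruising at the autonomous-driving
        # speed; sudden (emergency) braking remains permitted if needed.
        return "cruise"
    if rate >= target_rate:
        # Not specified in the claim; assumed fallback for this sketch.
        return "emergency_braking"
    # th2 <= rate < target_rate is the claimed high collision-rate region;
    # th1 <= rate < th2 is unspecified in the claim and treated the same
    # here (an assumption of this sketch).
    return "preliminary_braking"
```

For example, with a target rate of 0.01 and thresholds 0.002 and 0.005, a rate of 0.001 cruises while 0.006 triggers preliminary braking.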

4. The vehicle control apparatus according to claim 2,

the distance detection section has a plurality of sensors,

the detection information includes: a sensor configuration representing the combination of sensors, from among the plurality of sensors, that detect the obstacle; a detection distance detected by the sensors constituting the sensor configuration; and a detection time over which the sensors have continuously detected the obstacle,

the target stop position is determined so that a collision rate equals the predetermined target collision rate, the collision rate being calculated based on the overlap between an instruction value achievement probability density distribution, which is the probability density distribution of the position at which the vehicle actually stops when the vehicle behavior control unit instructs the vehicle to stop at the target stop position at the maximum deceleration, and an error distribution, which is centered on the position indicated by the detection distance and represents the difference between the true distance to the obstacle and the detection distance,

the instruction value achievement probability density distribution is a distribution estimated for the maximum deceleration based on a stop characteristic of the vehicle, measured in advance by having the vehicle perform stopping operations at decelerations specified for the vehicle,

the error distribution is estimated for the sensor configuration, the detection distance, and the detection time based on distance detection characteristics of the distance detection unit, measured in advance by the distance detection unit in situations where the true distance to the object is known.
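The overlap condition in claim 4 has a simple closed form if one assumes, purely for illustration, that both distributions are normal: if the actual stop position is N(x_t, σ_stop²) around the target stop position x_t and the true obstacle position is N(d, σ_sensor²) around the detection distance d, then the collision rate P(stop ≥ obstacle) is Φ((x_t − d)/√(σ_stop² + σ_sensor²)), and x_t can be solved for directly. The normality assumption and all parameter names are assumptions of this sketch, not part of the claim.

```python
from statistics import NormalDist

def target_stop_position(detected_distance, sigma_stop, sigma_sensor, p_target):
    """Choose a target stop position x_t so that the probability of stopping
    at or beyond the obstacle equals the target collision rate p_target.

    Assumes (for this sketch) two normal distributions:
      - actual stop position ~ N(x_t, sigma_stop^2), the instruction value
        achievement probability density from pre-measured stop behavior;
      - true obstacle position ~ N(detected_distance, sigma_sensor^2), the
        sensor error distribution for the current sensor configuration,
        detection distance, and detection time.
    Then P(collision) = Phi((x_t - d) / sqrt(s1^2 + s2^2)) = p_target.
    """
    sigma = (sigma_stop ** 2 + sigma_sensor ** 2) ** 0.5
    return detected_distance + sigma * NormalDist().inv_cdf(p_target)
```

With a small target rate such as 10⁻⁴ the result places the stop point several combined standard deviations short of the detected obstacle, which matches the intuition behind the claim.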

5. The vehicle control apparatus according to claim 4,

in the collision rate map, the collision rate p_c(G_xc, G_vc) at a grid point (G_xc, G_vc) on the collision rate map is obtained by the following formula (I),

in the formula (I),

one factor in formula (I) represents the detection rate of the obstacle by the distance detection unit at the grid point (G_xc, G_vc),

another factor in formula (I) represents the probability that the obstacle is detected at each position by the distance detection unit at the grid point (G_xc, G_vc),

p_c(g_xn, g_vn) represents the collision rate at a point (g_xn, g_vn) on the collision rate map to which the vehicle may transition when the vehicle behavior control unit instructs a deceleration a to the vehicle so as to make the vehicle stop at the target stop position from the grid point (G_xc, G_vc),

p(ε_α) represents the probability taken by an uncertainty ε_α, the uncertainty ε_α representing the difference between the deceleration actually executed by the vehicle and the instructed deceleration a,

the point (g_xn, g_vn) on the collision rate map is the point obtained by an operation in which the uncertainty ε_α is added to the deceleration a,

collision rates at grid points where the speed on the collision rate map is 0 are given as predetermined values based on results of measuring the characteristics of the distance detection unit in advance, and collision rates are likewise given at the speed-0 grid points other than the position at the detection distance,

p_c(g_xn, g_vn) is a value found by an operation in which it is evaluated, in an approximate manner, from the collision rates at grid points near the point (g_xn, g_vn),

the summation operation of formula (I) is repeated, starting from the grid point corresponding to speed 0 at the position of the detection distance, in the direction in which the speed increases and/or the direction in which the position approaches the vehicle from the detection distance, thereby obtaining the collision rate at the grid point (G_xc, G_vc) of the collision rate map.
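Formula (I) itself appears only as an image in the original, so the recursion can only be sketched structurally: the map is filled from the speed-0 boundary, sweeping toward higher speeds and positions closer to the vehicle, and each grid point's rate is the expectation, over the deceleration uncertainty ε_α, of the rate at the already-filled next state. Everything below (the discretization of p(ε_α), the `next_state` helper, the nearest-neighbor approximation) is an assumption of this sketch, not the claimed formula.

```python
import numpy as np

def fill_collision_map(p_boundary, p_eps, next_state, nx, nv):
    """Fill an nx-by-nv collision rate map from its v = 0 boundary.

    p_boundary : collision rates at the speed-0 grid points (pre-measured)
    p_eps      : dict mapping a discretized uncertainty eps -> probability
    next_state : assumed helper (i, j, eps) -> next (x, v) grid coordinates
                 reached under deceleration a + eps
    """
    pc = np.zeros((nx, nv))
    pc[:, 0] = p_boundary                      # speed-0 boundary condition
    for j in range(1, nv):                     # sweep: increasing speed
        for i in range(nx - 1, -1, -1):        # sweep: approaching the vehicle
            total = 0.0
            for eps, w in p_eps.items():
                xn, vn = next_state(i, j, eps)
                # Approximate p_c at (xn, vn) from the nearest already-filled
                # grid point (clamped to the filled region).
                i0 = int(np.clip(round(xn), 0, nx - 1))
                j0 = int(np.clip(round(vn), 0, j - 1))
                total += w * pc[i0, j0]
            pc[i, j] = total
    return pc
```

With a degenerate uncertainty (p_eps = {0.0: 1.0}) and a next-state map that simply lowers the speed by one grid step, every column reproduces the boundary values, which is a quick sanity check on the sweep order.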

6. The vehicle control apparatus according to claim 5,

the probability p(ε_α) follows a normal distribution determined based on characteristics of the vehicle measured in advance.

7. The vehicle control apparatus according to any one of claims 1 to 6,

the action plan generating unit includes a collision rate map storage unit that stores a plurality of collision rate maps obtained from results of a procedure of measuring the stop position when a certain deceleration is instructed to the vehicle, results of a procedure of measuring the object detection characteristics of the distance detection unit in a situation where the true distance to an object is known, and the predetermined target collision rate,

the collision rate map setting unit selects one of the plurality of collision rate maps stored in the collision rate map storage unit based on the maximum deceleration and the detection information, and determines the selected one as a collision rate map to be used.

8. The vehicle control apparatus according to any one of claims 1 to 6,

the collision rate map setting unit determines the collision rate map by real-time calculation during travel, based on a parameter representing characteristics of the vehicle and the distance detection unit, the predetermined target collision rate, the maximum deceleration, and the detection information,

the parameter is derived from a result of a procedure of measuring a stop position when a certain deceleration is instructed to the vehicle, and a result of a procedure of measuring a characteristic of object detection performed by the distance detection unit in a situation where a true value of a distance to the object is known.

9. The vehicle control apparatus according to claim 8,

the action plan generating unit has a collision rate map storage unit that stores the collision rate maps calculated in real time during travel by the collision rate map setting unit,

when a collision rate map corresponding to the predetermined target collision rate, the maximum deceleration, and the detection information is already stored in the collision rate map storage unit, the collision rate map setting unit sets that stored collision rate map as the collision rate map to be used.

10. The vehicle control apparatus according to claim 1 or 2,

the distance detection section has a plurality of sensors for detecting an obstacle,

the detection information includes: a sensor configuration representing the combination of sensors, from among the plurality of sensors, that detect the obstacle; and a detection distance detected by the sensors,

the action plan generating unit generates, as the current action plan, an action plan that performs the cruise operation while treating the obstacle as absent, when the obstacle is detected by only some of the plurality of sensors and the detection distance is equal to or greater than a predetermined distance threshold value,

the action plan generating unit generates, as the current action plan, an action plan that treats the obstacle as present and performs the preliminary braking, when the obstacle is detected by only some of the plurality of sensors and the detection distance is smaller than the predetermined distance threshold value.
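The partial-detection logic of claim 10 reduces to a distance check when only a subset of the sensors agrees. The sketch below is a hedged paraphrase: the function and return-value names are illustrative, and the behavior when all sensors agree (or none detect) is outside this claim and marked as such.

```python
def plan_for_partial_detection(detecting_sensors, n_sensors,
                               detected_distance, distance_threshold):
    """Decide the current action plan when only some sensors report the
    obstacle, per claim 10. Names and return strings are illustrative."""
    partial = 0 < len(detecting_sensors) < n_sensors
    if partial and detected_distance >= distance_threshold:
        # Far away and only partially confirmed: treat the obstacle as
        # absent and continue the cruise operation.
        return "cruise"
    if partial and detected_distance < distance_threshold:
        # Close and partially confirmed: treat the obstacle as present
        # and perform the preliminary braking.
        return "preliminary_braking"
    # All sensors agree (or none detect): handled by other claims (assumed).
    return "full_agreement"
```

For instance, with two sensors, a radar-only detection at 80 m against a 50 m threshold continues cruising, while the same detection at 30 m triggers preliminary braking.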

11. A vehicle control method for controlling a vehicle by a vehicle behavior control unit that controls at least a speed of the vehicle based on an action plan for automatic driving of the vehicle, characterized by comprising:

a step of setting a maximum deceleration in automatic driving of the vehicle;

a step of detecting a distance between the vehicle and an obstacle and obtaining detection information, which is information related to the detection of the obstacle;

a step of determining, when the obstacle is detected, a collision rate map, which is a two-dimensional map representing, in a two-dimensional space of position and speed, a distribution of collision rates indicating the possibility of collision between the vehicle and the obstacle; and

a step of controlling at least the speed of the vehicle based on the determined collision rate map, a predetermined target collision rate, and the current position and speed of the vehicle,

the detection information includes: a sensor configuration representing the combination of sensors, from among the plurality of sensors, that detect the obstacle; a detection distance detected by the sensors constituting the sensor configuration; and a detection time over which the sensors have continuously detected the obstacle,

the collision rate map is created on the premise that a target stop position is set based on a combination of a predetermined target collision rate, the maximum deceleration, and the detection information,

the collision rate map defines a plurality of grid points within the two-dimensional space,

the collision rate is determined separately for each grid point,

the collision rate at each grid point represents the probability that the vehicle collides with the obstacle when the vehicle behavior control unit instructs the vehicle to decelerate, with the maximum deceleration as an upper limit, so that the vehicle decelerates from the position and speed corresponding to that grid point and stops at the target stop position.

Technical Field

The invention relates to a vehicle control device and a vehicle control method.

Background

Patent document 1 describes a vehicle speed control device that includes: a target speed calculation unit that, based on a position error probability distribution and on pre-registered data of accelerations or acceleration gradients that do not cause discomfort to the driver, calculates a target speed value for each point along the distance from a shift start point to a target point so that the change in vehicle speed forms a continuous curve; and a speed control unit that detects the speed of the vehicle and controls the driving torque so that the speed follows the target speed value, thereby controlling the speed of the vehicle.

Patent document 2 describes a collision mitigation device including: an object detection unit that detects an object present in the vicinity of the host vehicle; a collision possibility determination unit that determines, at each discrete time, the possibility of collision between the object detected by the object detection unit and the host vehicle; and a collision influence reduction unit that performs control for reducing the influence of a collision based on the collision possibility determined by the collision possibility determination unit.

Documents of the prior art

Patent document

Patent document 1: japanese patent No. 4796400

Patent document 2: japanese patent No. 4967840

Disclosure of Invention

The technique of patent document 1 controls the speed of the vehicle according to a position error probability distribution. The technique of patent document 2 determines the possibility of collision with the own vehicle. These techniques perform speed control of the vehicle based on a position error probability distribution or a collision possibility, and do not relate to an action plan for automatic driving. Therefore, when an action plan is determined using sensors, the following problems arise: the recognition distance is insufficient, and the recognition distance cannot be extended while keeping the reliability of the sensors high. There are also the problems that the safety level cannot be made explicit, and that the reliability (accuracy, detection rate) of the sensors and the validity of the action plan cannot be quantified.

The present invention has been made in view of such circumstances, and an object of the present invention is to provide a vehicle control device and a vehicle control method that can achieve both safety and comfort.

In order to solve the above problem, a vehicle control device according to the present invention controls a vehicle and includes: an action plan generating unit that generates an action plan for automatic driving of the vehicle; a vehicle behavior control unit that controls at least a speed of the vehicle based on the action plan; and a distance detection unit that detects an object and outputs detection information related to the detection of the object. The action plan generating unit sets a maximum deceleration of the vehicle during automatic driving and includes a collision rate map setting unit that determines a collision rate map, which is a two-dimensional map representing a distribution of collision rates in a two-dimensional space of position and speed when the distance detection unit detects an obstacle, the collision rates indicating a possibility of collision between the vehicle and the obstacle. The collision rate map is created on the premise that a target stop position is set based on a combination of a predetermined target collision rate, the maximum deceleration, and the detection information, and the action plan generating unit generates a current action plan based on the collision rate map, the predetermined target collision rate, and the current position and speed of the vehicle.

Effects of the invention

According to the present invention, it is possible to provide a vehicle control device and a vehicle control method that can achieve both safety and comfort.

Drawings

Fig. 1 is a diagram showing an overall configuration of a vehicle including a vehicle control device according to an embodiment of the present invention.

Fig. 2 is a functional configuration diagram centering on the vehicle control device of the above embodiment.

Fig. 3 is a configuration diagram of the vehicle control apparatus HMI of the above embodiment.

Fig. 4 is a block diagram of an action pattern generation unit of the vehicle control device according to the above embodiment.

Fig. 5 is a diagram showing a target velocity value curve in the case where deceleration is performed up to the target stop position in the vehicle control device according to the above-described embodiment, (a) is a diagram showing a target velocity value curve in the case where deceleration is performed at 0.6G, (b) is a diagram showing the detection rate p(D|E) at which the sensor detects an obstacle in a state where an obstacle is present, (c) is a diagram showing the false detection rate at which the sensor detects an obstacle in a state where no obstacle is present, and (d) is a diagram showing an example of a deceleration pattern of an action plan in the case of automatic driving.

Fig. 6 is a diagram for explaining an action plan obtained based on the reliability of the sensor in the determination of the target velocity value curve in fig. 5, and shows a case where, in the determination of the action plan in fig. 5, the setting is made on the over-detection side, using the detection value of the sensor even in regions of low reliability.

Fig. 7 is a diagram for explaining an action plan obtained based on the reliability of the sensor in the determination of the target velocity value curve in fig. 5, and shows a case where, in the determination of the action plan in fig. 5, the detection value of the sensor is used only in regions of high reliability.

Fig. 8 is a diagram illustrating the direction aimed at solving the problems in fig. 6 and 7.

Fig. 9 is an explanatory view of the collision rate when a collision is prevented by decelerating to the position of the preceding vehicle (target stop position) in the vehicle control device according to the above embodiment, (a) is a view showing the collision rate when a collision is prevented by decelerating to the preceding-vehicle position, (b) is a view showing the collision rate p(C|S_t) of (a) in gradation expression, and (c) is a schematic diagram showing the peak of the collision rate at speed 0 in (a).

Fig. 10A is a diagram showing a target velocity value curve obtained based on an action plan into which the collision rate is introduced, the collision rate being a collision rate in the case where the vehicle control device of the above embodiment decelerates the vehicle up to the front traveling vehicle position (target stop position) to prevent a collision.

Fig. 10B is a diagram showing fusion accuracy reliability depending on the configuration of a sensor for detecting an object and the detection time when a detection device of a vehicle detects the object.

Fig. 10C is a diagram showing the change in the deviation amount of the instruction value, obtained from the instruction value achievement probability density distribution and a probability distribution based on the fusion accuracy reliability determined by the sensor configuration and the detection time when the detection device of the vehicle according to the above embodiment detects an object.

Fig. 11 is an instruction value determination flow of the behavior pattern of the vehicle control device according to the above embodiment.

Fig. 12 is a diagram showing a relationship between the width of the probability distribution obtained based on the fusion accuracy reliability and the deviation amount in the vehicle control device according to the above embodiment.

Fig. 13 is a flowchart of vehicle control by the vehicle control device of the above embodiment.

Fig. 14 is a diagram showing a collision rate map of the vehicle control device according to the above-described embodiment, (a) is a diagram showing a collision rate map in a case where a collision is prevented by decelerating to the preceding-vehicle position, (b) is a diagram showing the collision rate of (a) expressed in gradation as a reliability rate in observation, and (c) is a schematic diagram showing the peak of the collision rate at speed 0 in the collision rate map of (a).

Fig. 15 is a diagram for explaining a relationship between an action pattern and a collision risk in the vehicle control device according to the above embodiment.

Fig. 16 is a diagram showing an error distribution of sensors in the vehicle control device according to the above embodiment.

Fig. 17 is a diagram for explaining the influence of the action plan and the sensor on the collision risk in the vehicle control device of the above embodiment, (a) is a diagram showing a collision rate map in the case where the maximum braking is -0.6G and the standard deviation of the sensor error is σ1, (b) is a diagram showing the target collision rate in the case where the action plan setting of the collision rate map of (a) is changed, and (c) is a diagram showing the target collision rate in the case where the sensor performance of the collision rate map of (a) is degraded.

Fig. 18 is a diagram for explaining a method of using reliability (detection rate) of a plurality of sensors in the vehicle control device according to the above-described embodiment, (a) is a diagram showing an example in the case of detecting a plurality of sensors by AND, (b) is a diagram showing an example in the case of detecting a plurality of sensors by OR, AND (c) is a diagram showing an example in the case of performing braking in accordance with the detection state of a plurality of sensors.

Fig. 19 is a state transition diagram for explaining the change of the collision rate and the anxiety rate obtained by the algorithm α in the vehicle control device according to the above embodiment.

Fig. 20 is a diagram showing a specific example of the probability of a state transition caused by the action shown in fig. 19.

Fig. 21 is a diagram for explaining a state transition process for an action of the vehicle control device according to the above embodiment.

Fig. 22 is a diagram illustrating a change in the detection rate in the state transition process for an action in fig. 21, (a) is a diagram illustrating an example of a transition from the initial state without decreasing the speed in the state transition process in fig. 21, (b) is a diagram illustrating an example of a transition from the initial state with slightly decreasing the speed in the state transition process in fig. 21, and (c) is a diagram illustrating an example of a transition from the initial state with decreasing the speed in the state transition process in fig. 21.

Fig. 23 is a diagram illustrating a relationship between the state transition process for action and the algorithm of fig. 21.

Fig. 24 is a diagram for explaining the continuity processing of the collision rate based on the algorithm α in the vehicle control device according to the above embodiment.

Fig. 25 is a schematic diagram showing an outline of an action pattern of the vehicle control device according to the above embodiment.

Fig. 26 is a schematic diagram showing the action plan of fig. 25 and the result of the cruise action performed when no obstacle is detected, as a matrix of obstacle detection versus non-detection and obstacle presence versus absence.

Fig. 27 is a schematic diagram illustrating AND detection in the behavior scheme of the vehicle control device according to the above embodiment that utilizes redundancy of two sensors.

Fig. 28 is a schematic diagram illustrating OR detection in an action scheme utilizing redundancy of two sensors in the vehicle control device of the above embodiment.

Fig. 29 is a schematic diagram illustrating HALF AND detection in a redundant action scheme using two sensors in the vehicle control device according to the above embodiment.

Fig. 30 is a state transition diagram of a collision occurrence state of the vehicle control device of the above embodiment.

Fig. 31 is a diagram for explaining a collision rate obtaining method of the vehicle control device according to the above-described embodiment, where (a) is a diagram showing a target velocity value curve obtained based on an action pattern after the collision rate is introduced, (b) is a diagram showing a collision rate at which the velocity is 0, (c) is a diagram showing an error distribution of the distance detection unit, and (d) is a diagram showing a detection rate of the distance detection unit.

Fig. 32 is a state transition diagram showing a collision occurrence process in fig. 31.

Fig. 33 is a diagram showing a grid in which a collision rate map of the vehicle control device according to the above-described embodiment is expressed in a two-dimensional space of a position and a speed.

Fig. 34 is a diagram for explaining conditions applied to the end portions of the grids in the collision rate map of the vehicle control device according to the above embodiment.

Fig. 35 is a diagram illustrating the collision probability obtained in order from the end portions of the meshes of the collision probability map of the vehicle control device of the above embodiment.

Fig. 36 is a diagram for explaining a method of obtaining the collision probability in an approximate manner from the next state of the vehicle control device of the above embodiment.

Fig. 37 is a diagram for explaining a method of obtaining the collision probability in an approximate manner from the next state of the vehicle control device of the above embodiment.

Fig. 38 is a diagram for explaining a method of obtaining the collision probability in an approximate manner from the next state of the vehicle control device of the above embodiment.

Description of the reference numerals

1 vehicle

20 Detector

30 Radar (sensor) (distance detecting part)

40 camera (sensor) (distance detection part)

50 navigation device

55 communication device

60 vehicle sensor (sensor) (distance detector)

70 HMI

100 vehicle control device

110 target lane determining part

120 automatic driving control part (distance detection part)

130 automatic driving mode control part

140 identification part

141 vehicle position recognition unit

142 outside recognition part

143 human detector

144 AI accelerator

145 line generating unit

150 switching control part

160 running control part (vehicle behavior control part)

170 HMI control unit

180 storage part

200 action plan generating part

210 target collision rate setting unit

220 instruction value achievement probability density distribution estimation unit

230 instruction value deviation amount calculation section (deviation amount calculation section)

240 fusion accuracy reliability calculating part

250 collision rate map setting unit

300 travel driving force output device

310 steering device

320 brake device

1000 collision rate map

1001 instruction value achievement probability density distribution

1002 instruction value deviation amount

1003 probability distribution based on fusion accuracy reliability

1010 collision rate map storage unit

DD detecting device (sensor) (distance detecting part)

Detailed Description

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.

(embodiment mode)

Fig. 1 is a diagram showing an overall configuration of a vehicle including a vehicle control device 100 according to an embodiment of the present invention. The vehicle 1 on which the vehicle control device 100 of the present embodiment is mounted (hereinafter referred to as the host vehicle 1) is, for example, a two-wheeled, three-wheeled, or four-wheeled vehicle, such as an automobile using an internal combustion engine such as a diesel or gasoline engine as a power source, an electric automobile using an electric motor as a power source, or a hybrid vehicle having both an internal combustion engine and an electric motor. An electric automobile is driven using electric power discharged from a battery such as a secondary battery, a hydrogen fuel cell, a metal fuel cell, or an alcohol fuel cell.

< host vehicle 1 >

As shown in fig. 1, the vehicle 1 is mounted with a sensor (distance detection unit) such as a probe 20, a radar 30, and a camera 40, a navigation device 50, and a vehicle control device 100.

The detector 20 is, for example, a LIDAR (Light Detection and Ranging, or Laser Imaging Detection and Ranging) that measures the distance to a target by measuring scattered light with respect to irradiated light. For example, two detectors 20 are provided at positions spaced apart from each other at the front, and three at the rear (five in total at the front and rear).

For example, three radars 30 are provided at the front and two at the rear (five in total at the front and rear). The radar 30 detects an object by, for example, an FM-CW (Frequency Modulated Continuous Wave) method.

The camera 40 is a digital camera using an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor. The camera 40 is mounted on the upper portion of the front windshield and/or the rear surface of the interior mirror, etc. The camera 40 periodically and repeatedly photographs the area in front of the host vehicle 1. In this example, two monocular cameras are arranged side by side. The camera 40 may also be a binocular camera.

The configuration shown in fig. 1 is merely an example, and a part of the configuration may be omitted, or another configuration may be added.

< vehicle control device 100 >

Fig. 2 is a functional configuration diagram centering on the vehicle control device 100 of the present embodiment.

As shown in fig. 2, the vehicle 1 is mounted with a detection device DD (sensor) (distance detection unit) including the detector (finder) 20, the radar 30, the camera 40, and the like, the navigation device 50, a communication device 55, a vehicle sensor 60 (sensor) (distance detection unit), an HMI (Human Machine Interface) 70, the vehicle control device 100, a driving force output device 300, a steering device 310, and a brake device 320. These apparatuses and devices are connected to one another through a multiplex communication line such as a CAN (Controller Area Network) communication line, a serial communication line, a wireless communication network, or the like. The vehicle control device according to the present embodiment is not limited to the "vehicle control device 100", and may include configurations other than the vehicle control device 100 (the detection device DD, the HMI 70, and the like).

< navigation device 50 >

The navigation device 50 includes a GNSS (Global Navigation Satellite System) receiver, map information (navigation map), a touch-panel display device functioning as a user interface, a speaker, a microphone, and the like. The navigation device 50 specifies the position of the host vehicle 1 with the GNSS receiver, and derives a route from that position to a destination designated by the user. The route derived by the navigation device 50 is supplied to a target lane determining unit 110 (described later) of the vehicle control device 100. The position of the host vehicle 1 may be determined or supplemented by an INS (Inertial Navigation System) that uses the output of the vehicle sensor 60. In addition, when the vehicle control device 100 executes the manual driving mode, the navigation device 50 guides the route to the destination by voice and navigation display.

Further, the configuration for determining the position of the own vehicle 1 may be provided independently of the navigation device 50. The navigation device 50 can be realized by the function of a terminal device such as a smartphone or a tablet terminal held by the user. In this case, information is transmitted and received between the terminal device and the vehicle control device 100 through wireless or wired communication.

< communication device 55 >

The communication device 55 performs wireless communication using, for example, a cellular network, a Wi-Fi network, Bluetooth (registered trademark), DSRC (Dedicated Short Range Communications), or the like. The communication device 55 wirelessly communicates with, for example, an information providing server of a system that monitors road traffic conditions, such as VICS (registered trademark) (Vehicle Information and Communication System), and acquires information (hereinafter referred to as traffic information) indicating the traffic conditions of the road on which the host vehicle 1 is traveling and of roads on which it is scheduled to travel. The traffic information includes information such as congestion ahead, the required time through a congested point, accidents, disabled vehicles, construction, speed limits, lane restrictions, the positions of parking lots, and the full/vacant status of parking lots, service areas, and parking areas. The communication device 55 may acquire the traffic information by communicating with a wireless beacon installed at the roadside or the like, or by vehicle-to-vehicle communication with another vehicle traveling around the host vehicle 1. The communication device 55 is an example of a "communication unit" that acquires congestion information.

< vehicle sensor 60 >

The vehicle sensor 60 includes a vehicle speed sensor that detects the vehicle speed, an acceleration sensor that detects acceleration, a gyro sensor that detects the angular velocity around a vertical axis, an orientation sensor that detects the orientation of the host vehicle 1, and the like. In the present specification, the vehicle sensor 60 is sometimes referred to simply as a "sensor" in the description of the method of designing and calculating the action plan.

<HMI70>

Fig. 3 is a configuration diagram of the HMI 70.

As shown in fig. 3, the HMI70 has a configuration of a driving operation system and a configuration of a non-driving operation system. The boundary between the two is not strict, and a configuration of the driving operation system may have functions of the non-driving operation system (or vice versa). The navigation device 50 and the HMI70 described above are examples of the "output unit".

The HMI70 includes, as a configuration of a driving operation system, an accelerator pedal 71, an accelerator opening degree sensor 72, an accelerator pedal reaction force output device 73, a brake pedal 74, a brake depression amount sensor (or a master pressure sensor or the like) 75, a shift lever 76, a shift position sensor 77, a steering wheel 78, a steering angle sensor 79, a steering torque sensor 80, and other driving operation devices 81.

The accelerator pedal 71 is an operation member for receiving an acceleration instruction (or a deceleration instruction by a return operation) from a vehicle occupant. The accelerator opening sensor 72 detects the amount of depression of the accelerator pedal 71, and outputs an accelerator opening signal indicating the amount of depression to the vehicle control device 100.

The accelerator opening signal may be output directly to the travel driving force output device 300, the steering device 310, or the brake device 320, instead of to the vehicle control device 100. The same applies to the other driving operation systems described below. The accelerator pedal reaction force output device 73 outputs a force (operation reaction force) in the direction opposite to the operation direction to the accelerator pedal 71, for example, in accordance with an instruction from the vehicle control device 100.

The brake pedal 74 is an operation member for receiving a deceleration instruction from a vehicle occupant. The brake depression amount sensor 75 detects the depression amount (or depression force) of the brake pedal 74, and outputs a brake signal indicating the detection result to the vehicle control device 100.

The shift lever 76 is an operation member for receiving an instruction to change the shift speed from a vehicle occupant. The shift position sensor 77 detects a shift position instructed by a vehicle occupant, and outputs a shift position signal indicating the detection result to the vehicle control device 100.

The steering wheel 78 is an operation member for receiving a turning instruction by a vehicle occupant. The steering angle sensor 79 detects the operation angle of the steering wheel 78, and outputs a steering angle signal indicating the detection result to the vehicle control device 100. The steering torque sensor 80 detects torque applied to the steering wheel 78, and outputs a steering torque signal indicating the detection result to the vehicle control device 100.

The other driving operation devices 81 are, for example, a joystick, a button, a dial switch, a GUI (Graphical User Interface) switch, and the like. The other driving operation devices 81 receive an acceleration instruction, a deceleration instruction, a turning instruction, and the like, and output the received instructions to the vehicle control device 100.

The HMI70 includes, as a configuration of a non-driving operation system, for example, a display device 82, a speaker 83, a contact operation detection device 84, a content playback device 85, various operation switches 86, a seat 88, a seat drive device 89, a window glass 90, and a window drive device 91.

The display device 82 is, for example, an LCD (Liquid Crystal Display) or organic EL (Electroluminescence) display device attached to respective portions of the instrument panel, or to arbitrary positions facing the front passenger seat or rear seats. The display device 82 may be a HUD (Head Up Display) that projects an image onto the front windshield or another window. The speaker 83 outputs voice. When the display device 82 is a touch panel, the contact operation detection device 84 detects a contact position (touch position) on the display screen of the display device 82 and outputs the detected position to the vehicle control device 100. When the display device 82 is not a touch panel, the contact operation detection device 84 may be omitted.

The content playback device 85 includes, for example, a DVD (Digital Versatile Disc) playback device, a CD (Compact Disc) playback device, a television receiver, a device for generating various guide images, and the like. The display device 82, the speaker 83, the contact operation detection device 84, and the content playback device 85 may be partly or entirely shared with the navigation device 50.

The various operation switches 86 are disposed at arbitrary positions in the vehicle interior. The various operation switches 86 include an automatic driving changeover switch 87 that instructs the start (or future start) and stop of automatic driving. The automatic driving changeover switch 87 may be either a GUI (Graphical User Interface) switch or a mechanical switch. In addition, the various operation switches 86 may also include switches for driving the seat driving device 89 and the window driving device 91.

The seat 88 is a seat on which a vehicle occupant sits. The seat driving device 89 freely drives the reclining angle, the front-rear direction position, the tilt angle, and the like of the seat 88. The window glass 90 is provided in each door, for example. The window drive device 91 drives the window glass 90 to open and close.

The in-vehicle camera 95 is a digital camera using a solid-state imaging device such as a CCD or a CMOS. The in-vehicle camera 95 is attached at a position from which at least the head of the vehicle occupant who performs the driving operation can be imaged, such as the rearview mirror, the steering column, or the instrument panel. The in-vehicle camera 95 repeatedly photographs the vehicle occupant, for example periodically.

Returning to fig. 2, the vehicle control device 100 is realized by, for example, one or more processors or hardware having equivalent functions. The vehicle control device 100 may be configured as an ECU (Electronic Control Unit) or MPU (Micro-Processing Unit) in which a processor such as a CPU (Central Processing Unit), a storage device, and a communication interface are connected via an internal bus.

The vehicle control device 100 includes a target lane determining unit 110, an automatic driving control unit 120 (distance detecting unit), an automatic driving mode control unit 130, a recognition unit 140, a switching control unit 150, a travel control unit 160 (vehicle behavior control unit), an HMI control unit 170, and a storage unit 180.

Some or all of the target lane determining unit 110, the respective units of the automatic driving control unit 120, and the travel control unit 160 are realized by a processor executing a program (software). Some or all of them may be realized by hardware such as an LSI (Large Scale Integration) or an ASIC (Application Specific Integrated Circuit), or by a combination of software and hardware.

Hereinafter, when an "XX unit" is described as the subject of an operation, the automatic driving control unit 120 reads the corresponding program from ROM or EEPROM (Electrically Erasable Programmable Read-Only Memory) as necessary, loads it into RAM, and executes the corresponding function (described later). Each program may be stored in the storage unit 180 in advance, or may be loaded into the vehicle control device 100 via another storage medium or a communication medium as needed.

< target lane determining part 110 >

The target lane determining unit 110 is realized by, for example, an MPU (Micro Processing Unit). The target lane determining unit 110 divides the route provided from the navigation device 50 into a plurality of blocks (for example, every 100 meters in the vehicle traveling direction), and determines a target lane for each block with reference to the high-accuracy map information 181. The target lane determining unit 110 determines, for example, that the vehicle should travel in the second lane from the left. When a branch point, a merge point, or the like exists on the route, the target lane determining unit 110 determines the target lane so that the host vehicle 1 can travel along a reasonable travel route toward the branch destination. The target lane determined by the target lane determining unit 110 is stored in the storage unit 180 as the target lane information 182.
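The block-wise target-lane determination described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the block length matches the 100 m example, but the function names, the lane-index convention, and the one-block look-ahead before a branch are assumptions.

```python
# Hypothetical sketch: divide the route into fixed-length blocks and choose a
# target lane per block, moving toward the branch lane ahead of a branch point.
BLOCK_LENGTH_M = 100.0

def split_into_blocks(route_length_m, block_length_m=BLOCK_LENGTH_M):
    """Return (start, end) tuples covering the route in the travel direction."""
    blocks = []
    start = 0.0
    while start < route_length_m:
        end = min(start + block_length_m, route_length_m)
        blocks.append((start, end))
        start = end
    return blocks

def decide_target_lanes(route_length_m, branch_points):
    """Pick a target lane per block; lane 0 when a branch is within one block ahead."""
    target = []
    for start, end in split_into_blocks(route_length_m):
        near_branch = any(start <= p < end + BLOCK_LENGTH_M for p in branch_points)
        target.append(0 if near_branch else 1)  # 1 = second lane from the left
    return target

lanes = decide_target_lanes(350.0, branch_points=[320.0])
print(lanes)  # the final blocks steer toward the branch lane
```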

< automatic driving control unit 120 (distance detection unit) >

The automated driving control unit 120 includes an automated driving mode control unit 130, a recognition unit 140, and a switching control unit 150.

< automatic driving mode control part 130 >

The automated driving mode control unit 130 determines the automated driving mode based on the operation of the HMI70 by the vehicle occupant, the event determined by the action plan generating unit 200, the travel pattern determined by the route generating unit 145, and the like. The automatic driving mode is notified to the HMI control unit 170. In addition, a limit may be set for the automatic driving mode according to the performance of the detection device DD (sensor) of the host vehicle 1.

In any of the automatic driving modes, the mode can be switched to the manual driving mode (override) by operating the driving operation system of the HMI70. For example, the override is started when the vehicle occupant of the host vehicle 1 continues an operation of the driving operation system of the HMI70 for a predetermined time or more, when an operation change amount equal to or greater than a predetermined amount occurs (for example, the accelerator opening of the accelerator pedal 71 (described later), the brake depression amount of the brake pedal 74 (described later), or the steering angle of the steering wheel 78 (described later)), or when the driving operation system is operated a predetermined number of times or more.
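The three override conditions above can be sketched as a simple predicate. The threshold values and function names below are illustrative assumptions; the embodiment only states that "predetermined" thresholds exist.

```python
# Hedged sketch of the override decision: any one of the three conditions
# (continued operation, large operation change, repeated operations) triggers
# a switch from automatic to manual driving. Thresholds are assumed values.
DURATION_S = 2.0        # continued-operation time threshold (assumed)
CHANGE_AMOUNT = 0.3     # normalized operation change amount (assumed)
OPERATION_COUNT = 3     # number of discrete operations (assumed)

def should_override(op_duration_s, op_change, op_count):
    return (op_duration_s >= DURATION_S
            or op_change >= CHANGE_AMOUNT
            or op_count >= OPERATION_COUNT)

print(should_override(0.5, 0.35, 1))  # True: large operation change amount
print(should_override(0.5, 0.10, 1))  # False: no condition met
```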

< identification part 140 >

The recognition unit 140 includes a vehicle position recognition unit 141, an external recognition unit 142, a person detection unit 143 (detection unit), an AI (Artificial Intelligence) Accelerator 144 (detection unit), an action plan generation unit 200, and a route generation unit 145.

< vehicle position recognition unit 141 >

The vehicle position recognition unit 141 recognizes the lane in which the host vehicle 1 is traveling (traveling lane) and the relative position of the host vehicle 1 with respect to the traveling lane, based on the high-accuracy map information 181 stored in the storage unit 180 and on information input from the detector 20, the radar 30 (sensor), the camera 40 (sensor), the navigation device 50, or the vehicle sensor 60 (sensor).

The vehicle position recognition unit 141 recognizes the traveling lane by comparing the pattern of road demarcation lines (for example, the arrangement of solid lines and broken lines) recognized from the high-accuracy map information 181 with the pattern of road demarcation lines around the host vehicle 1 recognized from the image captured by the camera 40. The position of the host vehicle 1 acquired from the navigation device 50 and the processing result of the INS may also be taken into account in this recognition.

< external recognition part 142 >

Returning to fig. 2, the external recognition unit 142 recognizes the position, speed, acceleration, and other states of nearby vehicles based on information input from the detector 20, the radar 30, the camera 40, and the like. A nearby vehicle is, for example, a vehicle traveling around the host vehicle 1 in the same direction as the host vehicle 1. The position of a nearby vehicle may be represented by a representative point such as the center of gravity or a corner of the other vehicle, or by a region expressed by the outline of the other vehicle. The "state" of a nearby vehicle may include its acceleration and whether it is changing lanes (or is about to change lanes), grasped based on the information from the various devices described above. The external recognition unit 142 may also recognize the positions of objects other than nearby vehicles, such as guardrails, utility poles, parked vehicles, and pedestrians.

< human detection part 143 >

The person detection unit 143 detects a person from the image captured by the camera 40. Specifically, the person detection unit 143 detects a specific object (a person, a bicycle, or the like) in a specific area using the AI accelerator 144. The person detection unit 143 issues a person detection request to the AI accelerator 144; the AI accelerator 144 executes the AI computation outside the CPU and returns the person detection result to the person detection unit 143. The AI accelerator 144 is used for person detection because high speed is required. However, a configuration that does not use the AI accelerator 144 is also possible.

For convenience of explanation, the person detection unit 143 is described separately from the camera 40 and the external recognition unit 142. However, as long as the specific object can be detected, it may be an image processing unit that extracts a person or the like from the image captured by the camera 40, or an image processing unit within the external recognition unit 142 that recognizes and detects a person or the like from the outline in the image. In that case, the person detection unit 143 is removed from the recognition unit 140 in fig. 2.

As will be described later, the VICS information obtained from the communication device 55 can be used to further improve the recognition probability of the person detected by the person detection unit 143.

< AI accelerator 144 >

The AI accelerator 144 is a dedicated processor for detecting a person, and uses a computing resource other than the CPU. The AI accelerator 144 is, for example, an accelerator that performs image processing with a processor enhanced by a GPU (Graphics Processing Unit) and signal processing using an FPGA (Field Programmable Gate Array). The AI accelerator 144 performs the AI computation on dedicated hardware (e.g., a GPU).

< route generation unit 145 >

When executing the lane keeping event, the route generation unit 145 determines any one of the traveling patterns such as constant speed traveling, follow-up traveling, low speed follow-up traveling, deceleration traveling, curve traveling, and obstacle avoidance traveling, and generates a route candidate based on the determined traveling pattern.

The route generation unit 145 evaluates the generated route candidates from two viewpoints, for example planning and safety, and selects the route to be output to the travel control unit 160. From the viewpoint of planning, a route is evaluated highly when, for example, it follows the already generated plan (e.g., the action plan) well and its total length is short. For example, when a lane change to the right is desired, a route that temporarily changes lanes to the left and then returns is evaluated low. From the viewpoint of safety, for example, at each route point, the evaluation is higher as the distance between the host vehicle 1 and an object (a surrounding vehicle or the like) is longer and as the amounts of change in acceleration/deceleration, steering angle, and the like are smaller.
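The two-viewpoint evaluation above can be illustrated with a simple scoring function. The weights and score terms below are assumptions for illustration only, not the evaluation formula of the embodiment.

```python
# Illustrative route-candidate scoring: planning favors plan-following, short
# routes; safety favors large obstacle clearance and small control changes.
def plan_score(follows_plan, route_length_m):
    # higher when the candidate follows the existing plan and is short
    return (10.0 if follows_plan else 0.0) - 0.01 * route_length_m

def safety_score(min_obstacle_gap_m, max_accel_change, max_steer_change):
    # higher with larger clearance and smaller acceleration/steering changes
    return min_obstacle_gap_m - 5.0 * max_accel_change - 5.0 * max_steer_change

def select_route(candidates):
    """candidates: list of dicts with the fields used below (assumed schema)."""
    def total(c):
        return (plan_score(c["follows_plan"], c["length_m"])
                + safety_score(c["gap_m"], c["d_accel"], c["d_steer"]))
    return max(candidates, key=total)

best = select_route([
    {"name": "keep", "follows_plan": True, "length_m": 200,
     "gap_m": 8, "d_accel": 0.1, "d_steer": 0.1},
    {"name": "detour-left", "follows_plan": False, "length_m": 260,
     "gap_m": 9, "d_accel": 0.4, "d_steer": 0.5},
])
print(best["name"])  # "keep": plan-following outweighs the slight clearance gain
```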

< switching control part 150 >

Returning to fig. 2, the switching control unit 150 switches between the automatic driving mode and the manual driving mode based on, for example, a signal input from the automatic driving changeover switch 87 (see fig. 3). Further, the switching control unit 150 switches from the automatic driving mode to the manual driving mode based on an operation instructing acceleration, deceleration, or steering performed on the driving operation system of the HMI70. For example, when a state in which the operation amount indicated by a signal input from the driving operation system of the HMI70 exceeds a threshold value continues for a reference time or longer, the switching control unit 150 switches from the automatic driving mode to the manual driving mode (override).

Further, after switching to the manual driving mode by the override, the switching control unit 150 may return to the automatic driving mode when no operation of the driving operation system of the HMI70 is detected for a predetermined time. For example, when a handover from the automatic driving mode to the manual driving mode is to be performed at a scheduled end point of automated driving, the switching control unit 150 outputs information to the HMI control unit 170 so as to notify the vehicle occupant of the handover request in advance.

< travel control unit 160 >

The travel control unit 160 controls the travel driving force output device 300, the steering device 310, and the brake device 320 so that the host vehicle 1 passes through the route generated by the route generation unit 145 at a predetermined timing.

The travel control unit 160 functions as a vehicle behavior control unit that controls at least the speed of the vehicle based on the behavior pattern.

< HMI control part 170 >

When the information of the automatic driving mode is notified from the automatic driving control unit 120, the HMI control unit 170 refers to the mode type operation availability information 184 (see fig. 6 described later), and controls the HMI70 according to the type of the automatic driving mode.

The HMI control unit 170 refers to the mode type operation availability information 184 based on the information of the mode acquired from the automated driving control unit 120, and thereby determines the devices permitted to be used (a part or all of the navigation device 50 and the HMI 70) and the devices not permitted to be used. The HMI control unit 170 controls whether or not an operation from the vehicle occupant to the HMI70 of the non-driving operation system or the navigation device 50 is accepted based on the determination result.

For example, when the driving mode executed by the vehicle control device 100 is the manual driving mode, the vehicle occupant operates the driving operation system (for example, the accelerator pedal 71, the brake pedal 74, the shift lever 76, the steering wheel 78, and the like) of the HMI70 (see fig. 3). In addition, when the driving mode executed by the vehicle control device 100 is the automatic driving mode, the vehicle occupant has a surrounding monitoring obligation of the host vehicle 1.

In this case, in order to prevent distraction due to actions other than driving by the vehicle occupant (for example, operation of the HMI70), the HMI control unit 170 performs control so that operations on part or all of the non-driving operation system of the HMI70 are not accepted. In that case, to support monitoring of the surroundings of the host vehicle 1, the HMI control unit 170 may cause the display device 82 (see fig. 3) to display, by images or the like, the presence and state of vehicles around the host vehicle 1 recognized by the external recognition unit 142, and may cause the HMI70 to receive confirmation operations according to the scene while the host vehicle 1 is traveling.

Further, when the driving mode is the automatic driving mode, the HMI control unit 170 relaxes the restriction against driver distraction and performs control to accept operations by the vehicle occupant on the non-driving operation system for which operations were previously not accepted. For example, the HMI control unit 170 causes the display device 82 to display video, causes the speaker 83 (see fig. 3) to output voice, and causes the content playback device 85 (see fig. 3) to play back content from a DVD or the like. The content played back by the content playback device 85 may include, in addition to content stored on a DVD or the like, various kinds of content related to entertainment, such as television programs. The "content playback operation" shown in fig. 6 may mean a content operation relating to such entertainment.

< storage part 180 >

The storage unit 180 stores information such as high-precision map information 181, target lane information 182, action plan information 183, and mode type operation availability information 184. The storage unit 180 is implemented by ROM (Read Only Memory), RAM (Random Access Memory), an HDD (Hard Disk Drive), flash memory, or the like. The program executed by the processor may be stored in the storage unit 180 in advance, or may be downloaded from an external device via an in-vehicle internet facility or the like. The program may also be installed in the storage unit 180 by mounting a portable storage medium storing the program in a drive device (not shown).

The high-precision map information 181 is map information with higher precision than the navigation map of the navigation device 50. The high-precision map information 181 includes, for example, information on the center of each lane, information on lane boundaries, and the like. The lane boundary information includes the type, color, and length of lane markers, the road width, shoulder width, main line width, lane width, boundary positions, boundary types (guardrail, vegetation, curb), zebra zones, and the like, and these are included in the high-precision map.

The high-accuracy map information 181 may include road information, traffic control information, address information (address and zip code), facility information, telephone number information, and the like. The road information includes information indicating a road type such as an expressway, a toll road, a national road, and a provincial road, information such as the number of lanes of the road, the width of each lane, the gradient of the road, the position of the road (including three-dimensional coordinates of the longitude, the latitude, and the height), the curvature of a curve of the lane, the position of a confluence point and a branch point of the lane, and a marker provided on the road. The traffic control information includes information on lane closure due to construction, traffic accident, congestion, and the like.

Action plan creation section 200

Basic action scheme

The action plan generating unit 200 sets a start point of the automated driving and/or a destination of the automated driving. The starting point of the automated driving may be the current position of the host vehicle 1 or may be a point indicated by an operation for instructing the automated driving. The action plan generating unit 200 generates an action plan in a section between the start point and the destination of the automated driving. Further, the action plan generating unit 200 may generate an action plan for an arbitrary section.

An action plan is composed of, for example, a plurality of events executed in sequence. Examples of events include: a deceleration event for decelerating the host vehicle 1; an acceleration event for accelerating the host vehicle 1; a lane keeping event in which the host vehicle 1 travels without departing from its traveling lane; a lane change event for changing the traveling lane; an overtaking event in which the host vehicle 1 overtakes a preceding vehicle; a branch event in which the host vehicle 1 travels at a branch point so as to change to the desired lane or so as not to leave the current traveling lane; a merge event in which the host vehicle 1 is accelerated or decelerated in a merging lane and its traveling lane is changed so as to merge into a main line; and a handover event in which the driving mode is shifted from the manual driving mode to the automatic driving mode at the start point of automated driving, or from the automatic driving mode to the manual driving mode at a scheduled end point of automated driving.
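The event sequence above can be sketched as a minimal data model: an action plan is simply an ordered list of events. The enum members mirror the event types enumerated above; the structure itself is an illustrative assumption.

```python
# Minimal sketch: an action plan as an ordered sequence of events.
from enum import Enum, auto

class Event(Enum):
    HANDOVER = auto()       # manual -> automatic (or back at the end point)
    LANE_KEEP = auto()
    LANE_CHANGE = auto()
    ACCELERATION = auto()
    DECELERATION = auto()
    OVERTAKE = auto()
    BRANCH = auto()
    MERGE = auto()

# Events are executed in sequence to form one action plan.
action_plan = [Event.HANDOVER, Event.LANE_KEEP, Event.MERGE, Event.LANE_KEEP]
print([e.name for e in action_plan])
```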

The action plan generating unit 200 sets a lane change event, a branch event, or a merge event at positions where the target lane determined by the target lane determining unit 110 switches. Information indicating the action plan generated by the action plan generating unit 200 is stored in the storage unit 180 as action plan information 183 (described later).

Action plan creation section 200

Action plan of the vehicle based on the collision rate

Fig. 4 is a block diagram of the action plan generating section 200.

The action plan generating unit 200 obtains a collision rate indicating the possibility of a collision between an obstacle and the vehicle based on the detected distance, and generates an action plan for the vehicle based on the collision rate. The action plan generating unit 200 generates the action plan for automated driving of the vehicle.

As shown in fig. 4, the action plan generating unit 200 includes: a collision rate map storage unit 1010 that stores a collision rate map 1000 (see fig. 14), the collision rate map 1000 being a two-dimensional map that represents the distribution of the collision rate in the two-dimensional space of position and speed; a target collision rate setting unit 210; an instruction value achievement probability density distribution estimation unit 220; an instruction value deviation amount calculation unit 230 (deviation amount calculation unit); a fusion accuracy reliability estimating unit 240; and a collision rate map setting unit 250.

The action plan generating unit 200 generates an action plan that maintains the set speed of automated driving while permitting sudden braking in a low collision rate region, where the collision rate on the collision rate map 1000 is lower than the target collision rate and below a predetermined threshold, and generates an action plan of preliminary braking, in which sudden braking is avoided by repeating short braking, in a high collision rate region, where the collision rate on the collision rate map 1000 is lower than the target collision rate but above the predetermined threshold.
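The region-dependent braking policy above can be sketched as a three-way decision. The threshold value, function name, and the treatment of rates at or above the target are assumptions; the embodiment defines the regions via the collision rate map, which is not reproduced here.

```python
# Hedged sketch of the braking policy: low collision-rate region keeps the set
# speed (sudden braking remains permitted); high region issues repeated short
# preliminary braking; at or above the target collision rate, brake at once.
THRESHOLD = 0.4  # assumed boundary between low and high collision-rate regions

def braking_policy(collision_rate, target_rate):
    if collision_rate >= target_rate:
        return "emergency_brake"     # assumed handling above the target rate
    if collision_rate < THRESHOLD:
        return "keep_set_speed"      # low region: maintain automated-driving speed
    return "preliminary_brake"       # high region: repeat short braking

print(braking_policy(0.1, 0.9))  # keep_set_speed
print(braking_policy(0.6, 0.9))  # preliminary_brake
```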

< target collision rate setting unit 210 >

The target collision rate setting unit 210 sets a predetermined target collision rate.

< instruction value achievement probability density distribution estimation unit 220 >

The instruction value achievement probability density distribution estimation unit 220 estimates an instruction value achievement probability density distribution 1001, which is determined by the magnitude and certainty of the difference between the instruction value from the travel control unit 160 (vehicle behavior control unit) and the position at which the vehicle actually arrives (see fig. 10A).

< instruction value deviation amount calculation unit 230 >

The instruction value deviation amount calculation unit 230 calculates a deviation amount 1002 of the instruction value, which indicates the distance between the target stop position and the obstacle (see fig. 10A).

< fusion accuracy reliability estimating unit 240 >

The fusion accuracy reliability estimating unit 240 calculates a probability distribution 1003 (see fig. 10A) based on the fusion accuracy reliability of the recognition accuracy of the detection device DD (particularly, the distance detection sensors) (distance detection unit). The probability distribution 1003 based on the fusion accuracy reliability depends on the (distance) detection accuracy of the camera, the radar, and the like. It therefore also depends on the sensor redundancy described later, namely AND detection (see fig. 27) and OR detection (see fig. 28).

< collision rate map setting unit 250 >

The collision rate map setting unit 250 determines the collision rate map according to the following expression (1).

Travel driving force output device 300, steering device 310, and brake device 320

Returning to fig. 2, the vehicle control device 100 controls the travel driving force output device 300, the steering device 310, and the brake device 320.

< travel driving force output device 300 >

The travel driving force output device 300 outputs a travel driving force (torque) for driving the vehicle to the drive wheels. For example, when the host vehicle 1 is an automobile using an internal combustion engine as its power source, the travel driving force output device 300 includes an engine, a transmission, and an engine ECU (Electronic Control Unit) that controls the engine. When the host vehicle 1 is an electric automobile using an electric motor as its power source, the travel driving force output device 300 includes a travel motor and a motor ECU that controls the travel motor. When the host vehicle 1 is a hybrid vehicle, the travel driving force output device 300 includes the engine, the transmission, the engine ECU, the travel motor, and the motor ECU.

When the running drive force output device 300 includes only the engine, the engine ECU adjusts the throttle opening degree, the shift speed, and the like of the engine in accordance with information input from the running control unit 160 described later. When the running driving force output device 300 includes only the running motor, the motor ECU adjusts the duty ratio of the PWM signal supplied to the running motor in accordance with the information input from the running control unit 160. When the running driving force output device 300 includes an engine and a running motor, the engine ECU and the motor ECU control the running driving force in cooperation with each other in accordance with information input from the running control unit 160.

< steering device 310 >

The steering device 310 includes, for example, a steering ECU and an electric motor. The electric motor changes the orientation of the steered wheels by, for example, applying a force to a rack-and-pinion mechanism. The steering ECU drives the electric motor in accordance with information input from the vehicle control device 100, or with input information on the steering angle or the steering torque, and changes the direction of the steered wheels.

< brake device 320 >

The brake device 320 is, for example, an electric servo brake device having a brake caliper, a hydraulic cylinder for transmitting hydraulic pressure to the brake caliper, an electric motor for generating hydraulic pressure in the hydraulic cylinder, and a brake control unit. The brake control unit of the electric servo brake device controls the electric motor in accordance with information input from the travel control unit 160, and outputs a brake torque corresponding to a brake operation to each wheel. The electric servo brake device may have a mechanism for transmitting a hydraulic pressure generated by an operation of the brake pedal to the hydraulic cylinder via the master cylinder as a backup.

The brake device 320 is not limited to the electric servo brake device described above, and may be an electronically controlled hydraulic brake device. The electronically controlled hydraulic brake device controls the actuator in accordance with information input from the travel control unit 160, and transmits the hydraulic pressure of the master cylinder to the hydraulic cylinder. Further, the braking device 320 may include regenerative braking performed based on a running motor that the running driving force output device 300 may include.

< distance detecting part >

In the present embodiment, the automatic driving control unit 120 constitutes a distance detection unit together with the detection device DD and the vehicle sensor 60. The distance detection unit outputs detection information based on the information output from the detection device DD and the vehicle sensor 60. The detection information includes: the combination of sensors (sensor configuration) that detect the object, from among the plurality of sensors included in the detection device DD and the vehicle sensor 60; a detection time, i.e., the time for which detection by that combination of sensors has continued since it began; and a detection distance determined based on the output of each sensor detecting the object and on the detection time. The detection distance is determined during travel by referring to measurements made in advance, in which the object was detected by that combination of sensors in a situation where the true value of the distance to the object was known.
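The detection information described above can be sketched as a small data structure. The field names below are illustrative assumptions for this sketch, not identifiers from the actual device.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DetectionInfo:
    # Hypothetical sketch of the detection information described above.
    sensor_configuration: frozenset  # sensors currently detecting the object, e.g. {"Radar", "Camera"}
    detection_time: float            # seconds of continuous detection by this combination
    detection_distance: float        # distance [m] derived from the sensor outputs and detection time

info = DetectionInfo(frozenset({"Radar", "Camera"}),
                     detection_time=0.8, detection_distance=62.5)
print(sorted(info.sensor_configuration), info.detection_time, info.detection_distance)
```

A frozen dataclass is used here only to emphasize that a detection record is a snapshot: later detections produce new records rather than mutating old ones.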

The operation of the vehicle control device 100 configured as described above will be described below.

(description of the principle)

First, problems of the prior art will be described.

Fig. 5 to 8 are diagrams for explaining problems of the conventional technique.

< subject 1 >

The following problem exists when determining an action plan using a sensor: the recognition distance is insufficient, and it cannot be increased while the reliability of the sensor is kept high (problem 1).

Fig. 5 is a diagram showing a target speed value curve in the case where deceleration is performed up to the target stop position. Fig. 5 (a) is a graph showing the target speed value curve in the case of deceleration at 0.6G, with the horizontal axis showing the vehicle position and the vertical axis showing the speed. The thick solid line in fig. 5 (a) is the target speed value curve calculated when deceleration is performed at 0.6G, and the thick broken line in fig. 5 (a) is the target speed value curve when deceleration is started from a speed of 130 kph (error detection start) and is continued down to a speed of 50 kph (allowable error end). The vehicle position of speed 0 is the target stop position.

Fig. 5 (b) shows a detection rate p(D|E), which is the detection probability held by a sensor (for example, a vehicle sensor such as a vehicle speed sensor, or a detection device DD such as a radar or a camera) when determining the target speed value of fig. 5 (a), and fig. 5 (c) shows a false detection rate p(D|Ē) held by the sensor when determining the target speed value of fig. 5 (a). Here, E represents a state where an obstacle is present, and Ē represents a state where no obstacle is present. D represents a state in which an obstacle is detected.

As shown in fig. 5 (b) and (c), the tendency is that the false detection rate p(D|Ē) decreases as the detection rate p(D|E) rises.

Fig. 5 (d) is a diagram showing an example of a deceleration pattern of an action pattern in the case of automatic driving, and is determined by a predetermined algorithm taking the detection rate and the false detection rate into consideration.

Fig. 6 and 7 are diagrams illustrating behavior based on the reliability of the sensor in determining the target speed value curve of fig. 5. Fig. 6 is an example of a case where the determination of the action plan of fig. 5 is performed by setting the excessive detection side in which the detection value of the sensor is used up to a region with low reliability. Fig. 7 is an example of a case where the determination of the action plan of fig. 5 is performed by setting the detection value of the sensor in a region with high reliability.

The horizontal axes in fig. 6 and 7 indicate the vehicle position, D2 is about 120 to 160m, D1.5 is about 60 to 100m, and D1 is about 30 to 50 m. The vertical axes of fig. 6 and 7 show the target speed of the vehicle, V1.5 being about 50 to 70kph, and V2 being about 120 to 140 kph.

As shown in fig. 6, a case will be described where the driving environment condition of the automatic driving system is within the ODD (Operational Design Domain) and the vehicle can be driven at speed V2. If, in the determination of the action plan of fig. 5, the setting is made on the excessive detection side, in which the detection value of the sensor is used even in the region with low reliability, the detection distance (the distance Gen2 ← Gen1) is long, but, as indicated by the broken-line range a in fig. 6, false braking increases because the false detection rate held by the sensor is large. If the region with low reliability is used as it is, false braking increases and the commercial value decreases.

On the other hand, if the specification is set so that braking is performed only in the region where the reliability of the detection value of the sensor is high, the detection distance becomes short and, as shown by the × mark b in fig. 7, the speed does not reach V2 allowed in the ODD. In this way, if only the highly reliable region is used, the cruising speed needs to be reduced, and the ODD of V2 cannot be realized.

As described above, there is a problem that the recognition distance is insufficient and cannot be increased while the reliability of the sensor is kept high.

Fig. 8 is a diagram illustrating directionality intended to solve the problem in fig. 6 and 7.

The vehicle starts deceleration from a remote low reliability position within a range not affecting comfort, and stops reliably in a high reliability section. As shown by an arrow c in fig. 8, the braking is gradually increased according to the reliability.

The "reliability" of continuous detection is used proactively in the action plan. That is, by extending the stopping distance and starting braking from a region with low reliability, V2 can be reached.

To solve problem 1, the present invention provides a reliability calculation method and a concrete method for making the reliability cooperate with the action plan.

< subject 2 >

The following problems exist: the safety level cannot be clarified, the reliability (accuracy, detection rate) of the sensor cannot be quantified, and the accuracy of the action plan cannot be quantified (problem 2).

In order to solve problem 2, the present invention provides a formula capable of quantitatively evaluating the reliability (accuracy, detection rate) of a sensor and its action plan, so that a region which can be called safe is clarified.

Basic concept

The present invention introduces a "collision rate" into the action plan. That is, just as recognition has a "recognition rate", the action plan is considered to have a "collision rate", and this "collision rate" is introduced into the action plan.

Fig. 9 is an explanatory diagram for explaining the collision rate when a collision is prevented by decelerating to the forward traveling vehicle position (target stop position). Fig. 9 (a) shows a collision rate map for preventing a collision by decelerating to the front traveling vehicle position. In fig. 9 (a), the horizontal axis represents position and the vertical axis represents velocity. Fig. 9 (b) shows the collision rate p(C|S_t) of fig. 9 (a) by gradation expression, and fig. 9 (c) shows the peak of the collision rate at speed 0 in fig. 9 (a).

The collision rate p(C|S_t) shown in fig. 9 (a) is calculated based on the detection rate and accuracy of the sensor, the algorithm of the action plan, and the accuracy of the action plan. The broken-line range d in fig. 9 (a) indicates "no collision" as long as the vehicle travels at a position and a speed within the range.

The broken-line range e of fig. 9 (b) indicates "no collision" at the 10^-7 probability level.

As shown in fig. 9 (c), the collision rate p(C|S_t) at speed 0 is determined by the error standard deviation σ of the sensor (not shown). When the distance is 6σ, a collision probability of 10^-7 is obtained.
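Assuming a Gaussian sensor error, the relation between a margin expressed in units of σ and the resulting tail probability can be checked with the complementary error function; `tail_probability` is a hypothetical helper for this sketch, not part of the described device.

```python
import math

def tail_probability(k: float) -> float:
    """One-sided probability that a zero-mean Gaussian error exceeds k standard deviations."""
    return 0.5 * math.erfc(k / math.sqrt(2.0))

# With a margin of 6 sigma between the stop position and the obstacle,
# the Gaussian tail is below the 10^-7 collision probability mentioned above.
print(tail_probability(6.0) < 1e-7)  # prints True
```

This only shows that a 6σ margin is sufficient for the 10^-7 level under the Gaussian assumption; the actual error distribution is measured, as described later.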

An algorithm that comprehensively evaluates the sensor accuracy, the detection rate, and the action plan can also serve as a basis for judging how good a recognition algorithm is.

Fig. 10A is a diagram illustrating a basic concept of the present invention.

Collision rate and related techniques

< parameter of Collision Rate >

Fig. 10A shows a target velocity value curve (thick solid line) of the vehicle 1 when the vehicle 1 decelerates to the front traveling vehicle position (target stop position D) and collision avoidance is desired. The horizontal axis of fig. 10A represents the vehicle position and the vertical axis represents the speed. The vehicle 1 may not be stopped at the target stop position D for various reasons, and actually takes a position and a speed shown by an area of a dotted grid in fig. 10A.

As shown on the right side of fig. 10A, the collision rate f indicating the possibility that the vehicle 1 will collide with the preceding vehicle is determined by the following three parameters. That is, the parameters are an indication value achievement probability density distribution 1001, a deviation amount 1002 of the indication value, and a probability distribution 1003 obtained based on the fusion accuracy reliability. The indicated value achievement probability density distribution 1001 and the deviation 1002 of the indicated value are parameters related to the behavior pattern after the collision rate is introduced, and the probability distribution 1003 obtained based on the fusion accuracy reliability is a parameter indicating the recognition accuracy of the sensor. The outline of each parameter is described below.

The instruction value achievement probability density distribution 1001 is one of the three parameters that determine the collision rate. It indicates the magnitude and likelihood of the difference between the instruction value of the travel control unit 160 (vehicle behavior control unit) and the position where the host vehicle 1 actually arrives.

The deviation 1002 of the instruction value is one of three parameters that determine the collision rate.

The deviation amount 1002 of the instruction value is a parameter for determining how far on the near side of the preceding vehicle the target stop position should be set.

The probability distribution 1003 obtained based on the fusion accuracy reliability depends on the combination of sensors such as a camera, a radar, and a Lidar, the detection time, and the detection distance. The probability distribution 1003 obtained based on the fusion accuracy reliability depends on distance detection patterns such as redundancy of sensors, AND detection (see fig. 18 a AND 27 described later), OR detection (see fig. 18 b AND 28 described later), AND HALF AND detection (see fig. 18 c AND 29 described later).

Here, the relationship between the indication value achievement probability density distribution 1001, the fusion accuracy reliability, the probability distribution 1003 obtained based on the fusion accuracy reliability, the collision rate f, and the deviation amount 1002 of the indication value will be described in detail with reference to fig. 10A, 10B, 10C, and 12.

Instruction value achievement probability density distribution

The instruction value achievement probability density distribution 1001 indicates the distribution of the position at which the vehicle 1 actually stops, relative to the target stop position, when a certain deceleration is instructed to the vehicle 1. The instruction value achievement probability density distribution 1001 shown in fig. 10A indicates the distribution of the stop positions of the vehicle 1 when the vehicle 1 is intended to stop at the position D. The instruction value achievement probability density distribution 1001 is a characteristic measured in advance through a procedure of measuring the stop position when a certain deceleration is instructed to the vehicle 1.

The action plan generating unit 200 determines the allowable maximum deceleration in the action plan according to the automated driving mode and the like. Among the instruction value achievement probability density distributions measured in advance through the above-described procedure, the one measured at the same or approximately the same deceleration as this maximum deceleration is regarded as the instruction value achievement probability density distribution 1001 in the action plan to be executed.
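The selection of a pre-measured distribution by deceleration might be sketched as follows; the table values (mean offset and standard deviation in meters, keyed by commanded deceleration in G) and the nearest-key lookup are illustrative assumptions, not measured data.

```python
# Hypothetical pre-measured stop-position error distributions:
# commanded deceleration [G] -> (mean offset [m], standard deviation [m]).
MEASURED_DISTRIBUTIONS = {
    0.2: (0.0, 0.4),
    0.4: (0.0, 0.7),
    0.6: (0.0, 1.1),
}

def select_achievement_distribution(max_deceleration_g: float):
    """Pick the pre-measured distribution whose deceleration is closest to the
    maximum deceleration allowed by the current action plan."""
    key = min(MEASURED_DISTRIBUTIONS, key=lambda g: abs(g - max_deceleration_g))
    return MEASURED_DISTRIBUTIONS[key]

print(select_achievement_distribution(0.55))  # nearest entry is the 0.6 G distribution
```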

Fusion accuracy reliability

The fusion accuracy reliability is determined based on the combination of sensors (sensor configuration) that detect the object, from among the plurality of sensors, and on the detection time, i.e., the time for which detection by that combination of sensors has continued since it began. Generally, the greater the number of sensors detecting the object, or the longer the detection has continued, the higher the fusion accuracy reliability. The fusion accuracy reliability also differs depending on which combination of sensors detects the object.

Fig. 10B shows an example of fusion accuracy reliability for the combination of 3 sensors, Camera, Lidar and Radar, and the detection time thereof. In fig. 10B, the combination of sensors that detect the object is arranged such that the reliability increases as the vertical axis is higher, and the horizontal axis indicates the time during which the object is continuously detected by the sensors.

For example, when the object is detected for detection time D by sensor configuration a (Camera), the fusion accuracy reliability is Low; by sensor configuration f (Lidar + Camera), it is Mid; and by sensor configuration g (Radar + Lidar + Camera), it is High. The fusion accuracy reliability may be obtained by dividing the positive detection rate at each detection time, measured for each sensor configuration, into the three levels Low, Mid, and High based on predetermined thresholds, or the positive detection rate may be handled directly as a numerical value.
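A minimal sketch of such a level assignment, assuming an invented rule (more sensors in agreement and a longer continuous detection each raise the level) and an invented time threshold; the real mapping of fig. 10B is measured per sensor configuration.

```python
RELIABILITY_LEVELS = ("Low", "Mid", "High")

def fusion_reliability(sensor_configuration: frozenset, detection_time: float) -> str:
    """Assumed rule: each additional sensor raises the level by one, and a long
    continuous detection raises it by one more, capped at High."""
    level = len(sensor_configuration) - 1            # 1 sensor -> 0, 3 sensors -> 2
    if detection_time >= 1.0:                        # assumed threshold for a "long" detection
        level += 1
    return RELIABILITY_LEVELS[min(level, len(RELIABILITY_LEVELS) - 1)]

print(fusion_reliability(frozenset({"Camera"}), 0.2))                    # Low
print(fusion_reliability(frozenset({"Lidar", "Camera"}), 0.2))           # Mid
print(fusion_reliability(frozenset({"Radar", "Lidar", "Camera"}), 1.5))  # High
```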

Probability distribution obtained based on fusion accuracy reliability

The probability distribution obtained based on the fusion accuracy reliability indicates what distribution the detection distances measured by the plurality of sensors are with respect to the true value. Fig. 10C shows the relationship between the fusion accuracy reliability, the width of the probability distribution 1003 obtained based on the fusion accuracy reliability, the indicated value achievement probability density distribution 1001, the collision rate f, and the deviation amount 1002.

For example, focusing on the cases where the fusion accuracy reliability is Mid, the width of probability distribution 1003 obtained based on the fusion accuracy reliability is large when the dispersion of the detection distance relative to the true value is large, and small when that dispersion is small. The probability distribution 1003 obtained based on the fusion accuracy reliability and its width can be obtained in advance through a procedure of testing the error distribution of the detection distance in a state where the true value of the distance is known.

Focusing on the cases where the width of probability distribution 1003 obtained based on the fusion accuracy reliability is "medium", the height of probability distribution 1003 becomes high when the fusion accuracy reliability is High, and low when the fusion accuracy reliability is Low. The height of the probability distribution 1003 obtained based on the fusion accuracy reliability is determined so that the area of the distribution is approximately proportional to the detection rate of the sensor. The width and height of the probability distribution 1003 obtained based on the fusion accuracy reliability are characteristics obtained in advance through a procedure of measuring the detection rate, i.e., testing whether or not a detection target is detected in a situation where the presence or absence of the detection target is known.

Deviation amount of the instruction value

The collision rate f indicating the possibility of collision of the vehicle 1 with the preceding vehicle is determined by a relative positional relationship between the instruction value achievement probability density distribution 1001 and the probability distribution 1003 obtained based on the fusion accuracy reliability, and the deviation 1002 of the instruction value defines the relative positional relationship.

That is, the deviation amount 1002 of the instruction value indicates the difference between the peak positions of the two distributions, namely the instruction value achievement probability density distribution 1001 and the probability distribution 1003 obtained based on the fusion accuracy reliability, and the collision rate f is determined by this difference.

In other words, the deviation amount 1002 of the instruction value can be determined so as to attain the collision rate set as the target (target collision rate). The collision rate f can be obtained by convolving the instruction value achievement probability density distribution 1001 with the probability distribution 1003 obtained based on the fusion accuracy reliability when the two are offset by the deviation amount 1002, and the deviation amount 1002 can therefore be determined so that this collision rate matches the target collision rate. The nine graphs shown in fig. 10C represent how the deviation amount 1002 giving the same collision rate changes when the fusion accuracy reliability and the probability distribution 1003 obtained based on the fusion accuracy reliability vary for a given instruction value achievement probability density distribution 1001. The target collision rate is a fixed value predetermined at design time, determined by the detection rate of the detection object, the severity of a collision with the detection object, and the like.
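If both distributions are assumed Gaussian, the convolution described above reduces to a single Gaussian tail with combined variance, and the deviation amount for a target collision rate can be found by bisection. A sketch under that assumption, with invented standard deviations:

```python
import math

def collision_rate(offset_m: float, sigma_cmd: float, sigma_det: float) -> float:
    """Collision rate for Gaussian-shaped distributions: convolving the
    achievement distribution (std sigma_cmd) with the detection distribution
    (std sigma_det) at a given offset reduces to a single Gaussian tail."""
    sigma = math.hypot(sigma_cmd, sigma_det)
    return 0.5 * math.erfc(offset_m / (sigma * math.sqrt(2.0)))

def offset_for_target(target: float, sigma_cmd: float, sigma_det: float) -> float:
    """Bisect for the deviation amount whose collision rate equals the target."""
    lo, hi = 0.0, 100.0  # search range in meters
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if collision_rate(mid, sigma_cmd, sigma_det) > target:
            lo = mid
        else:
            hi = mid
    return hi

offset = offset_for_target(1e-7, sigma_cmd=1.1, sigma_det=0.8)
print(round(offset, 2), collision_rate(offset, 1.1, 0.8))
```

In the document's scheme this convolution is done off-line and stored in a table; the bisection here only illustrates how an offset and a target collision rate are linked.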

In fig. 10C, focusing on the cases where the fusion accuracy reliability is Mid, the deviation amount 1002 of the instruction value becomes small when the width of the probability distribution 1003 obtained based on the fusion accuracy reliability is narrow, and becomes large when that width is large. When the width of the probability distribution 1003 obtained based on the fusion accuracy reliability is medium, the deviation amount 1002 of the instruction value becomes large when the fusion accuracy reliability is High, and becomes small when the fusion accuracy reliability is Low.

Width of the probability distribution obtained based on fusion accuracy reliability

The width of the probability distribution 1003 obtained based on the fusion accuracy reliability varies depending on the sensor configuration (type and combination of sensors), the detection time, the distance from the object, and the like. In general, the width of probability distribution 1003 obtained based on the fusion accuracy reliability is "wide" when the distance from the object is long, and becomes "narrow" when the distance is close. When compared at the same distance from the object, the width is, for example, "wide" for sensor configuration a (Camera), "medium" for sensor configuration c (Radar), and "narrow" for sensor configuration d (Radar + Lidar). In general, the longer the detection time, the narrower the width.

Estimation of the width of a probability distribution based on fusion accuracy reliability during travel

As described above, the width of the probability distribution 1003 obtained based on the fusion accuracy reliability can be obtained in advance through a flow of testing what the error distribution of the probe distance is in a state where the true value of the distance is known. That is, the width of probability distribution 1003 obtained based on the fusion accuracy reliability can be obtained for each combination of the sensor configuration (type and combination of sensors), the detection time, and the detection distance. Further, a table (not shown) having the sensor configuration (type and combination of sensors), the detection time, and the detection distance as inputs and the width of the probability distribution 1003 obtained based on the fusion accuracy reliability as an output can be constructed and stored in a storage device (Read Only Memory (ROM)) of the vehicle control device. Thus, the width of the probability distribution 1003 obtained based on the fusion accuracy reliability at present can be estimated by referring to the detection probability distribution width table based on the combination of the current sensor configuration, the detection time, and the detection distance during traveling.

The detection probability distribution width table described above obtains the width of the probability distribution 1003 obtained based on the fusion accuracy reliability from the three parameters of sensor configuration, detection time, and detection distance. To reduce the table capacity, however, the input information may be reduced by instead using the two parameters of fusion accuracy reliability (High/Mid/Low) and detection distance, or the two parameters of sensor configuration and detection time, as the inputs. The input information can also be reduced by adjusting the precision (quantization width) of each parameter, in addition to the number of parameters. These choices trade accuracy against table size and cost.

Calculation sequence of deviation amount during travel

Fig. 12 is a diagram illustrating how the deviation amount 1002 is obtained from the sensor configuration and the detection time for an object detected at a certain distance during traveling, given the instruction value achievement probability density distribution 1001 and the target collision rate.

In fig. 12, the horizontal axis represents the width of the probability distribution obtained based on the fusion accuracy reliability, and the vertical axis represents the deviation amount. For the given instruction value achievement probability density distribution 1001 and target collision rate, the deviation amount 1002 obtained from the probability distribution 1003 measured for each combination of sensor configuration and detection time shown in fig. 10B is plotted on the vertical axis, against the width of that probability distribution 1003 on the horizontal axis.

In fig. 12, the plots for the cases where the fusion accuracy reliability is High, Mid, and Low are surrounded by the dotted-line regions High, Mid, and Low, respectively. For example, the combination in which the target object is detected by sensor configuration e (Radar + Camera) with detection time B is denoted by reference symbol e-B in fig. 12. The plots of the other combinations are labeled in the same way.

As shown in fig. 12, the deviation amount has a linear characteristic with respect to the width of the probability distribution 1003 obtained based on the fusion accuracy reliability, with a slope that differs for each fusion accuracy reliability. The higher the fusion accuracy reliability, the larger the slope of the characteristic.

Since the instruction value achievement probability density distribution 1001 also differs depending on the instructed deceleration, the characteristic shown in fig. 12 changes with the deceleration, but shows approximately the same tendency. By storing the parameters (such as the slope) of the linear characteristic, for each fusion accuracy reliability and each deceleration, in a table (deviation amount characteristic parameter table) (not shown), the deviation amount can be calculated by a simple calculation during running.

Specifically, one of the linear characteristics shown in fig. 12 is selected by referring to the deviation amount characteristic parameter table based on the maximum deceleration and the fusion accuracy reliability (High/Mid/Low) in the current operation scenario. Then, the deviation amount 1002 can be calculated by substituting the width of the probability distribution 1003 obtained based on the fusion accuracy reliability obtained by referring to the detection probability distribution width table into the selected linear characteristic.
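A sketch of this two-step calculation, assuming an invented deviation amount characteristic parameter table holding (slope, intercept) pairs per deceleration and fusion accuracy reliability level; the numbers are placeholders, and a higher reliability is given a larger slope as stated above.

```python
# Hypothetical deviation amount characteristic parameter table:
# (deceleration [G], fusion accuracy reliability) -> (slope, intercept [m]).
CHARACTERISTIC_TABLE = {
    (0.6, "High"): (6.0, 0.3),
    (0.6, "Mid"):  (4.5, 0.3),
    (0.6, "Low"):  (3.0, 0.3),
}

def deviation_amount(deceleration_g: float, reliability: str, width_m: float) -> float:
    """Select the linear characteristic for the current maximum deceleration and
    reliability level, then evaluate it at the estimated distribution width."""
    slope, intercept = CHARACTERISTIC_TABLE[(deceleration_g, reliability)]
    return slope * width_m + intercept

print(deviation_amount(0.6, "High", 0.9))  # 6.0 * 0.9 + 0.3
```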

In the calculation method described above, the deviation amount 1002 is calculated by substituting the separately estimated width of the probability distribution 1003 obtained based on the fusion accuracy reliability into one of the linear characteristics shown in fig. 12. Alternatively, a table having the deceleration, the sensor configuration, the detection time, and the detection distance as inputs and the deviation amount as output may be provided, so that the deviation amount 1002 is found with a single table reference.

As can be understood from the description so far, the instruction value achievement probability density distribution 1001 can be estimated based on the deceleration. Further, the probability distribution 1003 obtained based on the fusion accuracy reliability can be estimated based on the sensor configuration, the detection time, and the detection distance. The deviation amount 1002 giving the target collision rate can then be calculated from the widths of the estimated instruction value achievement probability density distribution 1001 and the probability distribution 1003 obtained based on the fusion accuracy reliability. A deviation amount table (not shown) having the deceleration, the sensor configuration, the detection time, and the detection distance as inputs and the deviation amount 1002 as output can thus be constructed using Read Only Memory (ROM) or the like.
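Building such a deviation amount table off-line, by composing the estimates above, can be sketched as follows; the Gaussian assumption, the standard deviations, and the table keys are all invented for illustration.

```python
import itertools
import math

def collision_rate(offset, sigma_cmd, sigma_det):
    # Gaussian assumption: the convolution reduces to a single tail.
    sigma = math.hypot(sigma_cmd, sigma_det)
    return 0.5 * math.erfc(offset / (sigma * math.sqrt(2.0)))

def offset_for_target(target, sigma_cmd, sigma_det):
    # Bisect for the offset whose collision rate equals the target.
    lo, hi = 0.0, 100.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if collision_rate(mid, sigma_cmd, sigma_det) > target else (lo, mid)
    return hi

# Assumed stds: per-deceleration command stds and per-condition detection stds [m].
CMD_STD = {0.4: 0.7, 0.6: 1.1}
DET_STD = {(("Lidar", "Radar"), "long", "near"): 0.5, (("Camera",), "short", "far"): 4.0}
TARGET = 1e-7

# Off-line: precompute the deviation amount for every input combination,
# so no convolution needs to be evaluated during travel.
DEVIATION_TABLE = {
    (dec,) + key: offset_for_target(TARGET, CMD_STD[dec], DET_STD[key])
    for dec, key in itertools.product(CMD_STD, DET_STD)
}
print(len(DEVIATION_TABLE))  # 4 entries: every (deceleration, detection condition) pair
```

As the text notes next, the achievable table resolution trades accuracy against ROM capacity.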

In this configuration, however, the capacity of the deviation amount table increases or decreases according to the quantization of the deceleration given as an input, the number of sensor combinations, the quantization width of the detection time, the quantization width of the detection distance, and the like, and the capacity increases as the accuracy increases. That is, whether to calculate the deviation amount 1002 from the linear characteristic shown in fig. 12 or to obtain it by table reference is a trade-off among accuracy, calculation time, and cost.

Referring back to fig. 10A, an example of a flow of calculation of deviation 1002 by vehicle control device 100 will be described below.

As shown in fig. 10A, probability distribution 1003 obtained based on the fusion accuracy reliability is distributed around the front traveling vehicle. At this time, it is assumed that the own vehicle 1 intends to stop at a position of reference numeral D in fig. 10A in order to prevent a collision. The indication values at this time are distributed in probability density distribution 1001 centered on position D shown in fig. 10A. In the example shown in fig. 10A, the indication value achievement probability density distribution 1001 and the probability distribution 1003 obtained based on the fusion accuracy reliability partially overlap. As described above, the overlapping area indicates the collision rate in the stop operation. In addition, since the overlap is changed by changing the offset 1002, the collision rate also varies.

Since the actual position of the preceding vehicle cannot be known by the vehicle 1 during traveling, the position of the center axis of the probability distribution 1003 obtained based on the fusion accuracy reliability cannot be known either. However, as described above, the fusion accuracy reliability can be obtained from the combination of sensors (sensor configuration) that detect the object, among the plurality of sensors, and from the detection time, i.e., the time for which detection by that combination has continued since it began; by referring to data measured in advance, the width of probability distribution 1003 obtained based on the fusion accuracy reliability can then be estimated. Likewise, as described above, by referring to data measured in advance based on the currently set maximum deceleration, the instruction value achievement probability density distribution 1001 in the present stopping operation can be estimated.

Further, once the instruction value achievement probability density distribution 1001 for the present stopping operation has been estimated, the deviation amount 1002 that yields a given target collision rate can be determined by substituting the width of the probability distribution 1003 obtained based on the fusion accuracy reliability into the data measured in advance, as described with reference to fig. 12.
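As a minimal numerical sketch of this relationship (assuming, purely for illustration, that both the instruction value achievement distribution and the detection error distribution are Gaussian, with hypothetical widths `sigma_cmd` and `sigma_det`), the collision rate can be modeled as the tail probability of the difference of the two distributions, and the deviation amount that yields a given target collision rate can then be solved for by bisection:

```python
import math

def overlap_collision_rate(offset, sigma_cmd, sigma_det):
    # Collision rate modeled as P(stop position overshoots the object):
    # stop error ~ N(-offset, sigma_cmd^2) relative to the object,
    # detection error ~ N(0, sigma_det^2); their difference is
    # N(-offset, sigma_cmd^2 + sigma_det^2), so the overlap reduces
    # to a single Gaussian tail probability.
    sigma = math.sqrt(sigma_cmd**2 + sigma_det**2)
    return 0.5 * math.erfc(offset / (sigma * math.sqrt(2.0)))

def offset_for_target_rate(target, sigma_cmd, sigma_det, lo=0.0, hi=50.0):
    # Bisection: the collision rate decreases monotonically as the
    # deviation amount (offset) grows.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if overlap_collision_rate(mid, sigma_cmd, sigma_det) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the rate is monotonic in the offset under these assumptions, the offset for each (deceleration, sensor configuration, detection time) combination can be computed once offline, which is what makes the table lookup described below sufficient at run time.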

The deviation amount 1002 of the instruction value is determined so that the collision rate obtained by convolving the instruction value achievement probability density distribution 1001 with the probability distribution 1003 obtained based on the fusion accuracy reliability, shifted by the deviation amount 1002, coincides with the target collision rate. However, if a deviation amount table that takes deceleration, sensor configuration, detection time, and detection distance as inputs and outputs a deviation amount is stored in a Read Only Memory (ROM) and referred to during traveling as described above, the information carried by the instruction value achievement probability density distribution 1001 and the probability distribution 1003 is already reflected in the table, so no convolution needs to be computed during traveling. It is therefore unnecessary to estimate the instruction value achievement probability density distribution 1001 and the probability distribution 1003 separately, or to use them in any calculation, while traveling.

Here, the reason why the instruction value achievement probability density distribution 1001 is obtained based on the currently set maximum deceleration will be explained. As the deceleration becomes larger, the instruction value achievement probability density distribution 1001 becomes wider, and the deviation amount to be set becomes larger. By assuming the maximum deceleration in the action plan, a safe deviation amount appropriate to the action plan can be obtained.

The above operation will be described below with reference to fig. 11.

< flow of indicator value determination for action plan >

Fig. 11 is a flow of determining the deviation amount of the instruction value for the action plan based on the characteristics shown in fig. 12. An example of an embodiment in which the deviation amount 1002 of the instruction value is determined based on the characteristics shown in fig. 12 is described below. The deviation amount 1002 of the instruction value specifies the stop position, that is, the target position for stopping the vehicle 1, as an offset from the position indicated by the detection distance from the distance detection unit, in the action plan generated by the action plan generating unit 200.

STEP1;

The target collision rate setting unit 210 sets the target collision rate. The target collision rate is a fixed value predetermined at design time, determined in consideration of the detection rate of the object, the severity of a collision with the object, and the like.

STEP2;

The fusion accuracy reliability estimating unit 240 determines the fusion accuracy reliability based on the current sensor configuration and the detection time. As described above with reference to fig. 10B, the fusion accuracy reliability is determined from the positive detection rate measured in advance for each combination of sensors (the sensor configuration) that detects the object, together with the detection time, that is, the time at which detection by that combination starts and from which detection continues. In general, the greater the number of sensors that detect the object, or the longer the object has been continuously detected, the higher the fusion accuracy reliability. The fusion accuracy reliability also varies depending on which combination of sensors detects the object.

STEP3;

The instruction value achievement probability density distribution estimation unit 220 and the instruction value deviation amount calculation unit 230 determine candidates for the instruction value table (probability density, deviation amount). The instruction value achievement probability density distribution estimation unit 220 determines a candidate for the instruction value achievement probability density distribution 1001 based on the currently set maximum deceleration. As described above, this candidate is determined from characteristics measured in advance through a preliminary flow of measuring the stop position when a certain deceleration is commanded.

The indicated value deviation amount calculation unit 230 determines a candidate of the deviation amount of the indicated value based on the fusion accuracy reliability determined by the fusion accuracy reliability estimation unit 240 and the candidate of the indicated value achievement probability density distribution 1001.

This step corresponds to selecting any one of the broken line regions High, Mid, and Low in fig. 12. For example, when the fusion accuracy reliability is Mid, the deviation amount corresponding to the broken line area Mid in fig. 12 becomes a candidate.

STEP4;

The fusion accuracy reliability estimating unit 240 estimates the width of the probability distribution obtained based on the fusion accuracy reliability. The instruction value deviation amount calculation unit 230 selects one of the deviation amount candidates obtained in STEP3 based on the estimated width of the probability distribution 1003. For example, when the detection time is B and detection is performed by sensor configuration e (Radar + Camera), the fusion accuracy reliability is determined to be Mid; substituting the estimated width of the probability distribution 1003 into the characteristic for Mid shown in fig. 12 selects the deviation amount 1002 corresponding to reference numeral e-B in fig. 12.

As described above, the width of the probability distribution 1003 obtained based on the fusion accuracy reliability is obtained, for example, by inputting the current sensor configuration and detection time to a detection probability distribution width table (not shown) that stores widths measured in advance, and reading out its output.

STEP5;

The action plan generating unit 200 determines the instruction value from the instruction value table using the deviation amount 1002 obtained in STEP4. The action plan generating unit 200 thereby determines the target position for stopping the vehicle 1.

The flow of STEP1 to STEP4 described above is an example, and is not limiting. For example, as described above, the deviation amount can be determined in place of STEP3 and STEP4 by storing in a Read Only Memory (ROM) a deviation amount table (not shown) that takes deceleration, sensor configuration, detection time, and detection distance as inputs and outputs a deviation amount, and having the action plan generating unit 200 refer to it. In that case, the widths of the instruction value achievement probability density distribution 1001 and the probability distribution 1003 obtained based on the fusion accuracy reliability are already reflected in the data stored in the deviation amount table, so they need not be obtained separately during traveling. The embodiment described later is configured on the premise that the deviation amount giving the target collision rate is obtained from the deceleration, sensor configuration, detection time, and detection distance by a method similar to this deviation amount table.
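A schematic of such a ROM-resident deviation amount table might look as follows. The keys, time bucketing, and numeric values here are purely illustrative placeholders, not values from the embodiment, and the detection-distance input is omitted for brevity:

```python
# Illustrative deviation amount table: precomputed offline so that the
# collision rate at the offset stop position equals the target collision
# rate. All numbers are hypothetical.
DEVIATION_TABLE = {
    # (max_decel_g, sensor_config, detection_time_bucket): offset in meters
    (0.6, "radar+camera", "short"): 5.0,
    (0.6, "radar+camera", "long"):  3.0,
    (0.6, "camera",       "short"): 8.0,
    (1.0, "radar+camera", "long"):  4.5,
}

def lookup_offset(max_decel_g, sensor_config, detection_time_s):
    # Longer continuous detection -> higher fusion accuracy reliability
    # -> a smaller safety offset suffices.
    bucket = "long" if detection_time_s >= 1.0 else "short"
    return DEVIATION_TABLE[(max_decel_g, sensor_config, bucket)]
```

The point of the table is that the widths of distributions 1001 and 1003 never need to be reconstructed on-line; the offline computation has already folded them into the stored offsets.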

< collision rate >

The collision rate p(C|S_t) represented by the collision rate map is calculated according to the calculation model shown in equation (1). The calculation process is explained in detail later.

Numerical expression (1)

p(C|S_t) = Σ_d p(d|S_t) · Σ_{S_t+1} p(S_t+1|S_t, a_d) · p(C|S_t+1) …(1)

S_t: state (position, velocity) at time t

S_t+1: state (position, velocity) at time t+1, that is, the next state

p(C|S_t+1): collision rate in the next state (position, velocity)

p(S_t+1|S_t, a_d): probability of transitioning to the next state S_t+1 when the action a_d taken upon observation d occurs in state S_t

p(d|S_t): probability that the observation of the object is d in state S_t
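The recursion of equation (1) can be sketched on a small discrete grid. The grid size, the linear detection-rate model, and the two observation-conditioned actions below (brake one speed step when the object is detected, cruise otherwise) are illustrative assumptions, and the transitions are deterministic so that p(S_t+1|S_t, a_d) collapses to a single successor state:

```python
from functools import lru_cache

OBSTACLE = 10  # obstacle cell on a hypothetical position grid

def detect_prob(pos):
    # Detection rate rises as the obstacle gets closer (illustrative).
    d = OBSTACLE - pos
    return max(0.0, min(1.0, 1.0 - 0.1 * d))

@lru_cache(maxsize=None)
def collision_rate(pos, v):
    # Equation (1): p(C|S_t) = sum over observations d of
    # p(d|S_t) * p(C | successor state under action a_d).
    if v == 0:
        return 0.0            # stopped before the obstacle: no collision
    if pos >= OBSTACLE:
        return 1.0            # reached obstacle with speed: collision
    p_det = detect_prob(pos)
    brake  = collision_rate(min(pos + v, OBSTACLE), max(v - 1, 0))
    cruise = collision_rate(min(pos + v, OBSTACLE), v)
    return p_det * brake + (1.0 - p_det) * cruise
```

Evaluating `collision_rate` at every grid point (position, velocity) yields exactly the kind of discrete collision rate map described in the following steps; the terminal conditions are simplified relative to the embodiment (which, for example, assigns a rate of 0.5 to stopping exactly at the collision position).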

< flow chart >

Fig. 13 is a flowchart of vehicle control performed by the vehicle control device 100 according to the embodiment.

This flow is repeatedly executed at a predetermined cycle by the automatic driving control unit 120 (see fig. 2).

Step S11

The automatic driving control unit 120 (distance detection unit) detects a distance to an obstacle to be avoided based on an output of the detection device DD (distance detection unit).

Step S12

The collision rate map setting unit 250 determines the collision rate map based on: the predetermined target collision rate; the maximum deceleration currently set by the action plan generating unit 200; the combination of sensors (sensor configuration) currently detecting the object among the plurality of sensors of the detection device DD of the distance detection unit; the detection distance detected by those sensors; and the detection time over which those sensors have continuously detected the object. The target collision rate is a fixed value predetermined at design time, determined in consideration of the detection rate of the object, the severity of a collision with the object, and the like.

This collision rate map is created on the premise of the deviation amount determined so that the collision rate, calculated from the overlap between the instruction value achievement probability density distribution 1001 estimated from the maximum deceleration and the probability distribution 1003 estimated from the sensor configuration, detection distance, and detection time based on the fusion accuracy reliability, equals the target collision rate. In other words, the collision rate map is created on the premise that the target stop position is set, from the target collision rate, maximum deceleration, sensor configuration, detection distance, and detection time, so that the collision rate at that position equals the target collision rate.

The collision rate map thus obtained gives an approximate collision rate for each position and velocity corresponding to discrete grid points. Each discrete grid point represents a discrete "state".

Step S13

The action plan generating unit 200 generates an action plan based on the collision rate at the current state (position, speed) of the host vehicle 1 on the collision rate map and the target collision rate.

Step S14

The travel control unit 160 controls at least the speed of the host vehicle 1 based on the behavior pattern, and ends the processing of this flow.

The automated driving control unit 120 executes steps S11 to S14 every time a predetermined time elapses.

The following describes features of the collision probability map used in the vehicle control described above.

< Collision Rate map >

Fig. 14 is a diagram showing the collision rate map 1000. Fig. 14 (a) shows the collision rate map 1000 for a case where the vehicle decelerates toward the preceding vehicle's position to prevent a collision, with position on the horizontal axis and velocity on the vertical axis. Fig. 14 (b) shows, in gradation, the collision rate of fig. 14 (a) as reliability in observation. Fig. 14 (c) shows the peak of the collision rate at velocity 0 in the collision rate map 1000 of fig. 14 (a).

The collision rate map 1000 shown in fig. 14 is a theoretical map, created on the premise that the actual position of the preceding vehicle is known, in order to explain how an action plan can be created with reference to the collision rate map.

When the detection rate and accuracy of the sensor, the accuracy of the action plan, and the collision rate in the final state are determined, the collision rate can be calculated for each state consisting of the speed of the vehicle and the position of the vehicle relative to the object. The collision rate map 1000 shown in fig. 14 (a) is created from the calculated collision rates.

Of the collision rates (0 to 1) shown in fig. 14 (b), a value smaller than or equal to, for example, the human collision rate is used as the target collision rate (see reference numeral i in fig. 14 (b)). Here, the "human collision rate" means the collision rate that occurs when a person (driver) drives the vehicle 1 in the same road environment. This target is set based on the following consideration: for driving by automated driving to be safer than driving by a person, its collision rate should be made lower than the collision rate that occurs when a person drives.

A target speed value curve for traveling at the target collision rate can be drawn on the collision rate map 1000 of fig. 14 (a) by connecting the states (position, speed) whose collision rate equals the target collision rate. The arrows (→) in the collision rate map of fig. 14 (a) indicate the target collision rate.

The collision rate at speed 0 corresponds to the probability distribution 1003 obtained based on the fusion accuracy reliability, and follows the normal distribution shown in fig. 14 (c). When the standard deviation of this normal distribution is σ, the collision probability at a position 6σ away is on the order of 10^-7, which is regarded as no collision.

The invention aims to achieve both safety and comfort. Therefore, an appropriate action plan is set for each region of the collision rate map 1000 shown in fig. 14 (a).

When the collision rate is lower than a 1st threshold value that is itself lower than the target collision rate (see "low collision rate region" in fig. 14 (a)), braking is applied sparingly, that is, the set speed is maintained as much as possible. Comfort is emphasized: the ride is not jerky, although sudden braking remains permissible.

When the collision rate is lower than the target collision rate but equal to or higher than a 2nd threshold value that is equal to or higher than the 1st threshold value (see "high collision rate region" in fig. 14 (a)), the vehicle brakes more readily. Safety is emphasized: although the ride is less settled, sudden braking is avoided. Fig. 14 (a) shows an example in which the 2nd threshold value equals the 1st threshold value.

The gradation of the collision rate map 1000 shown in fig. 14 (a) represents an index for control so that the collision rate does not become equal to or greater than the pre-designed value. The control side can also perform automated driving control so as to avoid falling into high-density (high collision rate) regions.

< Algorithm and accuracy of action scheme >

The algorithm and accuracy of the action plan are explained.

The accuracy of the algorithm for achieving the action plan corresponds to the portion of equation (1) from which p(S_t+1|S_t, a_d) is obtained.

Fig. 15 is a diagram illustrating a relationship between an action pattern and a collision risk. The same portions as those in fig. 10A are denoted by the same reference numerals.

The vehicle control device 100 (see fig. 2) controls the vehicle 1 so as to stop at position D in fig. 15. However, factors such as road surface conditions and vehicle faults can make it difficult to stop exactly at position D. Whether stopping at position D is achieved is therefore represented by a probability density distribution. In the present embodiment, the instruction value achievement probability density distribution estimation unit 220 (see fig. 4) estimates the instruction value achievement probability density distribution 1001.

As shown in fig. 15, the collision risk is indicated by the portion where the instruction value achievement probability density distribution 1001 overlaps the collision region 1200 (see reference numeral j in fig. 15). To improve safety, this portion (reference numeral j in fig. 15) should be reduced. One way would be to narrow the probability density distribution around its median (by reducing factors such as road surface conditions and vehicle faults), but this is difficult; therefore, position D is instead set further in front of the collision region 1200.

< reliability of sensor (detection rate, accuracy) >

The reliability (detection rate, accuracy) of the sensor is described.

The reliability of the sensor corresponds to the portion of equation (1) from which p(d|S_t) is obtained.

Fig. 16 is a diagram illustrating the error distribution of the sensor. The same reference numerals are used for the same parts as in fig. 15. The "sensor error distribution" shown in the lower part of fig. 16 indicates the distribution of errors in the sensor's detection of the object, centered on the object's position. This distribution is the probability distribution 1003 obtained based on the fusion accuracy reliability described above.

As shown in fig. 16, the possibility of collision can be reduced by providing a margin M at the braking position.

< impact of the action plan and sensors on collision risk >

The impact of the action plan and the sensors on the collision risk is explained.

Fig. 17 is a diagram showing the impact that the action plan and the sensors can have on the collision risk. Fig. 17 (a) is a collision rate map for the case where the maximum braking is -0.6G and the standard deviation of the sensor error is σ1. The solid white line in fig. 17 (a) shows an example of the target collision rate.

Fig. 17 (b) shows the target collision rate when the action plan setting of the collision rate map of fig. 17 (a) is changed. If the maximum braking allowed by the action plan is increased, the vehicle 1's ability to cope with an obstacle improves and the collision rate decreases. In this case, the set of positions and velocities having the target collision rate moves up and to the right on the collision rate map (toward the object, toward higher speed); that is, the low collision rate region expands. For example, as shown by arrow k in fig. 17 (b), changing the action plan to a maximum braking of -1.0G allows a larger deceleration G to be used for braking and expands the set of low-collision-risk states. Braking can therefore be delayed closer to the critical point.

Fig. 17 (c) shows how the collision rate map of fig. 17 (a) changes when sensor performance is low. When the standard deviation of the sensor error is σ2 (σ1 < σ2) and sensor performance is low, the present embodiment estimates the collision rate to be higher and shrinks the low collision rate region of the map, as shown by arrow l in fig. 17 (c). In this case, generating an action plan that reduces the travel speed reduces the collision risk.
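The two effects above can be checked with elementary kinematics. This is an illustrative sketch only: the stopping-distance formula v²/(2a) explains why a larger allowed deceleration expands the low collision rate region, and the margin helper scales with the sensor error spread (the 6σ factor follows the normal-distribution discussion of fig. 14 (c)):

```python
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_mps, max_decel_g):
    # Kinematic stopping distance v^2 / (2a): allowing stronger braking
    # shortens it, which widens the low-collision-rate region and lets
    # braking be delayed closer to the critical point.
    return speed_mps**2 / (2.0 * max_decel_g * G)

def required_margin(sigma_m, k=6.0):
    # Safety margin M scaled to the sensor error spread (k-sigma rule):
    # a noisier sensor (larger sigma) demands a larger margin.
    return k * sigma_m
```

For example, at 20 m/s the stopping distance with -1.0G braking is 0.6 times that with -0.6G braking, while doubling the sensor error σ doubles the required margin, mirroring arrows k and l in fig. 17.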

< method for utilizing reliability (detection rate) of a plurality of sensors >

A method of using the reliability (detection rate) of a plurality of sensors will be described.

Fig. 18 is a diagram illustrating a method of using reliability (detection rate) of a plurality of sensors. Fig. 18 (a) shows an example in the case where a plurality of sensors are detected by AND, fig. 18 (b) shows an example in the case where a plurality of sensors are detected by OR, AND fig. 18 (c) shows an example in the case where braking is performed according to the detection states of a plurality of sensors.

The plurality of sensors are referred to as sensor 1 and sensor 2. Here, D1 denotes the event that sensor 1 detects the object, D2 the event that sensor 2 detects the object, and E the state in which the object is present. The probability that sensor 1 detects the object when the object is present is written p(D1|E), and the probability that sensor 2 detects it is written p(D2|E). The probability that both sensor 1 and sensor 2 detect the object when the object is present is p(D1∩D2|E); the probability that sensor 1 detects it and sensor 2 does not is p(D1∩¬D2|E); the probability that sensor 1 does not detect it and sensor 2 does is p(¬D1∩D2|E); and the probability that neither sensor detects it is p(¬D1∩¬D2|E). With respect to the probability p(D1∩D2|E) of simultaneous detection by both sensors, when equation (2) holds, the events D1 and D2 are not independent.

Digital type (2)

p(D1∩D2|E)≠p(D1|E)p(D2|E)…(2)

In the AND detection example of fig. 18 (a), braking at a deceleration of 0.6G is performed at a rate of 0.4 out of 1.0 overall, and cruising at a rate of 0.6. With AND detection, braking is performed only when both sensors detect the object simultaneously; the probability of this event when the object is present is p(D1∩D2|E). When one or both sensors fail to detect the object, the vehicle cruises; the probability of this when the object is present is 1 − p(D1∩D2|E).

An example of an action scheme using AND detection is explained later by fig. 27.

In the OR detection example of fig. 18 (b), braking at a deceleration of 0.6G is performed at a rate of 0.9 out of 1.0 overall, and cruising at a rate of 0.1. With OR detection, braking is performed when one or both sensors detect the object; the probability of this event when the object is present is p(D1∩D2|E) + p(D1∩¬D2|E) + p(¬D1∩D2|E). The vehicle cruises when neither sensor detects the object; the probability of this event when the object is present is p(¬D1∩¬D2|E).

An example of an action plan using OR detection is illustrated later with fig. 28. The relative merits and applicable conditions of action plans using AND detection versus OR detection are not discussed here.

In the case of fig. 18 (c), where braking depends on the detection state of the sensors, one example is the following combination: braking at 0.6G at a rate of 0.4, with probability p(D1∩D2|E); braking at 0.1G at a rate of 0.3, with probability p(D1∩¬D2|E); braking at 0.05G at a rate of 0.2, with probability p(¬D1∩D2|E); and cruising at a rate of 0.1, with probability p(¬D1∩¬D2|E).
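As an arithmetic sketch of this detection-state-dependent braking (the function names are hypothetical and the numbers repeat the illustrative rates above), the four joint detection-state probabilities can be derived from the two marginal detection rates and the joint detection rate, without assuming the sensors are independent:

```python
def detection_state_probs(p1, p2, p_both):
    # Joint detection-state probabilities for two possibly dependent
    # sensors, given p(D1|E), p(D2|E), and p(D1 AND D2|E).
    return {
        "both":    p_both,               # p(D1 and D2 | E)
        "only1":   p1 - p_both,          # p(D1 and not D2 | E)
        "only2":   p2 - p_both,          # p(not D1 and D2 | E)
        "neither": 1.0 - p1 - p2 + p_both,
    }

def expected_deceleration(probs, decel_g):
    # Braking strength chosen per detection state, as in fig. 18 (c).
    return sum(probs[state] * decel_g[state] for state in probs)
```

With the illustrative values above (0.6G/0.1G/0.05G/cruise at rates 0.4/0.3/0.2/0.1), the expected deceleration works out to 0.28G, sitting between the pure AND and pure OR schemes.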

An example of an action plan that brakes according to the detection state is described later as HALF AND detection with fig. 29. The method of using the reliability (detection rate) of a plurality of sensors shown in fig. 18 is only an example.

As described above, the present embodiment seeks an action-plan logic that achieves both safety and comfort by distinguishing the detection states of the two sensors and varying the braking strength according to the respective detection rates.

< Collision ratio based on Algorithm alpha >

The change in collision rate and anxiety rate obtained by the algorithm α is described.

Fig. 19 is a state transition diagram illustrating changes in collision rate and anxiety rate obtained by the algorithm α.

Algorithm α, within the broken-line range of fig. 19, obtains the action Ai for which the total collision rate becomes a predetermined collision rate Pc. Here, D → Ai means that action Ai is taken when the object is detected, and ¬D → A1 means that action A1 is taken when the object is not detected.

Fig. 19 and the following numerical expressions use the notation: action: Ai, state: Si, collision: C, anxiety: U, detection: D, non-detection: ¬D, obstacle present: E, obstacle absent: ¬E.

the positive detection rate of the sensor is input to the algorithm α: p (D | E) and false detection rate of the sensor:

The collision rate p_α(C|E) obtained by algorithm α is represented by equation (3), and the collision rate p(C|Ai) when action Ai is selected is represented by equation (4).

The action Ai whose collision rate is the value closest to Pc at the time of detection is obtained from the collision rate p_α(C|E) of algorithm α and the collision rate p(C|Ai) when action Ai is selected, and is represented by equation (5).

Equation (5) expresses algorithm α obtaining the action Ai whose total collision rate is the value closest to the predetermined collision rate Pc. When the target predetermined collision rate Pc is determined, the method of algorithm α is determined.

Numerical expressions (3) to (5)

Collision rate of algorithm α:

p_α(C|E) = p(D|E) · p(C|Ai) + p(¬D|E) · p(C|A1) …(3)

Collision rate when action Ai is selected:

p(C|Ai) = Σ_j p(C|Sj) · p(Sj|Ai) …(4)

Action when the collision rate becomes Pc:

Ai = argmin over Ai of |p_α(C|E) − Pc| …(5)
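A sketch of this action selection (assuming the detected-case/undetected-case mixture form for the total collision rate, with A1 as the cruise action taken whenever the object is not detected; the candidate collision rates below are illustrative):

```python
def algorithm_alpha(p_detect, p_collision_given_action, p_c_target):
    # Total collision rate when the object is present:
    #   p_detect * p(C|Ai)  +  (1 - p_detect) * p(C|A1),
    # where A1 (index 0, cruise) is taken when the object is missed.
    # Return the detected-case action whose total collision rate is
    # closest to the target Pc.
    p_miss = 1.0 - p_detect
    p_cruise = p_collision_given_action[0]   # A1 = cruise
    best_i, best_rate = None, None
    for i, p_ci in enumerate(p_collision_given_action):
        total = p_detect * p_ci + p_miss * p_cruise
        if best_i is None or abs(total - p_c_target) < abs(best_rate - p_c_target):
            best_i, best_rate = i, total
    return best_i, best_rate
```

Because the undetected branch always contributes the cruise term, even a very safe detected-case action cannot push the total collision rate below (1 − p(D|E)) · p(C|A1), which is why sensor reliability bounds what the action choice alone can achieve.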

Fig. 19 shows, by the probabilities p(S1|A1), p(S2|A1), p(S3|A1), p(S1|A2), p(S2|A2), p(S3|A2), and p(S1|A3), p(S2|A3), p(S3|A3), which of the states S1 to S3 is transitioned to from the state Sinit according to actions A1 to A3. In other words, the states S1 to S3 can be regarded as target states to which the state transitions from Sinit according to events occurring with the above probabilities.

For each of the states S1 to S3 determined above, the collision rates p(C|S1), p(C|S2), p(C|S3) and the anxiety rates p(U|S1), p(U|S2), p(U|S3) arising in those states are determined.

Fig. 20 shows specific examples of the probabilities p(S1|A1), p(S2|A1), p(S3|A1), p(S1|A2), p(S2|A2), p(S3|A2), and p(S1|A3), p(S2|A3), p(S3|A3) of transitioning to states S1, S2, S3 according to actions A1 to A3 shown in fig. 19.


The anxiety rate p(U|Ai) when action Ai is selected is represented by equation (6), and the anxiety rate p_α(U|E) of algorithm α by equation (7).

Numerical expressions (6), (7)

Anxiety rate when action Ai is selected:

p(U|Ai) = Σ_j p(U|Sj) · p(Sj|Ai) …(6)

Anxiety rate of algorithm α:

p_α(U|E) = p(D|E) · p(U|Ai) + p(¬D|E) · p(U|A1) …(7)

< formation of State transition Process for action >

The evaluation of collision rates derived from the algorithm by a mesh system constructed from the state transition process for actions is illustrated.

First, the formation of a state transition process for actions is described.

Fig. 20 shows an example of a state transition in which action taken in a certain state causes a transition to one of the states S1 to S3. Such a state transition is performed in every state, thereby forming a state transition process.

Fig. 21 is a diagram illustrating the state transition process for actions. In fig. 21, the horizontal axis X represents position and the vertical axis V represents velocity. Each state in the state transition process has a collision rate, and while decelerating or cruising, the state transitions to other states until the target stop position (collision position) is reached or the collision is avoided. The state transition process is therefore drawn as proceeding toward the lower right.

The ● marks in fig. 21 indicate states, and the dashed arrows in fig. 21 indicate that a state branches and transitions as the position moves. In each layer (row direction) of states, the distance to the target stop position (collision position) shortens as the vehicle moves, so the detection rate of the sensor (including the rate of grasping the road surface condition) increases.

In fig. 22 and 23, thick arrows indicate transitions of interest, and thin arrows indicate transitions subsequent to the thick arrows.

For example, in the case of fig. 21, in the states within the broken-line range m, the vehicle reaches the position of the object while still having velocity, so a collision occurs and the collision rate is 1. In the states within the broken-line range n, the speed reaches 0 (the vehicle stops) before the collision position, so the collision is avoided and the collision rate is 0. In the states within the broken-line range o, the speed becomes 0 exactly at the collision position, so the collision rate is 0.5.

Next, the change of the detection rate in the state transition process to the action of fig. 21 will be described.

Fig. 22 is a diagram illustrating the change in detection rate during the state transition process for the action of fig. 21. The detection rate of the sensor increases as each layer (column direction) of states approaches the target stop position (collision position), and is set here to 0.2, 0.4, 0.6, 0.8, 1.0, and 1.0.

Fig. 22 (a) shows an example of transition from the initial state (in the right lateral direction of fig. 21) without decreasing the speed in the state transition process of fig. 21.

Fig. 22 (b) shows an example in which the speed is slightly decreased from the initial state (in the right horizontal lower middle direction of fig. 21) and the transition is made in the state transition process of fig. 21.

Fig. 22 (c) shows an example in which the speed is reduced from the initial state and the transition is made (in the right horizontal direction of fig. 21) in the state transition process of fig. 21.

Comparing (a) to (c) of fig. 22 shows that the collision rate decreases the earlier and more strongly the vehicle decelerates; however, if the vehicle decelerates too early, the collision rate is calculated in states where the sensor detection rate is still low (that is, with low reliability).

In this way, by forming a state transition process for actions, the collision rate obtained by the algorithm can be evaluated on this mesh system.

Next, an algorithm in the state transition process for an action will be described.

Fig. 23 is a diagram illustrating an algorithm in the state transition process for the action of fig. 21.

In the state transition process for actions, not all of its local processes need to use the same algorithm.

For example, as shown by the one-dot chain line range p in fig. 23, when the sensor detects that a vehicle has suddenly cut in ahead in the host vehicle's traveling lane, or that a falling object or obstacle has appeared during the action transition of fig. 21, maximum braking is performed regardless of the action transition of fig. 21, to reduce collision damage.

< Continuity processing of the collision rate based on algorithm α >

The continuity processing of the collision rate obtained based on the algorithm α is explained.

Fig. 24 (a) is a diagram illustrating a continuity process of the collision rate obtained by the algorithm α.

Fig. 24 (a) represents the following model: algorithm α selects the action A_i for the transition from the initial state S_init toward the state S_i, but external disturbances due to various external factors are added, so the state does not transition deterministically to S_i; instead, the state transition has a continuous probability distribution.

In fig. 24 (a), the probability p(s|A) of transitioning to a state s as the result of action A_i is normally distributed with standard deviation s_σ. As a result, the collision rate p(C|s) in state s and the anxiety rate p(U|s) in state s also have the continuous distribution characteristics shown in fig. 24 (a). By assuming a probabilistic model that accounts for external disturbance in this way, probabilistic situations as actually observed can be handled more accurately.

On the other hand, fig. 24 (b) shows the model when external disturbance is not considered. In this case, since there is no external disturbance, the distribution of p(s|A) does not spread and is discontinuous. Similarly, the collision rate p(C|s) and the anxiety rate p(U|s) are also discontinuous, and probabilistic situations as actually observed cannot be handled appropriately.

A method of processing the probability in the model of fig. 24 (a) is described below. In fig. 24 (a) and the following formulas, A (A_i) denotes an action, S denotes a state (S_init is the initial state), C denotes a collision, U denotes anxiety, D denotes that an obstacle is detected by the distance detection unit, D̄ denotes that no obstacle is detected by the distance detection unit, E denotes that an obstacle is present, Ē denotes that no obstacle is present, and d̂ denotes the detection information (presence or absence of detection, detection distance, etc.) obtained by the distance detection unit. Furthermore, A_1 denotes a cruise action.

Algorithm α calculates the action A_i for which the total collision rate over the broken-line range in fig. 24 (a) is closest to the predetermined collision rate P_c.

The inputs to algorithm α are the positive detection rate of the distance detection unit, p(D|E); the false detection rate of the distance detection unit, p(D|Ē); and the detection information d̂ of the distance detection unit.

The collision rate p(C|A) when action A is selected is expressed by equation (8), and the action A_D taken when an obstacle is detected and the action A_D̄ taken when no obstacle is detected are expressed by equation (9).

The total collision rate p (C | E) is expressed by equation (10) from equations (8) and (9).

Equations (8)-(10):

Collision rate when action A is selected:

p(C|A)=∫p(s|A)p(C|s)ds…(8)

Algorithm (action selected according to the detection result):

A = A_D when an obstacle is detected (D), A = A_D̄ when no obstacle is detected (D̄) …(9)

Total collision rate:

p(C|E) = p(D|E)·p(C|A_D) + p(D̄|E)·p(C|A_D̄) …(10)

The collision rate p(C|A) when action A is selected, as shown in equation (8), is obtained by multiplying the probability p(s|A) of the state to which action A transitions by the collision rate p(C|s) in state s, both of which have continuous distributions, and integrating over s. In the present embodiment, the collision rate is treated continuously based on a model that assumes external disturbance acts on action A in this way.
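As a minimal numerical sketch of equations (8) and (10), the following code integrates a normally distributed state-transition probability against a collision-rate profile, then mixes the detected (braking) and undetected (cruising) branches by the positive detection rate. The collision-rate profile and all numeric values are hypothetical, and the branch-mixing form of the total collision rate is an assumption based on the description above.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of the disturbed state-transition probability p(s|A)."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def collision_rate_for_action(mu_s, sigma_s, p_collision, ds=0.01, span=6.0):
    """Equation (8): p(C|A) = ∫ p(s|A) p(C|s) ds, integrated numerically."""
    s = mu_s - span * sigma_s
    total = 0.0
    while s <= mu_s + span * sigma_s:
        total += normal_pdf(s, mu_s, sigma_s) * p_collision(s) * ds
        s += ds
    return total

def total_collision_rate(p_detect, p_c_braking, p_c_cruise):
    """Total collision rate mixing the detected (braking) and undetected
    (cruising) branches by the positive detection rate p(D|E)."""
    return p_detect * p_c_braking + (1.0 - p_detect) * p_c_cruise

# Hypothetical collision-rate profile p(C|s): rises smoothly near s = 10.
p_collision = lambda s: 0.5 * (1.0 + math.erf((s - 10.0) / math.sqrt(2.0)))

p_c_braking = collision_rate_for_action(6.0, 1.0, p_collision)   # strong deceleration
p_c_cruise = collision_rate_for_action(9.0, 1.0, p_collision)    # keeps approaching
p_c_total = total_collision_rate(0.9, p_c_braking, p_c_cruise)
```

A higher positive detection rate pulls the total collision rate toward the braking branch, which is why the detection rate of fig. 22 matters for the evaluated risk.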

< Outline of the action plan >

An action plan generated by integrating the detection error and the detection rate of the sensor will be described.

Fig. 25 is a schematic diagram showing an outline of the action plan; the abscissa axis represents the actual obstacle distance d, and the ordinate axis represents the observed obstacle distance d̂.

As shown in fig. 25, the action plan uses "sudden braking" when the observed obstacle distance d̂ is short and "preliminary braking" when it is far. The action plan is generated by integrating the detection errors and detection rates of the sensors.

Fig. 26 is a schematic diagram showing the result of executing the action plan of fig. 25 (including the cruise action performed when no obstacle is detected) as a matrix formed by obstacle detected D / obstacle not detected D̄ and obstacle present E / obstacle absent Ē. In fig. 26, a ★ mark indicates that detection is missed although the obstacle is at a short distance, and a ✦ mark indicates that no obstacle is present but one is erroneously detected as present at a short distance. If the sensor detects an obstacle, a braking action is executed; if no obstacle is detected, a cruise action is executed.

In the presence of an obstacle E and detection of an obstacle D

The "positive detection" is an action plan for performing sudden braking or preliminary braking according to the observed obstacle distance.

If the observed obstacle distance d̂ is longer than the actual obstacle distance d, preliminary braking with a possibility of collision is performed (this may be remedied by a subsequent action plan).

If the observed obstacle distance d̂ is shorter than the actual obstacle distance d, sudden braking is performed too early.

If the observed obstacle distance d̂ is approximately equal to the actual obstacle distance d, correct sudden braking or correct preliminary braking is performed in accordance with the actual obstacle distance d.

In the case where an obstacle E is present and no obstacle is detected (D̄)

In this case, since the obstacle is not recognized, the cruise operation is performed as the action plan. This is "missed detection" and there is a potential risk of causing a collision accident.

When the actual obstacle distance d is short, cruising may immediately lead to a collision, and collision avoidance must be performed quickly.

When the actual obstacle distance d is long, this becomes cruising with a possibility of collision (which may still be remedied by a subsequent action plan).

In the case where no obstacle is present (Ē) and an obstacle D is detected

This is "false detection": the action plan executes a braking action because an obstacle is erroneously recognized. When the observed obstacle distance d̂ is short, meaningless sudden braking is performed; when the observed obstacle distance d̂ is long, meaningless preliminary braking is performed.

In the case where no obstacle is present (Ē) and no obstacle is detected (D̄)

This is "no detection". Since no obstacle is recognized, the action plan executes the cruise action, which is the correct cruise.

In this way, when the obstacle E is present but not detected (D̄), detection is missed even though the obstacle is at a short distance (see the ★ marks), resulting in reduced safety.

In addition, when no obstacle is present (Ē) but an obstacle D is detected, the presence of an obstacle at a short distance is erroneously detected (see the ✦ marks), resulting in reduced comfort and reassurance.

The present embodiment suppresses the above-described "missed detection" and "false detection".

< AND detection >

Fig. 27 is a schematic diagram illustrating AND detection in a redundant action plan using two sensors. Fig. 27 shows the result of executing the action plan of fig. 25 using AND detection, as a matrix formed by obstacle detected D₁ / not detected D̄₁ by one sensor, obstacle detected D₂ / not detected D̄₂ by the other sensor, and obstacle present E / absent Ē. In AND detection, the braking action is adopted only when both sensors detect the obstacle simultaneously, that is, only when D₁ and D₂ occur together; in all other cases the cruise action is adopted. In fig. 27, a ★ mark indicates that detection is missed although the obstacle is at a short distance, and a ✦ mark indicates that no obstacle is present but one is erroneously detected as present at a short distance.

The AND detection shown in fig. 27 corresponds to an example of the case where two sensors are detected as an AND as described in (a) of fig. 18.

In the case where obstacle E is present, sensor 1 detects the obstacle (D₁), and sensor 2 does not (D̄₂)

This is a "contradictory state of the two sensors", in which it is determined that no obstacle is present. In AND detection, even though one sensor detects an obstacle (D₁), it is determined that the sensor information is incorrect due to a failure or the like in one of the two sensors. In this case, there is a risk that correct braking is not performed and an accident occurs. Thus, a cruise action is adopted.

When the actual obstacle distance d is short, this cruising may immediately lead to a collision, and collision avoidance must be performed quickly.

When the actual obstacle distance d is long, this becomes cruising with a possibility of a later collision.

In the case where obstacle E is present and neither sensor detects it (D̄₁ and D̄₂)

In this case, a cruise action is adopted. This is "missed detection / erroneous cruising": it is determined that no obstacle is present, with the risk of causing a collision accident.

When the actual obstacle distance d is short, this cruising may immediately lead to a collision, and collision avoidance must be performed quickly.

When the actual obstacle distance d is long, this becomes cruising with a possibility of a later collision.

In the case where obstacle E is present, sensor 1 does not detect it (D̄₁), and sensor 2 detects it (D₂)

This is the "contradictory state of the sensors", in which it is determined that no obstacle is present. In AND detection, even though one sensor detects an obstacle (D₂), it is determined that the sensor information is incorrect due to a failure or the like in one of the two sensors. In this case, there is a risk that correct braking is not performed and an accident occurs. Thus, a cruise action is adopted.

When the actual obstacle distance d is short, this cruising may immediately lead to a collision, and collision avoidance must be performed quickly.

When the actual obstacle distance d is long, this becomes cruising with a possibility of a later collision.

In the case where no obstacle is present (Ē) and both sensors detect an obstacle (D₁ and D₂)

In this case, a braking action is adopted. This is "false detection / erroneous braking": when the observed obstacle distance d̂ is short, unnecessary sudden braking is performed, and when d̂ is long, unnecessary preliminary braking is performed.

Thus, in AND detection, the cruise action is adopted except when both D₁ and D₂ are detected. When obstacle E is present, this cruise action is a cruise with a possibility of collision. On the other hand, when no obstacle is present (Ē), the cruise action adopted in all cases other than detecting both D₁ and D₂ is the correct cruise.

< OR detection >

Fig. 28 is a schematic diagram illustrating OR detection in a redundant action plan using two sensors. Fig. 28 shows the result of executing the action plan of fig. 25 using OR detection, as a matrix formed by obstacle detected D₁ / not detected D̄₁ by one sensor, obstacle detected D₂ / not detected D̄₂ by the other sensor, and obstacle present E / absent Ē. In OR detection, the braking action is adopted when one or both sensors detect an obstacle, that is, when D₁ and/or D₂ occurs; otherwise the cruise action is adopted. In fig. 28, a ★ mark indicates that detection is missed although the obstacle is at a short distance, and a ✦ mark indicates that no obstacle is present but one is erroneously detected as present at a short distance.

The OR detection shown in fig. 28 corresponds to an example of the case where two sensors are detected as an OR as described in (b) of fig. 18.

In the case where obstacle E is present and neither sensor detects it (D̄₁ and D̄₂)

In this case, a cruise action is adopted. This is "missed detection / erroneous cruising": it is determined that no obstacle is present, with the risk of causing a collision accident.

When the actual obstacle distance d is short, this cruising may immediately lead to a collision, and collision avoidance must be performed quickly.

When the actual obstacle distance d is long, this becomes cruising with a possibility of a later collision.

In the case where no obstacle is present (Ē) and both sensors detect an obstacle (D₁ and D₂)

In this case, a braking action is adopted. This is "false detection / erroneous braking": when the observed obstacle distance d̂ is short, unnecessary sudden braking is performed, and when d̂ is long, unnecessary preliminary braking is performed.

In the case where no obstacle is present (Ē), sensor 1 detects an obstacle (D₁), and sensor 2 does not (D̄₂)

In this case, a braking action is adopted. When the observed obstacle distance d̂ is short, unnecessary sudden braking is performed; when d̂ is long, unnecessary preliminary braking is performed.

In the case where no obstacle is present (Ē), sensor 1 does not detect an obstacle (D̄₁), and sensor 2 does (D₂)

In this case, a braking action is adopted. When the observed obstacle distance d̂ is short, unnecessary sudden braking is performed; when d̂ is long, unnecessary preliminary braking is performed.

In this way, in OR detection, the cruise action is adopted only when neither sensor detects an obstacle (D̄₁ and D̄₂ simultaneously). When this cruise action is performed with obstacle E present, it is a "missed detection / erroneous cruising" cruise; when it is performed with no obstacle present (Ē), it is the correct cruise.

Comparing the above AND detection (see fig. 27) and OR detection (see fig. 28): AND detection emphasizes the sensor detection state and treats a sensor failure as a factor that may cause a collision. Therefore, delays in determining the possibility of collision caused by overlooking a sensor failure can be prevented, and a further improvement in reliability can be expected. In AND detection, when a contradictory state of the sensors occurs due to a sensor failure, the cruise action is selected. Thereby, comfort is not lost when no obstacle is present (Ē). However, safety may be impaired when obstacle E is present.

In the case of OR detection, when a contradictory state of the sensors occurs due to a sensor failure, the braking action is selected. Thereby, comfort is lost when no obstacle is present (Ē). However, safety is ensured when obstacle E is present.

< HALF AND detection >

Fig. 29 is a schematic diagram illustrating HALF AND detection in a redundant action plan using two sensors. The same portions as in the AND detection of fig. 27 are denoted by the same reference numerals, and redundant description is omitted.

The HALF AND detection shown in fig. 29 shows an example of a case where braking is performed according to the detection states of the two sensors as described in fig. 18 (c).

In the AND detection of fig. 27, the cases (D₁ and D̄₂) and (D̄₁ and D₂) are recognized as the "contradictory state of the sensors". In the HALF AND detection of fig. 29, the contradictory state is recognized only when the above condition is satisfied and, in addition, the observed obstacle distance d̂ is equal to or greater than a predetermined threshold value.

As shown in fig. 29, in HALF AND detection, when (D₁ and D̄₂) or (D̄₁ and D₂) holds and the observed obstacle distance d̂ is equal to or greater than the predetermined threshold value, the situation is regarded as the "contradictory state of the sensors"; it is determined that no obstacle is present, and the cruise action is adopted. The result in this case is the same as for AND detection. That is, when obstacle E is present, the cruise may immediately lead to a collision if the actual obstacle distance d is short, and may subsequently lead to a collision if d is long. When no obstacle is present (Ē), it is the correct cruise.

As shown in fig. 29, in HALF AND detection, when (D₁ and D̄₂) or (D̄₁ and D₂) holds and the observed obstacle distance d̂ is less than the predetermined threshold value, it is determined that an obstacle is present, and the preliminary braking action is adopted. When obstacle E is present, this preliminary braking may still lead to a collision if the actual obstacle distance d is short, and is the correct preliminary braking if d is long. When no obstacle is present (Ē), it is unnecessary preliminary braking.

Thus, in HALF AND detection, when (D₁ and D̄₂) or (D̄₁ and D₂) holds, the cruise action or the preliminary braking action is adopted according to the magnitude relation between the observed obstacle distance d̂ and the predetermined threshold value. Thereby, when a contradictory state of the sensors occurs due to a sensor failure, comfort is ensured when no obstacle is present (Ē) and d̂ is equal to or greater than the threshold value, and safety is ensured when obstacle E is present and d̂ is less than the threshold value.

In addition, in the OR detection of fig. 28, when no obstacle is present (Ē) and (D₁ and D̄₂) or (D̄₁ and D₂) holds, "sudden braking" or "preliminary braking" is adopted according to the observed obstacle distance d̂, resulting in "meaningless sudden braking" or "meaningless preliminary braking". In HALF AND detection, in the same cases, "preliminary braking" or "cruising" is adopted according to d̂, resulting in "unnecessary preliminary braking" or "correct cruising". Thus, compared with OR detection, the following effect is obtained: the possibility of unnecessary sudden braking and unnecessary preliminary braking is reduced, and comfort is impaired as little as possible.

In the AND detection shown in fig. 27, there are six ★ marks indicating that detection is missed although an obstacle is at a short distance, and two ✦ marks indicating that no obstacle is present but one is erroneously detected at a short distance, for a total of eight. In the OR detection shown in fig. 28, there are two ★ marks and six ✦ marks, again a total of eight. In the HALF AND detection shown in fig. 29, there are four ★ marks and two ✦ marks, for a total of six.

From this result, HALF AND detection is superior in safety to AND detection, and superior in comfort but inferior in safety to OR detection. From a comprehensive viewpoint, that is, comparing the total number of ★ and ✦ marks, HALF AND detection gives improved results over both AND detection and OR detection.
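The three redundancy policies compared above can be sketched as decision functions. This is a schematic reading of figs. 27 to 29; the action names and the HALF AND threshold comparison are taken from the description and are not the patent's exact implementation.

```python
def and_detection(d1, d2):
    """AND fusion: brake only when both sensors report an obstacle."""
    return "brake" if (d1 and d2) else "cruise"

def or_detection(d1, d2):
    """OR fusion: brake when either sensor reports an obstacle."""
    return "brake" if (d1 or d2) else "cruise"

def half_and_detection(d1, d2, observed_distance, threshold):
    """HALF AND fusion: in the contradictory state (exactly one sensor
    detects an obstacle), cruise when the observed distance is at or
    beyond the threshold, otherwise apply preliminary braking."""
    if d1 and d2:
        return "brake"
    if not d1 and not d2:
        return "cruise"
    # contradictory state of the two sensors
    return "cruise" if observed_distance >= threshold else "preliminary brake"
```

In the contradictory state, AND cruises (comfort-biased), OR brakes (safety-biased), and HALF AND splits the difference by the observed distance.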

The above describes action plans using AND detection, OR detection, and HALF AND detection. Next, a method of determining the collision rate in each state during traveling is described in detail.

< Event occurrence rate >

Next, a method of determining the event occurrence rate is described. When the event is a "collision", the event occurrence rate is the collision rate; the event occurrence rate is thus a generalization of the collision rate.

Fig. 30 is a state transition diagram of a collision occurrence state.

In fig. 30, S_c denotes the current state, ô denotes the observed value, α denotes the action, S_n denotes the next state, and C denotes the event (collision). The state transition diagram of fig. 30 represents the following: in the current state S_c, the action α (deceleration) is initiated according to the observed value ô, whereby the state transitions to the next state S_n, which may result in a collision. Based on this state transition model, the probability p(C|S_c) of a collision starting from state S_c is expressed by equation (11).

Equation (11):

p(C|S_c) = Σ_ô p(ô|S_c) Σ_{S_n} p(S_n|S_c, α(ô)) p(C|S_n) …(11)

α: action

S_c: current state

S_n: next state

C: event (collision)

ô: observed value

α(ô): action taken in response to the observed value ô

p(C|S): probability of occurrence of event C in state S

p(S_n|S_c, α(ô)): probability of transitioning from state S_c to S_n by the action α(ô) performed due to the observation ô

p(ô|S_c): probability of observing ô in state S_c
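A minimal discrete sketch of equation (11): the event occurrence rate is the observation-weighted, transition-weighted sum of the event probabilities of the next states. The two-observation model and all probabilities below are hypothetical.

```python
def event_occurrence_rate(p_obs, p_trans, p_event, action):
    """Equation (11): p(C|Sc) = Σ_ô p(ô|Sc) Σ_Sn p(Sn|Sc, α(ô)) p(C|Sn)."""
    total = 0.0
    for obs, p_o in p_obs.items():
        a = action[obs]                      # α(ô): action chosen for this observation
        for state, p_c in p_event.items():   # sum over next states Sn
            total += p_o * p_trans.get((a, state), 0.0) * p_c
    return total

# Hypothetical model: two observations, two actions, two next states.
p_obs = {"near": 0.3, "far": 0.7}                       # p(ô|Sc)
action = {"near": "hard_brake", "far": "soft_brake"}    # α(ô)
p_trans = {                                             # p(Sn|Sc, α(ô))
    ("hard_brake", "stopped"): 0.9, ("hard_brake", "close"): 0.1,
    ("soft_brake", "stopped"): 0.6, ("soft_brake", "close"): 0.4,
}
p_event = {"stopped": 0.0, "close": 0.5}                # p(C|Sn)

rate = event_occurrence_rate(p_obs, p_trans, p_event, action)
```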

< Method for determining the collision rate >

Fig. 31 is a diagram illustrating a method of determining the collision rate when the position where the obstacle is actually located is known. Fig. 31 (a) shows the trajectory of position and speed when the vehicle decelerates from its current position and speed and stops at the target stop position set in front of the obstacle; the horizontal axis shows position and the vertical axis shows speed. Fig. 31 (a) corresponds to fig. 10A.

Fig. 31 (b) shows the collision rate at speed 0, fig. 31 (c) shows the error distribution of object detection by the distance detection unit (detection device DD, vehicle sensor 60, and automatic driving control unit 120), and fig. 31 (d) shows the detection rate of object detection by the distance detection unit. In fig. 31, (b) to (d) show positions on the horizontal axis and occurrence rates on the vertical axis. In fig. 31 (a) to (d), p(ε_α) denotes the probability of the uncertainty ε_α of the action plan, D̂ denotes a value indicating whether an obstacle is observed (1 if observed, 0 if not observed), d̂ denotes the observed obstacle distance, x_c denotes the current position, x_o denotes the position of the obstacle, v_c denotes the speed at the current position, and d_M denotes the margin of the target position, set short of the observed obstacle distance d̂, at which the vehicle is to stop.

Fig. 32 is a state transition diagram showing a collision occurrence process in fig. 31.

In fig. 32, the action (deceleration) is α, the current state is S_c, the observed value is ô, the next state is S_n, and the event (collision) is C. According to fig. 31 (a), the current state S_c is defined by the current velocity v_c and position x_c and written as (v_c, x_c); the observed value ô is defined by the detected/undetected indicator D̂ and the distance to the object d̂ and written as (D̂, d̂); and the next state S_n is defined by the next velocity v_n and position x_n and written as (v_n, x_n).

The locus of position and velocity (thick solid line) shown in fig. 31 (a) has a probability distribution p(ε_α) of uncertainty.

Here, ε_α denotes the uncertainty of the action plan with respect to the next state transition.

Current state S_c, with velocity v_c and position x_c: S_c = (v_c, x_c).

Observed value ô, consisting of the detection/non-detection indicator D̂ and the distance to the object d̂: ô = (D̂, d̂).

Stopping distance margin: d_M.

The deceleration a applied in the current state S_c = (v_c, x_c) in order to stop at the target stop position is expressed by equation (12). The velocity v_n and the position x_n of the next state S_n are expressed by equations (13) and (14). ΔT in equations (13) and (14) denotes the time difference between the current state and the next state. The next state S_n is the state reached when the vehicle decelerates from the current state S_c at deceleration a for the time ΔT.

Equations (12)-(14):

a = −v_c² / (2(d̂ − d_M)) …(12)

v_n = v_c + aΔT …(13)

x_n = x_c + v_cΔT + (1/2)aΔT² …(14)

As shown in fig. 31 (b), the collision rate at v_c = 0 has the characteristic of the cumulative distribution of a normal distribution with standard deviation σ_d. That is, the collision rate p(C|v_c=0, x_c) at speed 0 is expressed by equation (15).

The error distribution of object detection by the distance detection unit (detection device DD, vehicle sensor 60, and automatic driving control unit 120) shown in fig. 31 (c) is expressed by the distribution p(d̂|x_c).

Here, the detection distance d̂ observed in this state has a distribution that depends on x_c.

As shown in fig. 31 (d), the detection rate of object detection by the distance detection unit (detection device DD, vehicle sensor 60, and automatic driving control unit 120) is p(D̂|x_c), which varies according to the detection distance.

The collision rate p(C|v_c=0, x_c) at v_c = 0 is expressed by equation (15). σ_d in equation (15) is the standard deviation of the normal distribution of the collision rate described above, and erf is the Gaussian error function. The probability p(ε_α) of the uncertainty ε_α of the action (deceleration) α has a normal distribution and is expressed by equation (16). In other words, p(ε_α) represents the probability taken by the uncertainty ε_α, the difference between the deceleration actually performed by the vehicle and the commanded deceleration a. σ_α in equation (16) is the standard deviation of the normal distribution representing the uncertainty. In addition, the probability (error distribution) that the distance detection unit (detection device DD, vehicle sensor 60, and automatic driving control unit 120) detects the object at distance d̂ in state (x_c, v_c) is p(d̂|x_c), expressed by equation (17), and the detection rate p(D̂|x_c) of object detection by the distance detection unit in state (x_c, v_c) is expressed by equation (18). P_max and P_min in equation (18) are the maximum and minimum detection rates of object detection by the distance detection unit, respectively. d_s and d_e are parameters indicating the positions between which the detection rate of the distance detection unit changes from P_min to P_max (see fig. 31 (d)).

Equations (15)-(18):

p(C|v_c=0, x_c) = 1/2 · (1 + erf((x_c − x_o)/(√2·σ_d))) …(15)

p(ε_α) = 1/(√(2π)·σ_α) · exp(−ε_α²/(2σ_α²)) …(16)

p(d̂|x_c) = 1/(√(2π)·σ_d) · exp(−(d̂ − (x_o − x_c))²/(2σ_d²)) …(17)

p(D̂|x_c) = P_min (for x_o − x_c ≥ d_s); P_min + (P_max − P_min)·(d_s − (x_o − x_c))/(d_s − d_e) (for d_e < x_o − x_c < d_s); P_max (for x_o − x_c ≤ d_e) …(18)
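The quantities around equations (15)-(18) can be sketched as below, assuming a cumulative-normal collision rate at speed 0, Gaussian densities for ε_α and d̂, and a detection rate that ramps linearly from P_min far from the obstacle to P_max near it. The exact ramp shape and all numeric values are assumptions for illustration.

```python
import math

SIGMA_D = 1.5   # std. dev. of the detection error distribution (assumed)
SIGMA_A = 0.2   # std. dev. of the deceleration uncertainty (assumed)
X_O = 50.0      # obstacle position (assumed)

def collision_rate_at_rest(x_c):
    """Eq. (15): cumulative-normal collision rate at speed 0."""
    return 0.5 * (1.0 + math.erf((x_c - X_O) / (math.sqrt(2.0) * SIGMA_D)))

def p_epsilon(eps):
    """Eq. (16): normal density of the deceleration uncertainty ε_α."""
    return math.exp(-eps ** 2 / (2 * SIGMA_A ** 2)) / (math.sqrt(2 * math.pi) * SIGMA_A)

def p_detect_distance(d_hat, x_c):
    """Eq. (17): error distribution of the observed distance d̂ around x_o − x_c."""
    mu = X_O - x_c
    return math.exp(-(d_hat - mu) ** 2 / (2 * SIGMA_D ** 2)) / (math.sqrt(2 * math.pi) * SIGMA_D)

def detection_rate(x_c, p_min=0.2, p_max=1.0, d_s=40.0, d_e=10.0):
    """Eq. (18): detection rate ramping linearly from P_min (far, d ≥ d_s)
    to P_max (near, d ≤ d_e); the linear ramp is an assumption."""
    d = X_O - x_c
    if d >= d_s:
        return p_min
    if d <= d_e:
        return p_max
    return p_min + (p_max - p_min) * (d_s - d) / (d_s - d_e)
```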

< Calculation of the collision rate p(C|s) >

Next, calculation of the collision rate p (C | s) for each position and velocity will be described.

Fig. 33 to 35 are diagrams showing states on grids in a two-dimensional space of position and velocity in order to calculate the collision rate p (C | s) for each position and velocity.

1. Preparing a grid

Fig. 33 is a diagram showing a grid.

The grid shown in fig. 33 is prepared. To obtain the collision rate p(C|s) for each position and velocity, the states are divided into a grid (see fig. 33), and the calculation proceeds in order starting from the states whose collision rate is known.

The following are defined: grid index: G_x, G_v

Grid cell size: G_xsize, G_vsize

State (position, velocity): x, v

Maximum value of position and velocity: x is the number ofmax、vmax

The index → the state value is expressed by expressions (19) and (20), and the state value → the index is expressed by expressions (21) and (22) (see fig. 33).

Equations (19)-(22):

Index → state value:

x = G_x · G_xsize …(19)

v = G_v · G_vsize …(20)

State value → index:

G_x = x / G_xsize …(21)

G_v = v / G_vsize …(22)
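A minimal sketch of the index ↔ state conversions of equations (19)-(22), assuming the state value is simply index × cell size and the inverse rounds to the nearest grid point; the cell sizes are assumed values.

```python
GX_SIZE = 0.5   # grid cell width along the position axis (assumed)
GV_SIZE = 0.25  # grid cell width along the velocity axis (assumed)

def index_to_state(gx, gv):
    """Eqs. (19), (20): index → state value."""
    return gx * GX_SIZE, gv * GV_SIZE

def state_to_index(x, v):
    """Eqs. (21), (22): state value → index (nearest grid point)."""
    return round(x / GX_SIZE), round(v / GV_SIZE)
```

The two functions are inverse to each other for values lying exactly on the grid.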

2. Giving the collision rate at the edges of the mesh of the map

Fig. 34 is a diagram illustrating how the collision rate is given at the edges of the mesh of the collision rate map.

The collision rate is 1 at a position exceeding the object.

p(Gxmax,Gv)=1

The collision rate p_c(G_x, 0) at velocity 0 is the cumulative distribution of the object-detection error distribution of the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120), and is expressed by equation (23) based on equation (15).

Equation (23):

p_c(G_x, 0) = 1/2 · (1 + erf((G_x·G_xsize − x_o)/(√2·σ_d))) …(23)

x_o: position of the obstacle

σ_d: standard deviation of the error distribution of object detection by the distance detection unit
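The two boundary conditions can be sketched as follows: the far column of the grid (positions beyond the object) is set to 1, and the v = 0 row is filled with the cumulative normal of equation (23). Grid dimensions and parameter values are assumed.

```python
import math

def boundary_collision_rates(n_x, n_v, gx_size, x_o, sigma_d):
    """Boundary conditions of the collision rate map:
    eq. (23) along the v = 0 row, and 1.0 for positions beyond the object."""
    p = [[0.0] * n_v for _ in range(n_x)]
    for gx in range(n_x):                      # v = 0 row, eq. (23)
        x = gx * gx_size
        p[gx][0] = 0.5 * (1.0 + math.erf((x - x_o) / (math.sqrt(2.0) * sigma_d)))
    for gv in range(n_v):                      # p(Gxmax, Gv) = 1
        p[n_x - 1][gv] = 1.0
    return p

grid = boundary_collision_rates(n_x=11, n_v=5, gx_size=1.0, x_o=5.0, sigma_d=0.5)
```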

3. Determining the collision rate in order from the edges of the mesh of the collision rate map

Fig. 35 is a diagram illustrating the determination of the collision rate in order from the edges of the mesh of the collision rate map.

As shown by arrows q and r in fig. 35, the collision rate is obtained sequentially starting from the edges of the mesh in the collision rate map. Reference symbol s in fig. 35 indicates the computation by which the collision rate in a given state is obtained.

Here, the collision rate on the mesh is expressed as p(C|S) = p_c(G_x, G_v).

The collision rate p(C|S_c) in the current state S_c is expressed by equation (24), and the collision rate p_c(G_xc, G_vc) on the mesh is expressed by equation (25).

Equations (24), (25):

p(C|S_c) = Σ_ô p(ô|S_c) Σ_{S_n} p(S_n|S_c, α(ô)) p(C|S_n) …(24)

p_c(G_xc, G_vc) = Σ_ô p(ô|S_c) Σ_{(G_xn, G_vn)} p((G_xn, G_vn)|(G_xc, G_vc), α(ô)) p_c(G_xn, G_vn) …(25)

The calculation of the collision rate p (C | s) for each position and velocity is described above.

< Method of approximately determining the collision rate from the next state >

Next, a method of obtaining the collision probability in an approximate manner from the next state will be described.

Fig. 36 to 38 are diagrams for explaining a method of approximately determining the collision probability from the next state.

1. Determining the current speed and position from the position of the grid

Fig. 36 is a view corresponding to fig. 31 (a).

From fig. 36, the current velocity v_c and position x_c are calculated according to equations (26) and (27).

Equations (26), (27):

v_c = G_vc · G_vsize …(26)

x_c = G_xc · G_xsize …(27)

2. Set the parameters (the observed obstacle distance d̂ and the uncertainty ε_α)

3. The required deceleration is found.

The required deceleration a is calculated according to equation (28).

Equation (28):

a = −v_c²/(2(d̂ − d_M)) + ε_α …(28)

4. Determining the time to the next state

Fig. 37 is a diagram for explaining a method of determining the time to reach the next state. x_step denotes the grid spacing along the position axis, and v_step denotes the grid spacing along the velocity axis. ΔT_x denotes the time for the vehicle to reach the grid line x = x_c + x_step, and ΔT_v denotes the time to reach the grid line v = v_c − v_step. Fig. 37 (a) shows the case ΔT_x < ΔT_v, and fig. 37 (b) shows the case ΔT_x ≥ ΔT_v.

First, as shown in fig. 37(a) and (b), ΔT_x at the intersection with x = x_c + x_step and ΔT_v at the intersection with v = v_c − v_step are calculated according to equations (29) and (30).

Equations (29) and (30)

ΔT_x is adopted only when δ = v_c² + 2a·x_step is greater than 0.

Then, the smaller of the calculated ΔT_x and ΔT_v is selected according to equation (31).

ΔT = min(ΔT_v, ΔT_x) … (31)

5. Determining the next state

The next state v_n, x_n is obtained from equations (32) and (33).

v_n = v_c + aΔT … (32)

x_n = x_c + v_nΔT + 1/2 × aΔT² … (33)
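Steps 3 to 5 above can be sketched in code as follows. This is only an illustrative sketch of equations (29) to (33): the function names are assumptions, and the deceleration a is assumed to be negative (speed decreasing).

```python
import math

def time_to_next_state(v_c, a, x_step, v_step):
    """Sketch of equations (29)-(31): time until the trajectory from the
    current state (v_c, x_c) crosses the next grid line. a is the
    deceleration, assumed negative; names are illustrative."""
    # Equation (30): time until the speed drops by one grid interval v_step.
    dt_v = v_step / abs(a)
    candidates = [dt_v]
    # Equation (29): time until the position advances by one grid interval
    # x_step, i.e. the smaller positive root of x_step = v_c*t + 0.5*a*t**2.
    # As noted in the text, it is adopted only when the discriminant
    # delta = v_c**2 + 2*a*x_step is greater than 0 (otherwise the vehicle
    # stops before crossing the position grid line).
    delta = v_c ** 2 + 2.0 * a * x_step
    if delta > 0:
        candidates.append((-v_c + math.sqrt(delta)) / a)  # positive for a < 0
    # Equation (31): take the smaller of the valid times.
    return min(t for t in candidates if t > 0)

def next_state(v_c, x_c, a, dt):
    """Equations (32) and (33) as written in the text."""
    v_n = v_c + a * dt
    x_n = x_c + v_n * dt + 0.5 * a * dt ** 2
    return v_n, x_n
```

For example, with v_c = 10, a = −2, x_step = 5, v_step = 1, the velocity grid line is reached first, so ΔT = ΔT_v = 0.5.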

6. Obtaining the collision rate of the next state from grid points near the next state

Fig. 38 is a diagram showing a method for obtaining the collision rate of the next state from grid points in the vicinity of the next state; (a) shows the case of ΔT_x < ΔT_v, and (b) shows the case of ΔT_x ≥ ΔT_v.

As shown in fig. 38(a) and (b), the collision rate p_c(g_xn, g_vn) of the next state is obtained according to equation (34) from the grid points (G_xc, G_vc−1), (G_xc+1, G_vc−1), and (G_xc+1, G_vc) in the vicinity of the next state (g_xn, g_vn) (see the ● marks in fig. 38(a) and (b)). Here, m_x is the distance from the present state (G_xc, G_vc) to the next state (g_xn, g_vn) normalized by x_step, and therefore takes values from 0 to 1. m_v is the speed difference from the next state (g_xn, g_vn) to (G_xc+1, G_vc−1) normalized by v_step, and therefore also takes values from 0 to 1. In addition, since the spacing between grid points is 1, adjacent states can be written as +1 or −1.

Equation (34)
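The text does not reproduce equation (34) itself, but the description (normalized offsets m_x and m_v in [0, 1], grid points surrounding the next state) matches a standard bilinear interpolation. The following is therefore only a plausible sketch under that assumption, not the patent's exact formula.

```python
def interp_collision_rate(p00, p10, p01, p11, m_x, m_v):
    """Bilinear sketch: p00 = rate at (G_xc, G_vc), p10 at (G_xc+1, G_vc),
    p01 at (G_xc, G_vc-1), p11 at (G_xc+1, G_vc-1); m_x and m_v are the
    normalized position and speed offsets of the next state, both in [0, 1].
    The assignment of corners is an assumption for illustration."""
    assert 0.0 <= m_x <= 1.0 and 0.0 <= m_v <= 1.0
    return ((1.0 - m_x) * (1.0 - m_v) * p00 + m_x * (1.0 - m_v) * p10
            + (1.0 - m_x) * m_v * p01 + m_x * m_v * p11)
```

At m_x = m_v = 0 this reduces to the rate at the current grid point, and when all four corners agree it returns that common value, as any interpolation should.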

7. Accumulating while sweeping over the parameter ε_α

Based on the calculated collision rate p_c(g_xn, g_vn) of the next state, the collision rate p_c(G_xc, G_vc) of the present state is obtained by equation (35), a summation that sweeps over the parameter ε_α.

Equation (35)

Note the following chain of dependencies. p_c(g_xn, g_vn) in equation (35) is given by equation (34); m_x and m_v in equation (34) are determined from the next state (g_xn, g_vn); the next state (g_xn, g_vn) is given by equations (32) and (33); equations (32) and (33) are functions of the deceleration a and ΔT; ΔT is obtained by equation (31) from ΔT_v and ΔT_x; and ΔT_v and ΔT_x are given as functions of the deceleration a by equations (29) and (30). The deceleration a is represented by equation (28), which includes the uncertainty ε_α of the deceleration a. That is, the next state (g_xn, g_vn) is obtained by a calculation in which the uncertainty ε_α is added to the deceleration a.

That is, p_c(g_xn, g_vn) in equation (35) is a value reflecting the uncertainty ε_α of the deceleration a. In equation (35), the summation target includes a term in which p_c(g_xn, g_vn) is multiplied by the probability P(ε_α) of the uncertainty ε_α of the deceleration a, which constitutes the sweep over the uncertainty ε_α. Therefore, the collision rate p_c(G_xc, G_vc) of the current state given by equation (35) is a value reflecting the uncertainty ε_α of the deceleration a. This enables the collision rate to be handled continuously.
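The sweep over ε_α described above can be sketched as an expectation over a discretized normal distribution. The exact form of equation (35) is not shown in the text, so the weighting below (normalized Gaussian weights over a truncated range, multiplied by the detection rate as stated later) is an assumption, and all names are illustrative.

```python
import math

def collision_rate_current(next_rate, detect_rate, sigma_alpha, n=11, width=3.0):
    """Sketch of the summation in equation (35): the current-state collision
    rate as the expectation of the next-state collision rate over the
    deceleration uncertainty eps_alpha ~ N(0, sigma_alpha**2), multiplied by
    the detection rate. next_rate(eps) is assumed to return p_c(g_xn, g_vn)
    for the next state reached with deceleration a + eps."""
    # Discretize eps_alpha over [-width*sigma, +width*sigma].
    eps_values = [width * sigma_alpha * (2.0 * i / (n - 1) - 1.0)
                  for i in range(n)]
    weights = [math.exp(-e ** 2 / (2.0 * sigma_alpha ** 2)) for e in eps_values]
    total = sum(weights)  # normalize so the weights sum to 1 (P(eps_alpha))
    return detect_rate * sum(w / total * next_rate(e)
                             for w, e in zip(weights, eps_values))
```

With a constant next-state rate the expectation collapses to that rate times the detection rate, which is a quick sanity check on the normalization.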

It should also be noted that the summation target of equation (35) is multiplied by the detection rate of object detection by the distance detection unit (detection device DD, vehicle sensor 60, and automatic driving control unit 120). This detection rate follows the characteristic represented by equation (18) and shown in fig. 31(d). In a region where the detection rate of the distance detection unit is low, that is, a region distant from the object, the collision rate p_c(G_xc, G_vc) given by equation (35) is evaluated as low. In the region where the collision rate is evaluated as low on the collision rate map, as described with reference to fig. 8, the action plan generating unit 200 can generate an action plan in which deceleration is started within a range that does not affect comfort.

< Method for obtaining the approximate collision rate map >

The collision rate map set in step S12 of the flow of the embodiment of the present application is calculated approximately by equations (16) to (18), (23), and (26) to (35), assuming the situation shown in fig. 36. That is, the collision rate map is calculated assuming that an obstacle exists at the detection distance detected by the sensor configuration that currently detects the obstacle. Hereinafter, this collision rate map is referred to as an "approximate collision rate map".

The term "approximate collision rate map" is used for ease of explanation: unlike the "collision rate map" described above, for which the true position of the object is known, it is calculated under the assumption, shown in fig. 36, that an obstacle exists at the detection distance detected by the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120).

In fig. 36 and equation (28), the detection distance is the distance detected by the combination (sensor configuration) of sensors that detect the obstacle from among the plurality of sensors of the detection device DD, included in the detection information output by the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120), and d_M is the deviation amount determined by the indicated value deviation amount calculation unit 230.

The collision rate at each position where the velocity v is 0, which is one edge of the approximate collision rate map used in the calculation, is given by equation (23), with the value of the detection distance substituted for the true value x_o of the position of the object. At the other edge of the approximate collision rate map, that is, at the position of the detection distance, the collision rate at v ≠ 0 is assumed to be 1.

Based on the collision rate at each position at velocity v = 0 and the collision rate at the position of the detection distance at v ≠ 0, the collision rate at a point on the approximate collision rate map is approximately obtained according to equation (34).
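The boundary conditions just described (equation (23) along the v = 0 edge with the detection distance substituted for x_o, and a collision rate of 1 at the detection-distance edge for v ≠ 0) can be sketched as a grid initialization. The grid layout and names below are assumptions for illustration.

```python
def init_approx_map(n_x, n_v, v0_rates):
    """Initialize the boundary of the approximate collision rate map.
    Axis 0 is the position index (index n_x - 1 being the detection
    distance), axis 1 is the velocity index (index 0 meaning v = 0).
    v0_rates[i] is the v = 0 collision rate at position i, as would be
    given by equation (23) with the detection distance in place of x_o."""
    rate = [[None] * n_v for _ in range(n_x)]
    for i in range(n_x):
        rate[i][0] = v0_rates[i]      # v = 0 edge from equation (23)
    for j in range(1, n_v):
        rate[n_x - 1][j] = 1.0        # detection-distance edge, v != 0
    return rate                       # interior filled by iterating eq. (35)
```

The `None` entries mark interior grid points whose collision rate is then filled in by repeating the summation of equation (35) from these edges inward.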

For the detection rate term of equation (35), equation (18) is applied at each grid point on the approximate collision rate map. For x_c in equation (18), the position x given by equation (19) is used.

The distance term of equation (35) is calculated using equation (17), again substituting the detection distance for the true value x_o of the position of the object. This term of equation (35) corresponds to the "probability distribution 1003 obtained based on the fusion accuracy reliability". As described above, the probability distribution 1003 obtained based on the fusion accuracy reliability can be obtained as a measured value through the flow of measuring the characteristic of object detection by the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120) in a situation where the true value of the distance to the object is known. The width of the probability distribution 1003 depends on the detection distance, becoming a normal distribution that widens as the detection distance increases. Therefore, the parameter σ_d of the normal distribution characteristic expressed by equation (17) is measured in advance as a function of the detection distance of the distance detection unit, and its value is set according to the detection distance. Equation (23), which gives the collision rate at each position at velocity v = 0, also uses this value of σ_d.

P(ε_α) in equation (35) is given by equation (16). The parameter σ_α of the normal distribution characteristic expressed by equation (16) can be obtained by measurement in advance. Using the measured value, the value calculated by equation (16) is used as the value of P(ε_α).

p_c(g_xn, g_vn) in equation (35) is given by equation (34).

The deviation amount of the indicated value (that is, d_M) used in the calculation of the "approximate collision rate map" is obtained from the predetermined target collision rate, the indicated value achievement probability density distribution 1001, and the probability distribution 1003 obtained based on the fusion accuracy reliability. The indicated value achievement probability density distribution 1001 is measured in advance through the flow of measuring the stop position when a certain deceleration is instructed to the host vehicle 1, and is regarded as a function of the deceleration. The probability distribution 1003 obtained based on the fusion accuracy reliability is a characteristic measured in advance through the flow of examining how the error of the detection distance is distributed in a state where the true value of the distance is known, and is regarded as a function of the detection information (sensor configuration, detection time, and detection distance) output from the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120).

As is apparent from the above description, the "approximate collision rate map" can be regarded as a function of the deceleration and the detection information (sensor configuration, detection time, and detection distance) output by the distance detection section (the detection device DD, the vehicle sensor 60, and the automatic driving control section 120).

< Method for obtaining an approximate collision rate map during driving in the embodiment >

In the embodiment, a plurality of "approximate collision rate maps" are created assuming a plurality of situations in advance, and stored in the collision rate map storage unit 1010, and the collision rate map setting unit 250 selects the "approximate collision rate map" that is close to the current situation while the host vehicle 1 is traveling.

As described above, the "approximate collision rate map" is regarded as a function of the deceleration and the detection information (sensor configuration, detection time, and detection distance) output by the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120). Therefore, a table (the approximate collision rate map table, not shown) having the deceleration, sensor configuration, detection time, and detection distance as input parameters and an "approximate collision rate map" as output is created in advance and stored in the collision rate map storage unit 1010. During travel of the host vehicle 1, the collision rate map setting unit 250 inputs to this table the maximum deceleration currently set by the behavior pattern generation unit 200, the sensor configuration indicating the combination of sensors currently detecting the obstacle, obtained from the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120), the detection time during which that sensor combination has continuously detected the obstacle, and the detection distance, and determines the "approximate collision rate map" output from the table as the "approximate collision rate map" currently used.
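The table lookup described above can be sketched as a dictionary keyed by quantized input parameters. The class name, quantization steps, and key layout are illustrative assumptions, not the patent's data format.

```python
def quantize(value, step):
    """Snap a continuous input parameter to its quantization cell."""
    return round(value / step) * step

class CollisionRateMapTable:
    """Sketch of the approximate-collision-rate-map table held in the
    collision rate map storage unit 1010. Quantization widths trade
    accuracy against capacity, as discussed in the text."""
    def __init__(self, decel_step=0.5, dist_step=5.0, time_step=0.1):
        self._table = {}
        self._steps = (decel_step, dist_step, time_step)

    def _key(self, max_decel, sensor_config, detect_time, detect_dist):
        d_step, x_step, t_step = self._steps
        return (quantize(max_decel, d_step), sensor_config,
                quantize(detect_time, t_step), quantize(detect_dist, x_step))

    def store(self, max_decel, sensor_config, detect_time, detect_dist, rate_map):
        key = self._key(max_decel, sensor_config, detect_time, detect_dist)
        self._table[key] = rate_map

    def lookup(self, max_decel, sensor_config, detect_time, detect_dist):
        key = self._key(max_decel, sensor_config, detect_time, detect_dist)
        return self._table.get(key)  # None if no map was prepared
```

Inputs that fall within the same quantization cell retrieve the same stored map, which is what makes a precomputed table usable against slightly varying runtime measurements.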

By using this table, the consumption of computing resources can be reduced and the "approximate collision rate map" can be set at high speed.

In this approximate collision rate map table, as the amount of information in the input parameters increases (as the quantization width decreases), more accurate "approximate collision rate maps" can be stored, but the required capacity increases. The amount of input information should be determined by the trade-off between accuracy and capacity.

Further, the table capacity can be reduced by not storing the collision rates at all grid points of each "approximate collision rate map". For example, the collision rates in regions sufficiently above the target collision rate and in regions sufficiently below it can be treated as fixed values, and the data format can be chosen so that they are not stored in the collision rate map storage unit 1010.

Unlike the above-described embodiment, in which the approximate collision rate maps are stored in the collision rate map storage unit 1010 and the map close to the current situation is selected while the host vehicle 1 is traveling, the collision rate map setting unit 250 may calculate the "approximate collision rate map" in real time during traveling from the currently set maximum deceleration, the current sensor configuration obtained from the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120), the detection time, the detection distance, and various characteristics of the host vehicle 1 measured in advance. According to this embodiment, the collision rate map storage unit 1010 can be omitted. In this embodiment, the approximate collision rate map can be calculated by equations (16) to (18), (23), and (26) to (35), as described above in relation to STEP4 as the method of calculating the approximate collision rate map using the deviation amount determined by the instruction value deviation amount calculation unit 230.

In the embodiment in which the "approximate collision rate map" is calculated in real time, the calculated "approximate collision rate map" may be stored in the collision rate map storage unit 1010 as an entry of the approximate collision rate map table, and the stored map may be read and reused when a similar situation is encountered later. Approximate collision rate maps with a low frequency of use may be deleted from the table in the collision rate map storage unit 1010. According to this embodiment, the capacity required for the collision rate map storage unit 1010 can be reduced.

Also in the embodiment in which the "approximate collision rate map" is calculated in real time, the collision rates need not be calculated at all grid points of the "approximate collision rate map", so that the consumption of calculation resources can be reduced, as can the capacity needed to store the "approximate collision rate map" in the collision rate map storage unit 1010.

The method of obtaining the "approximate collision probability map" in the embodiment is described in detail above.

The vehicle control device 100 of the embodiment includes: an action plan generating unit 200 that generates an action plan for automatic driving of the host vehicle 1; a vehicle behavior control unit 160 that controls at least the speed of the host vehicle 1 based on the behavior pattern; and a distance detection unit (detection device DD, vehicle sensor 60, and automatic driving control unit 120) that detects an object and outputs detection information relating to detection of the object. The behavior pattern generation unit 200 sets a maximum deceleration of the host vehicle 1 during automatic driving and includes a collision rate map setting unit 250 that, when the distance detection unit detects an obstacle, determines a collision rate map, that is, a two-dimensional map representing the distribution, in the two-dimensional space of position and speed, of the collision rate indicating the possibility of collision between the host vehicle 1 and the obstacle. The collision rate map is prepared on the premise of a target stop position set based on the combination of a predetermined target collision rate, the maximum deceleration, and the detection information, and the behavior pattern generation unit 200 generates the current behavior pattern based on the collision rate map, the predetermined target collision rate, and the current position and speed of the host vehicle 1.

In the related art, the speed of the vehicle is controlled based on a position error probability distribution, as in patent document 1, or the possibility of collision with the own vehicle is determined, as in patent document 2. Either technique performs speed control based on the position error probability distribution or the collision possibility alone, with no connection to the behavior pattern of automatic driving. Therefore, when determining an action plan using a sensor, the following problems arise: the recognition distance is insufficient and cannot be increased while keeping the reliability of the sensor high (problem 1); and the safety level cannot be clarified, because the reliability (accuracy, detection rate) of the sensor and the accuracy of the action plan cannot be quantified (problem 2).

In contrast, the present embodiment introduces a collision rate map, that is, a two-dimensional map showing the distribution, in the two-dimensional space of position and speed, of the collision rate indicating the possibility of collision between the host vehicle 1 and the obstacle, and generates the current behavior pattern based on the relationship between the collision rate at the position and speed of the host vehicle 1 on the collision rate map and the positions and speeds having the target collision rate on that map. Thus, this relationship can be grasped in the two-dimensional space of position and speed, and an action plan taking both safety and comfort into consideration can be generated in automatic driving.

The collision rate map of the present embodiment defines a plurality of grid points in the two-dimensional space, determines a collision rate for each grid point, and the collision rate at each grid point represents a probability of collision between the host vehicle 1 and the obstacle when the vehicle behavior control unit 160 instructs the host vehicle 1 to decelerate from a position and a speed corresponding to the grid point and stop at the target stop position with the maximum deceleration as an upper limit.

According to this configuration, the action pattern generating unit 200 can generate the action pattern based on the collision rate on the assumption of the currently set maximum deceleration.

When the current position and speed of the host vehicle 1 fall within the low collision rate region shown on the collision rate map (see fig. 14), that is, a region in which the collision rate is lower than a 1st threshold value that is lower than the predetermined target collision rate, an action plan that maintains the speed of automatic driving while allowing sudden braking is generated as the current action plan. When the current position and speed of the host vehicle 1 fall within the high collision rate region on the collision rate map (see fig. 14), that is, a region in which the collision rate is lower than the predetermined target collision rate and equal to or higher than a 2nd threshold value that is equal to or higher than the 1st threshold value, an action plan that executes preliminary braking, which avoids sudden braking by repeating braking for short periods, is generated as the current action plan.

In this way, in the low collision rate region of the two-dimensional space of the collision rate map (see fig. 14), where the collision rate is lower than the 1st threshold value below the target collision rate, the behavior pattern generation unit 200 generates a behavior pattern that maintains the set speed of autonomous driving while allowing sudden braking; in the high collision rate region (see fig. 14), where the collision rate is lower than the target collision rate and equal to or higher than the 2nd threshold value, it generates a behavior pattern of preliminary braking that avoids sudden braking by repeating braking for short periods. By controlling the automatic driving of the host vehicle 1 in this way, safety and comfort can be achieved at the same time.

The distance detection unit (detection device DD, vehicle sensor 60, and automatic driving control unit 120) according to the present embodiment includes a plurality of sensors, and the detection information includes: a sensor configuration that represents the combination of sensors that detect the obstacle from among the plurality of sensors; the detection distance detected by that sensor configuration; and the detection time during which that sensor configuration has continuously detected the obstacle. The target stop position is determined so that the collision rate calculated by convolving the indicated value achievement probability density distribution 1001 with the error distribution (the probability distribution 1003 obtained based on the fusion accuracy reliability) equals the predetermined target collision rate. The indicated value achievement probability density distribution 1001 is the probability density distribution of the position where the vehicle actually stops when the vehicle behavior control unit 160 instructs the host vehicle 1 to stop at the target stop position; it is calculated in advance, based on the stop operation performed by the host vehicle 1 for a specified deceleration, as a distribution estimated for the maximum deceleration from the vehicle stop characteristics of the host vehicle 1. The error distribution (probability distribution 1003 obtained based on the fusion accuracy reliability) is distributed around the position indicated by the detection distance and represents the difference between the true distance to the obstacle and the detection distance; it is estimated for the sensor configuration, the detection distance, and the detection time based on the distance detection characteristics of the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120), measured in advance using the distance detection unit in a situation where the true value of the distance to the object is known.

The error distribution, a probability distribution indicating the difference between the detection distance detected by the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120) and the true distance to the obstacle, can be estimated from the sensor configuration indicating the combination of sensors that detect the obstacle out of the plurality of sensors of the distance detection unit, the detection distance detected by that sensor configuration, and the detection time during which that sensor configuration has continuously detected the obstacle. The overlap between the error distribution and the indicated value achievement probability density distribution at the maximum deceleration represents the collision rate of the host vehicle 1. By determining the target stop position so that this collision rate becomes, for example, a predetermined target collision rate equal to or lower than the collision rate expected in manual driving, the collision rate map can be determined so that automatic driving is safer than human driving.

With the above configuration, the action plan generating unit 200 can determine the target stop position and the collision rate map using characteristics actually measured for the host vehicle 1 and the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120).

In the vehicle control device 100 according to the present embodiment, the collision rate p_c(G_xc, G_vc) at a grid point (G_xc, G_vc) on the collision rate map is obtained by the collision rate map setting unit 250 using equation (35).

In equation (35), one factor represents the detection rate of the obstacle by the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120) at the grid point (G_xc, G_vc), and another represents the probability that the distance detection unit detects the detection distance at the grid point (G_xc, G_vc). p_c(g_xn, g_vn) is the collision rate at the point (g_xn, g_vn) on the collision rate map to which the host vehicle 1 is likely to transition when the vehicle behavior control unit 160 instructs the host vehicle 1 to decelerate at a from the grid point (G_xc, G_vc) to stop at the target stop position. P(ε_α) represents the probability taken by the uncertainty ε_α, which represents the difference between the deceleration actually performed by the host vehicle 1 and the indicated deceleration a.

The point (g_xn, g_vn) on the collision rate map is obtained by adding the uncertainty ε_α to the deceleration a. The collision rate at grid points where the speed on the collision rate map is 0 is given based on the result of measuring the characteristics of the distance detection unit in advance. At grid points where the position on the collision rate map is the detection distance and the velocity is not 0, a predetermined value is given to the collision rate.

p_c(g_xn, g_vn) is determined by approximation based on the collision rates at grid points near the point (g_xn, g_vn).

The summation of equation (35) is repeated, starting from the grid point corresponding to speed 0 and position equal to the detection distance, in the direction of increasing speed and/or the direction approaching the host vehicle 1 from the detection distance, thereby obtaining the collision rate at each grid point (G_xc, G_vc) of the collision rate map.

The collision rate map thus obtained takes the uncertainty ε_α of the deceleration a into account, and therefore better approximates the state transitions that occur during actual driving.

The summation target of equation (35) is multiplied by the detection rate of the distance detection unit (detection device DD, vehicle sensor 60, and automatic driving control unit 120). When the detection distance is long, this detection rate takes a lower value. In a region where the detection rate of the distance detection unit is low, that is, a region distant from the object, the collision rate is evaluated as low by equation (35). In the region where the collision rate is evaluated as low on the collision rate map, the action plan generating unit 200 can choose to start deceleration within a range that does not affect comfort, as described with reference to fig. 8.

In the present embodiment, the probability P(ε_α) has a normal distribution with a standard deviation determined based on the characteristics of the host vehicle 1 measured in advance.

With such a configuration, a collision rate map that reflects the uncertainty of the actual deceleration operation of the host vehicle 1 can be obtained.

The action plan generating unit 200 of the present embodiment includes a collision rate map storage unit 1010 that stores a plurality of collision rate maps obtained from the result of a flow of measuring a stop position when a certain deceleration is instructed to the own vehicle 1, the result of a flow of measuring a characteristic of object detection obtained by the distance detecting unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120) in a situation where a true value of the distance to the object is known, and the predetermined target collision rate. The collision rate map setting unit 250 selects one of the plurality of collision rate maps stored in the collision rate map storage unit 1010 based on the maximum deceleration and the detection information, and determines it as a collision rate map to be used.

With this configuration, the collision probability map can be determined without performing calculation for obtaining the collision probability map during traveling, and thus, the consumption of calculation resources can be reduced.

Alternatively, the collision rate map setting unit 250 of the present embodiment may determine the collision rate map by calculating it in real time during traveling, using parameters indicating the characteristics of the host vehicle 1 and the distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120), derived from the result of the flow of measuring the stop position when a certain deceleration is instructed to the host vehicle 1 and the result of the flow of measuring the characteristics of object detection by the distance detection unit in a situation where the true value of the distance to the object is known.

The action plan generating unit 200 according to the present embodiment may further include a collision rate map storage unit 1010 that stores the collision rate map calculated by the collision rate map setting unit 250 in real time during traveling, and the collision rate map setting unit 250 may determine the collision rate map stored in the collision rate map storage unit 1010 as the collision rate map to be used when the collision rate map corresponding to the predetermined target collision rate, the maximum deceleration, and the probe information is stored in the collision rate map storage unit 1010.

With this configuration, repeated calculation of the same collision rate map can be omitted, and the storage resources required by the collision rate map storage unit 1010 can be reduced.

The distance detection unit (the detection device DD, the vehicle sensor 60, and the automatic driving control unit 120) according to the present embodiment may include a plurality of sensors for detecting an obstacle, and the detection information may include: a sensor configuration that represents the combination of sensors that detect the obstacle from among the plurality of sensors; and the detection distance detected by that sensor configuration. When the obstacle is detected by only some of the sensors and the detection distance is equal to or greater than a predetermined distance threshold value, the behavior pattern generation unit 200 generates, as the current behavior pattern, a behavior pattern in which the cruise operation is continued while the obstacle is regarded as absent. When the obstacle is detected by only some of the sensors and the detection distance is less than the predetermined distance threshold value, the behavior pattern generation unit 200 generates, as the current behavior pattern, a behavior pattern in which preliminary braking is performed while the obstacle is regarded as present.

With this configuration, when a conflicting sensor state occurs due to a sensor failure, with some sensors reporting no obstacle while others observe an obstacle at some distance, comfort is ensured when the observed obstacle distance is equal to or greater than the predetermined threshold value, and safety is ensured when it is less than the predetermined threshold value.
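The conflict-handling rule above can be sketched as a simple decision function; the function name, parameters, and return labels are illustrative assumptions.

```python
def behavior_on_partial_detection(detected_sensors, total_sensors,
                                  detect_dist, dist_threshold):
    """Sketch of the rule above: when only some of the sensors detect the
    obstacle, comfort is favored at long range and safety at short range."""
    if detected_sensors == 0:
        return "cruise"                       # nothing detected
    if detected_sensors == total_sensors:
        return "normal obstacle handling"     # all sensors agree
    # conflicting sensor state (possible sensor failure)
    if detect_dist >= dist_threshold:
        return "cruise"                       # treat the obstacle as absent
    return "preliminary braking"              # treat the obstacle as present
```

The distance threshold separates the two conflicting cases: far enough that ignoring the partial detection preserves comfort, or close enough that preliminary braking is warranted for safety.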

The above-described embodiments are described in detail for easy understanding of the present invention, and the invention is not limited to having all the configurations described. Part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of one embodiment. In addition, other configurations can be added to, deleted from, or substituted for part of the configuration of each embodiment.

The vehicle control device and the vehicle control method according to the present invention can also be realized by a program for causing a computer to function as the vehicle control device and the vehicle control method. The program may be stored in a recording medium that can be read by a computer.
