Obstacle avoidance trajectory planning for autonomous vehicles

Document No.: 1966917 | Publication date: 2021-12-14

Note: This technology, Obstacle avoidance trajectory planning for autonomous vehicles, was designed and created by Yu Ning, Zhu Fan, and Xue Jingjing on 2020-02-26. Its main content is summarized as follows: A computer-implemented method for operating an ADV is disclosed. A first trajectory along which the ADV is to drive is planned (S1401). The ADV is driven automatically along the first trajectory (S1402). An obstacle in an affected area of the ADV is detected based on sensor data obtained from a plurality of sensors mounted on the ADV (S1403). An expected dwell time of the obstacle in the affected area is determined (S1404). Whether to plan a second trajectory or to wait for the obstacle to leave the affected area is determined based on the expected dwell time of the obstacle in the affected area (S1405). Either a second trajectory along which the ADV is to drive is planned and the ADV is driven automatically along it, or the ADV waits for the obstacle to leave the affected area and then drives automatically along the first trajectory (S1406).

1. A computer-implemented method for operating an autonomous driving vehicle (ADV), the method comprising:

sensing a driving environment around the ADV based on sensor data obtained from a plurality of sensors mounted on the ADV, including determining an affected area of the ADV;

planning a first trajectory based on the driving environment to automatically drive the ADV through at least a portion of the affected area;

in response to detecting an obstacle located within the affected area, determining an expected dwell time of the obstacle, the expected dwell time representing an amount of time that the obstacle is expected to stay within the affected area; and

based on the expected dwell time, determining whether to plan a second trajectory in accordance with the obstacle staying within the affected area or to wait for a period of time to allow the obstacle to leave the affected area in order to control the ADV in accordance with the first trajectory.

2. The method of claim 1, further comprising: determining that the obstacle blocks at least a portion of the first trajectory, wherein the expected dwell time is determined if the obstacle blocks at least a portion of the first trajectory within the affected area.

3. The method of claim 1, further comprising: in response to determining that the expected dwell time is greater than a predetermined threshold, planning the second trajectory to replace the first trajectory.

4. The method of claim 1, wherein determining the expected dwell time of the obstacle comprises: calculating a probability of a dwell time of the obstacle in the affected area using a probability density function, and wherein the expected dwell time of the obstacle in the affected area is determined based on the probability of dwell time.

5. The method of claim 4, wherein the probability density function is defined as follows:

f(x) = λe^(-λx)

where x represents the amount of time that the obstacle is expected to stay within the affected area, and λ is determined based on the driving environment.

6. The method of claim 5, wherein the probability of dwell time is determined by calculating an integral of the probability density function given a particular dwell time candidate.

7. The method of claim 4, further comprising:

determining, further in accordance with the expected dwell time, a first estimated time of arrival (ETA) based on a first trajectory length of the first trajectory and an average speed of the ADV associated with the first trajectory; and

determining a second ETA based on a second trajectory length of the second trajectory and an average speed of the ADV associated with the second trajectory, wherein determining whether to drive the ADV according to the first trajectory or the second trajectory is based on the first ETA and the second ETA.

8. The method of claim 7, wherein if the second ETA is shorter than the first ETA, the ADV is driven according to the second trajectory; otherwise, the ADV waits and then drives according to the first trajectory.

9. A non-transitory machine-readable medium having instructions stored therein, which when executed by a processor, cause the processor to perform operations for operating an autonomous driving vehicle (ADV), the operations comprising:

sensing a driving environment around the ADV based on sensor data obtained from a plurality of sensors mounted on the ADV, including determining an affected area of the ADV;

planning a first trajectory based on the driving environment to automatically drive the ADV through at least a portion of the affected area;

in response to detecting an obstacle located within the affected area, determining an expected dwell time of the obstacle, the expected dwell time representing an amount of time that the obstacle is expected to stay within the affected area; and

based on the expected dwell time, determining whether to plan a second trajectory in accordance with the obstacle staying within the affected area or to wait for a period of time to allow the obstacle to leave the affected area in order to control the ADV in accordance with the first trajectory.

10. The machine-readable medium of claim 9, wherein the operations further comprise: determining that the obstacle blocks at least a portion of the first trajectory, wherein the expected dwell time is determined if the obstacle blocks at least a portion of the first trajectory within the affected area.

11. The machine-readable medium of claim 9, wherein the operations further comprise: in response to determining that the expected dwell time is greater than a predetermined threshold, planning the second trajectory to replace the first trajectory.

12. The machine-readable medium of claim 9, wherein determining the expected dwell time of the obstacle comprises: calculating a probability of a dwell time of the obstacle in the affected area using a probability density function, and wherein the expected dwell time of the obstacle in the affected area is determined based on the probability of dwell time.

13. The machine-readable medium of claim 12, wherein the probability density function is defined as:

f(x) = λe^(-λx)

where x represents the amount of time that the obstacle is expected to stay within the affected area, and λ is determined based on the driving environment.

14. The machine-readable medium of claim 13, wherein the probability of dwell time is determined by calculating an integral of the probability density function given a particular dwell time candidate.

15. The machine-readable medium of claim 12, wherein the operations further comprise:

determining, further in accordance with the expected dwell time, a first estimated time of arrival (ETA) based on a first trajectory length of the first trajectory and an average speed of the ADV associated with the first trajectory; and

determining a second ETA based on a second trajectory length of the second trajectory and an average speed of the ADV associated with the second trajectory, wherein determining whether to drive the ADV according to the first trajectory or the second trajectory is based on the first ETA and the second ETA.

16. The machine-readable medium of claim 15, wherein if the second ETA is shorter than the first ETA, the ADV is driven according to the second trajectory; otherwise, the ADV waits and then drives according to the first trajectory.

17. A data processing system comprising:

a processor; and

a memory coupled to the processor to store instructions that, when executed by the processor, cause the processor to perform operations for operating an autonomous driving vehicle (ADV), the operations comprising:

sensing a driving environment around the ADV based on sensor data obtained from a plurality of sensors mounted on the ADV, including determining an affected area of the ADV;

planning a first trajectory based on the driving environment to automatically drive the ADV through at least a portion of the affected area;

in response to detecting an obstacle located within the affected area, determining an expected dwell time of the obstacle, the expected dwell time representing an amount of time that the obstacle is expected to stay within the affected area; and

based on the expected dwell time, determining whether to plan a second trajectory in accordance with the obstacle staying within the affected area or to wait for a period of time to allow the obstacle to leave the affected area in order to control the ADV in accordance with the first trajectory.

18. The system of claim 17, wherein the operations further comprise: determining that the obstacle blocks at least a portion of the first trajectory, wherein the expected dwell time is determined if the obstacle blocks at least a portion of the first trajectory within the affected area.

19. The system of claim 17, wherein the operations further comprise: in response to determining that the expected dwell time is greater than a predetermined threshold, planning the second trajectory to replace the first trajectory.

20. The system of claim 17, wherein determining the expected dwell time of the obstacle comprises: calculating a probability of a dwell time of the obstacle in the affected area using a probability density function, and wherein the expected dwell time of the obstacle in the affected area is determined based on the probability of dwell time.
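For reference, a worked form of the integrals recited in claims 5 to 6 and 13 to 14, assuming only the exponential density given there (X denotes the dwell time, T a particular dwell time candidate, and λ the environment-dependent rate):

P(X ≤ T) = ∫_0^T λe^(-λx) dx = 1 - e^(-λT), and E[X] = ∫_0^∞ x·λe^(-λx) dx = 1/λ.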

Technical Field

Embodiments of the present disclosure generally relate to operating an autonomous vehicle. More particularly, embodiments of the present disclosure relate to trajectory planning methods for autonomous driving vehicles (ADVs).

Background

Vehicles operating in an autonomous driving mode (e.g., unmanned) may relieve occupants, particularly the driver, from some driving-related duties. When operating in an autonomous driving mode, the vehicle may be navigated to various locations using onboard sensors, allowing the vehicle to travel with minimal human interaction or in some cases without any passengers.

Motion planning and control are key operations in autonomous driving. ADVs may need to drive in both road driving scenarios with lane boundaries and free space driving scenarios without lane boundaries. Conventional motion planning methods for road driving scenarios may require topological maps and specific road boundaries. Therefore, conventional motion planning methods for road driving scenarios have difficulty dealing with complex scenarios such as parking, three-point turning, and obstacle avoidance with a combination of forward and backward trajectories. Conventional free space path planning methods are slow in generating trajectories in real time and may result in poor obstacle avoidance performance.

Drawings

Embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which like reference numerals refer to similar elements.

FIG. 1 is a block diagram illustrating a networked system according to one embodiment.

FIG. 2 is a block diagram illustrating an example of an autonomous vehicle according to one embodiment.

FIGS. 3A-3B are block diagrams illustrating an example of a perception and planning system for use with an autonomous vehicle, according to one embodiment.

FIG. 4 is a block diagram illustrating an example of a routing module and a planning module, according to one embodiment.

FIG. 5A is a process flow diagram illustrating an example of operating in an on-lane mode according to one embodiment.

FIG. 5B is a process flow diagram illustrating an example of operating in an open space mode according to one embodiment.

FIG. 6 is a block diagram illustrating an example of a routing module and a planning module including an obstacle avoidance module according to one embodiment.

FIG. 7 is a process flow diagram illustrating an example of obstacle avoidance according to one embodiment.

FIGS. 8A-8D show examples in which an ADV drives in a first driving region having lane boundaries and/or in a second driving region that is an open space without lane boundaries.

FIGS. 9A-9B illustrate detailed operations of operating in the open space mode according to one embodiment.

FIG. 10 shows an example of an ADV encountering an obstacle that impedes the movement of the ADV.

FIG. 11 is a graph illustrating a probability density function of the dwell time of a dynamic obstacle in an affected area.

FIG. 12 is a flow chart illustrating an example of a process for an ADV to operate in one of an on-lane mode or an open space mode, according to one embodiment.

FIG. 13 is a flow chart illustrating an example of a process for an ADV to operate in an open space mode, according to one embodiment.

FIG. 14 is a flow chart illustrating an example of a process for an ADV to avoid obstacles, according to one embodiment.

Detailed Description

Various embodiments and aspects of the disclosure will be described with reference to details discussed below, and the accompanying drawings will illustrate the various embodiments. The following description and drawings are illustrative of the disclosure and are not to be construed as limiting the disclosure. Numerous specific details are described to provide a thorough understanding of various embodiments of the present disclosure. However, in certain instances, well-known or conventional details are not described in order to provide a concise discussion of embodiments of the present disclosure.

Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment.

According to some embodiments, a new method for obstacle avoidance is disclosed. The method includes determining a probability of a dwell time of an obstacle (e.g., a vehicle, pedestrian, animal, etc.) in an affected area of the ADV, calculating an expected dwell time (Tw) of the obstacle in the affected area, and, when the ADV encounters an obstacle that impedes its movement, making a decision as to whether the ADV should wait or replan its trajectory.

According to one embodiment, a computer-implemented method for operating an ADV is disclosed. An obstacle in an affected area of the ADV is detected based on sensor data obtained from a plurality of sensors mounted on the ADV while the ADV is controlled to drive automatically along a first trajectory. An expected dwell time of the obstacle in the affected area is determined. Whether to plan a second trajectory or to wait for the obstacle to leave the affected area is determined based on the expected dwell time of the obstacle in the affected area. Based on that determination, either a second trajectory for the ADV to drive along is planned and the ADV is controlled to drive automatically along the second trajectory, or the ADV is controlled to wait for the obstacle to leave the affected area and then to drive automatically along the first trajectory.

In one embodiment, the method further comprises operating in an open space mode in a driving area type that is an open space without lane boundaries. In one embodiment, determining the expected dwell time of the obstacle in the affected area comprises: determining a probability of a dwell time of the obstacle in the affected area using a probability density function, wherein the expected dwell time of the obstacle in the affected area is determined based on the probability of dwell time.
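For illustration, a minimal Python sketch of this dwell-time model, assuming the exponential density f(x) = λe^(-λx) used in this disclosure; the function names and the example rate value are hypothetical, not part of the disclosure:

```python
import math

# Minimal sketch, assuming the exponential dwell-time density
# f(x) = lam * exp(-lam * x); "lam" is the environment-dependent rate
# parameter, and the names/values here are hypothetical.

def dwell_probability(t: float, lam: float) -> float:
    """P(dwell time <= t): the integral of f(x) from 0 to t."""
    return 1.0 - math.exp(-lam * t)

def expected_dwell_time(lam: float) -> float:
    """Mean of the exponential distribution: E[X] = 1/lam."""
    return 1.0 / lam

# Example: with lam = 0.1 per second, the obstacle is expected to stay
# about 10 s, and the chance it leaves within 5 s is about 39%.
print(expected_dwell_time(0.1))     # 10.0
print(dwell_probability(5.0, 0.1))  # ~0.3935
```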

In one embodiment, the method further comprises: determining, based on the expected dwell time of the obstacle in the affected area, a first estimated time of arrival for the case in which the ADV waits for the obstacle to leave the affected area and then automatically drives along the first trajectory; determining a second estimated time of arrival for the case in which the ADV automatically drives along a second trajectory; and determining a ratio of the first estimated time of arrival to the second estimated time of arrival, wherein determining whether to plan the second trajectory or wait for the obstacle to leave the affected area is further based on that ratio.

In one embodiment, the method further comprises determining to plan a second trajectory in response to a ratio of the first estimated time of arrival to the second estimated time of arrival being greater than 1.
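A minimal sketch of this wait-or-replan rule, assuming the first ETA adds the expected dwell time to the travel time of the first trajectory and the second ETA is the travel time of the detour; all parameter names are hypothetical:

```python
def should_replan(expected_dwell_s: float,
                  length1_m: float, speed1_mps: float,
                  length2_m: float, speed2_mps: float) -> bool:
    """Return True if the ADV should plan the second (detour) trajectory.

    ETA1 = expected dwell time + time to drive the first trajectory.
    ETA2 = time to drive the second trajectory.
    Replan when the ratio ETA1/ETA2 is greater than 1.
    """
    eta_wait = expected_dwell_s + length1_m / speed1_mps
    eta_replan = length2_m / speed2_mps
    return eta_wait / eta_replan > 1.0

# Example: waiting ~8 s plus 100 m at 5 m/s (ETA1 = 28 s) loses to a
# 120 m detour at 5 m/s (ETA2 = 24 s), so the ADV replans.
print(should_replan(8.0, 100.0, 5.0, 120.0, 5.0))  # True
```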

In one embodiment, planning the first trajectory or the second trajectory of the ADV includes: searching for the first route or the second route based on a search algorithm; generating a first reference line or a second reference line based on the first route or the second route; determining a first set of candidate trajectories or a second set of candidate trajectories based on the first reference line or the second reference line; and planning the first trajectory or the second trajectory by selecting the first trajectory or the second trajectory from the first set of candidate trajectories or the second set of candidate trajectories.

In one embodiment, the search algorithm comprises a modified A-star search algorithm. In one embodiment, the method further comprises: generating a first virtual road boundary or a second virtual road boundary based on the width of the ADV and the first reference line or the second reference line; and generating a first grid or a second grid within the first virtual road boundary or the second virtual road boundary, wherein the first set of candidate trajectories or the second set of candidate trajectories is determined based on the first grid or the second grid.
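A simplified sketch of the planning pipeline described in the last two paragraphs (search a route, build a reference line, generate candidate trajectories, select one). The real search, boundary, and cost functions are not specified at this level, so the stand-ins below are hypothetical toys:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def search_route(start: Point, end: Point) -> List[Point]:
    """Stand-in for the A-star / hybrid A-star route search."""
    return [start, end]

def reference_line(route: List[Point], n: int = 10) -> List[Point]:
    """Densify the route into a reference line (linear interpolation here)."""
    (x0, y0), (x1, y1) = route[0], route[-1]
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n)
            for i in range(n + 1)]

def candidate_trajectories(ref: List[Point],
                           offsets=(-1.0, 0.0, 1.0)) -> List[List[Point]]:
    """Generate candidates as lateral shifts of the reference line,
    standing in for the grid of samples within the virtual road boundary."""
    return [[(x, y + d) for x, y in ref] for d in offsets]

def select_trajectory(cands: List[List[Point]]) -> List[Point]:
    """Pick the candidate minimizing a toy cost (total lateral offset)."""
    return min(cands, key=lambda tr: sum(abs(y) for _, y in tr))

route = search_route((0.0, 0.0), (50.0, 0.0))
trajectory = select_trajectory(candidate_trajectories(reference_line(route)))
```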

Fig. 1 is a block diagram illustrating an autonomous vehicle network configuration according to one embodiment of the present disclosure. Referring to fig. 1, a network configuration 100 includes an autonomous vehicle 101 that may be communicatively coupled to one or more servers 103-104 via a network 102. Although one autonomous vehicle is shown, multiple autonomous vehicles may be coupled to each other and/or to servers 103-104 via network 102. The network 102 may be any type of network, such as a wired or wireless Local Area Network (LAN), a Wide Area Network (WAN) such as the Internet, a cellular network, a satellite network, or a combination thereof. The servers 103-104 may be any type of server or cluster of servers, such as a network or cloud server, an application server, a backend server, or a combination thereof. The servers 103 to 104 may be data analysis servers, content servers, traffic information servers, map and point of interest (MPOI) servers or location servers, etc.

An autonomous vehicle refers to a vehicle that may be configured to be in an autonomous driving mode in which the vehicle navigates through the environment with little or no input from the driver. Such autonomous vehicles may include a sensor system having one or more sensors configured to detect information related to the operating environment of the vehicle. The vehicle and its associated controller use the detected information to navigate through the environment. Autonomous vehicle 101 may operate in a manual mode, in a fully autonomous mode, or in a partially autonomous mode.

In one embodiment, autonomous vehicle 101 includes, but is not limited to, a perception and planning system 110, a vehicle control system 111, a wireless communication system 112, a user interface system 113, and a sensor system 115. Autonomous vehicle 101 may also include certain common components found in ordinary vehicles, such as an engine, wheels, a steering wheel, a transmission, etc., which may be controlled by the vehicle control system 111 and/or the perception and planning system 110 using a variety of communication signals and/or commands, such as acceleration signals or commands, deceleration signals or commands, steering signals or commands, braking signals or commands, etc.

The components 110-115 may be communicatively coupled to each other via an interconnect, bus, network, or combination thereof. For example, the components 110-115 may be communicatively coupled to one another via a Controller Area Network (CAN) bus. The CAN bus is a vehicle bus standard designed to allow microcontrollers and devices to communicate with each other in applications without a host computer. It is a message-based protocol originally designed for multiplexed electrical wiring within automobiles, but it is also used in many other environments.

Referring now to fig. 2, in one embodiment, the sensor system 115 includes, but is not limited to, one or more cameras 211, a Global Positioning System (GPS) unit 212, an Inertial Measurement Unit (IMU) 213, a radar unit 214, and a light detection and ranging (LIDAR) unit 215. The GPS unit 212 may include a transceiver operable to provide information regarding the location of the autonomous vehicle. The IMU unit 213 may sense position and orientation changes of the autonomous vehicle based on inertial acceleration. Radar unit 214 may represent a system that utilizes radio signals to sense objects within the local environment of the autonomous vehicle. In some embodiments, in addition to sensing an object, radar unit 214 may additionally sense the speed and/or heading of the object. The LIDAR unit 215 may use lasers to sense objects in the environment in which the autonomous vehicle is located. The LIDAR unit 215 may include one or more laser sources, laser scanners, and one or more detectors, among other system components. The camera 211 may include one or more devices used to capture images of the environment surrounding the autonomous vehicle. The camera 211 may be a still camera and/or a video camera. A camera may be mechanically movable, for example, by mounting the camera on a rotating and/or tilting platform.

The sensor system 115 may also include other sensors, such as: sonar sensors, infrared sensors, steering sensors, throttle sensors, brake sensors, and audio sensors (e.g., microphones). The audio sensor may be configured to collect sound from the environment surrounding the autonomous vehicle. The steering sensor may be configured to sense a steering angle of a steering wheel, wheels of a vehicle, or a combination thereof. The throttle sensor and the brake sensor sense a throttle position and a brake position of the vehicle, respectively. In some cases, the throttle sensor and the brake sensor may be integrated into an integrated throttle/brake sensor.

In one embodiment, the vehicle control system 111 includes, but is not limited to, a steering unit 201, a throttle unit 202 (also referred to as an acceleration unit), and a brake unit 203. The steering unit 201 is used to adjust the direction or heading of the vehicle. The throttle unit 202 is used to control the speed of the motor or engine, which in turn controls the speed and acceleration of the vehicle. The brake unit 203 decelerates the vehicle by providing friction to slow the wheels or tires of the vehicle. It should be noted that the components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.

Referring back to fig. 1, wireless communication system 112 allows communication between autonomous vehicle 101 and external systems such as devices, sensors, other vehicles, and the like. For example, the wireless communication system 112 may be in direct wireless communication with one or more devices, or in wireless communication via a communication network, such as with the servers 103-104 through the network 102. The wireless communication system 112 may use any cellular communication network or Wireless Local Area Network (WLAN), for example, using WiFi, to communicate with another component or system. The wireless communication system 112 may communicate directly with devices (e.g., passenger's mobile device, display device, speaker within the vehicle 101), for example, using infrared links, bluetooth, etc. The user interface system 113 may be part of a peripheral device implemented within the vehicle 101, including, for example, a keypad, a touch screen display device, a microphone, and speakers, among others.

Some or all of the functions of the autonomous vehicle 101 may be controlled or managed by the perception and planning system 110, particularly when operating in an autonomous driving mode. The perception and planning system 110 includes the necessary hardware (e.g., processors, memory, storage devices) and software (e.g., operating systems, planning and routing programs) to receive information from the sensor system 115, the control system 111, the wireless communication system 112, and/or the user interface system 113, process the received information, plan a route or path from an origin to a destination, and then drive the vehicle 101 based on the planning and control information. Alternatively, the perception and planning system 110 may be integrated with the vehicle control system 111.

For example, a user who is a passenger may specify a start location and a destination for a trip, e.g., via a user interface. The perception and planning system 110 obtains trip-related data. For example, the perception and planning system 110 may obtain location and route information from an MPOI server, which may be part of the servers 103-104. The location server provides location services, and the MPOI server provides map services and POIs for certain locations. Alternatively, such location and MPOI information may be cached locally in a persistent storage of the perception and planning system 110.

The perception and planning system 110 may also obtain real-time traffic information from a traffic information system or server (TIS) as the autonomous vehicle 101 moves along the route. It should be noted that the servers 103 to 104 may be operated by third party entities. Alternatively, the functionality of the servers 103-104 may be integrated with the perception and planning system 110. Based on the real-time traffic information, MPOI information, and location information, as well as real-time local environmental data (e.g., obstacles, objects, nearby vehicles) detected or sensed by sensor system 115, perception and planning system 110 may plan an optimal route and drive vehicle 101, e.g., via control system 111, according to the planned route to safely and efficiently reach the designated destination.

Server 103 may be a data analysis system to perform data analysis services for various customers. In one embodiment, data analysis system 103 includes a data collector 121 and a machine learning engine 122. The data collector 121 collects driving statistics 123 from various vehicles (autonomous vehicles or regular vehicles driven by human drivers). The driving statistics 123 include information indicative of driving commands issued (e.g., throttle, brake, steering commands) and responses of the vehicle captured by sensors of the vehicle at different points in time (e.g., speed, acceleration, deceleration, direction). The driving statistics 123 may also include information describing the driving environment at different points in time, such as a route (including a start location and a destination location), MPOI, road conditions, weather conditions, and so forth.
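Read as data, each statistics entry pairs issued commands with sensed responses and environment context at a time point; a hypothetical sketch of one such record (the field names are illustrative, not from the disclosure):

```python
from dataclasses import dataclass

@dataclass
class DrivingRecord:
    """One time-stamped driving-statistics entry, mirroring the
    categories listed above; all field names are hypothetical."""
    timestamp: float
    throttle: float        # issued driving commands
    brake: float
    steering: float
    speed: float           # vehicle responses captured by sensors
    acceleration: float
    heading: float
    route: str             # driving environment at this point in time
    road_condition: str
    weather: str
```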

Based on the driving statistics 123, the machine learning engine 122 generates or trains a set of rules, algorithms, and/or predictive models 124 for various purposes. In one embodiment, the algorithm 124 may include: an algorithm or model for determining a starting point and an end point of a route along which the ADV is driven; an algorithm for determining whether each of the start point and the end point is within a first driving region having a lane boundary or within a second driving region that is an open space having no lane boundary; an algorithm for dividing the route into a first route segment and a second route segment based on determining whether each of the start point and the end point is within the first driving area or the second driving area; and an algorithm for operating in one of an on-lane mode or an open space mode to plan a first trajectory for a first road segment and operating in one of an on-lane mode or an open space mode to plan a second trajectory for a second road segment, depending on whether the starting point or the ending point is within the first driving area or the second driving area. Algorithm 124 may then be uploaded onto an ADV (e.g., model 313 of fig. 3A) for real-time use during autonomous driving.

Fig. 3A and 3B are block diagrams illustrating an example of a perception and planning system for use with an autonomous vehicle, according to one embodiment. The system 300 may be implemented as part of the autonomous vehicle 101 of fig. 1, including but not limited to the perception and planning system 110, the control system 111, and the sensor system 115. Referring to fig. 3A-3B, the perception and planning system 110 includes, but is not limited to, a positioning module 301, a perception module 302, a prediction module 303, a decision module 304, a planning module 305, a control module 306, and a routing module 307. The routing module 307 may include an on-lane mode module I 308a and an open space mode module I 309a. The planning module 305 may include an on-lane mode module II 308b and an open space mode module II 309b.

Some or all of the modules 301 through 309b may be implemented in software, hardware, or a combination thereof. For example, the modules may be installed in persistent storage 352, loaded into memory 351, and executed by one or more processors (not shown). It should be noted that some or all of these modules may be communicatively coupled to or integrated with some or all of the modules of the vehicle control system 111 of fig. 2. Some of the modules 301 through 309b may be integrated together into an integrated module.

The positioning module 301 (also known as a map and route module) determines the current location of the autonomous vehicle 300 (e.g., using the GPS unit 212) and manages any data related to the user's trip or route. The user may, for example, log in via a user interface and specify a starting location and a destination for the trip. The positioning module 301 communicates with other components of the autonomous vehicle 300, such as map and route information 311, to obtain trip-related data. For example, the positioning module 301 may obtain location and route information from a location server and a map and POI (MPOI) server. The location server provides location services, and the MPOI server provides map services and POIs for certain locations, which may be cached as part of the map and route information 311. The positioning module 301 may also obtain real-time traffic information from a traffic information system or server as the autonomous vehicle 300 moves along the route.

Based on the sensor data provided by sensor system 115 and the positioning information obtained by positioning module 301, perception module 302 determines a perception of the surrounding environment. The perception information may represent what an average driver would perceive around the vehicle the driver is driving. The perception may include, for example, a lane configuration, a traffic light signal, the relative position of another vehicle, a pedestrian, a building, a crosswalk, or other traffic-related signs (e.g., a stop sign, a yield sign), and so forth, for example in the form of objects. The lane configuration includes information describing one or more lanes, such as the shape of the lane (e.g., straight or curved), the width of the lane, the number of lanes in the road, one-way or two-way lanes, merge or split lanes, exit lanes, and so forth.

The perception module 302 may include a computer vision system or functionality of a computer vision system to process and analyze images captured by one or more cameras to identify objects and/or features in an autonomous vehicle environment. The objects may include traffic signals, road boundaries, other vehicles, pedestrians, and/or obstacles, etc. Computer vision systems may use object recognition algorithms, video tracking, and other computer vision techniques. In some embodiments, the computer vision system may map the environment, track objects, and estimate the speed of objects, among other things. The perception module 302 may also detect objects based on other sensor data provided by other sensors, such as radar and/or LIDAR.

For each object, the prediction module 303 predicts how the object will behave under the circumstances. The prediction is performed based on perception data perceiving the driving environment at a point in time, in view of a set of map/route information 311 and traffic rules 312. For example, if the object is an oncoming vehicle and the current driving environment includes an intersection, the prediction module 303 will predict whether the vehicle is likely to move straight ahead or make a turn. If the perception data indicates that the intersection has no traffic light, the prediction module 303 may predict that the vehicle may need to come to a complete stop before entering the intersection. If the perception data indicates that the vehicle is currently in a left-turn-only lane or a right-turn-only lane, the prediction module 303 may predict that the vehicle will be more likely to turn left or right, respectively.
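These example rules can be read as a small decision procedure; a toy, hypothetical sketch (not the disclosure's model):

```python
def predict_behavior(lane_type: str, at_intersection: bool,
                     has_traffic_light: bool) -> str:
    """Toy version of the rule-based predictions described above."""
    if lane_type == "left_turn_only":
        return "turn_left"
    if lane_type == "right_turn_only":
        return "turn_right"
    if at_intersection and not has_traffic_light:
        return "stop_then_proceed"   # full stop expected before entering
    return "go_straight_or_turn"     # ambiguous without more context

print(predict_behavior("left_turn_only", False, False))  # turn_left
```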

For each object, the decision module 304 makes a decision on how to handle the object. For example, for a particular object (e.g., another vehicle on a crossing route) and its metadata describing the object (e.g., speed, direction, turning angle), the decision module 304 decides how to encounter the object (e.g., overtake, yield, stop, pass). The decision module 304 may make such decisions according to a rule set, such as traffic rules or driving rules 312, which may be stored in persistent storage 352.

The routing module 307 is configured to provide one or more routes or paths from a start point to a destination point. For a given trip from a start location to a destination location, for example a given trip received from a user, the routing module 307 obtains route and map information 311 and determines all possible routes or paths from the start location to the destination location. In one embodiment, the routing module 307 may generate a reference line, in the form of a topographic map, for each route it determines from the start location to the destination location. A reference line refers to an ideal route or path that is not disturbed by anything else, such as other vehicles, obstacles, or traffic conditions. That is, if there are no other vehicles, pedestrians, or obstacles on the road, the ADV should exactly or closely follow the reference line. The topographic map is then provided to the decision module 304 and/or the planning module 305. The decision module 304 and/or the planning module 305 examine all possible routes to select and modify one of the optimal routes in view of other data provided by other modules, such as traffic conditions from the positioning module 301, the driving environment perceived by the perception module 302, and traffic conditions predicted by the prediction module 303. Depending on the specific driving environment at the point in time, the actual path or route used to control the ADV may be close to or different from the reference line provided by the routing module 307.

Based on the decisions for each of the perceived objects, the planning module 305 plans a path or route for the autonomous vehicle, as well as driving parameters (e.g., distance, speed, and/or turning angle), with reference to a reference line. In one embodiment, the planning module 305 may use the reference line provided by the routing module 307 as a basis. In other words, for a given object, the decision module 304 decides what to do with the object, and the planning module 305 determines how to do it. For example, for a given object, the decision module 304 may decide to pass the object, while the planning module 305 may determine whether to pass on the left or right side of the object. Planning and control data is generated by the planning module 305, including information describing how the vehicle 300 will move in the next movement cycle (e.g., the next route/path segment). For example, the planning and control data may instruct the vehicle 300 to move 10 meters at a speed of 30 miles per hour (mph) and then change to the right lane at a speed of 25 mph. In one embodiment, the planning module 305 may generate the reference line based on the route provided by the routing module 307.

Based on the planning and control data, the control module 306 controls and drives the autonomous vehicle by sending appropriate commands or signals to the vehicle control system 111 according to the route or path defined by the planning and control data. The planning and control data includes sufficient information to drive the vehicle from a first point to a second point of the route or path at different points in time along the route or path using appropriate vehicle settings or driving parameters (e.g., throttle, brake, and steering commands).

In one embodiment, the planning phase is performed in a plurality of planning cycles (also referred to as driving cycles), for example, in time intervals of 100 milliseconds (ms) each. For each of the planning or driving cycles, one or more control commands will be issued based on the planning and control data. That is, every 100 ms, the planning module 305 plans the next route segment or path segment, including, for example, a target location and the time required for the ADV to reach the target location. Alternatively, the planning module 305 may also specify a particular speed, direction, and/or steering angle, etc. In one embodiment, the planning module 305 plans a route segment or path segment for the next predetermined period of time (such as 5 seconds). For each planning cycle, the planning module 305 plans a target location for the current cycle (e.g., the next 5 seconds) based on the target location planned in the previous cycle. The control module 306 then generates one or more control commands (e.g., throttle, brake, and steering control commands) based on the planning and control data of the current cycle.
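A rough skeleton of this cycle structure, assuming 100 ms cycles that each plan roughly the next 5 seconds; plan_segment and issue_control_commands are hypothetical placeholders for the planning and control modules:

```python
import time

PLANNING_CYCLE_S = 0.1  # one planning/driving cycle, e.g. 100 ms
HORIZON_S = 5.0         # each cycle plans roughly the next 5 seconds

def plan_segment(previous_target, horizon_s: float):
    """Plan the next route/path segment from the previously planned target."""
    return previous_target  # placeholder

def issue_control_commands(segment) -> None:
    """Placeholder for throttle/brake/steering commands."""

target = None
for _ in range(3):  # three cycles, for illustration
    cycle_start = time.monotonic()
    target = plan_segment(target, HORIZON_S)
    issue_control_commands(target)
    elapsed = time.monotonic() - cycle_start
    time.sleep(max(0.0, PLANNING_CYCLE_S - elapsed))  # keep the 100 ms cadence
```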

It should be noted that the decision module 304 and the planning module 305 may be integrated as an integrated module. The decision module 304/planning module 305 may include a navigation system or functionality of a navigation system to determine a driving path of an autonomous vehicle. For example, the navigation system may determine a range of speeds and heading directions for affecting movement of the autonomous vehicle along the following paths: the path substantially avoids perceived obstacles while advancing the autonomous vehicle along a roadway-based path to a final destination. The destination may be set based on user input via the user interface system 113. The navigation system may dynamically update the driving path while the autonomous vehicle is running. The navigation system may combine data from the GPS system and one or more maps to determine a driving path for the autonomous vehicle.

Fig. 4 is a block diagram illustrating an example of the routing module 307 and the planning module 305 according to one embodiment. FIG. 5A is a process flow diagram illustrating an example of operating in an on-lane mode according to one embodiment. FIG. 5B is a process flow diagram illustrating an example of operating in an open space mode according to one embodiment. Referring to fig. 4, 5A and 5B, according to one embodiment, the routing module 307 includes, but is not limited to, a determination module 401, an on-lane mode module I 308a including a first search module 402, and an open space mode module I 309a including a second search module 403. The determination module 401 is configured to determine a start point and an end point of a route along which the ADV is to drive. The determination module 401 is further configured to determine whether each of the start point and the end point is within a first driving region of a first type having lane boundaries or within a second driving region of a second type that is an open space without lane boundaries. The determination module 401 is further configured to divide the route into a first route segment and a second route segment based on determining whether each of the start point and the end point is within the first driving region or the second driving region. The routing module 307 and/or the planning module 305 operate in one of an on-lane mode or an open space mode to plan a first trajectory for the first route segment and in one of an on-lane mode or an open space mode to plan a second trajectory for the second route segment, depending on whether the start point or the end point is within the first driving region or the second driving region.

Referring to fig. 4 and 5A, in one embodiment, the first search module 402 is configured to search for the first route segment or the second route segment based on an A-star search algorithm. The reference line module 405 is configured to generate a reference line based on the first route segment or the second route segment. The grid module 406 is configured to generate a grid based on the reference line. The trajectory module 407 is configured to determine a set of candidate trajectories based on the grid and select a trajectory from the set of candidate trajectories to control the ADV to drive autonomously according to the trajectory.

Referring to fig. 4 and 5B, in one embodiment, the second search module 403 is configured to search for the first route segment or the second route segment based on the improved A-star search algorithm. The reference line module 408 is configured to generate a reference line based on the first route segment or the second route segment. The virtual boundary line module 409 is configured to generate a virtual road boundary based on the width of the ADV and the reference line. The grid module 410 is configured to generate a grid based on the reference line within the virtual road boundary. The trajectory module 411 is configured to determine a set of candidate trajectories based on the grid and select a trajectory from the set of candidate trajectories to control the ADV to drive autonomously according to the trajectory.

Fig. 6 is a block diagram 600 illustrating an example of the planning module 305 including an obstacle avoidance module, according to one embodiment. Fig. 7 is a process flow diagram illustrating an example of obstacle avoidance according to one embodiment. Referring to fig. 6 and 7, according to one embodiment, the planning module 305 may include an obstacle avoidance module 601. Obstacle avoidance module 601 may include a dwell time module 602, an Estimated Time of Arrival (ETA) module 603, and a determination module 604.

The perception module 302 may be configured to detect an obstacle in an affected area of the ADV based on sensor data obtained from a plurality of sensors mounted on the ADV. The dwell time module 602 is configured to determine an expected dwell time of the obstacle in the affected area. The ETA module 603 is configured to determine a first estimated time of arrival for the case in which the ADV waits for the obstacle to leave the affected area and then automatically drives along a first trajectory, and a second estimated time of arrival for the case in which the ADV automatically drives along a second trajectory. The determination module 604 is configured to determine whether to plan the second trajectory or wait for the obstacle to leave the affected area based on the expected dwell time of the obstacle in the affected area. Based on that determination, the determination module 604 is further configured either to plan a second trajectory for the ADV to drive along and control the ADV to drive automatically along the second trajectory, or to control the ADV to wait for the obstacle to leave the affected area and then drive automatically along the first trajectory.

In one embodiment, the ADV is configured to operate in an open space mode in a driving region that is an open space without lane boundaries. In one embodiment, a probability of a dwell time of the obstacle in the affected area is determined, and the expected dwell time of the obstacle in the affected area is determined based on the probability of dwell time. In one embodiment, a ratio of the first estimated time of arrival to the second estimated time of arrival is determined, wherein determining whether to plan the second trajectory or wait for the obstacle to leave the affected area is further based on the ratio of the first estimated time of arrival to the second estimated time of arrival.

In one embodiment, in response to the ratio of the first estimated time of arrival to the second estimated time of arrival being greater than 1, the determination module 604 is configured to determine to plan the second trajectory. In one embodiment, to plan the second trajectory, the second search module 403 is configured to search for another first route segment or another second route segment based on the modified A-star search algorithm. The reference line module 408 is configured to generate another reference line based on the other first route segment or the other second route segment. The virtual boundary line module 409 is configured to generate another virtual road boundary based on the width of the ADV and the other reference line. The grid module 410 is configured to generate another grid based on the other reference line within the other virtual road boundary. The trajectory module 411 is configured to determine another set of candidate trajectories based on the other grid and to select the second trajectory from the other set of candidate trajectories to control the ADV to drive autonomously according to the second trajectory.

Fig. 8A to 8D show examples of an ADV 810 driving in a first driving region 801 with lane boundaries and/or in a second driving region 802 that is an open space without lane boundaries. The ADV 810 may need to drive in both road driving scenarios with lane boundaries and free space driving scenarios without lane boundaries. It is important for the ADV 810 to be able to drive autonomously in road driving scenarios (e.g., in a driving region with specified lane boundaries) and in free space scenarios (e.g., in a driving region that is open free space without lane boundaries), and even to switch intelligently between these two different driving regions. Conventional motion planning methods for road driving scenarios may require topological maps and specific lane boundaries. Therefore, conventional motion planning methods for road driving scenarios have difficulty dealing with complex scenarios such as parking, three-point turns, and obstacle avoidance with a combination of forward and backward trajectories. Conventional free space path planning methods are slow in generating trajectories in real time and may result in poor obstacle avoidance performance. For road driving scenarios, dynamic programming (DP) and quadratic programming (QP) have been used for trajectory planning. However, sometimes an autonomous vehicle may have to drive from a start point to an end point only within a specified free space area without lane boundaries. For free space scenarios, the Reeds-Shepp path has been used in conjunction with a hybrid A-star search algorithm for path planning to generate the desired trajectory. Unfortunately, this free space path planning method is too slow to generate trajectories in real time and may result in poor obstacle avoidance performance.

Currently, an A-star search algorithm is used to find a navigation path from a start point to an end point; a reference line is then generated based on the navigation path, and real-time path planning is performed using DP and/or QP. However, this search algorithm works well only for road scenarios with topological maps and specific road boundaries. Such search algorithms have difficulty handling complex scenarios such as parking with a combination of forward and backward trajectories, three-point turns, and obstacle avoidance. Efforts have been made to increase the node size in such search algorithms to reduce the time consumed by the path search. However, this approach may sometimes lead to poor results; for example, the expected path may not be found even after all nodes have been searched. Current path planning also does not smooth the trajectory and directly uses a rough trajectory, which the vehicle may have difficulty following.

In current methods, for a given start point and end point, path planning of the trajectory is performed only once rather than every cycle, which avoids the large time consumption of real-time path planning. However, current methods are insufficient for obstacle avoidance because the ADV does not change its trajectory to avoid collisions with obstacles.

According to some embodiments, a new method for trajectory planning applicable to both urban roads and free space areas with specified road boundaries is disclosed herein. The method combines an A-star search algorithm with a hybrid A-star search algorithm to search navigation paths for different types of driving scenarios. For urban roads, an A-star search algorithm may be used to obtain navigation paths, and for open free space, a hybrid A-star search algorithm may be used. The navigation path produced by the A-star search algorithm and/or the hybrid A-star search algorithm may be used to generate a reference line for real-time trajectory planning using the DP or QP algorithms. The trajectory planning method may be used to handle complex driving tasks, such as driving from urban roads into free space areas or from free space onto urban roads. The method also performs well in obstacle avoidance.

As shown in fig. 8A to 8D, there are four cases according to whether each of the start point Ps and the end point Pe of the ADV 810 is within the first driving region 801 having lane boundaries or within the second driving region 802 that is an open space without lane boundaries. Fig. 8A is a diagram 800a showing a first case, where the start point Ps 803a is within the first driving region 801 and the end point Pe 804a is within the second driving region 802. Fig. 8B is a diagram 800b showing a second case, where the start point Ps 803b is within the second driving region 802 and the end point Pe 804b is within the first driving region 801. Fig. 8C is a diagram 800c showing a third case, where the start point Ps 803c is within the first driving region 801 and the end point Pe 804c is within the first driving region 801. Fig. 8D is a diagram 800d showing a fourth case, where the start point Ps 803d is within the second driving region 802 and the end point Pe 804d is within the second driving region 802.

Trajectory planning methods are provided to address these four cases. The method involves two processes, a first process and a second process. In the first process, the ADV 810 is configured to operate in the on-lane mode. In the second process, the ADV 810 is configured to operate in the open space mode.

As shown in fig. 8A, in the first case, the start point Ps 803a is in a road scenario (e.g., the first driving region 801) and the end point Pe 804a is in free space (e.g., the second driving region 802). The route having the start point Ps 803a and the end point Pe 804a may be divided into a first route segment and a second route segment by an intermediate point. In one embodiment, the closest point Pe' 805a (Xe', Ye', Phie') to the end point Pe in the road scenario (e.g., the first driving region 801) may be determined. The closest point Pe' 805a may be the intermediate point that divides the route into the two route segments. Trajectory planning of the first route segment from the start point Ps 803a to the closest point Pe' 805a may be performed using process 1, and trajectory planning of the second route segment from the closest point Pe' (Ps') 805a to the end point Pe 804a may be performed using process 2. In the first case, the ADV 810 is configured to operate in the on-lane mode to plan a first trajectory for the first route segment and in the open space mode to plan a second trajectory for the second route segment.

As shown in fig. 8B, in the second case, the start point Ps 803b is in free open space (e.g., the second driving region 802) and the end point Pe 804b is in a road scenario (e.g., the first driving region 801). The route having the start point Ps 803b and the end point Pe 804b may be divided into a first route segment and a second route segment by an intermediate point. In one embodiment, the closest point Ps' 805b (Xs', Ys', Phis') to the start point Ps in the road scenario (e.g., the first driving region 801) may be determined. The closest point Ps' 805b may be the intermediate point that divides the route into the two route segments. Trajectory planning of the first route segment from the start point Ps 803b to the closest point Ps' (Pe') 805b may be performed using process 2, and trajectory planning of the second route segment from the closest point Ps' 805b to the end point Pe 804b may be performed using process 1. In the second case, the ADV 810 is configured to operate in the open space mode to plan a first trajectory for the first route segment and in the on-lane mode to plan a second trajectory for the second route segment.

As shown in fig. 8C, in the third case, the start point Ps 803c is in a road scenario (e.g., the first driving region 801) and the end point Pe 804c is also in a road scenario (e.g., the first driving region 801). In this case, it is not necessary to divide the route from the start point Ps 803c to the end point Pe 804c. In one embodiment, trajectory planning from the start point Ps 803c to the end point Pe 804c may be performed using process 1. In one embodiment, the route having the start point Ps 803c and the end point Pe 804c may be divided into a first route segment and a second route segment by any intermediate point. The ADV 810 is configured to operate in the on-lane mode to plan a first trajectory for the first route segment and in the on-lane mode to plan a second trajectory for the second route segment.

As shown in fig. 8D, in the fourth case, the start point Ps 803d is in free open space (e.g., the second driving region 802) and the end point Pe 804d is also in free open space (e.g., the second driving region 802). In this case, it is not necessary to divide the route from the start point Ps 803d to the end point Pe 804d. In one embodiment, trajectory planning from the start point Ps 803d to the end point Pe 804d may be performed using process 2. In one embodiment, the route having the start point Ps 803d and the end point Pe 804d may be divided into a first route segment and a second route segment by any intermediate point. The ADV 810 is configured to operate in the open space mode to plan a first trajectory for the first route segment and in the open space mode to plan a second trajectory for the second route segment.
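The four cases of figs. 8A-8D reduce to a dispatch on where the start and end points lie; a hypothetical sketch (the mode labels are illustrative):

```python
from typing import List

def plan_modes(start_in_road: bool, end_in_road: bool) -> List[str]:
    """Per-segment planning modes for the four cases of figs. 8A-8D."""
    if start_in_road and not end_in_road:   # fig. 8A: split at Pe'
        return ["on_lane", "open_space"]
    if not start_in_road and end_in_road:   # fig. 8B: split at Ps'
        return ["open_space", "on_lane"]
    if start_in_road and end_in_road:       # fig. 8C: no split needed
        return ["on_lane"]
    return ["open_space"]                   # fig. 8D: no split needed

print(plan_modes(True, False))  # ['on_lane', 'open_space']
```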

Referring to fig. 5A and 8A-8D, in the first process, the ADV 810 is configured to operate in the on-lane mode. In one embodiment, the first search module 402 is configured to search for a route of the first route segment or the second route segment based on an A-star search algorithm. The route is a navigation route from a start point to an end point. The A-star (A*) search algorithm is an informed search algorithm. Starting from the start node of a graph, A-star aims to find a route or path to the goal node having the smallest cost (minimum travel distance, shortest time, etc.). The A-star search accomplishes this by maintaining a tree of paths starting at the start node and extending one of those paths by one edge at a time until its termination criterion is met. At each iteration of its main loop, A-star determines which of its paths to extend based on the cost of the path and an estimate of the cost required to extend the path all the way to the goal node. Specifically, A-star selects the path that minimizes

f(n) = g(n) + h(n)

where n is the next node on the path, g(n) is the cost of the path from the start node to node n, and h(n) is a heuristic function that estimates the cost of the cheapest path from n to the destination node. The A-star search terminates when the path it chooses to extend is a path from the start node to the goal node, or when there are no eligible paths left to extend. The heuristic function is problem-specific.
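For illustration only, the following is a minimal Python sketch of an A-star search over a 2D occupancy grid consistent with the cost definition above; the grid representation, four-connected neighbor set, unit edge costs, and Euclidean heuristic are assumptions of this example, not part of the disclosed planner.

```python
import heapq
import itertools
import math

def a_star(grid, start, goal):
    """Minimal A* over a 2D occupancy grid (0 = free, 1 = blocked).

    Implements f(n) = g(n) + h(n), where g(n) is the cost from the start
    node and h(n) is a Euclidean straight-line heuristic to the goal.
    """
    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    open_set = [(math.dist(start, goal), next(tie), 0.0, start, None)]
    parents, best_g = {}, {start: 0.0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in parents:
            continue  # node already expanded via a cheaper path
        parents[node] = parent
        if node == goal:  # terminate when the goal path is chosen for extension
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1.0  # unit edge cost between adjacent cells
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(
                        open_set,
                        (ng + math.dist(nxt, goal), next(tie), ng, nxt, node))
    return None  # no eligible path left to extend
```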

In the first process, a route of the first route segment or the second route segment is searched, for example, by the first search module 402, using the A-star search algorithm. This route may be referred to as route 1. A reference line may be generated based on route 1, for example, by the reference line module 405, and may be referred to as reference line 1. Then, a grid may be generated by the grid module 406 from reference line 1. A series of candidate trajectories based on polynomial curves may be created by the trajectory module 407. The trajectory module 407 may then be further configured to obtain an expected trajectory from the candidate trajectories using a dynamic programming algorithm. The control module 306 may then be configured to control the ADV810 to follow the generated trajectory and move to the end point Pe (e.g., 804a, 804b, 804c, 804d).

Referring to fig. 5B and 8A-8D, in the second process, the ADV810 is configured to operate in an open space mode. In one embodiment, the second search module 403 is configured to search for a route of the first route segment or the second route segment based on a modified A-star or hybrid A-star search algorithm. The modified or hybrid A-star search algorithm is a variant of the A-star search algorithm applied to the 3D motion state space of the ADV, but with modified state-update rules that capture continuous state data in the discrete search nodes of A-star. As in the A-star search algorithm, the search space (x, y, θ) is discretized, but unlike a conventional A-star, which only allows visiting cell centers, the modified or hybrid-state A-star search algorithm associates each grid cell with a continuous 3D state of the vehicle. The modified or hybrid-state A-star search algorithm uses a kinematic model of the ADV. For example, three steering primitives, including maximum left turn, maximum right turn, and straight ahead, may be modeled. As another example, the size (length, width, or height) of the ADV may be taken into account. The route or path resulting from the modified or hybrid-state A-star search algorithm is drivable, rather than piecewise linear as in the case of the plain A-star search algorithm.
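The sketch below illustrates, under assumed parameters, how a hybrid-A-star-style expansion can attach a continuous vehicle state to each discrete (x, y, θ) cell using a simple bicycle kinematic model and the three steering primitives mentioned above; the wheelbase, step length, maximum steering angle, and cell resolutions are illustrative values only.

```python
import math

WHEELBASE = 2.8       # assumed vehicle wheelbase, in meters
STEP = 1.0            # assumed arc length per expansion, in meters
MAX_STEER = 0.6       # assumed maximum steering angle, in radians
CELL, CELL_THETA = 0.5, math.radians(10)  # assumed grid resolution

def successors(state, direction=1):
    """Expand a continuous state (x, y, theta) with a bicycle model.

    Each successor keeps its exact continuous state but is indexed by
    the discrete (x, y, theta) cell it falls into, as in hybrid A*.
    direction = 1 for forward motion, -1 for backward motion.
    """
    x, y, theta = state
    for steer in (-MAX_STEER, 0.0, MAX_STEER):  # max right, straight, max left
        # heading change over one arc of length STEP
        beta = direction * STEP / WHEELBASE * math.tan(steer)
        # midpoint-heading approximation of the arc endpoint
        nx = x + direction * STEP * math.cos(theta + beta / 2)
        ny = y + direction * STEP * math.sin(theta + beta / 2)
        ntheta = (theta + beta) % (2 * math.pi)
        cell = (round(nx / CELL), round(ny / CELL), round(ntheta / CELL_THETA))
        yield (nx, ny, ntheta), cell
```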

In the second process, the ADV810 operates in an open space mode. There are six main operations (operation 1 to operation 6) in the second process, and operation 7 is optional. Each operation is described below.

Operation 1: the modified A-star or hybrid A-star search algorithm is used, for example, by the second search module 403, to search for a route of the first route segment or the second route segment. As shown in fig. 9A, a hybrid A-star or modified A-star may be used to search for a route from Ps' 805a to Pe 804a in a designated free space region 802 (e.g., the second driving region), which may be referred to as the free space region of interest. This route may be referred to as route 2. Route 2 may include a forward motion from Ps' 805a to P1 806a and a backward motion from P1 806a to Pe 804a. There may be two obstacles (e.g., 807, 808) in the free space region 802.

Operation 2: a reference line may be generated based on route 2, for example, by the reference line module 408, and may be referred to as reference line 2.

Operation 3: a virtual road boundary (also referred to as a sample region or region of interest, ROI) may be created based on the width of the ADV and the reference line 2 generated at operation 2. In one embodiment, the virtual boundary line module 409 is configured to generate the virtual road boundary based on the width of the ADV and reference line 2.

For example, the sample region of interest may be defined laterally about reference line 2 as [-(1+C)·W/2, (1+C)·W/2], where W is the width of the ADV and C is the lateral expansion ratio, a real number greater than 0.

As shown in fig. 9A, a virtual road boundary 901 may be generated by the virtual boundary line module 409 for the ADV810 in the open space 802.
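As a minimal sketch of the boundary construction implied by the expression above, the following assumes reference line 2 is represented as a list of (x, y, heading) points and offsets it laterally by ±(1 + C)·W/2; the function name and the representation are illustrative assumptions.

```python
import math

def virtual_road_boundary(reference_line, adv_width, c=0.2):
    """Offset the reference line laterally by +/-(1 + C) * W / 2.

    reference_line: list of (x, y, heading) points along reference line 2.
    adv_width: W, the width of the ADV, in meters.
    c: lateral expansion ratio, a real number greater than 0 (0.2 assumed).
    Returns (left_boundary, right_boundary) as point lists.
    """
    half = (1.0 + c) * adv_width / 2.0
    left, right = [], []
    for x, y, heading in reference_line:
        # unit normal to the heading direction
        nx, ny = -math.sin(heading), math.cos(heading)
        left.append((x + half * nx, y + half * ny))
        right.append((x - half * nx, y - half * ny))
    return left, right
```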

Operation 4: a grid may be generated by the grid module 410 from reference line 2 within the sample region of interest. As shown in fig. 9A, a grid 902 may be generated in the open space 802 by the grid module 410. Each grid cell is associated with, or assigned, a cost using a predetermined cost function based on its position relative to the ADV and/or one or more obstacles identified within the respective ROI.
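The disclosure leaves the cost function unspecified beyond being "predetermined"; one plausible, purely illustrative choice is to penalize grid cells by their proximity to obstacles identified within the ROI, as sketched below.

```python
import math

def cell_cost(cell_center, obstacles, clearance=2.0):
    """Assign a cost to one grid cell based on obstacle proximity.

    cell_center: (x, y) center of the grid cell.
    obstacles: list of (x, y) obstacle positions inside the ROI.
    clearance: assumed radius, in meters, within which cells are penalized.
    """
    cost = 0.0
    for ox, oy in obstacles:
        d = math.dist(cell_center, (ox, oy))
        if d < clearance:
            cost += (clearance - d) / clearance  # closer cells cost more
    return cost
```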

Operation 5: a series of candidate trajectories based on quadratic polynomial curves may be created by the trajectory module 411. The trajectory module 411 is further configured to obtain an expected trajectory from the candidate trajectories using a dynamic programming algorithm. As shown in fig. 9B, a trajectory 908 comprising forward and backward movements from the start point 903 to the end point 904 may be generated by the trajectory module 411 in the open space 802.
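As an illustration of selecting an expected trajectory with dynamic programming, the following sketch assumes the candidate trajectories are represented as lateral offsets sampled at successive stations along reference line 2; this representation and both cost callbacks are assumptions of the example, not the disclosed module's interface.

```python
def dp_select(lateral_samples, station_cost, transition_cost):
    """Pick the minimum-cost lateral-offset sequence by dynamic programming.

    lateral_samples: per-station lists of candidate lateral offsets.
    station_cost(s, l): cost of offset l at station s (e.g., from the grid).
    transition_cost(a, b): smoothness cost between consecutive offsets.
    Returns the best sequence of lateral offsets, one per station.
    """
    n = len(lateral_samples)
    best = [{l: (station_cost(0, l), None) for l in lateral_samples[0]}]
    for s in range(1, n):
        layer = {}
        for l in lateral_samples[s]:
            prev_l, prev_total = min(
                ((p, c + transition_cost(p, l))
                 for p, (c, _) in best[s - 1].items()),
                key=lambda t: t[1])
            layer[l] = (prev_total + station_cost(s, l), prev_l)
        best.append(layer)
    l = min(best[-1], key=lambda k: best[-1][k][0])  # cheapest final sample
    path = [l]
    for s in range(n - 1, 0, -1):  # backtrack through stored predecessors
        l = best[s][l][1]
        path.append(l)
    return path[::-1]

# Example: three stations with offsets sampled at -1, 0, +1 meters
samples = [[-1.0, 0.0, 1.0]] * 3
best_offsets = dp_select(
    samples,
    station_cost=lambda s, l: abs(l),           # stay near the reference line
    transition_cost=lambda a, b: (a - b) ** 2)  # penalize lateral jumps
print(best_offsets)  # -> [0.0, 0.0, 0.0]
```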

Operation 6: the control module 306 is configured to control the ADV810 to follow the generated trajectory 908 and move to the end point Pe (e.g., 804a, 804b, 804c, 804d, 904).

Operation 7: the determination module 604 (shown in fig. 6 and 7) is configured to decide whether the ADV810 should wait at the current location or return to operations 1 through 6 and replan the trajectory. This obstacle avoidance operation is described in detail below.

Fig. 10 is a diagram 1000 illustrating an example of an ADV810 encountering an obstacle 807 blocking the motion of the ADV. For a given start point (e.g., Ps' 903) and end point (e.g., Pe 904) in an open space driving area (e.g., 802), path planning for a trajectory is performed only once rather than in every planning cycle, which avoids the high computational cost of replanning the path in real time. However, planning only once is not sufficient for obstacle avoidance, because the ADV810 would not change its trajectory to avoid collisions with obstacles (e.g., vehicles, bicycles, pedestrians, or animals). A new obstacle avoidance method is therefore disclosed herein. According to one embodiment, the process comprises: determining a probability of a dwell time of an obstacle (e.g., a vehicle, bicycle, pedestrian, or animal) in an affected area of the ADV; calculating an expected dwell time of the dynamic obstacle in the affected area; and, when the ADV encounters an obstacle blocking its motion, deciding whether the ADV should wait or replan. With this method, the obstacle avoidance problem can be addressed: the ADV can plan an optimal path that avoids obstacles and reaches the end point in a time-efficient manner.

Referring to fig. 10, a first trajectory 908 along which to drive may be planned for the ADV810, for example, by performing operations 1-5 as described above. The ADV810 may follow the generated trajectory 908 toward the end point Pe 904. Along the route, the ADV810 may detect an obstacle 807 in the affected area 1001 of the ADV810 based on sensor data obtained from a plurality of sensors mounted on the ADV. For example, the affected area 1001 of the ADV810 may be an area that can be sensed by the plurality of sensors mounted on the ADV810, or an area that affects the motion of the ADV810 along the trajectory 908 being driven. The affected area 1001 may be rectangular, trapezoidal, triangular, circular, or any other shape. For example, the affected area 1001 may have a size of 5m × 10m, 10m × 20m, or the like.

When the ADV810 planned the first trajectory, the obstacle 807 may not have blocked the motion of the ADV810 along the trajectory 908. However, the obstacle 807 may be a dynamic obstacle such as a vehicle, pedestrian, or animal. The obstacle 807 may subsequently move toward the trajectory 908, as shown in fig. 10, and block the motion of the ADV810 along the trajectory 908.

Fig. 11 is a graph 1100 illustrating a probability density function of a dynamic obstacle's dwell time in the affected area. The illustrated probability density function (also referred to as a probability density map or curve) is specifically configured based on a set of one or more parameters that describe or represent a particular driving scenario (e.g., parking lot, intersection, and shape and/or size of the area) and the type of obstacle (e.g., vehicle, bicycle, pedestrian), its behavior, and so forth. As shown in fig. 11, the X-axis represents the time, or predicted time, for which an obstacle stays in the affected area 1001 (e.g., a region of interest or ROI), and the Y-axis represents the probability density of the obstacle staying in the affected area 1001. In probability theory, the probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as the relative likelihood that the value of the random variable equals that sample. More precisely, the PDF is used to specify the probability that the random variable falls within a particular range of values; this probability is given by the integral of the PDF over that range, i.e., the area under the density curve, above the horizontal axis, between the minimum and maximum values of the range (for example, the area under the curve from time zero to a particular time). The probability density function is everywhere non-negative, and its integral over the entire space equals 1.

Referring to fig. 7, 10 and 11, in one embodiment, the expected dwell time of the obstacle 807 in the affected area 1001 may be determined. The dwell time of a dynamic obstacle in the affected area of the ADV may be assumed to follow the probability density function:

f(x) = λe^(-λx)

where λ is the parameter of the dwell time probability density function associated with a dynamic obstacle (e.g., a moving obstacle). The value of λ may be estimated from historical real-world driving data; it may be determined according to the particular driving scenario at a point in time or selected from a preconfigured set of λ parameters. The mean or average dwell time under the same or similar driving scenarios may be denoted 1/λ. The λ parameter may be determined based on a large amount of prior driving statistics collected from various vehicles driving in the same or similar environment. The λ parameter determines the shape and scale of the probability density curve; fig. 11 shows the probability density for one particular λ.
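A brief sketch of how λ might be obtained and the density evaluated, assuming historical dwell times for the matching scenario are available; per the text, the mean dwell time is 1/λ, so λ is estimated here as the reciprocal of the observed average.

```python
import math

def estimate_lambda(historical_dwell_times):
    """Estimate the exponential-distribution parameter λ from history.

    Since the mean dwell time in a given driving scenario is 1/λ,
    λ is the reciprocal of the observed average dwell time.
    """
    mean_dwell = sum(historical_dwell_times) / len(historical_dwell_times)
    return 1.0 / mean_dwell

def dwell_pdf(x, lam):
    """Probability density f(x) = λ·e^(-λx) of a dwell time of x seconds."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0
```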

According to one embodiment, the probability of a dwell time in the range of 0 to Tw may be calculated by integrating the probability density function over the time range from 0 to the expected dwell time Tw. The probability of the dwell time may thus be determined by:

P(0 < x < Tw) = 1 - e^(-λTw)

where Tw is the expected dwell time of the dynamic obstacle.

In one embodiment, an acceptable dwell time probability is one greater than a predetermined probability threshold, such as P = 0.8 or 80%. The expected dwell time of the dynamic obstacle 807 in the affected area 1001 may be calculated based on this probability. In one embodiment, the dwell time module 602 is configured to determine the expected dwell time of the obstacle 807 in the affected area 1001. In one embodiment, the expected dwell time Tw may be calculated from the probability density function or probability density curve described above, given the λ parameter determined or selected at the point in time and a predetermined or acceptable probability (e.g., 80%). In one embodiment, the λ parameter may be dynamically calculated, or determined via a lookup operation on a set of λ parameters previously configured and maintained by the ADV, based on the particular driving scenario. For example, the λ parameter may be determined based on the affected region or ROI (such as, for example, a parking lot, an intersection, etc.). Further, the λ parameter may be determined based on the type of obstacle of interest: it may differ depending on whether the obstacle is a vehicle, a bicycle, a pedestrian, or a pet. The λ parameter may also differ based on the behavior of the obstacle, for example, whether the obstacle is moving fast or slowly, its heading, and its past or predicted movement trajectory.
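Solving 1 - e^(-λTw) = P for Tw gives Tw = -ln(1 - P)/λ. A minimal sketch follows, with an assumed 30-second mean dwell time for the scenario; the function name and example numbers are illustrative only.

```python
import math

def expected_dwell_time(lam, p_accept=0.8):
    """Expected dwell time Tw such that P(0 < x < Tw) = p_accept.

    From 1 - e^(-λ·Tw) = P it follows that Tw = -ln(1 - P) / λ.
    p_accept = 0.8 corresponds to the 80% threshold in the text.
    """
    return -math.log(1.0 - p_accept) / lam

# Example: assumed mean dwell time of 30 s in a parking-lot scenario,
# so λ = 1/30; waiting Tw ≈ 48.3 s covers 80% of dwell times.
lam = 1.0 / 30.0
print(round(expected_dwell_time(lam), 1))  # -> 48.3
```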

If the obstacle 807 is a dynamic obstacle that may move on its own, such as a vehicle, bicycle, pedestrian, cat, or dog, the expected dwell time of the dynamic obstacle 807 in the affected area 1001 may be calculated, as derived above, as:

Tw = -ln(1 - P)/λ

If the obstacle 807 is a static obstacle that cannot move on its own, the expected dwell time is:

Tw=∞

In one embodiment, a first estimated time of arrival (ETA) T1, for the case in which the ADV810 waits for the obstacle 807 to leave the affected area 1001 and then drives automatically along the first trajectory 908, may be determined, for example, by the ETA module 603. A second estimated time of arrival T2, for the case in which the ADV810 replans and drives automatically along a second trajectory 1002, may likewise be determined by the ETA module 603:

T1 = T1m + T1w = L1/Ve + Tw,

T2 = T2m = L2/Ve,

where T1 and T2 are, respectively, the first estimated time of arrival at the end point when following the current trajectory 908 and the second estimated time of arrival when replanning the second trajectory 1002. T1m represents the time it takes the ADV810 to reach the end of the current trajectory 908 without waiting. T1w represents the wait time the ADV810 must endure if, while the current trajectory 908 is blocked by the obstacle 807, the ADV810 decides to continue along the current trajectory 908. T2m represents the time it takes the ADV810 to reach the end of the second trajectory 1002. Ve is the planned average speed of the ADV810 as it travels along the respective trajectories. L1 and L2 represent the lengths of trajectory 908 and trajectory 1002, respectively.

In one embodiment, the ratio of the first estimated time of arrival T1 to the second estimated time of arrival T2 may be determined, for example, by the determination module 604. The determination module 604 may be configured to determine, based on this ratio, whether to move along the second trajectory 1002 or to wait for the obstacle 807 to leave the affected area 1001 and then move along the first trajectory 908:

R=T1/T2=(L1/Ve+Tw)/(L2/Ve)

According to one embodiment, if the ratio R is greater than a predetermined threshold, such as, for example, R > 1 (e.g., the second estimated time of arrival T2 is shorter than the first estimated time of arrival T1), the determination module 604 may determine to replan and return to operations 1 through 7. The determination module 604 may be configured to plan a second trajectory 1002 along which the ADV810 is driven and to control the ADV810 to automatically drive along the second trajectory 1002.

When R is not greater than the predetermined threshold, such as, for example, R <= 1 (e.g., the first estimated time of arrival T1 is not longer than the second estimated time of arrival T2), the determination module 604 may determine to continue waiting for the obstacle 807 to leave the affected area 1001 and then drive automatically along the first trajectory 908.
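Putting the ETA formulas and the ratio test together, a minimal decision sketch follows; the function name, arguments, and example numbers are illustrative assumptions, not the disclosed module's interface.

```python
def should_replan(l1, l2, ve, tw, threshold=1.0):
    """Decide between waiting and replanning via the ETA ratio R = T1/T2.

    l1: remaining length of the current (blocked) trajectory, in meters.
    l2: length of the candidate replanned trajectory, in meters.
    ve: planned average speed, in m/s.
    tw: expected dwell time of the blocking obstacle, in seconds
        (float('inf') for a static obstacle, which always forces a replan).
    Returns True to replan along the second trajectory, False to wait.
    """
    t1 = l1 / ve + tw  # ETA if the ADV waits out the obstacle
    t2 = l2 / ve       # ETA along the replanned trajectory
    return (t1 / t2) > threshold

# Example: 40 m left on the blocked path, a 70 m detour, 5 m/s, Tw = 48.3 s:
# R = (8 + 48.3) / 14 ≈ 4.0 > 1, so the ADV replans.
print(should_replan(40.0, 70.0, 5.0, 48.3))  # -> True
```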

In one embodiment, an ADV operating in the on-lane mode in a first driving zone (e.g., 801) may be configured to automatically switch to the open space mode in response to detecting that the ADV is about to drive in a second driving zone (e.g., 802). For example, referring back to fig. 8A, when the ADV810 reaches point 805a, the perception module 302 may detect that the ADV810 is about to drive in the second driving zone (e.g., 802). In response, the routing module 307 and the planning module 305 may be configured to automatically switch from operating in the on-lane mode according to the first process to operating in the open space mode according to the second process.

Fig. 12 is a flow chart illustrating an example of a process for an ADV operating in one of an on-lane mode or an open space mode, according to one embodiment. Process 1200 may be performed by processing logic that may comprise software, hardware, or a combination thereof. For example, the process 1200 may be performed by the routing module 307 and the planning module 305. Referring to fig. 12, in operation 1201, processing logic determines a start point and an end point of a route along which the ADV is to be driven. In operation 1202, the processing logic determines whether each of the start point and the end point is within a first driving region of a first type having lane boundaries or a second driving region of a second type that is an open space without lane boundaries. In operation 1203, the processing logic divides the route into a first route segment and a second route segment based on determining whether each of the start point and the end point is within the first driving area or the second driving area. In operation 1204, the processing logic operates in one of the on-lane mode or the open space mode to plan a first trajectory for the first route segment, and operates in one of the on-lane mode or the open space mode to plan a second trajectory for the second route segment, depending on whether the start point and the end point are within the first driving area or the second driving area.

Fig. 13 is a flow chart illustrating an example of a process for an ADV to operate in an open space mode, according to one embodiment. Process 1300 may be performed by processing logic that may include software, hardware, or a combination thereof. For example, the process 1300 may be performed by the routing module 307 and the planning module 305. Referring to fig. 13, in operation 1301, processing logic searches for a route of the first route segment or the second route segment based on the modified A-star search algorithm. In operation 1302, processing logic generates a reference line based on the first route segment or the second route segment. In operation 1303, the processing logic generates a virtual road boundary based on the width of the ADV and the reference line. In operation 1304, the processing logic generates a grid within the virtual road boundary. In operation 1305, processing logic determines a set of candidate trajectories based on the grid. In operation 1306, processing logic selects a trajectory from the set of candidate trajectories to control the ADV to drive autonomously according to the trajectory.

Fig. 14 is a flow chart illustrating an example of a process for ADV obstacle avoidance according to one embodiment. Process 1400 may be performed by processing logic that may include software, hardware, or a combination thereof. For example, the process 1400 may be performed by the awareness module 302, the routing module 307, the planning module 305, and the control module 306. Referring to fig. 14, in operation 1401, processing logic plans a first trajectory along which an ADV is driven. In operation 1402, the processing logic controls the ADV to autonomously drive along the first trajectory. In operation 1403, the processing logic detects obstacles in the affected area of the ADV based on sensor data obtained from a plurality of sensors mounted on the ADV while controlling the ADV to autonomously drive along the first trajectory. In operation 1404, processing logic determines an expected dwell time of the obstacle in the affected area. In operation 1405, the processing logic determines whether to plan a second trajectory or wait for the obstacle to leave the affected area based on an expected dwell time of the obstacle in the affected area. In operation 1406, the processing logic plans a second trajectory along which the ADV is to drive and controls the ADV to automatically drive along the second trajectory, or controls the ADV to wait for the obstacle to leave the affected area and then automatically drive along the first trajectory based on the determination of whether to plan the second trajectory or wait for the obstacle to leave the affected area.

It should be noted that some or all of the components as shown and described above may be implemented in software, hardware, or a combination thereof. For example, such components may be implemented as software installed and stored in a persistent storage device, which may be loaded into and executed by a processor (not shown) in order to perform the processes or operations described throughout this application. Alternatively, such components may be implemented as executable code programmed or embedded into dedicated hardware, such as an integrated circuit (e.g., an application specific integrated circuit or ASIC), a Digital Signal Processor (DSP) or Field Programmable Gate Array (FPGA), which is accessible via a respective driver and/or operating system from an application. Further, such components may be implemented as specific hardware logic within a processor or processor core as part of an instruction set accessible by software components through one or more specific instructions.

Some portions of the foregoing detailed description have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, considered to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as those set forth in the appended claims, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

Embodiments of the present disclosure also relate to apparatuses for performing the operations herein. Such an apparatus may be controlled by a computer program stored in a non-transitory computer readable medium. A machine-readable medium includes any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., computer) readable storage medium (e.g., read only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices).

The processes or methods depicted in the foregoing figures may be performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, etc.), software (e.g., embodied on a non-transitory computer readable medium), or a combination of both. Although the processes or methods are described above in terms of some sequential operations, it should be appreciated that some of the operations may be performed in a different order. Further, some operations may be performed in parallel rather than sequentially.

Embodiments of the present disclosure are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of embodiments of the disclosure as described herein.

In the foregoing specification, embodiments of the disclosure have been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the disclosure as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.
