Autonomous vehicle communication

Document No.: 173729    Publication date: 2021-10-29

Description: This technology, Autonomous vehicle communication, was created by S. Pandit, J. 默凯, and C. Wright on 2020-03-09. Abstract: Aspects of the present disclosure provide a method of facilitating communication from an autonomous vehicle 100 to a user. For example, a method may include: inputting the current location of the vehicle and map information 200 into a model when attempting to pick up the user and before the user enters the vehicle, so as to identify a type of communication action for communicating the location of the vehicle to the user; enabling a first communication based on the type of the communication action; determining from received sensor data whether the user has responded to the first communication; and enabling a second communication based on the determination of whether the user has responded to the communication.

1. A method of facilitating communication from an autonomous vehicle to a user, the method comprising:

inputting, by one or more processors of the vehicle, a current location of the vehicle and map information into a model when attempting to pick up the user and before the user enters the autonomous vehicle;

identifying, using the model, a type of communication action for communicating the location of the vehicle to the user;

enabling, by the one or more processors, a first communication based on the type of the communication action; and

after enabling the first communication, determining, by the one or more processors, from received sensor data, whether the user is moving toward the vehicle.

2. The method of claim 1, wherein the type of communication action is an automatic generation of an audible communication by the vehicle, and enabling the first communication comprises instructing the vehicle to engage in the audible communication.

3. The method of claim 2, wherein the first communication is the vehicle sounding its horn.

4. The method of claim 1, wherein the type of communication action is automatically surfacing an option on a user's client computing device to enable the user to cause the vehicle to generate an audible communication.

5. The method of claim 1, wherein the type of communication action is automatic generation of a visual communication by the vehicle, and enabling the first communication comprises the vehicle engaging in the visual communication.

6. The method of claim 5, wherein the first communication is the vehicle flashing its headlights.

7. The method of claim 1, wherein the type of communication action is automatically surfacing an option on a user's client computing device to enable the user to cause the vehicle to generate a visual communication.

8. The method of claim 1, wherein the received sensor data comprises location information generated by a client computing device of the user.

9. The method of claim 1, wherein the received sensor data comprises data generated by a perception system of the vehicle, the perception system comprising at least one sensor.

10. The method of claim 1, further comprising:

determining, using an upgraded communication model, a type of communication action for a second communication; and

enabling, by the one or more processors, the second communication based on the determination of whether the user is moving toward the vehicle, and wherein the type of communication action used for the second communication is also used to enable the second communication.

11. The method of claim 10, wherein the type of communication action for the second communication is automatically surfacing an option on the user's client computing device to enable the user to cause the vehicle to generate an audible communication.

12. The method of claim 10, wherein the type of communication action for the second communication is automatically surfacing an option on a client computing device of the user to enable the user to cause the vehicle to generate a visual communication.

13. The method of claim 1, wherein the first communication comprises the vehicle automatically flashing its lights and the second communication comprises the vehicle automatically sounding its horn.

14. The method of claim 1, wherein the first communication comprises the vehicle automatically sounding its horn and the second communication comprises the vehicle automatically requesting that a customer service representative connect with a client computing device of the user.

15. The method of claim 1, wherein the model is a machine learning model.

16. A method of training a model for facilitating communication from an autonomous vehicle to a user, the method comprising:

receiving, by one or more computing devices, training data comprising a first training input indicative of a location of the vehicle, a second training input indicative of map information, a third training input indicative of a location of the user, a fourth training input characterizing sensor data identifying one or more objects in an environment of the vehicle, and a target output indicative of a type of communication;

training, by the one or more computing devices, the model on training data according to current values of parameters of the model to generate a set of output values indicative of a degree of appropriateness for the type of communication;

determining a difference value using the target output and the set of output values; and

adjusting, by the one or more computing devices, the current values of the parameters of the model using the difference value.

17. The method of claim 16, wherein the training data corresponds to a request of a user to have the vehicle perform the type of communication in order to communicate with the user.

18. The method of claim 16, wherein the type of communication is an audible communication.

19. The method of claim 16, wherein the type of communication is a visual communication.

20. The method of claim 16, wherein the training data further comprises ambient lighting conditions.

Background

Autonomous vehicles, such as vehicles that do not require a human driver, may be used to assist in transporting passengers or items from one location to another. Such vehicles may operate in a fully autonomous driving mode in which the passenger may provide some initial input (such as a pick-up or destination location) and the vehicle maneuvers itself to that location.

When a person (or user) wants to physically transport between two locations via a vehicle, they can use any number of taxi services. To date, these services typically involve a human driver who is given scheduling instructions to a location to pick up and drop off the user. Typically, these locations are worked out via physical signals (i.e., signaling the driver to pull over), a telephone call in which the user explains his or her actual location, or an in-person discussion between the driver and the user. These services, while useful, generally do not provide the user with accurate information about where the pick-up or drop-off will occur.

Disclosure of Invention

Aspects of the present disclosure provide a method of facilitating communication from an autonomous vehicle to a user. The method includes inputting, by one or more processors of the vehicle, a current location of the vehicle and map information into the model when attempting to pick up the user and before the user enters the autonomous vehicle; identifying, using the model, a type of communication action for communicating the location of the vehicle to the user; enabling, by the one or more processors, a first communication based on the type of the communication action; and determining, by the one or more processors, from the received sensor data, whether the user is moving toward the vehicle after enabling the first communication.

In one example, the type of communication action is an automatic generation of an audible communication by the vehicle, and enabling the first communication includes instructing the vehicle to engage in the audible communication. In this example, the first communication is the vehicle sounding its horn. In another example, the type of communication action is automatically surfacing options on the user's client computing device to enable the user to cause the vehicle to generate an audible communication. In another example, the type of communication action is an automatic generation of a visual communication by the vehicle, and enabling the first communication includes the vehicle engaging in the visual communication. In this example, the first communication is the vehicle flashing its headlights. In another example, the type of communication action is to automatically present options on the user's client computing device to enable the user to cause the vehicle to generate a visual communication. In another example, the received sensor data includes location information generated by a client computing device of the user. In another example, the received sensor data includes data generated by a perception system of the vehicle, the perception system including at least one sensor. In another example, the method further includes using the upgraded communication model to determine a type of communication action for the second communication, and enabling, by the one or more processors, the second communication based on the determination of whether the user is moving toward the vehicle, and wherein the type of communication action for the second communication is also used to enable the second communication. In this example, the type of communication action for the second communication is to automatically present an option on the user's client computing device to enable the user to cause the vehicle to generate an audible communication. Alternatively, the type of communication action for the second communication is to automatically present an option on the user's client computing device to enable the user to cause the vehicle to generate the visual communication. In another example, the first communication includes the vehicle automatically flashing its lights, and the second communication includes the vehicle automatically sounding its horn. In another example, the first communication includes the vehicle automatically sounding its horn, and the second communication includes the vehicle automatically requesting that a customer service representative connect with the user's client computing device. In another example, the model is a machine learning model.

Another aspect of the present disclosure provides a method of training a model for facilitating communication from an autonomous vehicle to a user. The method includes receiving, by one or more computing devices, training data including a first training input indicative of a location of a vehicle, a second training input indicative of map information, a third training input indicative of a location of a user, a fourth training input characterizing sensor data identifying one or more objects in an environment of the vehicle, and a target output indicative of a type of communication; training, by one or more computing devices, a model on training data according to current values of parameters of the model to generate a set of output values indicative of a degree of appropriateness for a type of communication; determining a difference value using the target output and the set of output values; and adjusting the current values of the parameters of the model using the difference values.

In one example, the training data corresponds to a request by a user to cause the vehicle to perform the type of communication to communicate with the user. In another example, the type of communication is an audible communication. In another example, the type of communication is a visual communication. In another example, the training data further includes ambient lighting conditions.

Drawings

FIG. 1 is a functional diagram of an example vehicle, according to an example embodiment.

Fig. 2 is an example of map information according to aspects of the present disclosure.

FIG. 3 is an example exterior view of a vehicle according to aspects of the present disclosure.

Fig. 4 is a schematic diagram of an example system according to aspects of the present disclosure.

Fig. 5 is a functional diagram of the system of fig. 4, according to aspects of the present disclosure.

Fig. 6 is an example of a client computing device and displayed options in accordance with aspects of the present disclosure.

Fig. 7 is an example of a client computing device and displayed options in accordance with aspects of the present disclosure.

Fig. 8 is an example of a client computing device and displayed options in accordance with aspects of the present disclosure.

Fig. 9 is an example of map information according to aspects of the present disclosure.

Fig. 10 is an example of map information according to aspects of the present disclosure.

Fig. 11 is an example flow diagram in accordance with aspects of the present disclosure.

Detailed Description

SUMMARY

The present technology relates to the use of audible and/or visual communications to facilitate pick-up and drop-off of passengers (or users) or cargo for an autonomous vehicle, or indeed, in any situation where a pedestrian needs to reach the vehicle. In many cases, an autonomous vehicle will not have a human driver who can communicate with people to help them find the vehicle (i.e., pick up) or the correct drop off location. As such, the autonomous vehicle may use various audible and/or visual communications to proactively attempt to communicate with the person in a useful and efficient manner. For example, a model may be generated to allow the vehicle to determine when audible and/or visual communications should be provided to a person and/or whether to surface an option for the person to do so.

To generate the model, an option may be provided to the person, e.g., via an application on the person's computing device (e.g., a mobile phone or other client computing device), to cause the vehicle to provide audible communications. This data may be recorded when a person uses the option. Each time an option is used, a message may be provided to the vehicle to cause the vehicle to communicate. The message may include information such as the date and time the request was generated, the type of communication to be made, and the location of the person. The message and other information may also be sent to the server computing system, for example, by the vehicle and/or the client computing device.

To allow the computing device of the vehicle to better communicate with the person, the messages and other information may then be processed by the server computing device to generate the model. For example, a model may be trained to indicate whether a certain type of communication is appropriate. If so, the type of communication may be made available and/or automatically generated by the vehicle's computing device as an option in an application on the person's client computing device.

To train the model, the location of the person, other information, and map information may be used as training inputs and the type of communication (from the message) may be used as training outputs. The more training data used to train the model, the more accurate the model will be in determining when to provide communications or when to provide options to provide communications. The model may be trained to distinguish when visual communication is appropriate as compared to when audible communication is appropriate.
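By way of illustration only (this sketch is not part of the disclosure), the pairing of training inputs and a target output described above might be assembled as follows; the particular feature set and field names are assumptions chosen for the example:

```python
from typing import List, Tuple

# A minimal sketch of assembling one training example from a logged request: the
# person's location, other message information, and map-derived features are the
# inputs, and the requested communication type is the target output. The specific
# features below are illustrative assumptions, not the patent's actual feature set.
def to_training_example(user_xy: Tuple[float, float],
                        vehicle_xy: Tuple[float, float],
                        nearby_pedestrians: int,
                        distance_to_entrance_m: float,
                        ambient_light: float,
                        requested_type: str) -> Tuple[List[float], str]:
    features = [
        user_xy[0], user_xy[1],
        vehicle_xy[0], vehicle_xy[1],
        float(nearby_pedestrians),
        distance_to_entrance_m,
        ambient_light,
    ]
    # Target output: e.g. "sound_horn" or "flash_headlights", taken from the message.
    return features, requested_type
```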

In some instances, depending on the amount of available training data, the model may be trained for a particular purpose. For example, a model may be trained for a particular person or group of persons based on characteristics of the service history of the person or group of persons.

The trained models may then be provided to one or more vehicles in order to allow the computing devices of those vehicles to better communicate with humans. While the vehicle is approaching or waiting at the pick-up or drop-off location, the computing device 110 of the vehicle may use the model to determine whether a communication is appropriate and, if so, of what type. This may be determined, for example, based on the environment of the vehicle and depending on whether a person (or possible passenger) has a clear line of sight to the vehicle, and vice versa.

In one aspect, the model may be used to determine whether an option as discussed above should be surfaced in an application. Further, the presented options may only allow visual communication if the output of the model indicates that visual communication is more appropriate than audible communication. Likewise, the presented options may only allow audible communication if the output of the model indicates that audible communication is more appropriate than visual communication. In another aspect, rather than providing the user with options for audible or visual communication, the model may be used to determine whether the vehicle should automatically engage in audible or visual communication. Additionally or alternatively, the output of the model may be used to determine an initial action, and subsequent actions may be automatically taken depending on the initial action.

The user's response to subsequent actions may be used to build a model of the upgraded communication. For example, the results may be tracked for each case using subsequent actions. This information may then be analyzed to identify patterns that increase the likelihood that the user will enter the vehicle more quickly in response to the vehicle communication. The model of the upgraded communication may be trained to determine what the next action should be based on the previous or initial action to best facilitate the user's arrival at the vehicle. Likewise, the more training data used to train the model, the more accurate the model will be in determining how to upgrade from previous actions. As with the first model, one or more vehicles may then be provided with a trained, upgraded communication model in order to allow the computing devices of these vehicles to better communicate with people, including potential passengers.
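As an illustrative sketch only, the upgraded (escalation) behavior described above can be thought of as a policy that maps the previous action and the user's response to the next action; the action names and fallback below are assumptions, not the patent's actual policy:

```python
from typing import Optional

# Hypothetical learned escalation policy: previous action -> next action to try.
ESCALATION_POLICY = {
    "surface_app_option": "flash_headlights",
    "flash_headlights": "sound_horn",
    "sound_horn": "connect_customer_service",
}

def next_communication(previous_action: str, user_moving_toward_vehicle: bool) -> Optional[str]:
    """No escalation is needed once the user responds; otherwise escalate from the previous action."""
    if user_moving_toward_vehicle:
        return None
    return ESCALATION_POLICY.get(previous_action, "connect_customer_service")

# Example: the user has not responded to flashing headlights, so the horn is tried next.
print(next_communication("flash_headlights", user_moving_toward_vehicle=False))
```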

The features described herein may allow an autonomous vehicle to improve pick-up and drop-off of passengers. For example, the user may, on his or her own or when prompted, use the presented options to have the vehicle communicate with the user visually and/or audibly. This makes it easier to identify the position of the vehicle relative to the user. Additionally or alternatively, the vehicle may use the model to proactively determine whether and how to communicate with the user, and how to upgrade those communications over time.

Example System

As shown in fig. 1, a vehicle 100 according to an aspect of the present disclosure includes various components. While certain aspects of the present disclosure are particularly useful in conjunction with a particular type of vehicle, the vehicle may be any type of vehicle, including but not limited to an automobile, a truck, a motorcycle, a bus, a recreational vehicle, and the like. The vehicle may have one or more computing devices, such as computing device 110 including one or more processors 120, memory 130, and other components typically found in a general purpose computing device.

Memory 130 stores information accessible by one or more processors 120, including instructions 134 and data 132 that may be executed or otherwise used by processors 120. The memory 130 may be of any type capable of storing information accessible by the processor, including a computing device readable medium, or other medium that stores data that may be read by an electronic device, such as a hard disk drive, memory card, ROM, RAM, DVD or other optical disk, and other writable and read-only memories. The systems and methods may include different combinations of the foregoing, whereby different portions of the instructions and data are stored on different types of media.

The instructions 134 may be any set of instructions that are directly executable (such as machine code) or indirectly executable (such as scripts) by a processor. For example, the instructions may be stored as computing device code on a computing device readable medium. In this regard, the terms "instructions" and "programs" may be used interchangeably herein. The instructions may be stored in an object code format for direct processing by a processor, or in any other computing device language, including as a collection of separate source code modules or scripts that are interpreted or pre-compiled as needed. The function, method and routine of the instructions are explained in more detail below.

Processor 120 may retrieve, store, or modify data 132 according to instructions 134. For example, although claimed subject matter is not limited by any particular data structure, data may be stored in a computing device register, in a relational database as a table, XML document, or flat file having a plurality of different fields and records. The data may also be formatted in any computing device readable format.

The one or more processors 120 may be any conventional processor, such as a commercially available CPU or GPU. Alternatively, one or more processors may be special purpose devices, such as an ASIC or other hardware-based processor. Although fig. 1 functionally shows the processor, memory, and other elements of the computing device 110 as being within the same block, those of ordinary skill in the art will appreciate that a processor, computing device, or memory may in fact be comprised of multiple processors, computing devices, or memories, which may or may not be housed within the same physical housing. For example, the memory may be a hard disk drive or other storage medium located in a different housing than the computing device 110. Thus, references to a processor or computing device are to be understood as including references to a collection of processors or computing devices or memories that operate in parallel or not.

Computing device 110 may include all of the components typically used in connection with computing devices, such as the processors and memories described above, as well as user inputs 150 (e.g., a mouse, keyboard, touch screen, and/or microphone) and various electronic displays (e.g., a monitor having a screen or any other electrical device operable to display information). In this example, the vehicle includes an electronic display 152 and one or more speakers 154 to provide an informational or audiovisual experience. In this regard, the electronic display 152 may be located within a cabin of the vehicle 100 and may be used by the computing device 110 to provide information to passengers within the vehicle 100. In some examples, electronic display 152 may be an interior display that is visible to persons outside the vehicle through windows or other transparent vehicle enclosures of the vehicle, and/or an interior display that is capable of projecting images through windows or other transparent vehicle enclosures to provide information to persons outside the vehicle. Alternatively, the electronic display 152 may be an externally mounted display (i.e., the underside of a roof pod that may be viewed through a glass roof) capable of projecting information to passengers inside the vehicle, and/or an externally mounted display that provides information to people outside the vehicle.

Computing device 110 may also include one or more wireless network connections 156 to facilitate communications with other computing devices, such as the client and server computing devices described in detail below. The wireless network connections may include short-range communication protocols such as Bluetooth and Bluetooth Low Energy (LE), cellular connections, and various configurations and protocols including the internet, world wide web, intranets, virtual private networks, wide area networks, local area networks, private networks using communication protocols proprietary to one or more companies, Ethernet, Wi-Fi, and HTTP, as well as various combinations of the foregoing.

In one example, the computing device 110 may be part of a communication system that is incorporated into an autonomous driving computing system in the vehicle 100. In this regard, the communication system may include or may be configured to send signals to cause audible communications to be played through the speaker 154. The communication system may also be configured to send signals for visual communication, such as by flashing or otherwise controlling the headlights 350, 352 of the vehicle (as shown in fig. 3) or by displaying information on the internal electronic display 152.

The autonomous control system 176 may include various computing devices, similar to the configuration of computing device 110, capable of communicating with various components of the vehicle to control the vehicle in an autonomous driving mode. For example, returning to fig. 1, autonomous control system 176 may communicate with various systems of vehicle 100, such as deceleration system 160, acceleration system 162, steering system 164, routing system 166, planner system 168, location system 170, and perception system 172, to control movement, speed, etc. of vehicle 100 in an autonomous driving mode according to instructions 134 of memory 130.

As an example, a computing device of autonomous control system 176 may interact with deceleration system 160 and acceleration system 162 to control the speed of the vehicle. Similarly, the autonomous control system 176 may use the steering system 164 to control the direction of the vehicle 100. For example, if the vehicle 100 is configured for use on a roadway (such as a car or truck), the steering system may include components that control the angle of the wheels to turn the vehicle. The autonomous control system 176 may also use a signaling system to signal the intent of the vehicle to other drivers or vehicles, for example, by illuminating turn or brake lights when needed.

Autonomous control system 176 may use routing system 166 to generate a route to a destination. The computing device 110 may use the planner system 168 in order to follow the route. In this regard, the planner system 168 and/or the routing system 166 may store detailed map information, such as highly detailed maps identifying roads, lane lines, intersections, crosswalks, speed limits, traffic signals, buildings, signs, real-time traffic information, pull-over spots, vegetation, or other such objects and information.

Fig. 2 is an example of map information 200 for a road segment including an intersection 202 adjacent to a parking lot 210 of a building 220. The map information 200 may be a local version of map information stored in the memory 130 of the computing device 110. Other versions of map information may also be stored in the storage system 450, discussed further below. In this example, the map information 200 includes information identifying the shape, location, and other characteristics of lane lines 230, 232, 234, 236, lanes 240, 242, 244, 246, stop signs 250, 252, 254, 256, and so forth. In this example, the map information 200 also includes information identifying characteristics of the parking lot 210 and the building 220, such as parking spaces 260, 262, 264, 266, 268, drivable areas 270, 272, 274, 276, etc. Further, in this example, the map information identifies entrances and exits 282, 284, 286 of the building 220. Although only a few features are depicted in the map information 200 of fig. 2, the map information 200 may include significantly more features and details to enable the vehicle 100 to be controlled in an autonomous driving mode.

Although the map information is depicted herein as an image-based map, the map information need not be entirely image-based (e.g., raster). For example, the map information may include one or more roadgraphs or graph networks of information such as roads, lanes, intersections, and the connections between these features, which may be represented by road segments. Each feature may be stored as graph data and may be associated with information such as its geographic location and whether or not it is linked to other related features (e.g., a stop sign may be linked to a road, intersection, etc.). In some examples, the associated data may include a grid-based index of the roadgraph to allow efficient lookup of certain roadgraph features.
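As a rough illustration of the graph-based map representation described above (the structure and feature identifiers are assumptions loosely based on fig. 2, not an actual map format):

```python
from collections import defaultdict

# Hypothetical roadgraph: each feature stores its geographic location and links to
# related features; identifiers echo features from the example map information 200.
road_graph = {
    "lane_240":      {"type": "lane",      "location": (37.4230, -122.0880), "links": ["stop_sign_250"]},
    "stop_sign_250": {"type": "stop_sign", "location": (37.4231, -122.0882), "links": ["lane_240"]},
    "entrance_286":  {"type": "entrance",  "location": (37.4235, -122.0879), "links": []},
}

def build_grid_index(graph, cell_size=0.001):
    """Bucket features by a coarse lat/lon grid cell so nearby features can be found quickly."""
    index = defaultdict(list)
    for feature_id, feature in graph.items():
        lat, lon = feature["location"]
        cell = (int(lat / cell_size), int(lon / cell_size))
        index[cell].append(feature_id)
    return index

grid_index = build_grid_index(road_graph)
```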

Autonomous control system 176 may use positioning system 170 to determine the relative or absolute position of the vehicle on a map or on the earth. For example, the positioning system 170 may include a GPS receiver to determine the latitude, longitude, and/or altitude location of the device. Other positioning systems, such as laser-based positioning systems, inertial assisted GPS, or camera-based positioning, may also be used to identify the location of the vehicle. The location of the vehicle may include an absolute geographic location, such as latitude, longitude, and altitude, as well as relative location information, such as relative to the location of other automobiles immediately surrounding it, which may often be determined with less noise than the absolute geographic location.

The positioning system 170 may also include other devices, such as accelerometers, gyroscopes, or additional direction/velocity detection devices, in communication with the computing device of the autonomous control system 176 to determine the direction and velocity of the vehicle or changes thereto. For example only, the acceleration device may determine its pitch, yaw, or roll (or changes thereof) relative to the direction of gravity or relative to a plane perpendicular to the direction of gravity. The device may also track increases or decreases in speed and the direction of such changes. The location and orientation data provided by the device as set forth herein may be provided automatically to the computing device 110, other computing devices, and combinations of the foregoing.

The perception system 172 also includes one or more components for detecting objects outside the vehicle, such as other vehicles, obstacles in the road, traffic signals, signs, trees, and so forth. For example, the perception system 172 may include a laser, sonar, radar, camera, and/or any other detection device that records data that may be processed by a computing device of the autonomous control system 176. Where the vehicle is a passenger vehicle such as a minivan, the minivan may include a laser or other sensor mounted on the roof or another convenient location. For example, fig. 3 is an example exterior view of the vehicle 100. In this example, a roof-top housing 310 and a dome housing 312 may include a LIDAR sensor as well as various cameras and radar units. Further, the housing 320 located at the front end of the vehicle 100 and the housings 330, 332 on the driver side and passenger side of the vehicle may each house LIDAR sensors. For example, the housing 330 is located in front of the driver's door 360. The vehicle 100 further comprises housings 340, 342 for radar units and/or cameras also located on the roof of the vehicle 100. Additional radar units and cameras (not shown) may be located at the front and rear ends of the vehicle 100 and/or at other locations along the roof or roof-top housing 310.

The autonomous control system 176 is capable of communicating with various components of the vehicle to control movement of the vehicle 100 according to the primary vehicle control code of the memory of the autonomous control system 176. For example, returning to fig. 1, autonomous control system 176 may include various computing devices in communication with various systems of vehicle 100, such as deceleration system 160, acceleration system 162, steering system 164, routing system 166, planner system 168, location system 170, perception system 172, and power system 174 (i.e., the vehicle's engines or motors), to control movement, speed, etc. of vehicle 100 according to instructions 134 of memory 130.

Various systems of the vehicle may function using autonomous vehicle control software in order to determine how to control the vehicle and to control it. As an example, the perception system software modules of perception system 172 may use sensor data generated by one or more sensors of the autonomous vehicle (such as a camera, LIDAR sensor, radar unit, sonar unit, etc.) to detect and identify objects and their characteristics. These characteristics may include location, type, orientation, velocity, acceleration, change in acceleration, size, shape, and the like. In some instances, the characteristics may be input into a behavior prediction system software module that uses various behavior models based on object types to output predicted future behavior for a detected object. In other examples, the characteristics may be input into one or more detection system software modules, such as a traffic light detection system software module configured to detect the state of a known traffic signal, a construction zone detection system software module configured to detect a construction zone from sensor data generated by one or more sensors of the vehicle, and an emergency vehicle detection system configured to detect an emergency vehicle from sensor data generated by sensors of the vehicle. Each of these detection system software modules may use various models to output a likelihood that a construction zone is present or that an object is an emergency vehicle. Detected objects, predicted future behavior, the various likelihoods from the detection system software modules, map information identifying the vehicle's environment, positioning information from the positioning system 170 that identifies the vehicle's position and orientation, the vehicle's destination, and feedback from various other systems of the vehicle may be input into the planner system software module of the planner system 168. The planner system may use this input to generate a trajectory for the vehicle to follow over a future period of time based on the route generated by the routing module of the routing system 166. The control system software modules of the autonomous control system 176 may be configured to control movement of the vehicle (e.g., by controlling braking, acceleration, and steering of the vehicle) in order to follow the trajectory.

The autonomous control system 176 may control the vehicle in an autonomous driving mode by controlling various components. For example, the autonomous control system 176 may use data from the detailed map information and the planner system 168 to navigate the vehicle to a destination location completely autonomously. The autonomous control system 176 may use the positioning system 170 to determine the location of the vehicle and the perception system 172 to detect and respond to objects as needed to safely reach the location. Also, to do so, the computing device 110 may generate trajectories and cause the vehicle to follow them, such as by accelerating the vehicle (e.g., by supplying fuel or other energy to the engine or power system 174 via the acceleration system 162), decelerating (e.g., by reducing the fuel supplied to the engine or power system 174, shifting gears, and/or applying brakes via the deceleration system 160), changing direction (e.g., by turning the front or rear wheels of the vehicle 100 via the steering system 164), and signaling such changes (e.g., by illuminating turn signals). Thus, the acceleration system 162 and the deceleration system 160 may be part of a drivetrain that includes various components between the vehicle's engine and the vehicle's wheels. Likewise, by controlling these systems, the autonomous control system 176 may also control the drivetrain of the vehicle in order to maneuver the vehicle autonomously.

The computing device 110 of the vehicle 100 may also receive information from or send information to other computing devices, such as those computing devices that are part of the transportation service and other computing devices. Fig. 4 and 5 are a schematic and functional diagram, respectively, of an example system 400, the example system 400 including a plurality of computing devices 410, 420, 430, 440 and a storage system 450 connected via a network 460. The system 400 also includes a vehicle 100 and vehicles 100A, 100B that may be configured the same as or similar to the vehicle 100. Although only a few vehicles and computing devices are depicted for simplicity, a typical system may include significantly more vehicles and computing devices.

As shown in fig. 5, each of the computing devices 410, 420, 430, 440 may include one or more processors, memory, data, and instructions. Such processors, memories, data, and instructions may be configured similar to the one or more processors 120, memories 130, data 132, and instructions 134 of the computing device 110.

Network 460 and intermediate nodes may include various configurations and protocols, including short-range communication protocols, such as bluetooth, bluetooth LE, the internet, the world wide web, intranets, virtual private networks, wide area networks, local area networks, private networks using communication protocols specific to one or more companies, ethernet, WiFi, and HTTP, as well as various combinations of the foregoing. Such communication may be facilitated by any device capable of sending and receiving data to and from other computing devices, such as modems and wireless interfaces.

In one example, the one or more computing devices 410 may include one or more server computing devices (e.g., a load balancing server farm) having multiple computing devices that exchange information with different nodes of a network for the purpose of receiving data from, processing data, and sending data to other computing devices. For example, the one or more computing devices 410 may include one or more server computing devices capable of communicating with the computing device 110 of the vehicle 100 or similar computing devices of the vehicle 100A and the computing devices 420, 430, 440 via the network 460. For example, the vehicles 100, 100A may be part of a fleet of vehicles that may be dispatched by a server computing device to various locations. In this regard, the server computing device 410 may function as a dispatch server computing system that may be used to dispatch vehicles (such as vehicles 100 and 100A) to different locations for picking up and dropping off passengers. Further, the server computing device 410 may use the network 460 to send and present information to users (such as users 422, 432, 442) on displays (such as displays 424, 434, 444 of computing devices 420, 430, 440). In this regard, the computing devices 420, 430, 440 may be considered client computing devices.

As shown in fig. 5, each client computing device 420, 430, and 440 may be a personal computing device intended for use by a user 422, 432, 442, and have all of the components typically used in connection with a personal computing device, including one or more processors (e.g., a Central Processing Unit (CPU)), memory (e.g., RAM and internal hard drives) that stores data and instructions, a display such as display 424, 434, 444 (e.g., a monitor having a screen, a touch screen, a projector, a television, or other device operable to display information), and a user input device 426, 436, 446 (e.g., a mouse, a keyboard, a touch screen, or a microphone). The client computing device may also include a camera for recording video streams, speakers, a network interface device, and all components for connecting these elements to each other.

While each of the client computing devices 420, 430, 440 may comprise a full-size personal computing device, they may alternatively comprise a mobile computing device capable of wirelessly exchanging data with a server over a network such as the internet. By way of example only, the client computing device 420 may be a mobile phone or a device such as a wireless-enabled PDA, a tablet PC, a wearable computing device or system, or a netbook capable of obtaining information via the internet or other network. In another example, the client computing device 430 may be a wearable computing system, such as the wristwatch shown in fig. 4. As an example, a user may input information using a small keyboard, a keypad, a microphone, visual signals via a camera, or a touch screen.

Like the memory 130, the storage system 450 may be any type of computerized storage capable of storing information accessible by the server computing device 410, such as a hard disk drive, memory card, ROM, RAM, DVD, CD-ROM, writable memory, and read-only memory. Further, storage system 450 may comprise a distributed storage system in which data is stored on a plurality of different storage devices physically located in the same or different geographic locations. As shown in fig. 4 and 5, storage system 450 may be connected to computing devices via network 460 and/or may be directly connected to or incorporated into any of computing devices 110, 410, 420, 430, 440, etc.

The storage system 450 may store various types of information. For example, the storage system 450 may also store the above-described autonomous vehicle control software to be used by a vehicle, such as the vehicle 100, to operate the vehicle in an autonomous driving mode. The autonomous vehicle control software stored in the storage system 450 includes various invalid and valid versions of the autonomous vehicle control software. Once active, the autonomous vehicle control software may be transmitted to, for example, the memory 130 of the vehicle 100 for use by the vehicle's computing device to control the vehicle in an autonomous driving mode.

The storage system 450 may store various types of information as described in more detail below. A server computing device (such as one or more server computing devices 410) may retrieve or otherwise access this information in order to perform some or all of the features described herein. For example, the storage system may store various models and parameter values for the models, which may be updated via training as discussed further below. The storage system 450 may also store log data. The log data may include, for example, sensor data generated by a perception system, such as the perception system 172 of the vehicle 100. The perception system may include a plurality of sensors that generate sensor data. As an example, the sensor data may include raw sensor data as well as data identifying defining characteristics of perceived objects (including other road users), such as the shape, location, orientation, speed, etc. of objects (such as vehicles, pedestrians, cyclists, vegetation, curbs, lane lines, sidewalks, crosswalks, buildings, etc.). As discussed further below, the log data may also include "event" data that identifies different types of audible communications generated by the vehicle in response to requests and/or the environment of the vehicle.

Example method

In addition to the operations described above and shown in the figures, various operations will now be described. It should be understood that the following operations do not have to be performed in the exact order described below. Rather, various steps may be processed in a different order or concurrently, and steps may also be added or omitted.

To generate and train the model, a user of the service may be provided with an option to request that the vehicle provide a communication, for example, via an application on the user's computing device (i.e., mobile phone). In this regard, using the option may cause the vehicle to provide the communication. This data may be recorded when the user uses the option. Fig. 6 is an example view of a client computing device 420, including options 610, 620 displayed on a display 424. In this example, option 610 may allow the client computing device to send a request to the vehicle, e.g., via network 460 or other wireless connection, to cause the vehicle to generate an audible communication by sounding its horn or playing corresponding audio through speaker 154. Option 620 may allow the client computing device to send a request to the vehicle, e.g., via network 460 or other wireless connection, to cause the vehicle to generate a visual communication, e.g., by flashing headlights 350, 352 and/or by displaying information on electronic display 152. In some examples, an option 630 may be provided to allow the user not to request any communication, for example, if the user believes that he or she has already identified his or her vehicle.

For example, the user may use option 620 in a dark parking lot to cause the autonomous vehicle to flash its headlights. As another example, in a well-lit parking lot, the user may use option 610 to sound the vehicle's horn or provide some other audible communication in the event that few or no other pedestrians are present. In instances where there are more pedestrians, the user may select option 620 instead of option 610. As another example, the user may use option 610 to cause the vehicle to sound its horn when near a large parking lot or large building. As yet another option, when there are multiple autonomous vehicles in the vicinity, the user may use option 620 to cause the vehicle to flash its headlights. Alternatively, another type of visual communication option may be provided, such as displaying a message on the electronic display 152, rather than flashing the headlights.

Each time a communication is requested using one of the options (such as options 610, 620), a message may be provided to the vehicle to cause the vehicle's computing device 110 to engage in or generate the communication. The message may include information such as the date and time the request was generated, the type of communication to be made, and the location of the user. The message, as well as other message information, may also be sent, for example, by the vehicle and/or the user's client computing device to a server computing system, such as server computing system 410, which may store the message in storage system 450. By way of example, other message information may include data generated by the vehicle's computing system, such as the location of the vehicle, the type of communication (flashing lights, displaying information on the electronic display 152, sounding a horn, etc.), the locations and/or characteristics of other road users (vehicles, pedestrians, cyclists, etc.) detected by the vehicle's perception system 172, ambient lighting conditions, and so forth.
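Purely for illustration, the message and the additional message information described above might be represented as follows; the field names and types are assumptions rather than a defined schema:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical representation of the request message sent to the vehicle.
@dataclass
class CommunicationRequest:
    timestamp: str                      # date and time the request was generated
    communication_type: str             # e.g. "sound_horn" or "flash_headlights"
    user_location: Tuple[float, float]  # from the user's client computing device

# Hypothetical "other message information" the vehicle attaches before forwarding
# the message to the server computing system.
@dataclass
class MessageInfo:
    request: CommunicationRequest
    vehicle_location: Tuple[float, float]
    other_road_users: List[str] = field(default_factory=list)  # from perception system 172
    ambient_light_level: float = 1.0    # e.g. 0.0 (dark) .. 1.0 (bright)
```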

As an example, ambient lighting conditions may be determined in a number of different ways. For example, the computing device 110 may receive feedback from light sensors of the vehicle, such as those used to control the state of the vehicle's headlights and to adjust the brightness of an internal electronic display (such as the electronic display 152 in some cases). If feedback from the light sensors is not directly available to the computing device 110, this information may also be gleaned from the state of the vehicle's headlights and/or an internal electronic display. In other words, the computing device 110 can determine from this information whether it is "dark enough" for the vehicle to have its headlights on or the internal electronic display at a certain brightness. Additionally or alternatively, the ambient lighting conditions may be determined from data generated by the vehicle's perception system. As described above, the perception system 172 may include a plurality of different sensors, some of which (e.g., still or video cameras) may be used to determine ambient lighting conditions. For example, a "real-time" camera image of the vehicle's environment may be analyzed to determine the ambient lighting conditions. This may include processing the pixels to determine whether the area toward which the camera is directed is a bright area. If the pixels are bright and the image has a short exposure time, this may indicate that the region is bright. As another example, ambient lighting conditions may be determined in real time by using camera exposure values. As an example, when capturing an image, the cameras of the perception system 172 may automatically recalibrate exposure values for the given ambient lighting conditions. In this respect, the exposure value may be considered a proxy for how bright the area currently visible to the vehicle's camera is. For example, the real-time exposure value may be used to determine ambient lighting conditions. The longer the exposure, the darker the scene, or more precisely, the lower the ambient lighting conditions. Likewise, the shorter the exposure, the brighter the scene, or more precisely, the higher the ambient lighting conditions. Furthermore, exposure values for periods of time when the sun is down (i.e., dusk to dawn on any given day of the year) may be reviewed to identify those exposure values with small exposure times that indicate brighter artificial lighting.
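As an illustrative sketch of the exposure-based approach described above, assuming per-frame exposure times are available and using made-up thresholds:

```python
# A minimal sketch treating camera exposure time as a proxy for ambient light:
# longer exposures imply darker scenes. The threshold values are illustrative only.
def ambient_light_from_exposure(exposure_time_ms: float,
                                dark_threshold_ms: float = 20.0,
                                bright_threshold_ms: float = 2.0) -> str:
    """Map a camera exposure time to a coarse ambient lighting estimate."""
    if exposure_time_ms >= dark_threshold_ms:
        return "dark"        # long exposure -> low ambient lighting
    if exposure_time_ms <= bright_threshold_ms:
        return "bright"      # short exposure -> high ambient lighting
    return "dim"

# Example: a 25 ms exposure suggests it is dark enough for headlights to be on.
print(ambient_light_from_exposure(25.0))
```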

The messages and other message information (including sensor data) may then be processed by the server computing device 410 to generate and train the model. The model may be a machine learning model, such as a decision tree (such as a random forest decision tree), a deep neural network, logistic regression, neural network, or the like. To train the model, the user's location, other message information (including sensor data generated by the perception systems 172 of the various vehicles that generated the message), and map information may be used as training inputs, and the type of communication (from the message) may be used as training outputs.

Training may thus include receiving training data including various training inputs as well as training outputs or target outputs. The current values of the parameters of the model may be used to train the model on the training data to generate a set of output values. These output values may indicate a degree of appropriateness for the type of communication or indicate any other output data determined using the model. The target output and the set of output values may be compared to each other to determine one or more difference values indicating how far the values differ from each other. Based on the one or more difference values, the current values of the parameters of the model may be adjusted. Repeated training and adjustment can improve the accuracy of the model. Thus, as discussed further below, the more training data used to train the model, the more accurate the model is in determining whether to automatically provide communications and what types of communications, or what types of communication options, are provided or enabled. Further, by using the map information as training data, the model may be trained to incorporate how the user's desire for communication is affected by the environment (e.g., the type of road or area, such as residential or commercial, in which the vehicle and/or pedestrian is located) into the determination of which type of communication or communication option is provided or enabled.
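The comparison of output values with the target output and the adjustment of parameter values described above can be illustrated with a minimal sketch; a simple linear model with a softmax output stands in for whatever model is actually used, and the communication types and feature values are assumptions:

```python
import numpy as np

# Hypothetical label set: one "appropriateness" output value per communication type.
COMM_TYPES = ["none", "audible", "visual"]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def train_step(weights, features, target_index, learning_rate=0.01):
    """One training update: compare the model's output values with the target output
    and use the difference values to adjust the current parameter values."""
    output_values = softmax(weights @ features)          # degree of appropriateness per type
    target = np.zeros(len(COMM_TYPES))
    target[target_index] = 1.0                           # target output from the logged message
    difference = output_values - target                  # difference values
    weights -= learning_rate * np.outer(difference, features)  # adjust parameter values
    return weights, output_values

# Example with made-up feature values (user/vehicle locations, lighting, etc.).
rng = np.random.default_rng(0)
weights = rng.normal(size=(len(COMM_TYPES), 5))
features = np.array([0.2, -1.3, 0.7, 1.0, 0.0])
weights, outputs = train_step(weights, features, target_index=COMM_TYPES.index("audible"))
```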

Further, by using date and time and/or ambient lighting conditions as training data, the model may be trained to distinguish between different times of day and different lighting conditions when determining which types of communications to output. For example, the date and time and ambient lighting conditions may also be used as training inputs. Likewise, the more training data used to train the model, the more accurate the model can become in determining when and what types of communications are enabled and/or provided.

Further, via feedback and manual training, weights may be assigned (or generated) for different communication options in order to reduce the likelihood of false positives (or more precisely, of indicating that the vehicle should generate a communication at an inappropriate or untimely time). Such weights are likely to be based largely on environmental factors (such as map information and sensor data), and thus such inputs may subject the model to corresponding weighting factors. For example, in training the model, the model may be heavily weighted to prevent the vehicle from sounding its horn when a pedestrian is within a short distance, such as 1-2 meters, of the vehicle, as this may be irritating to the pedestrian.
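A minimal sketch of the kind of distance-based weighting described above; the suppression radius and weight values are illustrative assumptions:

```python
import math

# Heavily down-weight the horn when a pedestrian is within a short distance of the vehicle.
def horn_weight(pedestrian_xy, vehicle_xy, suppress_radius_m: float = 2.0) -> float:
    distance = math.dist(pedestrian_xy, vehicle_xy)
    return 0.05 if distance <= suppress_radius_m else 1.0

# A pedestrian 1.5 m away strongly suppresses whatever horn score the model produced.
weighted_score = horn_weight((1.0, 1.1), (0.0, 0.0)) * 0.8
print(weighted_score)
```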

Similarly, a model may be trained to distinguish between situations where visual communication is appropriate and situations where audible communication is appropriate. For example, it may be less (or more) appropriate to sound a horn (or play a corresponding audible communication through speaker 154) in a crowded area, and it may be less (or more) appropriate to use lights during the day. In this regard, user feedback regarding the effectiveness or usefulness of different communications may also be used to train the model. As an example, the user may provide feedback indicating whether a particular communication was inappropriate or inconvenient and why (e.g., a person was standing in front of the vehicle when the headlights flashed, which may be painful to the person's eyes, or a horn or information displayed on an electronic display screen caused confusion for others in the environment of the vehicle, etc.). Further, based on laws and regulations, audible communications may be more or less appropriate (e.g., flashing headlights or sounding a horn in a certain area may be illegal). Examples of such inappropriate, ineffective, inconvenient, or less useful communications may be generated and/or flagged (e.g., manually by a human operator) as inappropriate and used as training data. As such, as described above, the model may be trained to output whether a communication is appropriate, and if so, the type of communication, or rather whether the communication should be an audible communication or a visual communication. As one example, the model may identify a list of possible communication types and the respective appropriateness of each communication type. In some instances, users may provide positive and/or negative feedback regarding their experience. This information may also be used to help train the model to select the communication that users consider most helpful as the more appropriate communication.

In some instances, depending on the amount of training data available, the model may be trained for a particular purpose. For example, a model may be trained for a particular user or type of user based on a history of that user or type of user being picked up at a particular location. The model may in this way allow the vehicle to proactively provide notification to the user in the event that the vehicle needs to deviate from the user's typical pickup position. For example, if a user is typically picked up at one corner of a building, but there is an obstacle (such as construction, a parked vehicle, fallen tree limbs, etc.) and the vehicle is forced to go to a different location (such as a different corner of the building), the model may be trained to allow the vehicle to proactively notify the user via visual and/or audible communications (such as sounding a horn, flashing lights, or displaying information on the electronic display 152) in order to draw the user's attention when he or she leaves the building. In this manner, the vehicle is able to respond as needed. In some instances, the model may be trained to further notify the user via an application on the user's client computing device in conjunction with the visual and/or audible communications.

The trained models (or more precisely, the models and parameter values) may then be provided to one or more vehicles, such as vehicles 100, 100A, in order to allow the computing devices 110 of those vehicles to better communicate with humans. While the vehicle is approaching a pick-up location (or a drop-off location for cargo) or is waiting at the pick-up location (or the drop-off location for cargo), the computing device 110 of the vehicle may use the model to determine whether a communication is appropriate and, if so, of what type. This may occur, for example, based on the environment of the vehicle and/or depending on whether the user (or possible passenger) has a clear line of sight to the vehicle, and vice versa.

In one aspect, the model and parameter values may be used to determine whether the options discussed above should be surfaced in the application. For example, sensor data generated by the vehicle's perception system, local map information for the area surrounding the vehicle, and the current location of the vehicle may be input into the model. The map information may include various relevant information, such as the distance to the nearest curb, staircase, entrance, or exit, and/or whether the vehicle is near another object (such as a wall or tree) that may obstruct the user's view of the vehicle, etc. For example, the determination may be performed once the computing device of the vehicle has identified a place where the vehicle is to park and wait for the user, is pulling into that place, and/or once the vehicle has parked. The model may then output values indicating whether a communication is appropriate and the degree to which each type of communication is appropriate.

In one example, the presented options may only allow audible communication if the output of the model indicates that audible communication is more appropriate than visual communication. In other words, the presented options may only allow audible communication if the value indicating the appropriateness of audible communication is greater than the value indicating the appropriateness of visual communication. For example, turning to fig. 7, option 620 to provide visual communication is not available, but option 610 to provide audible communication is available.

Similarly, the presented options may only allow visual communication if the output of the model indicates that visual communication is more appropriate than audible communication. Again, in other words, the presented options may only allow visual communication if the value indicating the appropriateness of visual communication is greater than the value indicating the appropriateness of audible communication. For example, turning to fig. 8, option 610 to provide audible communication is not available, but option 620 to provide visual communication is available.
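For illustration, the gating rule of these two examples might be sketched as follows; the option names and score keys are assumptions:

```python
# Sketch of the option-gating rule illustrated by figs. 7 and 8: surface only
# the option whose appropriateness score is higher.

def options_to_surface(scores):
    if scores["none"] >= max(scores["visual"], scores["audible"]):
        return []                      # no communication option is surfaced
    if scores["audible"] > scores["visual"]:
        return ["option_610_audible"]  # fig. 7: audible available, visual not
    return ["option_620_visual"]       # fig. 8: visual available, audible not
```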

By way of example, turning to fig. 9, which corresponds to the map information 200 of fig. 2, the training data may indicate that users tend to stand near area 910 when they leave building 220 (e.g., through entrance and exit 286) and request (e.g., via option 610 or 620) an audible or visual communication. Thus, when using the trained model, the application may automatically surface the option of providing a communication (e.g., visual, audible, or both) when the user leaves building 220 (e.g., through entrance and exit 286) and is on a trajectory toward or near area 910, as tracked by the GPS of the user's client computing device (and perhaps confirmed by pedestrian detection by the vehicle's perception system 172), as shown in any of the examples of figs. 6, 7, and 8.
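For illustration, the trajectory check might be sketched as follows, assuming planar (x, y) positions in metres and illustrative thresholds:

```python
# Sketch of the "on a trajectory toward or near the area" check described above.
import math

def _heading_deg(p_from, p_to):
    return math.degrees(math.atan2(p_to[1] - p_from[1], p_to[0] - p_from[0])) % 360

def is_heading_toward(track, target, tolerance_deg=30, near_radius_m=15):
    """True if the user's recent GPS track points toward `target` (e.g. area 910)
    or the user is already within `near_radius_m` of it."""
    p_prev, p_curr = track[-2], track[-1]
    if math.dist(p_curr, target) <= near_radius_m:
        return True
    diff = abs(_heading_deg(p_prev, p_curr) - _heading_deg(p_curr, target)) % 360
    return min(diff, 360 - diff) <= tolerance_deg

# Example: the user steps out of the building and walks roughly toward area 910.
print(is_heading_toward([(0, 0), (3, 1)], target=(30, 8)))   # True
```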

In another aspect, the model may be used to determine whether the vehicle should automatically make an audible communication rather than merely surfacing an option as discussed above. Again, this determination may be performed, for example, once the computing device of the vehicle has identified a spot where the vehicle will stop and wait for the user, is pulling into that spot, and/or once the vehicle has stopped (i.e., is parked). For example, turning to fig. 10, the training data may indicate that users who leave building 220 (e.g., via entrance and exit 282) tend to stand near area 1010 and tend to use option 610 to cause the vehicle's computing device to sound the horn (or generate a corresponding audible communication via speaker 154). Thus, when using the trained model, the vehicle's computing device 110 may automatically sound the horn (or generate a corresponding audible communication via speaker 154) when the user leaves building 220 and is on a trajectory toward or near area 1010, as tracked by the GPS of the user's client computing device (and perhaps confirmed by pedestrian detection by the vehicle's perception system 172). As another example, the training data may indicate that when users leave building 220 (e.g., via entrance and exit 286), they tend to stand near area 1020 and use option 620 to cause the vehicle's computing device to flash the headlights 350, 352. Thus, when using the trained model, the computing device 110 of the vehicle may automatically flash the headlights 350, 352 when the user exits building 220 via entrance and exit 286 with many pedestrians around and is on a trajectory toward or near area 1020, as tracked by the GPS of the user's client computing device (and perhaps confirmed by pedestrian detection by the vehicle's perception system 172).

In some instances, the vehicle's computing device may use information from the user's account and/or other input from the user to determine the appropriate type of communication. For example, if the user's account information or other input indicates that he or she has a visual or hearing impairment, this may be used by the computing device to "override" the output of the model and/or as an input to the model. For example, a visually impaired person may benefit more from audible communications. However, if there are a large number of other people around, the system may prefer to give instructions through the user's device rather than sound the horn. Similarly, a hearing impaired person may benefit more from visual communications than from audible communications. There may also be higher thresholds for various parameters related to the distance the user would need to travel to the vehicle. For example, the vehicle's computing device should avoid directing or encouraging a visually impaired person to cross a street or other non-pedestrian-friendly area (with high traffic flow) to reach the vehicle.
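For illustration, such an override might be sketched as follows; the account field names and the crowd threshold are assumptions:

```python
# Sketch of the "override" described above: account information can take
# precedence over the model's appropriateness scores.

def adjust_for_accessibility(scores, user_account, pedestrian_count):
    if user_account.get("visually_impaired"):
        # Prefer audible guidance, but fall back to instructions on the user's
        # device when many other people are around.
        return "app_instructions" if pedestrian_count > 10 else "audible"
    if user_account.get("hearing_impaired"):
        return "visual"
    # Otherwise defer to the model's scores.
    return max(("none", "visual", "audible"), key=lambda k: scores[k])
```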

Additionally or alternatively, the output of the model may be used to determine and perform an initial action, and subsequent actions may be taken automatically depending on the initial action. Again, this determination may be performed, for example, once the computing device of the vehicle has identified a spot where the vehicle will stop and wait for the user, is pulling into that spot, and/or once the vehicle has stopped (i.e., is parked). For example, when the user leaves building 220 and approaches area 1020, the computing device 110 of the vehicle may automatically flash the headlights 350, 352. If the user's trajectory does not then change (e.g., toward the vehicle), an option, such as option 610, may be surfaced via the user's client computing device to allow the user to cause the vehicle's computing device 110 to sound the vehicle's horn (or generate a corresponding audible communication via speaker 154), as in the example of fig. 7. As another example, the computing device 110 of the vehicle may automatically flash the headlights 350, 352 when the user leaves building 220 and approaches area 1020, and if the user's trajectory does not then change (e.g., toward the vehicle), the vehicle's computing device 110 may automatically sound the vehicle's horn (or generate a corresponding audible communication via speaker 154). In some instances, in addition to automatically sounding the horn, an option may also be surfaced, such as option 610, to allow the user to sound the vehicle's horn (or generate a corresponding audible communication via speaker 154), as in the example shown in fig. 7. Alternatively, instead of surfacing an option, a notification may be displayed to let the user know that the vehicle is sounding its horn. At least initially, these subsequent actions may be selected randomly or by using manually adjusted (human-tuned) heuristics. In some instances, these heuristics may involve responding to particular audio cues or other information about the vehicle's environment. For example, a loud audible communication may be a useful initial action if there is a great deal of ambient noise.
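For illustration, the initial-action and subsequent-action sequence might be sketched as follows; the vehicle-control callables and the waiting period are assumptions:

```python
# Sketch of the initial-action / subsequent-action sequence described above.
import time

def initial_then_escalate(flash_headlights, sound_horn, surface_option,
                          user_moving_toward_vehicle, wait_s=10):
    flash_headlights()                       # initial action chosen from the model output
    time.sleep(wait_s)
    if user_moving_toward_vehicle():
        return "user_responding"
    # Subsequent action: at first chosen randomly or by hand-tuned heuristics.
    sound_horn()
    surface_option("option_610_audible")     # lets the user repeat the horn if needed
    return "escalated"
```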

The user's responses to subsequent actions may be used to build a model of upgraded communications. The model of upgraded communications may be a machine learning model, such as a decision tree (for example, a random forest decision tree), a deep neural network, a logistic regression, a neural network, and so on. For example, for each instance in which a subsequent action is used, the result may be tracked. This information may then be analyzed, for example by the server computing device 410, to train the model of upgraded communications and thereby identify patterns that increase the likelihood that the user will enter the vehicle faster in response to the vehicle's communications. For example, the analysis may consider both the time to board and the timing and consistency of changes in the user's trajectory in response to the communication. As an example, if for a typical (or median) user it takes N seconds to board after leaving building 220, but when the vehicle's computing device 110 provides an audible communication this is reduced to N/2, that would represent a significant improvement in how easily the vehicle can be found. The same may be true for changes in trajectory. In the median case, if users generally walk away from the vehicle when leaving building 220 and eventually find the vehicle, but correct their trajectory toward the vehicle sooner once the vehicle provides an audible communication, this may likewise represent a significant improvement in time.
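For illustration, the time-to-board comparison might be sketched as follows; the trip-record layout is an assumption:

```python
# Sketch of the analysis described above: compare the median time-to-board with
# and without an audible communication.
from statistics import median

def boarding_improvement_s(trips):
    """`trips` is a list of dicts like
    {"time_to_board_s": 120, "audible_communication_used": False}."""
    with_comm = [t["time_to_board_s"] for t in trips if t["audible_communication_used"]]
    without = [t["time_to_board_s"] for t in trips if not t["audible_communication_used"]]
    return median(without) - median(with_comm)   # positive => the communication helps

# Example echoing the N vs. N/2 case in the text above.
print(boarding_improvement_s([
    {"time_to_board_s": 120, "audible_communication_used": False},
    {"time_to_board_s": 110, "audible_communication_used": False},
    {"time_to_board_s": 60,  "audible_communication_used": True},
    {"time_to_board_s": 55,  "audible_communication_used": True},
]))   # 57.5
```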

The model of upgraded communications may then be trained to determine what the next action should be, given the previous or initial action, in order to best facilitate the user's arrival at the vehicle. As one example, the user's tendencies may be used to train the model. For example, the training inputs for the model may include the actual time at which the user reached and/or boarded the vehicle, which actions the user utilized over time, and the user's original heading. These combinations may indicate whether any upgraded communication (e.g., a second or third communication initiated by the user) reduced the time to board by correcting the user's heading after the user triggered an action. Thus, if a user walks north away from the building (when the vehicle is actually in the opposite direction, here south), uses the above option to cause the vehicle to sound its horn, and then changes heading toward the vehicle, the model may be trained such that when a user walking away to the north is tracked, the model may cause the vehicle to sound its horn earlier. Similarly, if the initial action does not cause the user to change his or her heading, the model of upgraded communications may be used to determine a second communication, a third communication, etc., as necessary, based on the user's reaction (e.g., a change in heading). Likewise, the more training data used to train the model, the more accurate the model will be in determining how to upgrade from previous actions.
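For illustration, training such a model of upgraded communications might be sketched as follows, assuming scikit-learn, toy data, and the action and feature encodings shown:

```python
# Sketch of training the model of upgraded communications described above.
from sklearn.tree import DecisionTreeClassifier

# Features per example: [previous_action, heading_error_deg, seconds_since_action]
# previous_action: 0 = none, 1 = flashed lights, 2 = sounded horn
X = [
    [1, 170, 15],   # lights flashed, user still walking away from the vehicle
    [1,  20, 15],   # lights flashed, user already turning toward the vehicle
    [2, 160, 20],   # horn sounded, user still walking away
]
# Next action: 0 = wait, 2 = sound horn, 3 = call a customer service representative
y = [2, 0, 3]

escalation_model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(escalation_model.predict([[1, 150, 12]]))   # e.g. escalate to sounding the horn
```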

As an example, a model of upgraded communications may be trained such that, for a user who leaves building 220 and stands at area 1010, where the vehicle has initially flashed its lights without a response from the user, the vehicle should then automatically sound its horn (or generate a corresponding audible communication via speaker 154). If the user's trajectory still does not change (e.g., toward the vehicle), the vehicle's computing device 110 may automatically call a customer service representative (e.g., a representative such as the user 442 using the computing device 440). The representative can then communicate with the user and guide the user to the vehicle using sensor data generated by and received from the vehicle's perception system 172, a vehicle location generated by and received from the vehicle's localization system 170, and a user location generated by and received from the user's client computing device.

As another example, the model of upgraded communications may be trained such that, for a user leaving building 220 and standing at area 1020, the vehicle should thereafter automatically sound its horn up to three times, waiting after each horn to see whether the user's trajectory has changed. As another example, the model of upgraded communications may be trained such that, for a user who leaves building E at night, the vehicle's computing device 110 may always automatically call a customer service representative rather than surfacing options.

As with the first model, the trained model of upgraded communications may then be provided to one or more vehicles (such as vehicles 100, 100A) in order to allow the computing devices 110 of those vehicles to better communicate with humans.

In addition to using messages and other information to train the models, the data may be analyzed to better facilitate pickups and drop-offs. For example, if users are generally located in area 1010 for pickup, and, when the vehicle is at area 1020, typically use the option to activate the vehicle's horn, this may be used to have the vehicle park closer to area 1010.

Fig. 11 is an example flow diagram 1100 that may be executed by one or more processors of one or more computing devices, such as processor 120 of computing device 110, to facilitate communication from an autonomous vehicle to a user, in accordance with aspects of the present disclosure.

As shown in block 1110, when the vehicle is attempting to pick up the user and before the user enters the vehicle, the current location of the vehicle and map information are input into a model in order to identify the type of communication action to be used to communicate the location of the vehicle to the user. This may include the models discussed above and/or the model of upgraded communications. As such, as described above, the model may output whether a communication is appropriate and, if so, the type of communication, or rather whether the communication should be an audible communication or a visual communication.

At block 1120, a first communication is enabled based on the type of communication action. Such enablement may include, for example, surfacing options as described above and/or automatically generating audible or visual communications as described above.

At block 1130, after enabling the first communication, it is determined from the received sensor data whether the user is moving toward the vehicle. In other words, it may be determined whether the user has responded to the first communication. The sensor data may include sensor data generated by the perception system 172 of the vehicle and/or sensor data from a client computing device of the user. From this sensor data, the computing device of the vehicle may determine, for example, whether the user is moving toward the vehicle, and/or whether the user has changed course to move toward the vehicle.

At block 1140, a second communication is enabled based on the determination of whether the user is moving toward the vehicle. As one example, in response to the enablement of the first communication, the second communication may be enabled when the user is not moving toward the vehicle or has not changed his or her heading or orientation to move toward the vehicle. Enabling may include, for example, surfacing options as described above and/or automatically generating audible or visual communications as described above.
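For illustration, the flow of blocks 1110 through 1140 might be sketched as follows; the helper callables stand in for the systems described above and are assumptions:

```python
# A condensed sketch of flow diagram 1100 (blocks 1110 to 1140).

def pickup_communication_flow(model, escalation_model, current_location, map_info,
                              enable_communication, user_moving_toward_vehicle):
    # Block 1110: identify the type of communication action.
    action_type = model(current_location, map_info)
    # Block 1120: enable the first communication (surface an option or act directly).
    enable_communication(action_type)
    # Block 1130: determine from received sensor data whether the user is responding.
    if user_moving_toward_vehicle():
        return
    # Block 1140: enable a second, possibly upgraded, communication.
    enable_communication(escalation_model(action_type))
```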

The features described herein may allow an autonomous vehicle to improve pickups and drop-offs of passengers or users. For example, the user may use the surfaced options, on his or her own or when prompted, to cause the vehicle to communicate with the user visually and/or audibly. This makes it easier for the user to identify the location of the vehicle. Additionally or alternatively, the vehicle may use the models to proactively determine whether and how to communicate with the user, and how to upgrade those communications over time.

Unless otherwise specified, the foregoing alternative examples are not mutually exclusive and may be implemented in various combinations to achieve unique advantages. As these and other variations and combinations of the features discussed above can be utilized without departing from the subject matter defined by the claims, the foregoing description of the embodiments should be taken by way of illustration rather than by way of limitation of the subject matter defined by the claims. Furthermore, the provision of examples described herein, and clauses phrased as "such as," "including," and the like, should not be interpreted as limiting the subject matter of the claims to the specific examples; rather, these examples are intended to illustrate only one of many possible embodiments. Moreover, the same reference numbers in different drawings may identify the same or similar elements.
