Autonomous vehicle safety system and method

Document No.: 1854628  Publication date: 2021-11-19

Abstract: This technology, "Autonomous vehicle safety system and method", was created by I·卢本希克, R·萨克, T·赖德, and S·泰特 on 2016-05-17. Autonomous vehicle safety systems and methods are disclosed that detect and consider occupant reactions to potential hazards in order to suggest or incorporate safety procedures. Also disclosed is a system for controlling an autonomous vehicle based on occupant emotions and other occupant data to improve the occupant driving experience. The disclosed embodiments may include an occupant monitoring system that acquires occupant data for an occupant of an autonomous vehicle. A learning engine may process the occupant data received from the occupant monitoring system to identify one or more suggested driving aspects based on that data. A vehicle interface may communicate the one or more suggested driving aspects, such as defensive actions that may improve occupant safety, to the autonomous vehicle.

1. A safety system for an autonomous vehicle, the system comprising:

an occupant monitoring system for monitoring an occupant of the autonomous vehicle, the occupant monitoring system comprising one or more sensors for monitoring one or more occupant parameters;

a detection module to process sensor data received from the one or more sensors of the occupant monitoring system and detect a potential hazard external to the autonomous vehicle based on the one or more occupant parameters; and

a vehicle interface to communicate detection of a potential hazard external to the autonomous vehicle, wherein the detection by the detection module is based on the one or more occupant parameters.

2. The system of claim 1, wherein the occupant monitoring system is configured to monitor a plurality of occupants of the autonomous vehicle.

3. The system of claim 1, wherein the occupant monitoring system is configured to monitor an occupant located on a driver seat of the autonomous vehicle.

4. The system of claim 1, wherein the occupant monitoring system is configured to monitor one or more occupant parameters indicative of occupant reactions to a potential hazard outside of the autonomous vehicle.

5. The system of claim 1, wherein each sensor of the one or more sensors is to monitor an occupant parameter of the one or more occupant parameters.

6. A method for controlling an autonomous vehicle, the method comprising:

receiving occupant data for an occupant of the autonomous vehicle;

processing occupant data received from an occupant monitoring system to identify one or more suggested driving aspects based on the occupant data; and

communicating the one or more suggested driving aspects to the autonomous vehicle via a vehicle interface.

7. The method of claim 6, wherein the occupant data includes one or more occupant parameters indicative of occupant reactions to a potential hazard external to the autonomous vehicle,

wherein processing occupant data comprises detecting a potential hazard external to the autonomous vehicle based on one or more occupant parameters of the occupant data, and

wherein the one or more suggested driving aspects include a defensive action to increase occupant safety of the autonomous vehicle.

8. The method of claim 6, further comprising identifying a pattern of correlation of occupant data to driving aspects from which the suggested driving aspects are identified.

9. A safety method in an autonomous vehicle, the method comprising:

receiving sensor data from one or more sensors of an occupant monitoring system monitoring one or more occupant parameters of an occupant of the autonomous vehicle;

detecting a potential hazard external to the autonomous vehicle based on the one or more occupant parameters; and

communicating the detection of the potential hazard to a controller of the autonomous vehicle via a vehicle interface.

10. The method of claim 9, wherein communicating the detection of the potential hazard to the autonomous vehicle comprises providing a recommended driving aspect that includes a defensive action to increase occupant safety of the autonomous vehicle.

Technical Field

Various embodiments described herein relate generally to autonomous vehicles. More specifically, the disclosed embodiments relate to autonomous vehicle safety systems and methods.

Background

Autonomous (driverless) automobiles are equipped with many safety systems designed to respond precisely to obstacles, problems, and emergency situations. These systems are based on direct input data collected from the surroundings using on-board sensors. These currently available safety systems, and this method of collecting and processing direct input data from the surroundings, are effective solutions when all vehicles are driverless and are suited to efficient operation of traffic. However, these systems and this approach are not sufficient to handle a mixed environment with human participants (drivers) who do not necessarily obey or adhere to strict algorithms and rules in the same way autonomous cars do. Currently available autonomous automobile safety systems do not predict or anticipate what other human participants in traffic will do. Humans in a vehicle (e.g., a driver and/or other passengers), however, can sometimes intuitively analyze a dangerous situation and react before it occurs. For example, a human driver of another vehicle may be distracted by talking on his or her cell phone. From a purely mathematical point of view there is no problem yet, and the safety systems of an autonomous car may have no basis or ability to detect one, but a problem may still materialize within a few seconds. As another example, a human driver of another automobile may be approaching a roundabout, and based on speed, direction, attention, or other factors the driver may give an indication that he or she will not stop and yield to other automobiles entering the roundabout. Again, from a purely mathematical perspective there may still be sufficient time to brake or slow down, but currently available safety systems for autonomous cars may have no basis or ability to detect the other driver's intentions at the roundabout.

Autonomous cars also introduce a new driving experience, controlled by machines rather than by human drivers. This change in control may provide a different and possibly uncomfortable experience for a given occupant, depending on the occupant's driving preferences and/or style. Currently available autonomous control systems and methods may provide a mechanistic experience determined solely by algorithms operating on sensor data input, an experience that does not consider occupant preferences and emotions regarding driving aspects.

Drawings

FIG. 1A is a side partial cut-away view of a vehicle including a system for control based on an occupant parameter, according to one embodiment.

FIG. 1B is a top partial cross-sectional view of the vehicle of FIG. 1A.

FIG. 2 is a schematic diagram of a system for control based on occupant parameters, according to one embodiment.

FIG. 3 is a flow diagram of a method for autonomous vehicle control based on occupant parameters, according to one embodiment.

Detailed Description

Currently available autonomous vehicles implement strict standards, strictly adhering to algorithms and rules. Typically, such vehicles detect and respond to external data without regard or reaction to occupant behavior inside the vehicle (e.g., behavior that signals a hazard) in the absence of external sensor data.

Although many situations are "legally fine" from the perspective of traffic data, they can quickly evolve into dangerous situations, such as: a driver turning abruptly or without signaling; a driver being distracted while approaching an intersection, interchange, or roundabout; a large vehicle (e.g., a truck) approaching at very high speed; or someone replacing a tire on his or her car at the roadside while another vehicle overtakes yours at the exact point where you are passing the parked car and its exposed driver. Many other similar situations exist.

The present disclosure provides systems and methods for controlling an autonomous vehicle. The disclosed systems and methods take into account occupant parameters, including reaction, mood, preferences, patterns, history, context, biometrics, feedback, and so forth, to provide suggested driving aspects to the autonomous vehicle or otherwise guide or control driving aspects of the autonomous vehicle in order to improve safety and/or comfort of the autonomous driving experience.

The disclosed embodiments may include sensors that track people within the vehicle. A single occupant identified by an embodiment as the "human driver" may be tracked even though that person may not be actively involved in driving. Alternatively, or additionally, all passengers may be tracked. The disclosed embodiments may monitor certain occupant parameters. When an anomaly in one or more of these parameters is detected, the system may perform defensive human-like actions without compromising the built-in safety features of the autonomous automobile. Example actions may include: decelerating when approaching an intersection or roundabout to avoid a potential collision; in a country where traffic drives on the right, pulling over to the right if a human driver would see another car drifting out of its lane and about to hit his or her car; decelerating in advance and signaling with the emergency lights if sudden congestion is detected on the highway; slowing down if someone nearby is seen driving recklessly or erratically; and, more generally, other defensive actions such as slowing down and increasing the following distance.
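As a concrete illustration of the anomaly-based trigger described above, the following is a minimal Python sketch of how a single occupant parameter, such as grip pressure on the steering wheel, might be checked against its own recent baseline. The window size, z-score threshold, and sensor readings are illustrative assumptions, not values taken from this disclosure.

```python
from collections import deque
from statistics import mean, stdev

class ParameterAnomalyDetector:
    """Flags a sudden deviation of one occupant parameter (e.g., grip pressure)
    from its recent rolling baseline."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.samples = deque(maxlen=window)   # rolling baseline window
        self.z_threshold = z_threshold        # how many std-devs counts as "sudden"

    def update(self, value: float) -> bool:
        """Add a new sensor reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:           # need some history before judging
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Example: a sudden clench registered by a steering-wheel pressure sensor
grip = ParameterAnomalyDetector()
for reading in [2.1, 2.0, 2.2, 2.1, 2.0, 2.1, 2.2, 2.0, 2.1, 2.0, 9.5]:
    if grip.update(reading):
        print("anomaly detected -> consider defensive action (e.g., slow down)")
```

In a deployed system, one such detector could run per monitored parameter, with the resulting flags then fused into a single hazard indication.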

The disclosed embodiments may include sensors and other sources of information to detect human emotions regarding driving aspects and provide suggested driving aspects based on those emotions.

Example embodiments are described below with reference to the accompanying drawings. Many different forms and embodiments are possible without departing from the spirit and teachings of the present invention, and therefore the disclosure should not be construed as limited to the example embodiments set forth herein. Rather, these example embodiments are provided so that this disclosure will be thorough and complete, and will convey the scope of the invention to those skilled in the art. In the drawings, the size of components and associated dimensions may be exaggerated for clarity. The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Unless otherwise specified, a range of values, when recited, includes both the upper and lower limits of the range, as well as any subranges between such ranges.

Fig. 1A and 1B illustrate an autonomous vehicle 100 including a system 102 for control based on occupant parameters according to one embodiment of the present disclosure. Specifically, fig. 1A is a side partial cross-sectional view of the vehicle 100. Fig. 1B is a top partial sectional view of the vehicle 100.

Referring collectively to fig. 1A and 1B generally, the vehicle 100 may be completely autonomous such that it is able to drive itself to a destination without active intervention by a human operator. The vehicle 100 may be partially autonomous to any degree such that a human operator may monitor and/or control various aspects of driving and the vehicle 100 may assume control of various aspects of driving (e.g., steering, braking, signaling, acceleration, etc.) at some time or under some circumstances. Further, the vehicle 100 may use artificial intelligence, sensors, or global positioning system coordinates to drive itself or assume control of various aspects of driving. The vehicle 100 includes a system 102 for control based on occupant parameters, an autonomous vehicle controller 110, one or more sensors 112a, 112b, 112c, 112d, 112e, 112f, 112g (collectively 112), and a network interface 118. In other embodiments, the system 102 for controlling based on occupant parameters may include one or more autonomous vehicle controllers 110, one or more sensors 112, and a network interface 118.

The system 102 for controlling based on occupant parameters may include an occupant monitoring system for obtaining occupant data for an occupant 10 of the autonomous vehicle 100, a learning engine for processing the occupant data to identify one or more suggested driving aspects based on the occupant data, and a vehicle interface for communicating the suggested driving aspects to the autonomous vehicle 100. These elements of the system are shown in fig. 2 and described in more detail below with reference to the same drawing. The occupant monitoring system may include or otherwise be coupled to one or more sensors 112.

The one or more sensors 112 may include a microphone 112a, an inward image capture system 112b, an outward image capture system 112c, and one or more pressure sensors 112d, 112e, 112f, 112g. The one or more sensors 112 may detect and/or monitor one or more occupant parameters that the system 102 may use to identify one or more suggested driving aspects.

For example, the one or more sensors 112 may detect and/or monitor occupant parameters indicative of occupant reactions to potential hazards external to the autonomous vehicle 100. The sensors may detect and monitor occupant parameters such as sudden tightening or clenching of muscles, sudden movement of the occupant back toward the seat back, twitching of one or both feet, use of speech (or other sounds such as screaming), eye movement, pupil dilation, head movement, heart rate, breathing rhythm, and changes in breathing (e.g., a sharp intake of air). Any one or more of these may be the natural reaction of an occupant who is observing the external environment and intuitively predicting or anticipating (e.g., discerning, based on experience, that the driver of another vehicle is distracted) a potentially dangerous situation that could result in a collision and/or injury. The system 102 (e.g., the learning engine) may process sensor data from the one or more sensors 112 of the occupant monitoring system and detect a potential hazard external to the autonomous vehicle 100 based on the one or more occupant parameters. In this manner, the system 102 may provide a human-machine interface that enables occupant parameters to be considered by the autonomous vehicle 100 and/or the autonomous vehicle controller 110.
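To illustrate how several occupant-reaction parameters might be combined into a single hazard indication for the detection logic, here is a hedged sketch; the parameter names, weights, and 0.6 notification threshold are assumptions chosen for the example rather than values from the disclosure.

```python
# Illustrative only: the weights and the 0.6 threshold are assumptions.
HAZARD_WEIGHTS = {
    "muscle_clench": 0.25,
    "pressed_into_seat": 0.25,
    "foot_twitch": 0.10,
    "scream_or_speech": 0.15,
    "pupil_dilation": 0.10,
    "heart_rate_spike": 0.10,
    "sharp_breath_intake": 0.05,
}

def hazard_confidence(reactions: dict) -> float:
    """Combine per-parameter anomaly flags into a 0..1 confidence that the
    occupant is reacting to an external hazard."""
    return sum(w for name, w in HAZARD_WEIGHTS.items() if reactions.get(name, False))

reactions = {"muscle_clench": True, "pressed_into_seat": True, "pupil_dilation": True}
score = hazard_confidence(reactions)
if score >= 0.6:
    print(f"potential external hazard (confidence {score:.2f}) -> notify vehicle controller")
```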

As another example, the one or more sensors 112 may collect occupant data regarding occupant parameters that may be used to detect the mood of the occupant 10. The sensors may detect and monitor occupant parameters such as speech, tone, biometrics (e.g., heart rate and blood pressure), occupant image data (e.g., for use with emotion extraction methods), and responses and/or commands given by voice and/or via a graphical user interface 120 (e.g., a touch screen), which may provide a feedback mechanism through which the occupant can express likes and dislikes.

Some example uses of the sensors include the following. Pressure sensors 112g in the steering wheel 20, door handles, and other grips may detect and monitor occupant parameters such as sudden tightening or clenching of the hand muscles. Pressure sensors 112d, 112e in the seat 22 (e.g., pressure sensor 112d in the seat back and/or pressure sensor 112e in the seat base) may detect occupant parameters such as sudden movement of the occupant back toward the seat back. Sensors 112f in the floor may detect occupant parameters such as twitching of at least one foot. The microphone 112a may detect occupant parameters such as voice commands, occupant language, forms of language use, and/or intonation. Occupant language and/or forms of language may include commands, phrases, expletives, and other uses of language. Other sensors may detect biometrics such as heart rate and blood pressure.

The inward image capture system 112b may detect occupant parameters such as eye movement, pupil dilation, and head movement. More specifically, the inward image capture system 112b captures image data of the occupant 10 (or occupants) of the vehicle 100. The inward image capture system 112b may include an imager or camera for capturing images of the occupant 10. In a certain embodiment, the inward image capture system 112b may include one or more array cameras. The image data captured by the inward image capture system 112b may be used for a variety of purposes. The image data may be used to identify the occupant 10 in order to obtain information about the occupant 10, such as typical head position, health information, and other contextual information. Alternatively, or in addition, the image data may be used to detect the position (e.g., height, depth, lateral distance) of the head/eyes of the occupant 10, which in turn is used to detect and/or track the current gaze of the occupant 10. The inward image capture system 112b may include an eye movement tracker for detecting eye movement parameters of the occupant 10. The eye movement tracker may include a gaze tracker for processing occupant image data of the occupant 10 of the autonomous vehicle 100 to determine a current region of central vision of the occupant 10. The inward image capture system 112b may include a pupil monitor for monitoring pupil dilation, including a pupil tracker for processing occupant image data of the occupant 10 of the vehicle 100 to determine the pupil size of the occupant 10. The inward image capture system 112b may also provide occupant image data that may be used in emotion extraction methods to identify one or more occupant emotions.

The outward image capture system 112c captures image data of the environment in front of the vehicle 100, which may help collect occupant data and/or parameters related to what the occupant 10 may be focusing on. Image data captured by the outward image capture system 112c may be processed based on gaze tracking and/or gaze detection to identify where the occupant 10 is focusing attention (e.g., on the driver of another vehicle who may be talking on a cell phone without noticing a skateboarder about to rush into traffic). The outward image capture system 112c may include an imager or camera for capturing images of the area outside the vehicle 100. The outward image capture system 112c may include multiple imagers at different angles to capture multiple perspectives, and may include various types of imagers, such as active infrared imagers and visible spectrum imagers. In general, the outward image capture system 112c captures the area in front of the vehicle 100 in its direction of travel. In certain embodiments, the outward image capture system 112c may include one or more array cameras. The images captured by the outward image capture system 112c may be primarily used by the autonomous vehicle controller 110 to direct and control navigation of the autonomous vehicle 100.

With specific reference to FIG. 1B, the line of sight 152 of the occupant 10 may be determined by an eye movement tracker of the inward image capture system 112b. Using the line of sight 152 and the external image data acquired by the outward image capture system 112c, the system 102 may determine the occupant's focus of attention. In FIG. 1B, the line of sight 152 of the occupant 10 is directed toward the sign 12. As can be appreciated, in other situations the occupant 10 may be focused on the driver of another vehicle who is inattentive or distracted by a mobile phone or other mobile device, or on a pedestrian (e.g., a child, walker, jogger, skateboarder, cyclist, etc.) who is inattentive and at risk of rushing into traffic or otherwise entering the vicinity of the autonomous vehicle 100 while it is moving.
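One plausible way to combine the occupant's line of sight 152 with the outward image data is to project the gaze direction into the outward camera frame and then inspect that region of the image. The sketch below uses a simple pinhole-camera approximation; the field-of-view values, image size, and the assumption that the gaze angles are expressed relative to the camera's forward axis are all illustrative.

```python
import math

def gaze_to_pixel(yaw_deg: float, pitch_deg: float,
                  img_w: int = 1280, img_h: int = 720,
                  hfov_deg: float = 90.0, vfov_deg: float = 60.0):
    """Map a gaze direction (yaw/pitch relative to the forward camera axis) to an
    approximate pixel in the outward camera frame using a pinhole model."""
    fx = (img_w / 2) / math.tan(math.radians(hfov_deg) / 2)
    fy = (img_h / 2) / math.tan(math.radians(vfov_deg) / 2)
    u = img_w / 2 + fx * math.tan(math.radians(yaw_deg))
    v = img_h / 2 - fy * math.tan(math.radians(pitch_deg))
    return int(round(u)), int(round(v))

# Gaze slightly right of center and above the horizon (e.g., toward a road sign)
print(gaze_to_pixel(yaw_deg=12.0, pitch_deg=5.0))
```

Identifying what lies at the resulting pixel (a sign, another driver, a pedestrian) would then be handled by whatever object detection the outward image capture system already performs.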

The system 102 may be a safety system for the autonomous vehicle 100 that provides one or more suggested driving aspects, including one or more defensive actions that increase the safety of occupants of the autonomous vehicle 100. For example, a human driver of another vehicle may be distracted by a conversation on his or her phone. The occupant 10 of the autonomous vehicle 100 may appear startled as the other vehicle approaches the intersection faster than expected. The occupant 10 may grip the handle or steering wheel 20 and may press back against the seat 22, bracing for a potential impact. The system 102 receives sensor data for one or more of these occupant parameters and may notify the autonomous vehicle controller 110 of the potential hazard and/or provide a suggested defensive action to increase the safety of the occupant 10. Examples of defensive actions that may increase occupant safety include, but are not limited to: reducing the travel speed of the autonomous vehicle 100; signaling or activating the emergency lights; tightening the seat belts; closing the windows; locking the doors; opening the doors; increasing the distance between the autonomous vehicle 100 and a vehicle in proximity to the autonomous vehicle 100; alerting an administrator; adjusting the current driving route; adjusting the stopping distance; emitting an audible signal; and activating one or more emergency sensors configured to detect the potential hazard so that these sensors may provide additional input to the autonomous vehicle controller 110. In this manner, the system 102 may provide a human-machine interface that contributes a valuable additional decision vector beyond a restricted instruction set.
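The following sketch shows one way the suggested defensive actions listed above could be selected and passed to the controller through a vehicle interface. The action set, the thresholds, and the `VehicleInterface` stand-in are illustrative assumptions, not the disclosed implementation.

```python
from enum import Enum, auto

class DefensiveAction(Enum):
    REDUCE_SPEED = auto()
    ACTIVATE_HAZARD_LIGHTS = auto()
    TIGHTEN_SEAT_BELTS = auto()
    INCREASE_FOLLOWING_DISTANCE = auto()
    ACTIVATE_EMERGENCY_SENSORS = auto()

def suggest_defensive_actions(hazard_confidence: float):
    """Pick increasingly assertive defensive actions as confidence in an external
    hazard (inferred from occupant reactions) grows. Thresholds are illustrative."""
    actions = []
    if hazard_confidence >= 0.3:
        actions += [DefensiveAction.ACTIVATE_EMERGENCY_SENSORS,
                    DefensiveAction.INCREASE_FOLLOWING_DISTANCE]
    if hazard_confidence >= 0.6:
        actions += [DefensiveAction.REDUCE_SPEED,
                    DefensiveAction.TIGHTEN_SEAT_BELTS]
    if hazard_confidence >= 0.8:
        actions.append(DefensiveAction.ACTIVATE_HAZARD_LIGHTS)
    return actions

class VehicleInterface:
    """Stand-in for the interface that relays suggestions to the vehicle controller."""
    def communicate(self, actions):
        for action in actions:
            print(f"suggesting to controller: {action.name}")

VehicleInterface().communicate(suggest_defensive_actions(0.65))
```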

The system 102 may also provide one or more suggested driving aspects based on one or more occupant emotions and/or other occupant data to provide an improved driving experience for the occupant. In other words, the system 102 may be a system for suggesting driving aspects to the autonomous vehicle 100, and the suggested driving aspects may allow the vehicle 100 to provide an adaptive driving experience by considering one or more occupant emotions, preferences, driving patterns, and/or additional context, thereby targeting a more personalized and/or customized driving experience. The machine (i.e., the vehicle 100) may drive in a manner closer to how the occupant would, so that the occupant experiences driving as if the "steering wheel" (e.g., control of the vehicle 100) were in his or her own hands. The system 102 may use one or more occupant emotions, driving histories, contexts, and/or preferences to suggest and even control driving aspects such as speed, acceleration, and path (e.g., turn sharpness, route) to personalize and adapt the driving experience to the occupant's needs and/or preferences. In this manner, the system 102 may provide a human-machine interface that contributes a valuable additional decision vector beyond a restricted instruction set. The system 102 allows the autonomous vehicle to drive in accordance with occupant emotions and intended activities and operations, rather than with the manner and feel of a robot.
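A minimal sketch of this kind of personalization, assuming a stored occupant profile and a current stress estimate; the blending rule and the specific adjustment factors are invented for illustration, not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class OccupantProfile:
    comfort_bias: float      # 0 = assertive driving preferred, 1 = gentle driving preferred
    scenic_routes: bool      # prefers longer, scenic routes when time allows

def personalize_driving_aspects(profile: OccupantProfile, stress_level: float) -> dict:
    """Blend a stored preference with the occupant's current emotional state to
    produce suggested driving aspects. The blending rule is illustrative."""
    gentleness = min(1.0, 0.5 * profile.comfort_bias + 0.5 * stress_level)
    return {
        "max_speed_factor": 1.0 - 0.2 * gentleness,   # e.g., up to 20% below the default
        "max_accel_factor": 1.0 - 0.4 * gentleness,   # accelerate more softly when stressed
        "route_preference": "scenic" if profile.scenic_routes else "fastest",
    }

print(personalize_driving_aspects(OccupantProfile(comfort_bias=0.7, scenic_routes=True),
                                  stress_level=0.8))
```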

The network interface 118 is configured to receive occupant data from a source external to the vehicle 100 or proximate to the vehicle 100. The network interface 118 may be equipped with conventional network connections such as, for example, Ethernet (IEEE 802.3), Token Ring (IEEE 802.5), Fiber Distributed Data Interface (FDDI), or Asynchronous Transfer Mode (ATM). Further, the computer may be configured to support various network protocols such as, for example, Internet Protocol (IP), Transmission Control Protocol (TCP), Network File System (NFS) over UDP/TCP, Server Message Block (SMB), Microsoft Common Internet File System (CIFS), Hypertext Transfer Protocol (HTTP), Direct Access File System (DAFS), File Transfer Protocol (FTP), Real-Time Publish-Subscribe (RTPS), Open Systems Interconnection (OSI) protocols, Simple Mail Transfer Protocol (SMTP), Secure Shell (SSH), Secure Sockets Layer (SSL), and the like.

The network interface 118 may provide an interface to a wireless network and/or other wireless communication devices. For example, the network interface 118 may enable wireless connections to wireless sensors (e.g., biometric sensors for obtaining occupant heart rate, blood pressure, body temperature, etc.), an occupant's mobile phone or handheld device, or a wearable device (e.g., a wristband activity tracker, an Apple Watch, etc.). As another example, the network interface 118 may form a wireless data connection with a wireless network access point 140 disposed outside of the vehicle 100. The network interface 118 may connect with a wireless network access point 140 that couples to a network such as a Local Area Network (LAN), a Wide Area Network (WAN), or the Internet. In a certain embodiment, the wireless network access point 140 is located on or coupled to a geographically local network that is isolated from the Internet. These wireless connections with other devices and/or networks via the network interface 118 allow the acquisition of occupant data, such as schedule and/or trip information from an occupant's calendar. Contextual data may also be obtained, such as statistics of driving aspects of other vehicles (e.g., speed, acceleration, turning radius, travel pattern, route) for a given sector or geographic area, which may help determine suggested driving aspects for the autonomous vehicle 100, as well as occupant medical information, significant current events that may affect occupant mood, and other environmental data.
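How such data might be gathered through the network interface can be sketched as a small aggregator with pluggable sources; the source names ("wearable", "calendar", "sector_stats") and the fields they return are hypothetical, and real fetchers would perform the wireless requests.

```python
from typing import Any, Callable, Dict

class OccupantDataAggregator:
    """Collects occupant data from pluggable sources reachable via the network
    interface (wearables, a calendar service, sector driving statistics, ...)."""

    def __init__(self):
        self.sources: Dict[str, Callable[[], Dict[str, Any]]] = {}

    def register(self, name: str, fetch: Callable[[], Dict[str, Any]]):
        self.sources[name] = fetch

    def collect(self) -> Dict[str, Any]:
        data: Dict[str, Any] = {}
        for name, fetch in self.sources.items():
            try:
                data[name] = fetch()          # each fetch would go over the wireless link
            except Exception as err:          # a missing source should not break the rest
                data[name] = {"error": str(err)}
        return data

aggregator = OccupantDataAggregator()
aggregator.register("wearable", lambda: {"heart_rate": 72, "body_temp_c": 36.7})
aggregator.register("calendar", lambda: {"next_event_in_min": 45})
aggregator.register("sector_stats", lambda: {"median_speed_kph": 52})
print(aggregator.collect())
```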

In a certain embodiment, the wireless network access point 140 is coupled to a "cloudlet" of a cloud-based distributed computing network. A cloudlet is a computing architecture element (e.g., mobile device-cloudlet-cloud) that represents a middle tier. A cloudlet is a decentralized and widely dispersed internet infrastructure whose computing cycles and storage resources can be utilized by nearby mobile computers. A micro-cloud may be considered a local "data center" designed and configured to bring a cloud-based distributed computing architecture or network in close proximity to a mobile device (e.g., autonomous vehicle controller or system 102 in this case) and may provide computing cycles and storage resources that can be utilized by nearby mobile devices. A cloudlet may have only soft states, meaning that it cannot have any hard states, but may contain cached states from the cloud. It may also buffer data originating from one or more mobile devices in the cloud en route to the secure location. A cloudlet may possess sufficient computing power (i.e., CPU, RAM, etc.) to offload resource-intensive computing from one or more mobile devices. A cloudlet may have excellent connectivity to the cloud (typically a wired internet connection) and is generally not limited by limited battery life (e.g., it is connected to a power outlet). The micro-cloud is logically proximate to the associated mobile device. "logically contiguous" is to be interpreted as low end-to-end latency and high bandwidth (e.g., single hop Wi-Fi). Logically adjacent may mean physically close. The cloudlets are self-managing, requiring only power, internet connectivity, and access control or settings. The ease of management may correspond to a device model of the computing resource and make simple deployments on businesses such as coffee shops or doctor's offices. Internally, a cloudlet may be viewed as a cluster of multi-core computers, with gigabit internal connectivity and a high bandwidth wireless LAN.

In a certain embodiment, the wireless network access point 140 is coupled to a fog of a cloud-based distributed computing network. A fog may be more extensive than a cloudlet. For example, a fog may provide computing power along roads from ITS (intelligent transportation system) infrastructure: for example, data is uploaded/downloaded at a smart intersection. A fog may be confined to peer-to-peer connections along roads (i.e., without sending data to the cloud or a remote data center), but may be spread along an entire highway system, and vehicles may join and leave the local "fog" computing along the road. In other words, the fog may be distributed, like an associated network of cloudlets.

As another example, a fog may provide distributed computing through a collection of parking meters, where each individual meter may be an edge of the fog and may establish a peer-to-peer connection with the vehicle. The vehicle may drive through the edge-computing "fog" provided by the parking meters.

In some other embodiments, the network interface 118 may receive occupant data from a satellite (e.g., a Global Positioning System (GPS) satellite, an XM radio satellite). In some other embodiments, network interface 118 may receive occupant data from a cellular telephone tower. As can be appreciated, other suitable wireless data connections are possible.

FIGS. 1A and 1B illustrate a single occupant seated in a typical driving position of a vehicle. As can be appreciated, the system 102 may monitor additional or other occupants, such as occupants seated in the front and/or rear passenger seating positions. Indeed, the autonomous vehicle 100 may not have a steering wheel 20 at all, but only grab handles, and thus may not have a driver seat/position. Further, the system 102 may monitor multiple occupants and may provide suggested driving aspects based on the multiple occupants (e.g., all occupants in the vehicle).

FIG. 2 is a schematic diagram of a system 200 for control based on occupant parameters, according to one embodiment. The system 200 includes a processing device 202, an inward image capture system 212b, an outward image capture system 212c, one or more sensors 212 in place of or in addition to the image capture systems 212b, 212c, and/or an autonomous vehicle controller 210 for controlling navigation and other driving aspects of the autonomous vehicle.

The processing device 202 may be similar or analogous to the system 102 for control based on occupant parameters of FIGS. 1A and 1B. The processing device 202 may include one or more processors 226, memory 228, an input/output interface 216, and a network interface 218.

Memory 228 may include information and instructions necessary to implement the various components of system 200. For example, memory 228 may include various modules 230 and program data 250.

As used herein, the word "module," whether upper case or lower case, refers to logic that may be embodied in hardware or firmware, or to a collection of software instructions written in a programming language such as, for example, C + +, that may have entry and exit points. The software modules may be compiled and linked into an executable program that is included in a dynamically linked library or may be written in an interpreted language such as BASIC. A software module or program may be in an executable state or considered to be executable. "executable" generally means that a program can operate on a computer system without the involvement of a computer language interpreter. The term "automatically" generally refers to an operation that may be performed without significant user intervention or with only some limited user intervention. The term "startup" generally refers to the operation of initializing a computer module or program. As can be appreciated, software modules may be invoked by other modules or themselves, and/or may be invoked in response to detecting an event or interruption. The software instructions may be embedded in firmware, such as an EPROM. A hardware module may include connected logic units such as gates or flip-flops, and/or may include programmable units such as programmable gate arrays or processors.

Modules may be implemented using hardware, software, firmware, and/or any combination thereof. For example, as shown, the modules 230 may include an occupant monitoring system 232, a gaze tracker 234, and a learning engine 236. The learning engine 236 may include one or more detection modules 242, an emotion analyzer 244, and an occupant profiler 246.

The modules 230 may handle various interactions between the processing device 202 and other elements in the system 200, such as the autonomous vehicle controller 210 and the sensors 212 (including the imaging systems 212b, 212c). Further, the modules 230 may create data that may be stored by the memory 228. For example, the modules 230 may generate program data 250 such as a profile record 252, and the profile record 252 may include correlations 254 between driving aspects 256 and occupant parameters 258. The occupant parameters may include emotions 262, biometrics 264, history 266, context 268, preferences 270, statistics 272, and so forth.
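A hedged sketch of how the program data 250 might be organized in memory, with a profile record holding occupant parameters and their correlations to driving aspects; the field names and the numeric "strength" encoding are assumptions for illustration, not structures specified by the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class OccupantParameters:
    emotions: Dict[str, float] = field(default_factory=dict)    # e.g., {"calm": 0.8}
    biometrics: Dict[str, float] = field(default_factory=dict)  # e.g., {"heart_rate": 70}
    preferences: Dict[str, str] = field(default_factory=dict)
    context: Dict[str, str] = field(default_factory=dict)

@dataclass
class Correlation:
    driving_aspect: str      # e.g., "hard_braking"
    parameter: str           # e.g., "emotion:startled"
    strength: float          # -1..1, sign encodes like/dislike

@dataclass
class ProfileRecord:
    occupant_id: str
    parameters: OccupantParameters = field(default_factory=OccupantParameters)
    correlations: List[Correlation] = field(default_factory=list)

record = ProfileRecord("occupant-1")
record.correlations.append(Correlation("hard_braking", "emotion:startled", -0.7))
print(record)
```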

The occupant monitoring system 232 may assist in collecting occupant data in order to detect and/or monitor the occupant parameters 258. The learning engine 236 may process the occupant data and/or the occupant parameters 258 to determine or identify suggested driving aspects 256 for communication, via a vehicle interface (e.g., the input/output interface 216), to the autonomous vehicle controller 210 of the autonomous vehicle.

The detection module 242 may process sensor data from one or more sensors 212 monitoring one or more occupant parameters to detect a potential hazard outside of the autonomous vehicle. The detection is done based on the occupant parameters 258.

The emotion analyzer 244 processes the occupant data and detects an occupant emotion 262 regarding a current driving aspect 256, and records a correlation 254 between the detected occupant emotion 262 and the current driving aspect 256.
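One simple way such a correlation 254 could be maintained is as a running score per (driving aspect, emotion) pair, updated with an exponential moving average; the scoring scheme and smoothing factor below are illustrative assumptions, not the method specified by the disclosure.

```python
class EmotionAspectCorrelator:
    """Keeps a running score per (driving aspect, emotion) pair using an
    exponential moving average; negative scores mean the occupant disliked the aspect."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha    # smoothing factor for the moving average
        self.scores = {}      # (driving_aspect, emotion) -> running valence score

    def record(self, driving_aspect: str, emotion: str, valence: float):
        """valence in [-1, 1]: negative = disliked, positive = liked."""
        key = (driving_aspect, emotion)
        old = self.scores.get(key, 0.0)
        self.scores[key] = (1 - self.alpha) * old + self.alpha * valence

correlator = EmotionAspectCorrelator()
correlator.record("sharp_turn", "startled", -0.8)
correlator.record("sharp_turn", "startled", -0.6)
print(correlator.scores)   # suggests steering more gently for this occupant in the future
```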

The occupant profiler 246 maintains an occupant profile including the recorded correlations 254 between driving aspects 256 and occupant parameters 258, the occupant parameters 258 including emotions 262, biometrics 264, history 266, context 268, preferences 270, and statistics 272.

As previously explained, emotions 262 and biometrics 264 may be detected by one or more sensors 212 (including inward image capture system 212b) and detection module 242. The biometrics 264, history 266, context 268, preferences 270, and statistics 272 may be obtained by the network interface 218.

The inward image capture system 212b is configured to capture image data of an occupant of a vehicle in which the system 200 is installed and/or operable. The inward image capture system 212b may include one or more imagers or cameras for capturing images of the occupant. In a certain embodiment, the inward image capture system 212b may include one or more array cameras. The image data captured by the inward image capture system 212b may be used to detect occupant reactions to potential external hazards, detect occupant emotions, identify occupants, detect occupant head/eye positions, and detect and/or track the current gaze of the occupant.

The outward image capture system 212c captures image data of the environment in front of the vehicle. The outward image capture system 212c may include one or more imagers or cameras for capturing images of an area outside the vehicle, typically the area in front of the vehicle in its direction of travel. In a certain embodiment, the outward image capture system 212c may include one or more array cameras. The image data captured by the outward image capture system 212c may be analyzed or otherwise used to identify objects in the environment surrounding the vehicle (e.g., generally in front of the vehicle, or in front of the vehicle in its direction of travel) in order to collect occupant data.

The gaze tracker 234 is configured to process occupant image data captured by the inward image capture system 212b to determine the line of sight of the current gaze of the vehicle occupant. The gaze tracker 234 may analyze the image data to detect the eyes of the occupant and detect the direction in which the eyes are focused. The gaze tracker 234 may continue to process current occupant image data to detect and/or track the gaze of the current occupant. In a certain embodiment, the gaze tracker 234 may process occupant image data substantially in real time. The gaze tracker may include a pupil monitor for monitoring pupil dilation. The pupil monitor may include a pupil tracker for processing occupant image data of a vehicle occupant to determine the occupant's pupil size.

The driving aspects 256 may include, but are not limited to, defensive actions such as slowing down, detouring, tightening a seat belt, closing a window, locking a door, unlocking a door, creating greater distance from another vehicle (e.g., by changing speed and/or direction), alerting an administrator, adjusting the driving route, adjusting the stopping distance (e.g., braking harder to stop sooner), giving audible warnings or signals to other vehicles (e.g., lights), and activating emergency sensors (e.g., focusing a camera to follow the user's gaze) to evaluate potential hazards and provide additional information/feedback to the autonomous vehicle controller of the autonomous vehicle. The driving aspects 256 may also include adjustments to one or more of the speed, acceleration, turning radius, and travel route of the autonomous vehicle.

Each of the emotions 262 stored in the memory 228 may be or otherwise represent a determination of occupant attitude based on, for example, speech, biometrics, image processing, and live feedback. Classical sentiment analysis may be used to analyze occupant emotions regarding the current driving aspects via common text sentiment analysis methods applied to speech-to-text output, while acoustic models may be used to identify emotion from tone.

The biometrics 264 may be integrated into the emotion analysis, such as by capturing the heart rate, blood pressure, and/or body temperature of one or more occupants to gauge the level of distress caused by the actual driving of the autonomous vehicle. For example, a sudden change in a biometric 264 may signal distress with the current driving aspect. Separately, the biometric levels of an occupant when entering the vehicle may be used to detect other emotions. For example, a biometric that has risen beyond the level that is normal or typical for the occupant after entering the vehicle may indicate stress, anxiety, or the like. Image processing may include emotion extraction methods that analyze the emotions of the occupant, such as may be apparent from facial expressions, actions, and so forth. A live feedback mechanism may be used to explore and/or determine the occupant's likes and dislikes, detected emotions, mood, preferences, and the like.
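As a small example of the "sudden change versus entry baseline" idea, the following sketch compares current biometric readings against the levels measured when the occupant entered the vehicle; the 25% rise threshold and the field names are assumptions for illustration.

```python
def distress_signal(entry_baseline: dict, current: dict,
                    relative_rise: float = 0.25) -> bool:
    """Return True if any tracked biometric has risen well above the level measured
    when the occupant entered the vehicle. The 25% rise threshold is an assumption."""
    for name, baseline in entry_baseline.items():
        value = current.get(name)
        if value is not None and baseline > 0 and (value - baseline) / baseline > relative_rise:
            return True
    return False

baseline = {"heart_rate": 68, "systolic_bp": 118}
now = {"heart_rate": 92, "systolic_bp": 121}
print(distress_signal(baseline, now))   # True: heart rate rose ~35% above the entry baseline
```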

The driving history 266 may provide an indication of the occupant's general driving style when he or she controls a vehicle. The manner in which an occupant drives can be a strong indication of the type of driving experience that the occupant would like to have in an autonomous vehicle. For example, someone who takes turns sharply or drives as fast as (legally) possible will likely expect the same. Someone who extends his or her driving path when possible to take in some coastline will likely want the autonomous vehicle to take the same scenic route. The driving history 266 may be obtained from a training vehicle or during a training period in which the occupant operates the autonomous vehicle.

Context 268 may include information such as occupant age, current medical condition, mood, and free time (e.g., according to a calendar or trip system), and can be critical to determining appropriate driving aspects. For example, an elderly person with a heart condition may not appreciate, or may even be adversely affected by, sudden turns or driving as fast as possible. Similarly, a tourist who is a passenger may prefer a somewhat longer route past prominent or particular landmarks.

The preferences 270 may be input by the occupant via a graphical user interface or client computing device that may provide accessible data over a wireless network.

Statistics 272 may be collected by autonomous vehicles as described above, or acquired from network access points. If a majority of vehicles (e.g., 90%) passing through a given geographic sector follow similar driving aspects (e.g., speed, acceleration, turning radius, etc.), these statistics may inform the autonomous vehicle's determination of a suggested driving aspect.
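A minimal sketch of how such sector statistics could be turned into a suggested driving aspect: if a large enough share of observed vehicles cluster around the median speed, that median can be offered as the suggestion. The tolerance and the 90% majority figure mirror the example above but are otherwise illustrative.

```python
from statistics import median

def sector_consensus(speeds_kph, tolerance_kph: float = 5.0, majority: float = 0.9):
    """If at least `majority` of observed vehicles in a geographic sector drive within
    `tolerance_kph` of the median speed, return that median as a suggested speed."""
    if not speeds_kph:
        return None
    med = median(speeds_kph)
    share = sum(abs(s - med) <= tolerance_kph for s in speeds_kph) / len(speeds_kph)
    return med if share >= majority else None

observed = [48, 50, 51, 52, 52, 53, 54, 55, 49, 51]
print(sector_consensus(observed))   # 51.5 -> can inform the suggested driving aspect
```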

FIG. 3 is a flowchart of a method 300 for autonomous vehicle control based on occupant parameters, according to one embodiment. Occupant data is captured or otherwise received at 302, for example from sensors, wireless network connections, and/or stored profiles. The occupant data may help identify occupant parameters. The occupant data is processed at 304 to identify, at 306, one or more suggested driving aspects based on the occupant data and/or occupant parameters. Alternatively, or in addition, a detected potential hazard may be communicated to the autonomous vehicle at 308. Processing the occupant data and/or parameters may include identifying an occupant reaction, such as a reaction to a potential hazard outside the vehicle, in order to detect the potential hazard and suggest, at 306, driving aspects such as defensive actions to increase occupant safety.

Processing the occupant data and/or parameters may include detecting an occupant emotion regarding the current driving aspect and recording a correlation between the detected occupant emotion and the current driving aspect in the occupant profile. The occupant data/parameters may be processed to identify, at 306, a suggested driving aspect based on the correlations in the occupant profile that relate occupant emotions to driving aspects. The suggested driving aspects may include one or more of a suggested speed, a suggested acceleration, a suggested steering control, and a suggested travel route that conforms to the preferences of the occupant, for example, as determined based on the occupant's mood.
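Tying the steps of FIG. 3 together, the following sketch runs one pass of receive (302), process (304), identify (306), and communicate (308); the callables stand in for the occupant monitoring system, learning engine, and vehicle interface and are placeholders, not the disclosed implementations.

```python
def control_loop_step(receive_occupant_data, process, communicate):
    """One pass of the method of FIG. 3: receive occupant data (302), process it (304),
    identify suggested driving aspects (306), and communicate them (308)."""
    occupant_data = receive_occupant_data()                       # 302
    suggested_aspects, potential_hazard = process(occupant_data)  # 304 / 306
    if potential_hazard or suggested_aspects:
        communicate(suggested_aspects, potential_hazard)          # 308

control_loop_step(
    receive_occupant_data=lambda: {"grip_anomaly": True, "stress_level": 0.8},
    process=lambda d: (["reduce_speed"], d.get("grip_anomaly", False)),
    communicate=lambda aspects, hazard: print("to controller:", aspects, "hazard:", hazard),
)
```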

Example embodiments

Examples may include subject matter, such as a method, an apparatus for performing method acts, at least one machine readable medium comprising instructions that, when executed by a machine, cause the machine to perform the acts of the method, apparatus or system.

Example 1: a safety system for an autonomous vehicle, the system comprising: an occupant monitoring system for monitoring an occupant of an autonomous vehicle, the occupant monitoring system comprising one or more sensors that monitor one or more occupant parameters; a detection module to process sensor data received from one or more sensors of an occupant monitoring system and detect a potential hazard external to the autonomous vehicle based on one or more occupant parameters; a vehicle interface to communicate detection of a potential hazard external to the autonomous vehicle, wherein the detection by the detection module is based on one or more occupant parameters.

Example 2: the system of example 1, wherein the occupant monitoring system is configured to monitor a plurality of occupants of the autonomous vehicle.

Example 3: the system of any of examples 1-2, wherein the occupant monitoring system is configured to monitor an occupant located on a driver seat of the autonomous vehicle.

Example 4: the system of any of examples 1-3, wherein the occupant monitoring system is configured to monitor one or more occupant parameters indicative of occupant reaction to a potential hazard external to the autonomous vehicle.

Example 5: the system of example 4, wherein the occupant monitoring system is configured to monitor one or more occupant parameters indicative of human occupant response to a potential hazard external to the autonomous vehicle.

Example 6: The system of any of examples 1-5, wherein the one or more occupant parameters include one or more of: sudden tightening or clenching of muscles; sudden movement of the occupant back toward the seat back; twitching of at least one foot; use of speech; eye movement; pupil dilation; head movement; heart rate; breathing rhythm; and a change in breath intake.

Example 7: the system of any of examples 1-6, wherein each sensor of the one or more sensors is to monitor an occupant parameter of the one or more occupant parameters.

Example 8: the system of any of examples 1-7, wherein the one or more sensors comprise one or more pressure sensors.

Example 9: the system of example 8, wherein the one or more pressure sensors are disposed on a handle within a passenger compartment of the autonomous vehicle to detect that the occupant has tightened his or her hand muscles.

Example 10: the system of example 8, wherein the one or more pressure sensors are disposed within a seat of the autonomous vehicle for detecting occupant movement relative to the seat, including movement toward a backrest of the seat.

Example 11: the system of example 8, wherein the one or more pressure sensors are disposed on a floor of a passenger compartment of the autonomous vehicle to detect a twitch of at least one foot of the occupant.

Example 12: the system of example 8, wherein the one or more pressure sensors are disposed within a seat of the autonomous vehicle to detect the breathing rhythm.

Example 13: The system of any of examples 1-12, wherein the one or more sensors include a microphone to detect the occupant's use of speech.

Example 14: the system of any of examples 1-13, wherein the one or more sensors comprise a microphone to detect occupant language.

Example 15: the system of any of examples 1-14, wherein the one or more sensors comprise an eye movement tracker to monitor eye movement parameters of the occupant, the eye movement tracker comprising: a gaze tracker to process occupant image data of an autonomous vehicle occupant to determine a current region of occupant central vision; and an inward image capture system for capturing occupant image data of an autonomous vehicle occupant for processing by the gaze tracker.

Example 16: The system of example 15, wherein the gaze tracker is configured to: determine a line of sight of a current gaze of the autonomous vehicle occupant, determine a field of view of the occupant based on the line of sight of the current gaze of the occupant, and determine a current region of central vision of the occupant within the field of view.

Example 17: the system of example 15, wherein the gaze tracker comprises a pupil monitor to monitor pupil dilation, the pupil monitor comprising a pupil tracker to process occupant image data of the vehicle occupant to determine a pupil size of the occupant.

Example 18: the system of any of examples 1-17, wherein the vehicle interface communicates the detection of the potential hazard to a controller of the autonomous vehicle.

Example 19: the system of any of examples 1-8, wherein the vehicle interface communicates the detection of the potential hazard to the autonomous vehicle by providing a suggested driving aspect, the suggested driving aspect including a defensive action to increase occupant safety of the autonomous vehicle.

Example 20: The system of example 19, wherein the defensive action to increase safety is one of: reducing a travel speed of the autonomous vehicle; signaling with an emergency light; tightening a seat belt; closing a window; locking a door; unlocking a door; increasing a distance between the autonomous vehicle and a vehicle in proximity to the autonomous vehicle; alerting an administrator; adjusting a driving route; adjusting a stopping distance; emitting an audible signal; and activating one or more emergency sensors configured to detect the potential hazard.

Example 21: a method for controlling an autonomous vehicle, the method comprising: receiving occupant data for an autonomous vehicle occupant; processing occupant data received from an occupant monitoring system to identify one or more suggested driving aspects based on the occupant data; communicating the one or more suggested driving aspects to the autonomous vehicle via a vehicle interface.

Example 22: the method of example 21, wherein the occupant data includes one or more occupant parameters indicative of an occupant's reaction to a potential hazard outside of the autonomous vehicle, wherein processing the occupant data includes detecting the potential hazard outside of the autonomous vehicle based on the one or more occupant parameters of the occupant data, and wherein the one or more suggested driving aspects include a defensive action to increase safety of an occupant of the autonomous vehicle.

Example 23: The method of example 22, wherein the one or more occupant parameters include one or more of: sudden tightening or clenching of muscles; sudden movement of the occupant back toward the seat back; twitching of at least one foot; use of speech; eye movement; pupil dilation; head movement; heart rate; breathing rate; and a change in breath intake.

Example 24: The method of any of examples 22-23, wherein the defensive action to increase safety is one of: reducing a travel speed of the autonomous vehicle; signaling with an emergency light; tightening a seat belt; closing a window; locking a door; opening a door; increasing a distance between the autonomous vehicle and other vehicles in proximity to the autonomous vehicle; alerting an administrator; adjusting a driving route; adjusting a stopping distance; emitting an audible signal; and activating one or more emergency sensors configured to detect the potential hazard.

Example 25: the method of any of examples 21-24, further comprising identifying a pattern of correlation of occupant data to driving aspects from which the suggested driving aspects are identified.

Example 26: the method of any one of examples 21-25, wherein the occupant data includes one or more of: historical driving aspects of occupant driving; context data; and occupant preference data.

Example 27: the method of any one of examples 21-26, wherein processing the occupant data comprises: detecting occupant emotions for a current driving aspect; and recording a correlation of the detected occupant emotion to the current driving aspect in the occupant profile, wherein processing the occupant data to identify one or more suggested driving aspects comprises identifying one or more suggested driving aspects based on the correlation in the occupant profile that correlates the occupant emotion to the relevant driving aspect.

Example 28: the method of example 27, wherein detecting the occupant emotion comprises collecting sensor data from one or more sensors that detect and monitor one or more occupant parameters, wherein processing the occupant data comprises identifying the occupant emotion based on the sensor data.

Example 29: the method of any of examples 21-28, wherein the suggested driving aspect includes one or more of: a suggested speed; suggesting an acceleration; suggesting a steering control; and suggesting a driving route.

Example 30: a non-transitory computer-readable medium having instructions stored thereon, the instructions, when executed by a computing device, cause the computing device to perform the method of any of examples 21-29.

Example 31: a system comprising means for implementing the method of any of examples 21-29.

Example 32: a system for controlling an autonomous vehicle, the system comprising: an occupant monitoring system for acquiring occupant data for an occupant of an autonomous vehicle; a learning engine to process occupant data received from the occupant monitoring system to identify one or more suggested driving aspects based on the occupant data; and a vehicle interface to communicate the one or more suggested driving aspects to the autonomous vehicle.

Example 33: the system of example 32, wherein the occupant monitoring system includes one or more sensors to detect one or more occupant parameters indicative of occupant reactions to a potential hazard outside of the autonomous vehicle, wherein the learning engine processes sensor data from the one or more sensors of the occupant monitoring system to detect the potential hazard outside of the autonomous vehicle based on the one or more occupant parameters, and wherein the one or more suggested driving aspects include defensive actions to increase occupant safety of the autonomous vehicle.

Example 34: The system of example 33, wherein the one or more occupant parameters include one or more of: sudden tightening or clenching of muscles; sudden movement of the occupant back toward the seat back; twitching of at least one foot; use of speech; eye movement; pupil dilation; head movement; heart rate; breathing rate; and a change in breath intake.

Example 35: The system of any of examples 33-34, wherein the defensive action to increase safety is one of: reducing a travel speed of the autonomous vehicle; signaling with an emergency light; tightening a seat belt; closing a window; locking a door; unlocking a door; increasing a distance between the autonomous vehicle and a nearby vehicle; alerting an administrator; adjusting a driving route; adjusting a stopping distance; emitting an audible signal; and activating one or more emergency sensors configured to detect the potential hazard.

Example 36: the system of any of examples 33-35, wherein each of the one or more sensors of the occupant monitoring system monitors one of the one or more occupant parameters.

Example 37: the system of any one of examples 33-36, wherein the one or more sensors comprise one or more pressure sensors.

Example 38: the system of example 37, wherein the one or more pressure sensors are disposed on a handle within a passenger compartment of the autonomous vehicle for detecting that the occupant has tightened his or her hand muscles.

Example 39: the system of example 37, wherein the one or more pressure sensors are disposed within a seat of the autonomous vehicle for detecting occupant movement relative to the seat, including movement toward a seat back.

Example 40: the system of example 37, wherein the one or more pressure sensors are disposed on a floor of a passenger compartment of the autonomous vehicle for detecting twitching of at least one foot of the occupant.

Example 41: the system of example 37, wherein the one or more pressure sensors are disposed within a seat of the autonomous vehicle for detecting the breathing rhythm.

Example 42: the system of any of examples 33-41, wherein the one or more sensors include a microphone to detect occupant language.

Example 43: the system of any of examples 33-42, wherein the one or more sensors comprise an eye movement tracker to monitor the occupant's eye movement parameters, the eye movement tracker comprising: a gaze tracker to process occupant image data of an autonomous vehicle occupant to determine a current region of occupant central vision; and an inward image capture system for capturing occupant image data of an autonomous vehicle occupant for processing by the gaze tracker.

Example 44: The system of example 43, wherein the gaze tracker is configured to: determine a line of sight of a current gaze of the autonomous vehicle occupant, determine a field of view of the occupant based on the line of sight of the current gaze of the occupant, and determine a current region of central vision of the occupant within the field of view.

Example 45: the system of any of examples 33-44, wherein the one or more sensors comprise a pupil monitor to monitor pupil dilation, the pupil monitor comprising: a pupil tracker to process occupant image data of a vehicle occupant to determine an occupant pupil size; and an inward image capture system for capturing occupant image data of a vehicle occupant for processing by the pupil tracker.

Example 46: the system of any of examples 32-45, wherein the vehicle interface communicates the one or more suggested driving aspects to a controller of the autonomous vehicle.

Example 47: the system of any of examples 32-46, the learning engine to receive the occupant data and to identify a correlation pattern of the occupant data with the driving aspect and to record the correlation pattern in the memory for use in identifying the suggested driving aspect.

Example 48: the system of example 47, wherein the occupant data includes historical driving aspects of occupant driving.

Example 49: the system of any one of examples 47-48, wherein the occupant data includes contextual data.

Example 50: the system of example 49, wherein the contextual data comprises one or more of: occupant age; occupant health/medical information; occupant mood; and occupant travel information.

Example 51: the system of any one of examples 47-50, wherein the occupant data includes occupant preference data.

Example 52: the system of any one of examples 47-51, wherein the occupant monitoring system comprises a statistics system that collects statistics for a given geographic sector, wherein the occupant data comprises the statistics.

Example 53: the system of example 52, wherein the statistics system collects the statistics by forming a wireless data connection with a wireless network access point within the geographic sector.
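
The per-sector statistics of examples 52-53 might be accumulated with a structure like the following; how the wireless data connection to an access point is formed is outside the scope of this sketch, and the stored fields are assumptions.

    from collections import defaultdict

    class GeoSectorStatistics:
        """Illustrative per-sector statistics store keyed by a geographic sector identifier."""
        def __init__(self):
            self._stats = defaultdict(lambda: {"samples": 0, "hazard_reports": 0})

        def record(self, sector_id: str, hazard_reported: bool) -> None:
            entry = self._stats[sector_id]
            entry["samples"] += 1
            if hazard_reported:
                entry["hazard_reports"] += 1

        def hazard_rate(self, sector_id: str) -> float:
            entry = self._stats[sector_id]
            return entry["hazard_reports"] / entry["samples"] if entry["samples"] else 0.0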

Example 54: the system of any one of examples 32-53, the learning engine comprising: an emotion analyzer for processing occupant data and detecting occupant emotions for a current driving aspect, the emotion analyzer recording a correlation of the detected occupant emotions with the current driving aspect; and an occupant profile analyzer to maintain an occupant profile comprising recorded correlations of occupant emotions to occupant driving aspects, wherein the learning engine identifies one or more suggested driving aspects based on the correlations of occupant emotions to related driving aspects in the occupant profile.
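
One simple way to realize the correlation recording and lookup described in example 54 is a count-based profile, sketched below; the emotion labels and the selection rule are illustrative assumptions rather than the claimed learning engine.

    from collections import defaultdict

    class OccupantProfileAnalyzer:
        """Illustrative profile: counts how often each emotion co-occurs with a driving aspect."""
        def __init__(self):
            # e.g. counts[("anxious", "speed:fast")] = 7
            self.counts = defaultdict(int)

        def record(self, emotion: str, driving_aspect: str) -> None:
            self.counts[(emotion, driving_aspect)] += 1

        def suggested_aspects(self, current_emotion: str, preferred_emotions=("calm", "happy")):
            """Suggest aspects historically associated with preferred emotions,
            avoiding aspects associated with the currently detected emotion."""
            avoid = {a for (e, a), n in self.counts.items() if e == current_emotion and n > 0}
            liked = defaultdict(int)
            for (e, a), n in self.counts.items():
                if e in preferred_emotions and a not in avoid:
                    liked[a] += n
            return sorted(liked, key=liked.get, reverse=True)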

Example 55: the system of example 54, the occupant monitoring system comprising one or more sensors to detect and monitor one or more occupant parameters, wherein the emotion analyzer detects occupant emotions based on sensor data from the occupant monitoring system.

Example 56: the system of example 55, wherein the one or more sensors comprise a microphone to capture occupant speech, wherein the emotion analyzer detects occupant emotions based on the occupant speech.

Example 57: the system of example 56, wherein the emotion analyzer is to detect the occupant emotion using an acoustic model to identify emotion by tone of voice.

Example 58: the system of example 56, wherein the emotion analyzer is to detect the occupant emotion based on a speech-to-text analysis of the occupant speech.
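
A toy sketch of speech-to-text-based emotion detection as in example 58: it matches cue phrases in a transcript, whereas a production emotion analyzer would more likely use a trained model; the cue lists and labels are assumptions.

    NEGATIVE_CUES = {"watch out", "stop", "slow down", "too fast", "careful"}
    POSITIVE_CUES = {"nice", "good", "smooth", "comfortable"}

    def emotion_from_transcript(transcript: str) -> str:
        """Assign a coarse emotion label based on cue phrases in a speech-to-text transcript."""
        text = transcript.lower()
        if any(cue in text for cue in NEGATIVE_CUES):
            return "alarmed"
        if any(cue in text for cue in POSITIVE_CUES):
            return "content"
        return "neutral"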

Example 59: the system of example 55, wherein the one or more sensors comprise a biometric sensor to capture biometric data of one or more biometrics for the occupant, wherein the learning engine uses the biometric data to detect the occupant emotion.

Example 60: the system of example 59, wherein the one or more occupant biometrics comprise one or more of: occupant heart rate; occupant blood pressure; and occupant temperature.
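
As an illustrative stand-in for the biometric-based detection of examples 59-60, a crude rule over heart rate, blood pressure, and temperature is sketched below; the thresholds and labels are assumptions and would in practice be learned per occupant by the learning engine.

    def classify_emotion_from_biometrics(heart_rate_bpm: float,
                                         systolic_bp_mmhg: float,
                                         skin_temp_c: float) -> str:
        """Crude illustrative rule-based classifier over three occupant biometrics."""
        stress_score = 0
        if heart_rate_bpm > 100:
            stress_score += 1
        if systolic_bp_mmhg > 140:
            stress_score += 1
        if skin_temp_c > 37.5:
            stress_score += 1
        if stress_score >= 2:
            return "stressed"
        if stress_score == 1:
            return "uneasy"
        return "calm"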

Example 61: the system of any of examples 55-60, wherein the one or more sensors include an imaging sensor to capture image data of the occupant, wherein the learning engine detects the occupant emotion using the image data of the occupant.

Example 62: the system of example 54, wherein the emotion analyzer comprises a feedback system to provide the occupant with an opportunity to express a preference, the feedback system configured to process the occupant's command to obtain the occupant's expressed preference and to detect the occupant's emotion based on the expressed preference.

Example 63: the system of example 62, wherein the feedback system is configured to process the voice command.

Example 64: the system of example 62, wherein the feedback system is configured to process a command provided via a graphical user interface.

Example 65: the system of example 54, wherein the suggested driving aspect includes one or more of: a suggested speed; suggesting an acceleration; suggesting a steering control; and suggesting a driving route.
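
The suggested driving aspects of example 65 could be carried in a small record such as the following; the field names, units, and the dict-based message format are assumptions, since the examples do not specify the vehicle interface protocol.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class SuggestedDrivingAspect:
        """Illustrative container for the suggested driving aspects of example 65."""
        speed_mps: Optional[float] = None          # suggested speed
        acceleration_mps2: Optional[float] = None  # suggested acceleration
        steering_angle_deg: Optional[float] = None # suggested steering control
        route_id: Optional[str] = None             # suggested driving route

    def to_vehicle_interface_message(aspect: SuggestedDrivingAspect) -> dict:
        """Serialize only the populated fields; a plain dict stands in for the
        unspecified vehicle interface protocol."""
        return {k: v for k, v in vars(aspect).items() if v is not None}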

Example 66: a safety method in an autonomous vehicle, the method comprising: receiving sensor data from one or more sensors of an occupant monitoring system monitoring one or more occupant parameters of an autonomous vehicle occupant; detecting a potential hazard external to the autonomous vehicle based on the one or more occupant parameters; and communicating the detection of the potential hazard to a controller of the autonomous vehicle via a vehicle interface.
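
For orientation only, the three steps of example 66 could be wired together as in the following sketch; the detection rule and the interface callable are placeholders, not the claimed method.

    def run_safety_method(sensor_samples, hazard_threshold=0.7, vehicle_interface=print):
        """Illustrative end-to-end pass over the steps of example 66.

        sensor_samples: iterable of dicts of occupant parameters, e.g.
            {"grip_pressure": 0.9, "heart_rate_bpm": 110}
        vehicle_interface: callable standing in for the vehicle interface;
            a real controller hookup is outside the scope of this sketch.
        """
        for sample in sensor_samples:
            # Step 1: receive sensor data (already provided as `sample`).
            # Step 2: detect a potential external hazard from occupant parameters.
            score = 0.0
            score += 0.5 if sample.get("grip_pressure", 0.0) > 0.8 else 0.0
            score += 0.5 if sample.get("heart_rate_bpm", 0.0) > 100 else 0.0
            if score >= hazard_threshold:
                # Step 3: communicate the detection to the vehicle controller.
                vehicle_interface({"event": "potential_hazard", "confidence": score})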

Example 67: the method of example 66, wherein communicating the detection of the potential hazard to the autonomous vehicle comprises providing a recommended driving aspect comprising a defensive action to increase safety of an occupant of the autonomous vehicle.

Example 68: the method of example 67, wherein the defensive action to increase safety is one of: reducing a travel speed of the autonomous vehicle; signaling using emergency lights; fastening a seat belt; closing a window; locking a door; unlocking a door; increasing a distance between the autonomous vehicle and other vehicles in proximity to the autonomous vehicle; alerting an administrator; providing an alert regarding a driving route; providing an alert regarding a stopping distance; emitting an audible signal; and activating one or more emergency sensors configured to detect the potential hazard.

Example 69: a non-transitory computer-readable medium having instructions stored thereon that, when executed by a computing device, cause the computing device to perform the method of any of examples 66-68.

Example 70: a system comprising means for implementing the method of any of examples 66-68.

Example 71: a system for suggesting driving aspects of an autonomous vehicle, the system comprising: an occupant monitoring system for monitoring an autonomous vehicle occupant, the occupant monitoring system comprising one or more sensors monitoring one or more occupant parameters; a detection module to process occupant data received from the occupant monitoring system and detect an occupant emotion related to a driving aspect of driving performed by the autonomous vehicle, wherein the detection module detects the occupant emotion based on the one or more occupant parameters; a learning engine to receive the detected occupant emotion and the driving aspect and to determine a correlation of the occupant emotion with the driving aspect; an occupant profile analyzer to maintain an occupant profile comprising correlations of occupant emotions to driving aspects of driving performed by the autonomous vehicle; and a vehicle interface for communicating a suggested driving aspect to the autonomous vehicle based on a comparison of a currently detected occupant emotion to the occupant emotions in the occupant profile.

Example 72: the system of example 71, wherein the one or more sensors comprise one or more pressure sensors.

Example 73: the system of example 72, wherein the one or more pressure sensors are disposed on a handle within a passenger compartment of the autonomous vehicle to detect that the occupant has tightened his or her hand muscles.

Example 74: the system of example 72, wherein the one or more pressure sensors are disposed within a seat of the autonomous vehicle to detect occupant movement relative to the seat, including movement toward a seat back.

Example 75: the system of example 72, wherein the one or more pressure sensors are disposed on a floor of a passenger compartment of the autonomous vehicle to detect a twitch of at least one foot of the occupant.

Example 76: the system of example 72, wherein the one or more pressure sensors are disposed within a seat of the autonomous vehicle to detect a breathing rhythm of the occupant.

Example 77: the system of any one of examples 71-76, wherein the one or more sensors include a microphone to detect occupant language.

Example 78: the system of any one of examples 71-77, wherein the occupant monitoring system comprises a statistical system configured to collect statistical data for a given geographic sector, wherein the detection module processes the statistical data.

Example 79: the system of example 78, wherein the statistics system collects the statistics by forming a wireless data connection with a wireless network access point within the geographic sector.

Example 80: the system of any one of examples 71-79, the learning engine comprising: an emotion analyzer for processing occupant data and detecting occupant emotions for a current driving aspect, the emotion analyzer recording a correlation of the detected occupant emotions with the current driving aspect; and an occupant profile analyzer to maintain an occupant profile comprising the recorded correlations of occupant emotions to occupant driving aspects, wherein the learning engine identifies one or more suggested driving aspects based on the correlations of occupant emotions to related driving aspects in the occupant profile.

Example 81: an autonomous vehicle comprising: an occupant monitoring system for monitoring an autonomous vehicle occupant, the occupant monitoring system comprising one or more sensors for monitoring one or more occupant parameters; a detection module to process sensor data received from one or more sensors of an occupant monitoring system and detect a potential hazard external to the autonomous vehicle based on one or more occupant parameters; and an autonomous vehicle controller to determine and cause the autonomous vehicle to perform a defensive action based on the detected potential hazard.

Example 82: an autonomous vehicle comprising: an occupant monitoring system for acquiring occupant data for an autonomous vehicle occupant; a learning engine to process occupant data received from the occupant monitoring system to identify one or more suggested driving aspects based on the occupant data; and an autonomous vehicle controller to provide autonomous vehicle navigation and autonomous vehicle control, wherein the autonomous vehicle controller receives the one or more suggested driving aspects and causes the autonomous vehicle to perform at least one of the one or more suggested driving aspects.

Example 83: the autonomous vehicle of example 82, wherein the occupant monitoring system includes one or more sensors to detect one or more occupant parameters indicative of occupant reaction to a potential hazard outside of the autonomous vehicle, wherein the learning engine processes sensor data from the one or more sensors of the occupant monitoring system to detect the potential hazard outside of the autonomous vehicle based on the one or more occupant parameters, and wherein the one or more suggested driving aspects include a defensive action to increase safety of occupants of the autonomous vehicle.

Example 84: the autonomous vehicle of any of examples 82-83, the learning engine comprising: an emotion analyzer for processing occupant data and detecting occupant emotions for a current driving aspect, the emotion analyzer recording a correlation of the detected occupant emotions with the current driving aspect; and an occupant profile analyzer to maintain an occupant profile comprising the recorded correlations of occupant emotions to occupant driving aspects, wherein the learning engine is to identify one or more suggested driving aspects based on the correlations of occupant emotions to related driving aspects in the occupant profile.

Example 85: the autonomous vehicle of example 84, the occupant monitoring system comprising a detection module comprising one or more sensors to detect and monitor one or more occupant parameters, wherein the emotion analyzer detects occupant emotions based on sensor data from the occupant monitoring system.

The foregoing description provides numerous specific details for a detailed understanding of the embodiments described herein. One skilled in the relevant art will recognize, however, that one or more of the specific details can be omitted, or other methods, components, or materials can be used. In some instances, operations are not shown or described in detail.

Furthermore, the described features, operations, or characteristics may be combined in any suitable manner in one or more embodiments. As will also be readily apparent to those of skill in the art, the order of the steps or actions of the methods described in connection with the disclosed embodiments may be varied. Thus, any order in the drawings or detailed description is for illustrative purposes only and is not intended to imply a required order unless specified.

Embodiments may include various steps, which may be embodied in machine-executable instructions executed by a general-purpose or special-purpose computer (or other electronic devices). Alternatively, the steps may be performed by hardware components that include specific logic for performing the steps, or by a combination of hardware, software, and/or firmware.

Embodiments may also be provided as a computer program product including a computer-readable storage medium having stored thereon instructions which can be used to program a computer (or other electronic devices) to perform a process described herein. The computer readable storage medium may be non-transitory. Computer-readable storage media may include, but are not limited to: a hard disk drive, a floppy disk, an optical disk, a CD-ROM, a DVD-ROM, a RAM, an EPROM, an EEPROM, a magnetic or optical card, a solid state memory device, or other type of media/machine-readable medium suitable for storing electronic instructions.

As used herein, a software module or component may include any type of computer instruction or computer executable code located within a storage device and/or computer readable storage medium. For example, a software module may include one or more physical or logical blocks of computer instructions which may be organized into routines, programs, objects, components, data structures, etc., that perform one or more tasks or implement particular abstract data types.

In one embodiment, a particular software module may comprise different instructions stored in different locations of the memory device that together implement the described functionality of the module. Indeed, a module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across several memory devices. Some embodiments may be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, software modules may be located in local and/or remote memory storage devices. Further, data concatenated or presented together in a database record may reside in the same memory device, or across several memory devices, and may be linked together in a record field of the database across a network.

It will be obvious to those having skill in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the invention. The scope of the invention should, therefore, be determined only by the following claims.
