System and method for external environment sensing and rendering

Document No.: 180854 · Publication date: 2021-11-02 · Original language: Chinese

Reader's note: This technology, "System and method for external environment sensing and rendering," was designed and created by C.M. Trestan, M. Willis, and R. Winton on 2021-04-23. Its main content includes: Systems and methods for generating sound in a vehicle are presented. In one example, one or more sounds generated outside of the vehicle may facilitate the generation of sounds within the vehicle. The sound generated within the vehicle may be generated in a manner that indicates a direction of origin of the sound generated outside the vehicle.

1. A method for generating sound in a vehicle, comprising:

generating sound inside the vehicle via one or more speakers according to an angle between a location of the vehicle and a source of sound outside the vehicle.

2. The method of claim 1, wherein generating a sound comprises generating a sound indicative of the source of the sound external to the vehicle.

3. The method of any of the above claims, wherein generating a sound comprises generating a spoken sound indicating a direction of the sound and a type of sound associated with the source of the sound external to the vehicle.

4. The method of any of the above claims, further comprising receiving the angle at an in-vehicle computing system from a sound processor, and producing the sound via the in-vehicle computing system.

5. The method of any of the above claims, further comprising adjusting which of the one or more speakers output the generated sound as an angle between the vehicle and the source of the sound external to the vehicle changes.

6. The method of any of the above claims, wherein generating a sound comprises applying reverberation to a signal according to a distance between the vehicle and the source of the sound external to the vehicle.

7. The method of any of the above claims, further comprising generating the sound in response to one or more properties of sound generated via the source of the sound external to the vehicle.

8. The method of any of the above claims, further comprising adjusting one or more properties of other sounds produced via the one or more speakers.

9. The method of claim 8, wherein the one or more properties of the other sounds produced via the one or more speakers comprise a volume of the other sounds produced via the one or more speakers.

10. A sound system for a vehicle, comprising:

one or more speakers;

one or more microphones external to the vehicle; and

a controller electrically coupled to the one or more speakers, the controller including executable instructions stored in non-transitory memory that cause the controller to generate sound in an interior of the vehicle in response to a distance and an angle generated via output of the one or more microphones.

11. The sound system of claim 10, further comprising a sound processor electrically coupled to the one or more microphones, the sound processor outputting the distance and the angle to the controller.

12. The sound system according to any one of claims 10-11, wherein the distance is a distance from the vehicle to a sound source outside the vehicle, and wherein the angle is an angle from a location of the vehicle to the sound source outside the vehicle.

13. The sound system according to any one of claims 10-12, further comprising a navigation system configured to display a travel route of the vehicle and adjust the travel route of the vehicle in response to at least one of the angle and the distance.

14. The sound system according to any one of claims 10-13, further comprising additional executable instructions to generate sound inside the vehicle in response to a type assigned to the sound generated outside the vehicle.

15. The sound system of claim 14, wherein the type assigned to the sound produced outside the vehicle comprises at least an emergency vehicle sound.

Background

The present disclosure relates to a vehicle system that responds to sounds external to a vehicle.

Disclosure of Invention

A vehicle traveling on a roadway may encounter sounds generated by sources external to the vehicle. For example, large trucks such as tractor-trailers, garbage trucks, gravel trucks, and the like may produce audible sounds in the vicinity of the vehicle as they pass in front of, to the side of, or behind the vehicle. Furthermore, emergency vehicles with active sirens may pass by the vehicle from time to time. However, due to the acoustic isolation of the vehicle cabin, these external sounds may not always be as noticeable to the human driver as may be desired. As a result, the driver's situational awareness may not be as acute as desired.

The inventors have recognized the aforementioned problems and have developed systems and methods that at least partially address them. In particular, the inventors have developed a method for generating sound in a vehicle, the method comprising: generating sound inside the vehicle via one or more speakers according to an angle between the vehicle and a source of sound outside the vehicle.

By generating sound within the vehicle according to the angle between the vehicle and the source of sound external to the vehicle, the vehicle's speakers may inform the vehicle driver of the direction of the external sound source. For example, if an emergency vehicle is approaching from the front right side of the vehicle, a speaker in the front right side of the vehicle interior may generate a sound that informs the vehicle occupant of the direction of the emergency vehicle. For example, a sound outside the vehicle may be mapped to a virtual point in space, and a notification sound may be reproduced inside the vehicle based on that virtual point, e.g., sounding as if it came from the virtual point. Additionally, the sound generated within the vehicle may be indicative of the expected source of the sound external to the vehicle. In this way, the vehicle occupant may be informed of both the approaching sound source and the direction of its expected location.
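
The angle-to-speaker mapping described above can be illustrated with a short sketch. The code below is a hypothetical constant-power pan across four assumed cabin speakers; the speaker layout, the function name, and the cosine taper are assumptions made for illustration, not the patented implementation.

```python
import math

# Hypothetical cabin speaker layout: azimuth of each speaker in degrees,
# measured clockwise from the vehicle's forward direction.
SPEAKERS = {
    "front_left": -45.0,
    "front_right": 45.0,
    "rear_left": -135.0,
    "rear_right": 135.0,
}

def speaker_gains(source_angle_deg):
    """Map the bearing of an external sound source to per-speaker gains.

    Speakers closer in angle to the source receive more of the
    notification sound, so the cue appears to arrive from the
    direction of the external source.
    """
    weights = {}
    for name, azimuth in SPEAKERS.items():
        # Smallest angular difference between speaker and source (0..180 deg).
        diff = abs((source_angle_deg - azimuth + 180.0) % 360.0 - 180.0)
        # Cosine taper: full weight at 0 deg, no weight at 90 deg or more.
        weights[name] = max(0.0, math.cos(math.radians(diff)))
    # Normalize so total output power stays constant across active speakers.
    norm = math.sqrt(sum(w * w for w in weights.values())) or 1.0
    return {name: w / norm for name, w in weights.items()}

# An emergency vehicle approaching from the front right (+45 deg) drives
# the front-right speaker almost exclusively.
gains = speaker_gains(45.0)
```

As the source moves around the vehicle, repeated calls with the updated bearing shift the gains smoothly from speaker to speaker, which corresponds to adjusting which speakers output the generated sound as the angle changes.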

The present description may provide several advantages. In particular, the method may improve the situational awareness of occupants within the vehicle. In addition, the method may direct a vehicle occupant's attention to a location outside the vehicle so that the occupant may more quickly identify the source of the sound. Further, the method may include wherein the navigation system of the vehicle provides an alternative travel route based on sound external to the vehicle so that the vehicle may reach its destination sooner.

The above advantages and other advantages and features of the present description will become readily apparent from the following detailed description, taken alone or in conjunction with the accompanying drawings.

It should be understood that the summary above is provided to introduce in simplified form a selection of concepts that are further described in the detailed description. It is not meant to identify key or essential features of the claimed subject matter, the scope of which is defined uniquely by the claims that follow the detailed description. Furthermore, the claimed subject matter is not limited to implementations that solve any disadvantages noted above or in any part of this disclosure.

Drawings

FIG. 1 shows an exemplary partial view of a vehicle in accordance with one or more embodiments of the present disclosure;

FIG. 2 illustrates an exemplary in-vehicle computing system in accordance with one or more embodiments of the present disclosure;

FIG. 3 illustrates an exemplary sound processing system in a vehicle in accordance with one or more embodiments of the present disclosure;

FIGS. 4A-4C show schematic diagrams of speakers activated in response to the angle and distance of a sound source external to the vehicle; and

FIGS. 5-7 illustrate flow diagrams of example methods for generating sound within a vehicle via an audio or infotainment system.

Detailed Description

The present disclosure relates to generating sound in a passenger compartment of a vehicle based on sound external to the vehicle. The sound generated within the vehicle may be generated in a manner that indicates the direction of the sound source, the type of the sound source, and the distance to the sound source outside the vehicle. For example, the volume or sound output power level (e.g., decibels (dB)) within the vehicle may be adjusted based on the distance to a sound source outside the vehicle. In addition, the angle and distance of the external sound source with respect to the vehicle may be transferred to a navigation system, which may change a driving route based on the external sound source information.
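
The distance-dependent level adjustment described above can be sketched with a free-field attenuation rule. The function below is an illustrative assumption (the rule of thumb that sound pressure level falls about 6 dB per doubling of distance, with an invented reference distance and clamp limits), not the patented algorithm.

```python
import math

def notification_gain_db(distance_m, ref_distance_m=10.0,
                         min_db=-30.0, max_db=0.0):
    """Scale the in-cabin notification level with source distance.

    Applies the free-field rule of thumb of roughly -6 dB per doubling
    of distance, relative to an assumed reference distance, and clamps
    the result to a usable playback range.
    """
    if distance_m <= 0:
        return max_db  # source is at (or inside) the vehicle: full level
    gain = -20.0 * math.log10(distance_m / ref_distance_m)
    return max(min_db, min(max_db, gain))
```

For example, a siren at 20 m plays about 6 dB quieter than one at the 10 m reference distance, while very distant sources are held at the floor level rather than fading to silence.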

As shown in FIGS. 1-3, a system according to the present disclosure may be part of a vehicle, and a method according to the present disclosure may be performed via an in-vehicle computing system.

FIG. 1 illustrates an exemplary partial view of one type of environment for an audio customization system: the interior of a cabin 100 of a vehicle 102, in which a driver and/or one or more passengers may be seated. The vehicle 102 of FIG. 1 may be a motor vehicle that includes drive wheels (not shown) and an internal combustion engine 104. The internal combustion engine 104 may include one or more combustion chambers that may receive intake air via an intake passage and exhaust combustion gases via an exhaust passage. The vehicle 102 may be a road automobile, among other types of vehicles. In some examples, the vehicle 102 may include a hybrid propulsion system including an energy conversion device operable to absorb energy from vehicle motion and/or the engine and convert the absorbed energy into a form of energy suitable for storage by an energy storage device. The vehicle 102 may alternatively be an all-electric vehicle incorporating fuel cells, solar capture elements, and/or other energy storage systems for powering the vehicle.

As shown, the dashboard 106 may include various displays and controls accessible to a human driver (also referred to as a user) of the vehicle 102. For example, the dashboard 106 may include a touch screen 108 of an in-vehicle computing system 109 (e.g., an infotainment system), an audio system control panel, and an instrument cluster 110. The touch screen 108 may receive user input to the in-vehicle computing system 109 for controlling audio output, visual display output, user preferences, control parameter selection, and the like. While the exemplary system shown in FIG. 1 includes audio system controls that may be operated via a user interface of the in-vehicle computing system 109 (such as the touch screen 108), without a separate audio system control panel, in other embodiments the vehicle may include an audio system control panel, which may include controls for a conventional vehicle audio system such as a radio, compact disc player, MP3 player, and the like. The audio system controls may include features for controlling one or more aspects of audio output via the speakers 112 of the vehicle speaker system. For example, the in-vehicle computing system or the audio system controls may control the volume of the audio output, the distribution of sound among the individual speakers of the vehicle speaker system, the equalization of the audio signal, and/or any other aspect of the audio output. In further examples, the in-vehicle computing system 109 may adjust radio station selections, playlist selections, audio input sources (e.g., from a radio, CD, or MP3 player), etc., based on user input received directly via the touch screen 108, or based on data about the user (such as the user's physical state and/or environment) received via the external device 150 and/or the mobile device 128.

Additionally, the in-vehicle computing system 109 may adjust the audio output volume or power output level, which speakers are activated, and the signal used to produce sound at the speakers in response to output from the external sound processor 113. The audio system of the vehicle may include an amplifier (not shown) coupled to a plurality of loudspeakers (not shown). The external sound processor 113 may be connected to the in-vehicle computing system via a communication link 138, which may be wired or wireless (as discussed with reference to communication link 130) and configured to provide bi-directional communication between the external sound processor 113 and the in-vehicle computing system.

In some embodiments, one or more hardware elements of the in-vehicle computing system 109, such as the touch screen 108, the display screen 111, various control panels, knobs and buttons, memory, processors, and any interface elements (e.g., connectors or ports) may form an integrated head unit that is mounted in the dashboard 106 of the vehicle. The head unit may be fixedly or removably attached in the instrument panel 106. In additional or alternative embodiments, one or more hardware elements of the in-vehicle computing system 109 may be modular and may be installed in multiple locations of the vehicle.

The cabin 100 may include one or more sensors for monitoring the vehicle, user, and/or environment. For example, the cabin 100 may include one or more seat-mounted pressure sensors configured to measure pressure applied to the seats to determine the presence of a user; a door sensor configured to monitor activity of a door; a humidity sensor for measuring humidity of the vehicle compartment; a microphone for receiving user input in the form of voice commands, enabling a user to make telephone calls and/or measure ambient noise in the cabin 100, etc. It should be appreciated that the above-described sensor and/or one or more additional or alternative sensors may be located at any suitable location of the vehicle. For example, sensors may be positioned in the engine compartment, on an exterior surface of the vehicle, and/or in other suitable locations to provide information regarding the operation of the vehicle, environmental conditions of the vehicle, a user of the vehicle, and the like. Information regarding the environmental condition of the vehicle, the vehicle state, or the vehicle driver may also be received from sensors external/separate from the vehicle (i.e., not part of the vehicle system), such as sensors coupled to the external device 150 and/or the mobile device 128.

The cabin 100 may also include one or more user objects, such as a mobile device 128, that are stored in the vehicle before, during, and/or after travel. The mobile device 128 may include a smart phone, a tablet, a laptop, a portable media player, and/or any suitable mobile computing device. The mobile device 128 may be connected to the in-vehicle computing system via a communication link 130. The communication link 130 may be wired (e.g., via universal serial bus [USB], mobile high-definition link [MHL], high-definition multimedia interface [HDMI], Ethernet, etc.) or wireless (e.g., via Bluetooth, WiFi Direct, near-field communication [NFC], cellular connection, etc.) and configured to provide two-way communication between the mobile device and the in-vehicle computing system. The mobile device 128 may include one or more wireless communication interfaces for connecting to one or more communication links (e.g., one or more of the exemplary communication links described above). The wireless communication interface may include one or more physical devices, such as an antenna or a port coupled to a data line for carrying transmitted or received data, and one or more modules/drivers for operating the physical devices in accordance with other devices in the mobile device. For example, the communication link 130 may provide sensor and/or control signals from various vehicle systems (such as a vehicle audio system, climate control system, etc.) and the touch screen 108 to the mobile device 128, and may provide control and/or display signals from the mobile device 128 to the in-vehicle systems and the touch screen 108. The communication link 130 may also provide power from an in-vehicle power supply to the mobile device 128 to charge the mobile device's internal battery.

In-vehicle computing system 109 may also be communicatively coupled to additional devices operated and/or accessed by the user, but located external to vehicle 102, such as one or more external devices 150. In the depicted embodiment, the external device is located outside of the vehicle 102, but it should be understood that in alternative embodiments, the external device may be located inside the cabin 100. The external devices may include server computing systems, personal computing systems, portable electronic devices, electronic wristbands, electronic headbands, portable music players, electronic activity tracking devices, pedometers, smart watches, GPS systems, and the like. The external device 150 may be connected to the in-vehicle computing system via a communication link 136, which may be wired or wireless (as discussed with reference to communication link 130), and configured to provide bi-directional communication between the external device and the in-vehicle computing system. For example, the external device 150 may include one or more sensors, and the communication link 136 may transmit sensor output from the external device 150 to the in-vehicle computing system 109 and the touch screen 108. The external device 150 may also store and/or receive information regarding contextual data, user behavior/preferences, operating rules, etc., and may transmit such information from the external device 150 to the in-vehicle computing system 109 and the touch screen 108.

In-vehicle computing system 109 may analyze inputs received from external device 150, mobile device 128, external sound processor 113, and/or other input sources, and select settings for various in-vehicle systems (such as a climate control system or audio system), provide outputs via touch screen 108 and/or speaker 112, communicate with mobile device 128 and/or external device 150, and/or perform other operations based on the evaluation. In some embodiments, all or a portion of the evaluation may be performed by the mobile device 128 and/or the external device 150.

In some embodiments, one or more external devices 150 may be indirectly communicatively coupled to in-vehicle computing system 109 via mobile device 128 and/or another external device 150. For example, the communication link 136 may communicatively couple the external device 150 to the mobile device 128 such that output from the external device 150 is relayed to the mobile device 128. The data received from the external device 150 may then be aggregated with the data collected by the mobile device 128 at the mobile device 128, and the aggregated data may then be transmitted to the in-vehicle computing system 109 and the touchscreen 108 via the communication link 130. Similar data aggregation may occur at the server system and then transmitted to the in-vehicle computing system 109 and the touch screen 108 via the communication link 136/130.

FIG. 2 illustrates a block diagram of an in-vehicle computing system 109 configured and/or integrated within the vehicle 102. In some embodiments, the in-vehicle computing system 109 may perform one or more of the methods described herein. In some examples, the in-vehicle computing system 109 may be a vehicle infotainment system configured to provide information-based media content (audio and/or visual media content, including entertainment content, navigation services, etc.) to a vehicle user to enhance the operator's in-vehicle experience. The vehicle infotainment system may include or be coupled to various vehicle systems, subsystems, hardware components, and software applications and systems integrated or integratable into the vehicle 102 to enhance the on-board experience for the driver and/or passengers.

In-vehicle computing system 109 may include one or more processors, including an operating system processor 214 and an interface processor 220. Operating system processor 214 may execute an operating system on the in-vehicle computing system and control input/output, display, playback, and other operations of the in-vehicle computing system. The interface processor 220 may interface with the vehicle control system 230 and the external sound processor 113 via the inter-vehicle system communication module 222.

The inter-vehicle system communication module 222 may output data to other vehicle systems 231 and vehicle control elements 261, while also receiving data input from other vehicle components and systems 231, 261, such as through the vehicle control system 230. When outputting data, the inter-vehicle system communication module 222 may provide signals corresponding to any state of the vehicle, the vehicle surroundings, or the output of any other information source connected to the vehicle via a bus. The vehicle data outputs may include, for example, analog signals (such as current speed), digital signals provided by various information sources (such as clocks, thermometers, and position sensors such as global positioning system [GPS] sensors), and digital signals propagated through vehicle data networks (such as an engine CAN bus over which engine-related information may be communicated, a climate control CAN bus over which climate-control-related information may be communicated, and a multimedia data network over which multimedia data is communicated between multimedia components in the vehicle). For example, the in-vehicle computing system 109 may retrieve from the engine CAN bus the current speed of the vehicle estimated by the wheel sensors, retrieve the power state of the vehicle via the vehicle's battery and/or power distribution system, retrieve the ignition state of the vehicle, and so forth. In addition, other interfacing approaches, such as Ethernet, may also be used without departing from the scope of the present disclosure.
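
As a concrete sketch of retrieving the vehicle speed over a CAN bus as described above, the decoder below parses a raw CAN frame. The frame ID, byte order, and scale factor are hypothetical assumptions for illustration (real values come from the vehicle's CAN database), and the function name is invented for this example.

```python
def decode_vehicle_speed(frame_id, data,
                         speed_frame_id=0x3E9, scale_kph_per_bit=0.01):
    """Decode vehicle speed (km/h) from a raw CAN frame.

    Assumptions (hypothetical, for illustration only): speed arrives in
    frame 0x3E9 as a big-endian 16-bit value in the first two data
    bytes, scaled at 0.01 km/h per bit. Real frame IDs and scaling are
    defined by the vehicle's CAN database (DBC file).
    """
    if frame_id != speed_frame_id or len(data) < 2:
        return None  # not the speed frame, or malformed payload
    raw = (data[0] << 8) | data[1]  # big-endian 16-bit raw counter
    return raw * scale_kph_per_bit

# A raw value of 0x2710 (10000) maps to 100.0 km/h at 0.01 km/h per bit.
speed = decode_vehicle_speed(0x3E9, bytes([0x27, 0x10]))
```

In practice the frame would arrive from a CAN interface library rather than a hard-coded byte string; the decode step itself would be the same.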

Non-volatile storage device 208 may be included in the in-vehicle computing system 109 to store data, such as instructions executable by processors 214 and 220, in non-volatile form. The storage device 208 may store application data, including pre-recorded sounds, to enable the in-vehicle computing system 109 to run applications for connecting to a cloud-based server and/or collecting information for transmission to the cloud-based server. An application may retrieve information gathered by vehicle systems/sensors, input devices (e.g., user interface 218), data stored in volatile memory 219A or non-volatile memory 219B, devices in communication with the in-vehicle computing system (e.g., a mobile device connected via a Bluetooth link), and so forth. The in-vehicle computing system 109 may also include volatile memory 219A. The volatile memory 219A may be random access memory (RAM). Non-transitory storage devices, such as the non-volatile storage device 208 and/or the non-volatile memory 219B, may store instructions and/or code that, when executed by a processor (e.g., the operating system processor 214 and/or the interface processor 220), control the in-vehicle computing system 109 to perform one or more of the actions described in this disclosure.

The microphone 202 may be included in the in-vehicle computing system 109 to receive voice commands from a user, to measure ambient noise in the vehicle, to determine whether to tune audio from the vehicle speakers according to the acoustic environment of the vehicle, and so forth. The voice processing unit 204 may process voice commands, such as received from the microphone 202. In some embodiments, the in-vehicle computing system 109 can also receive voice commands and sample ambient vehicle noise using a microphone included in the vehicle's audio system 232.

One or more additional sensors may be included in sensor subsystem 210 of in-vehicle computing system 109. For example, sensor subsystem 210 may include cameras, such as a rear-view camera to assist the user in parking and/or a cabin camera to recognize the user (e.g., using facial recognition and/or user gestures). Sensor subsystem 210 of in-vehicle computing system 109 may communicate with and receive inputs from various vehicle sensors, and may further receive user inputs. For example, the inputs received by sensor subsystem 210 may include transmission gear position, transmission clutch position, accelerator pedal input, brake input, transmission selector position, vehicle speed, engine speed, air flow through the engine, ambient temperature, intake air temperature, and the like, as well as inputs from climate control system sensors (such as heat transfer fluid temperature, antifreeze temperature, fan speed, passenger compartment temperature, desired passenger compartment temperature, ambient humidity, and the like), audio sensors detecting voice commands issued by a user, key fob sensors that receive commands from a key fob of the vehicle and optionally track its geographic location/proximity, and the like. While some vehicle system sensors may communicate with sensor subsystem 210 alone, other sensors may communicate with both sensor subsystem 210 and vehicle control system 230, or may communicate with sensor subsystem 210 indirectly via vehicle control system 230.

Navigation subsystem 211 of in-vehicle computing system 109 may generate and/or receive navigation information, such as location information (e.g., via a GPS sensor and/or other sensors from sensor subsystem 210), route guidance, traffic information, and point-of-interest (POI) identification, and/or provide other navigation services to the driver. The external sound processor 113 may include input/outputs 280, including analog-to-digital converters, digital inputs, digital outputs, network outputs, radio frequency transmitters, and the like, and may further include a central processing unit 281, volatile memory 282, and non-volatile (e.g., non-transitory) memory 283.

The external device interface 212 of the in-vehicle computing system 109 may couple and/or communicate with one or more external devices 150 located external to the vehicle 102. Although the external devices are shown as being located outside of the vehicle 102, it should be understood that they may be temporarily housed in the vehicle 102, such as when a user operates an external device while operating the vehicle 102. In other words, the external devices 150 are not integral with the vehicle 102. The external devices 150 may include a mobile device 128 (e.g., connected via Bluetooth, NFC, WiFi Direct, or another wireless connection) or an alternative Bluetooth-enabled device 252. The mobile device 128 may be a mobile phone, a smart phone, a wearable device/sensor that may communicate with the in-vehicle computing system via wired and/or wireless communication, or another portable electronic device. Other external devices include external services 246. For example, the external devices may include off-vehicle devices that are separate from and located outside the vehicle. Still other external devices include external storage devices 254, such as solid-state drives, pen drives, USB drives, etc. The external devices 150 may communicate with the in-vehicle computing system 109 wirelessly or via connectors without departing from the scope of this disclosure. For example, the external devices 150 may communicate with the in-vehicle computing system 109 through the external device interface 212 via network 260, a universal serial bus (USB) connection, a direct wired connection, a direct wireless connection, and/or other communication link.

The external device interface 212 may provide a communication interface to enable the in-vehicle computing system to communicate with mobile devices associated with the driver's contacts. For example, the external device interface 212 may enable establishing a phone call and/or sending (e.g., via a cellular communication network) a text message (e.g., SMS, MMS, etc.) to a mobile device associated with a contact of the driver. The external device interface 212 may additionally or alternatively provide a wireless communication interface to enable the in-vehicle computing system to synchronize data with one or more devices in the vehicle (e.g., the driver's mobile device) via WiFi Direct.

One or more applications 244 may operate on the mobile device 128. As an example, the mobile device application 244 may be operable to aggregate user data regarding user interactions with the mobile device. For example, the mobile device application 244 may aggregate data regarding a music playlist that the user listens to on the mobile device, a phone call log (including the frequency and duration of phone calls accepted by the user), location information (including locations frequented by the user and the amount of time spent at each location), and so forth. The collected data may be transmitted by the application 244 to the external device interface 212 over the network 260. Additionally, a specific user data request may be received at the mobile device 128 from the in-vehicle computing system 109 via the external device interface 212. The specific data request may include a request to determine the user's geographic location, the ambient noise level and/or music genre of the user's location, the ambient weather conditions (temperature, humidity, etc.) of the user's location, etc. The mobile device application 244 may send control instructions to components of the mobile device 128 (e.g., microphone, amplifier, etc.) or other applications (e.g., navigation applications) to enable requested data to be collected on the mobile device or to enable requested adjustments to the components. The mobile device application 244 can then relay the collected information back to the in-vehicle computing system 109.

Likewise, one or more applications 248 may run on the external service 246. As an example, the external service application 248 may be operable to aggregate and/or analyze data from multiple data sources. For example, the external services application 248 may aggregate data from one or more social media accounts of the user, data from an in-vehicle computing system (e.g., sensor data, log files, user input, etc.), data from internet queries (e.g., weather data, POI data), and so forth. The collected data may be transmitted to another device and/or analyzed by an application to determine the context of the driver, vehicle, and environment, and perform actions (e.g., request/send data from the other device) based on the context.

The vehicle control system 230 may include controls for controlling various aspects of the various vehicle systems 231 relating to different on-board functions. These may include, for example, controlling aspects of a vehicle audio system 232 for providing audio entertainment to vehicle occupants, aspects of a climate control system 234 for meeting cabin cooling or heating needs of vehicle occupants, and aspects of a telecommunications system 236 for enabling vehicle occupants to establish telecommunications contact with others.

The audio system 232 may include one or more acoustic reproduction devices, including electromagnetic transducers such as speakers 235. The vehicle audio system 232 may be passive or active, such as by including a power amplifier. In some examples, the in-vehicle computing system 109 may be the only audio source of the acoustic reproduction device, or there may be other audio sources connected to the audio reproduction system (e.g., an external device such as a mobile phone). The connection of any such external device to the audio reproduction device may be analog, digital or any combination of analog and digital techniques.

The climate control system 234 may be configured to provide a comfortable environment within the cabin or passenger compartment of the vehicle 102. The climate control system 234 includes components capable of controlling ventilation, such as vents, heaters, air conditioners, integrated heater and air conditioning systems, and the like. Other components associated with the heating and air conditioning arrangement may include a windshield defrost and defogging system capable of cleaning the windshield and a ventilation air filter for cleaning outside air entering the passenger compartment through the fresh air inlet.

The vehicle control system 230 may also include controls for adjusting settings of various vehicle controls 261 (or vehicle system control elements) related to engine and/or auxiliary elements within the cabin of the vehicle, such as steering wheel controls 262 (e.g., steering wheel mounted audio system controls, cruise controls, windshield wiper controls, headlamp controls, turn signal controls, etc.), dashboard controls, microphones, accelerator/brake/clutch pedals, gear levers, door/window controls in the driver or passenger doors, seat controls, cabin lighting controls, audio system controls, cabin temperature controls, and so forth. The vehicle controls 261 may also include internal engine and vehicle operation controls (e.g., engine controller modules, actuators, valves, etc.) configured to receive instructions over the CAN bus of the vehicle to alter operation of one or more of the engine, exhaust system, transmission, and/or other vehicle systems. The control signals may also control audio output at one or more speakers 235 of the vehicle audio system 232. For example, the control signal may adjust audio output characteristics such as volume, equalization, audio image (e.g., configuration of the audio signal to produce audio output that appears to the user to originate from one or more defined locations), audio placement in a plurality of speakers, and so forth. Similarly, the control signals may control vents, air conditioners, and/or heaters of the climate control system 234. For example, the control signal may increase the delivery of cooling air to a particular portion of the cabin.

Control elements positioned outside of the vehicle (e.g., controls for the security system) may also be connected to the computing system 109, such as via the communication module 222. The control elements of the vehicle control system may be physically and permanently positioned on and/or in the vehicle to receive user input. In addition to receiving control instructions from the in-vehicle computing system 109, the vehicle control system 230 may also receive input from one or more external devices 150 operated by the user (such as from the mobile device 128). This allows aspects of the vehicle system 231 and vehicle controls 261 to be controlled based on user input received from the external device 150.

The in-vehicle computing system 109 may also include an antenna 206. Antenna 206 is shown as a single antenna, but may include one or more antennas in some embodiments. The in-vehicle computing system may obtain broadband wireless internet access via antenna 206 and may further receive broadcast signals such as radio, television, weather, traffic, and the like. The in-vehicle computing system may receive positioning signals, such as GPS signals, via one or more antennas 206. The in-vehicle computing system may also receive wireless commands via RF, such as via antenna 206, or via infrared or other means through a suitable receiving device. In some embodiments, the antenna 206 may be included as part of the audio system 232 or the telecommunication system 236. Additionally, the antenna 206 may provide AM/FM radio signals to the external device 150 (such as to the mobile device 128) via the external device interface 212.

One or more elements of in-vehicle computing system 109 may be controlled by a user via user interface 218. The user interface 218 may include a graphical user interface presented on a touch screen, such as the touch screen 108 of fig. 1, and/or user-actuated buttons, switches, knobs, dials, sliders, and the like. For example, the user-actuated elements may include steering wheel controls, door and/or window controls, dashboard controls, audio system settings, climate control system settings, and the like. The user may also interact with one or more applications of the in-vehicle computing system 109 and the mobile device 128 through the user interface 218. In addition to receiving the user's vehicle setting preferences on the user interface 218, the vehicle settings selected by the in-vehicle control system may also be displayed to the user on the user interface 218. Notifications and other messages (e.g., received messages) as well as navigation assistance may be displayed to the user on a display of the user interface. User preferences/information and/or responses to presented messages may be performed via user input to the user interface.

External sound processor 113 may be electrically coupled to a plurality of microphones 288 (e.g., external microphones) external to vehicle 102. The external sound processor 113 may receive the signal output from each external microphone 288 and convert the signals into an angle value, a distance value, and a type identifier for a sound source outside the vehicle 102. The external sound processor 113 may output the angle data, the distance data, and the sound source type data to the in-vehicle computing system 109 via the communication link 138. However, in other examples, the tasks and functions that may be performed by the external sound processor 113 may be integrated into the in-vehicle computing system 109. Additionally, in such examples, the external microphones 288 may be in direct electrical communication with the in-vehicle computing system 109. The description of the method of fig. 6 provides additional details regarding the tasks and functions that may be performed via the external sound processor 113.
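The conversion from microphone signals to an angle value can be illustrated with a time-difference-of-arrival (TDOA) estimate between a pair of microphones. The following is a minimal Python sketch, not the patent's implementation; the function name, microphone spacing, and sample rate are illustrative assumptions:

```python
import numpy as np

SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 °C

def estimate_angle_tdoa(sig_a, sig_b, mic_spacing_m, sample_rate_hz):
    """Estimate the bearing of a sound source from the time difference of
    arrival between two microphones spaced mic_spacing_m apart. Returns
    the angle in degrees relative to the broadside of the mic pair."""
    # Cross-correlate the two microphone signals to find the sample lag
    # at which they align best.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_b) - 1)
    delay_s = lag_samples / sample_rate_hz
    # Far-field geometry gives sin(theta) = c * delay / spacing.
    sin_theta = np.clip(SPEED_OF_SOUND_M_S * delay_s / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

In practice a sound processor would refine this with more microphone pairs (to resolve front/back ambiguity) and use sound level or spectral cues for the distance and type estimates.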

The external sound processor 113 may include inputs/outputs 290 including analog-to-digital converters, digital inputs, digital outputs, network outputs, radio frequency emitting devices, and the like. The external sound processor 113 may also include a central processing unit 291, a volatile memory 292, and a non-volatile (e.g., non-transitory) memory 293.

Fig. 3 is a block diagram of vehicle 102, which may include in-vehicle computing system 109, audio or sound processing system 232, and the external sound processor 113. The vehicle 102 has a front side 340, a rear side 342, a left side 343, and a right side 344. The vehicle 102 also includes doors 304, a driver seat 309, a passenger seat 310, and a rear seat 312. Although a four-door vehicle is shown including doors 304-1, 304-2, 304-3, and 304-4, processors and systems 109, 232, and 113 may be used in vehicles having more or fewer doors. The vehicle 102 may be an automobile, truck, boat, etc. Although only one rear seat is shown, larger vehicles may have multiple rows of rear seats. Smaller vehicles may have only one row of seats. Although a particular example configuration is shown, other configurations may be used, including those with fewer or more components.

The audio system 232 (which may include amplifiers and/or other audio processing devices for receiving, processing, and/or outputting audio to one or more speakers of the vehicle) may improve the spatial characteristics of the surround sound system. The audio system 232 supports the use of various audio components such as radio, CD, DVD, their derivatives, and the like. The audio system 232 may use 2-channel source material, such as direct left and right, 5.1 channels, 6.2 channels, 7 channels, 12 channels, and/or any other source material from a matrix decoder that digitally encodes/decodes discrete source material, and so forth. The audio system 232 utilizes a channel that is only used for TI/HWL sounds and is separate from the channel used for the remaining sounds, including one or more of the remaining warning, media, navigation, and phone/telematics sounds.

The amplitude and phase characteristics of the source material and the reproduction of specific sound field characteristics in the listening environment play a key role in the successful reproduction of the surround sound field. For example, sound may be spatially mapped to the audio system 232 such that the sound is perceived as originating from a different spatial location that is related to, but modified from, the detected true source location of the sound.

In at least one example, the audio system 232 can improve the reproduction of the surround sound field by controlling sound delay times, surround upmixer parameters (e.g., surround sound, reverberation space extent, reverberation time, reverberation gain, etc.), amplitude, phase, and mixing ratios between discrete and passive decoder surround signals and/or direct dual channel output signals. The amplitude, phase and mixing ratio can be controlled between discrete and passive decoder output signals. By redirecting direct, passive and active mixing and steering parameters, spatial sound field reproduction can be improved for all seat positions, especially in a vehicle environment.

The mixing and steering ratios and spectral characteristics may be adaptively modified based on noise and other environmental factors. In a vehicle, information from the data bus, microphones, and other transducing devices may be used to control mixing and steering parameters.

The vehicle 102 has a front center speaker (CTR speaker) 324, a front left speaker (FL speaker) 313, a front right speaker (FR speaker) 315, and at least one pair of surround speakers.

The surround speakers may be a left speaker (LS speaker) 317 and a right speaker (RS speaker) 319, a left rear speaker (LR speaker) 329 and a right rear speaker (RR speaker) 330, or a combination of speaker groups. Other speakers may be used. Although not shown, one or more dedicated subwoofers or other drivers may be present. Possible subwoofer mounting locations include the luggage compartment 305, under-seat or rear sill 308. The vehicle 102 may also have one or more microphones 350 mounted in the interior.

Each CTR speaker, FL speaker, FR speaker, LS speaker, RS speaker, LR speaker, and RR speaker may include one or more transducers having a predetermined frequency response range, such as a tweeter, midrange speaker, or woofer. The tweeter, midrange speaker, or woofer may be mounted adjacent to one another in substantially the same location or in different locations. For example, FL speaker 313 may be a tweeter located in door 304-1 or elsewhere, with a height approximately equal to the side mirror or higher. The FR speaker 315 may have a similar arrangement as the FL speaker 313 on the right side of the vehicle (e.g., in the door 304-2).

The LR speaker 329 and RR speaker 330 may each be a woofer mounted in the rear sill 308. The CTR speaker 324 may be mounted in the front dashboard 307, in the roof, above or near the rear view mirror, or elsewhere in the vehicle 102. In other examples, other configurations of loudspeakers with other frequency response ranges are possible. In some embodiments, additional speakers may be added to the upper pillar of the vehicle to enhance the height of the sound image. For example, the upper pillar may include a vertical or near vertical support of the window area of the automobile. In some examples, additional speakers may be added to the upper region of the "a" pillar toward the front of the vehicle.

The vehicle 102 includes a longitudinal axis 345 and a lateral axis 346. The location of a sound source (e.g., alarm, motor, siren, etc.) may be referenced to the longitudinal axis 345, the lateral axis 346, or another location of the vehicle 102. In this example, the distance to an external noise source 399 (e.g., vehicle, engine, alarm, horn, etc.) is shown via vector 350. The angle θ between the longitudinal axis (e.g., the position of the vehicle) and the external noise source 399 is indicated by guide segment 347. The angle θ and the distance from the vehicle 102 to the external noise source 399 may be determined via the external sound processor 113, which processes signals 355 representing sounds received via external microphones 288 positioned near the front 340 and rear 342 of the vehicle 102. For example, based on the angle θ determined via external sound processor 113 and the distance from vehicle 102 to external noise source 399, the external sound source 399 may be mapped to different locations in a virtual space that are directly related to the actual angle and distance (e.g., scaled by a particular proportion). As an example, the controller may map the external sound source 399 to a virtual space map describing a space around the vehicle, such as the virtual sound source zone 360.
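The proportional mapping from a measured angle and distance to a point in the virtual space around the vehicle could be sketched as follows (hypothetical helper; the `zone_scale` proportion is an assumed parameter, not from the patent):

```python
import math

def map_to_virtual_zone(angle_deg, distance_m, zone_scale=0.1):
    """Map an external sound source, given by its angle from the vehicle's
    longitudinal axis and its distance, to (x, y) coordinates in a scaled
    virtual sound source zone around the vehicle. zone_scale is the
    proportion between real-world and virtual-space distances."""
    r = distance_m * zone_scale
    theta = math.radians(angle_deg)
    # x is lateral (positive right), y is longitudinal (positive forward).
    return (r * math.sin(theta), r * math.cos(theta))
```

For example, a source 50 m directly ahead (angle 0°) lands at virtual coordinates (0, 5) under the assumed 1:10 scale.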

The external sound source 399 may be within the virtual sound source zone 360, and the position of the external sound source 399 within the virtual sound source zone 360 may be determined based on its angle θ and distance 350 relative to the vehicle 102. In this example, the virtual sound source zone 360 surrounds the vehicle 102, but in other examples the virtual sound source zone may extend only forward of the vehicle 102. The in-vehicle computing system 109 may command the audio system 232 to play sound through one or more speakers located within the virtual speaker zone 362 depending on the location of the external sound source 399 within the virtual sound source zone 360. In one example, respective external sound source positions within the virtual sound source zone 360 may be mapped to respective positions of the speakers in the virtual speaker zone 362. Virtual speaker zone 362 includes speakers 329, 330, 317, 319, 313, 324, and 315. Accordingly, the position of the external sound source 399 may be continuously tracked in real time, and the position of the external sound source 399 may be applied to generate sound associated with the type of the external sound source 399 via one or more speakers mapped to the position of the external sound source 399. For example, when the external sound source is located at the right rear of the vehicle 102, the right rear speaker 330 may generate a sound associated with the type of the external sound source 399. Likewise, when the external sound source is located at the left rear of the vehicle 102, the left rear speaker 329 may generate a sound associated with the type of the external sound source 399.
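Selecting the speaker mapped to the source position can be approximated by picking the speaker whose bearing is angularly closest to the source. A sketch under assumed speaker bearings (the angles below are illustrative, not taken from the figure):

```python
# Hypothetical speaker layout: bearing of each speaker in degrees,
# clockwise from the vehicle's longitudinal axis, keyed by reference numeral.
SPEAKER_ANGLES = {
    "CTR_324": 0.0, "FR_315": 45.0, "RS_319": 90.0, "RR_330": 135.0,
    "LR_329": -135.0, "LS_317": -90.0, "FL_313": -45.0,
}

def nearest_speaker(source_angle_deg):
    """Pick the speaker whose bearing is closest to the external source's
    bearing, using wrap-around angular distance."""
    def angular_dist(a, b):
        d = (a - b) % 360.0
        return min(d, 360.0 - d)
    return min(SPEAKER_ANGLES,
               key=lambda s: angular_dist(SPEAKER_ANGLES[s], source_angle_deg))
```

A source at 170° (rear right) resolves to the assumed right rear speaker, and one at -40° to the assumed front left speaker.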

In another example, rather than mapping spatial positions to discrete speakers, the external sound source 399 may be continuously mapped to the virtual speaker zone 362 and each speaker may be adjusted to reproduce the perceived spatial position of the sound. Specifically, there may be a 1:1 mapping between positions in the virtual sound source zone 360 and positions in the virtual speaker zone 362, such that the position of the external sound source 399 may be reproduced by the audio system 232. As an example, the in-vehicle computing system 109 may determine the location of the external sound source 399 in the virtual sound source zone 360. Further, the in-vehicle computing system 109 may determine a corresponding location in the virtual speaker zone 362 such that the spatial location of the external sound source 399 may be reproduced in the virtual speaker zone 362. For example, the in-vehicle computing system may adjust the audio gain, panning settings, and other audio settings for each speaker of the virtual speaker zone 362 based on the spatial location of the external sound source 399. As one example, to map the spatial location of the external sound source 399, the plurality of speakers may generate sound associated with the type of external sound source 399 based on the angle θ and distance 350 from the vehicle 102.
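The continuous per-speaker gain adjustment described here might, for example, taper each speaker's gain with its angular distance to the source while normalizing for constant power. A hedged sketch (the cosine taper is one plausible choice, not the patent's method):

```python
import math

def speaker_gains(source_angle_deg, speaker_angles_deg):
    """Compute a per-speaker gain for continuous panning: each speaker's
    gain falls off with angular distance to the source, and the gains are
    normalized to preserve overall loudness (constant-power style)."""
    raw = []
    for sp_angle in speaker_angles_deg:
        # Wrap-around angular distance in [0, 180] degrees.
        d = abs((source_angle_deg - sp_angle + 180.0) % 360.0 - 180.0)
        # Cosine taper; speakers more than 90 degrees away contribute ~0.
        raw.append(max(0.0, math.cos(math.radians(min(d, 90.0)))))
    total = math.sqrt(sum(g * g for g in raw)) or 1.0
    return [g / total for g in raw]
```

For a source dead ahead, the center speaker receives the largest gain and the squared gains sum to one, so perceived loudness stays constant as the source moves.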

In one example, the vehicle navigation system may include two virtual directional regions 361A and 361B located in front of the vehicle 102. The vehicle navigation system may request that the driver drive to one of two virtual directional zones 361A (e.g., right turn) and 361B (e.g., left turn) so that the vehicle may reach the intended destination. Thus, the vehicle navigation system may request that the vehicle driver turn to the right (e.g., to 361A) or to the left (e.g., to 361B). The navigation system 211 can instruct the audio system 232 to play sound through the front left speaker 313 or the front right speaker 315 within the virtual speaker zones 363A and 363B according to the requested virtual direction zones 361A and 361B. The right virtual direction zone 361A may be mapped to the right front speaker 315 via the virtual speaker zone 363B so that when the navigation system requests the vehicle driver to turn to the right, verbal driving instructions may be played through the right front speaker 315. Similarly, the left virtual direction zone 361B may be mapped to the front left speaker 313 via the virtual speaker zone 363A such that when the navigation system requests the vehicle driver to turn left, verbal driving instructions may be played through the front left speaker 313.

Thus, the system of fig. 1-3 provides a sound system for a vehicle, comprising: one or more speakers; one or more microphones external to the vehicle; and a controller electrically coupled to the one or more speakers, the controller including executable instructions stored in non-transitory memory that cause the controller to generate sound inside the vehicle in response to a distance and an angle generated via the output of the one or more microphones. The system also includes a sound processor electrically coupled to the one or more microphones, the sound processor outputting the distance and angle to the controller. The system includes: wherein the distance is a distance from the vehicle to a sound source outside the vehicle, and wherein the angle is an angle from a location of the vehicle to the sound source outside the vehicle. The system also includes a navigation system configured to display a travel route of the vehicle and adjust the travel route of the vehicle in response to at least one of the angle and the distance. The system also includes additional executable instructions to generate sound inside the vehicle in response to a type assigned to sound generated outside the vehicle. For example, the system may determine a 1:1 mapping between external sound source positions and virtual speaker positions, and generate sound inside the vehicle based on the mapping. The system includes wherein the type assigned to sound generated outside the vehicle includes at least emergency vehicle sound.

Turning now to fig. 4A, a schematic example illustrating a portion of the method of fig. 5-7 is shown. In this example, the external noise sources 399 are trucks located in front and to the left of the vehicle 102. The truck 399 may emit engine noise and tire noise that may be detected via the microphone 288 located on the front side 340 of the vehicle 102.

In this example, since the noise source 399 is approaching the vehicle 102 from the front left side, the audio system 232 is commanded to output a sound or verbal cue that maps to the front left side of the virtual speaker zone 362. In particular, sounds or verbal cues associated with the truck may be translated such that they are perceived as originating from the left front side of the vehicle. As an example, sounds or verbal cues may be panned between multiple speakers in a vehicle based on a known relationship between audio panning and perceived spatial location, such as surround sound techniques known in the art (e.g., 5.1 surround sound, 7.1 surround sound, ambisonic surround sound, etc.). As one non-limiting example, the left front speaker 313 (shown shaded) may have the highest audio gain of the multiple speakers in the vehicle (e.g., the sound or spoken prompt may pan left and front), while the sound or spoken prompt in the front center speaker 324 and the right front speaker 315 may be quieter. In some examples, additional speakers of the audio system 232 may also be used to output sounds or verbal cues.

Additionally, the audio system 232 may be commanded to decrease the volume of any sounds it is playing that are not related to the noise source 399 near the vehicle 102. In addition, the audio system 232 may be instructed to adjust the volume of the sound being played that is associated with or based on the approaching noise source 399. For example, as the noise source 399 comes closer to the vehicle 102, the volume of sound associated with or based on the approaching noise source 399 that the audio system 232 is playing may be increased. In some examples, the relationship between the distance of the noise source 399 and the volume of sound being played by the audio system 232 may be a linear relationship, while in other examples, the relationship may be a non-linear relationship. Also, the volume of sound being played by the audio system 232 associated with or based on the approaching noise source 399 may be adjusted in response to the amount of noise from the noise source 399 detected within the passenger compartment via the interior microphone 350. For example, if the noise from the noise source 399 is relatively loud as the noise source 399 gets closer to the vehicle 102, the volume of the sound that the audio system 232 is playing that is associated with or based on the approaching noise source 399 may be reduced. For example, if the noise from noise source 399 is relatively quiet as noise source 399 comes closer to vehicle 102, the volume of sound that audio system 232 is playing that is associated with or based on the approaching noise source 399 may be increased.
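The distance-dependent volume ramp and the cabin-noise compensation described above can be combined in one small function. All names and thresholds below are illustrative assumptions, and the linear ramp is only one of the relationships the text permits:

```python
def alert_volume(distance_m, cabin_noise_db, max_distance_m=100.0,
                 base_volume=0.3, max_volume=1.0, noise_ref_db=70.0):
    """Scale alert volume up as the external source gets closer (linear
    ramp here; a nonlinear curve would also fit the description), then
    back it off when the source is already loud inside the cabin."""
    proximity = max(0.0, 1.0 - distance_m / max_distance_m)
    volume = base_volume + (max_volume - base_volume) * proximity
    if cabin_noise_db > noise_ref_db:
        # The source is already clearly audible inside: reduce the alert.
        volume *= noise_ref_db / cabin_noise_db
    return min(volume, max_volume)
```

Under these assumptions a source at 10 m yields a louder alert than one at 80 m, and a loud cabin measurement from the interior microphone pulls the alert back down.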

The speakers (e.g., 313, 324, and 315) and the audio system 232 may be controlled via the external sound processor 113, which processes the signals generated via microphone 288 to determine the angle θ relative to the vehicle longitudinal axis 345, the distance from the vehicle 102 to the noise source 399 as shown by vector 410, and the type of noise source. The angle θ, distance, and type of noise source may be provided to the in-vehicle computing system 109 from the external sound processor 113. The in-vehicle computing system 109 may instruct the audio system 232 to play a predetermined sound via a particular speaker or set of speakers according to the angle θ, the distance, and the type of noise source.

In this way, the sound played in the vehicle may direct the attention of the human driver to noise sources outside the vehicle, so that the situational awareness of the human driver may be improved. In addition, the sound output from the interior speakers of the vehicle may be adjusted to compensate for the distance between the noise source and the vehicle so that the sound in the passenger compartment may not become objectionable.

Referring now to fig. 4B, a second illustrative example illustrating a portion of the method of fig. 5-7 is shown. In this example, the external noise source 399 is a truck located directly in front of the vehicle 102. The truck 399 may emit engine noise and tire noise that may be detected via the microphone 288 located on the front side 340 of the vehicle 102.

In this example, the audio system is again commanded to output a sound or verbal cue to the speaker closest to the noise source 399. Since the noise source 399 is located directly in front of the vehicle 102, the audio system 232 is commanded to play a sound or verbal cue that is mapped to the center in front of the virtual speaker zone 362. In particular, sounds or verbal cues associated with the truck may be translated such that they are perceived as originating from the front center of the vehicle. For example, sounds or verbal cues may be panned between multiple speakers in the vehicle based on a known relationship between audio panning and perceived spatial location. As one non-limiting example, the front center speaker 324 (shown shaded) may have the highest audio gain of the multiple speakers in the vehicle (e.g., the sound or spoken prompt may pan toward the front center), while the sound or spoken prompt in the front left and right speakers 313 and 315 may be quieter. The audio system may also respond to the angle and distance of the noise source 399 as discussed with respect to fig. 4A.

Referring now to fig. 4C, a third illustrative example illustrating a portion of the method of fig. 5-7 is shown. In this example, the external noise source 399 is a truck located at the front right of the vehicle 102. The truck 399 may emit engine noise and tire noise that may be detected via the microphone 288 located on the front side 340 of the vehicle 102.

In this example, the audio system is commanded to output a sound or spoken prompt that is mapped to the front right of the virtual speaker zone 362. In particular, sounds or verbal cues associated with the truck may be translated such that they are perceived as originating from the front right of the vehicle. For example, sounds or verbal cues may be panned between multiple speakers in the vehicle based on a known relationship between audio panning and perceived spatial location. As one non-limiting example, the right front speaker 315 (shown shaded) may have the highest audio gain of the multiple speakers in the vehicle (e.g., sound or spoken prompts may pan to the front right), while the sound or spoken prompts in the left front speaker 313 and the front center speaker 324 may be quieter. The audio system may also respond to the angle and distance of the noise source 399 as discussed with respect to fig. 4A.

Thus, as can be observed from fig. 4A-4C, as the location of the external noise source changes relative to the vehicle 102, the speaker playing the sound or verbal cue associated with the noise source 399 may be adjusted to gradually indicate the location of the noise source 399. Further, the volume output from the vehicle speakers may be adjusted in response to the type of noise source and the distance to the noise source 399.

Figs. 5-7 illustrate flow diagrams of example methods 500, 600, and 700 (collectively, methods 500-700) for adjusting audio output (e.g., in a vehicle). Methods 500-700 may be performed by computing system 109 and/or a combination of the computing system and audio system (which may include one or more computing systems integrated in a vehicle). The external sound processor 113 may also be included in a system that performs the methods of figs. 5-7. For example, methods 500-700 may be performed by executing instructions stored in a non-transitory memory of the in-vehicle computing system 109, alone or in combination with one or more other vehicle systems (e.g., an audio controller, an external sound processor, a CAN bus, an engine controller, etc.) that include executable instructions stored in non-transitory memory. The computing system 109, in conjunction with other systems described herein, may perform methods 500-700, which include adjusting an actuator (e.g., a speaker) in the real world and internally performing operations that ultimately underlie adjusting the actuator in the real world. One or more of the steps included in methods 500-700 may optionally be performed.

At 502, method 500 determines whether a vehicle system is to generate sound inside a vehicle based on sound detected outside the vehicle. The method 500 may receive an input from a human-machine interface (e.g., the touch screen 108) indicating whether a vehicle occupant wishes to be notified of sounds external to the vehicle. In other examples, method 500 may determine whether the vehicle operating conditions indicate a desire or usefulness to notify the vehicle occupants of sounds external to the vehicle. For example, if the vehicle is traveling in an urban area, the answer may be yes. However, if the vehicle is not traveling on a road, the answer may be no. If the method 500 determines that the answer is yes, the method 500 proceeds to 504. Otherwise, the answer is no, and method 500 proceeds to 520.

At 520, method 500 does not monitor the external vehicle sound and may turn off the external sound processor or set it to a low-power state. Method 500 proceeds to 522.

At 522, method 500 generates audio output within the passenger compartment via the speaker based on the selection provided by the vehicle occupant or the automatic selection. For example, if a vehicle occupant selects a particular genre or artist of music, the audio system plays the selection from that genre or artist at the volume level selected by the vehicle occupant or an automatic control. Further, the speakers that are activated and output sound and the speakers that do not output sound may be based on a sound field and mode (e.g., stadium, surround, stereo, mono, etc.) selected by the user. The method 500 may also provide visual output based on a selection provided by the vehicle occupant or an automatic selection. For example, method 500 may display a music video via a touch screen according to the selected artist or genre of music. Method 500 proceeds to exit.

At 504, method 500 monitors and samples sounds external to the vehicle, as described in further detail in the description of fig. 6. Method 500 proceeds to 506.

At 506, method 500 determines whether the external sound is relevant to reporting to the vehicle navigation subsystem. If the vehicle navigation subsystem is activated and the requested travel route of the vehicle is displayed, the method 500 may determine that the external sound is relevant to reporting to the vehicle navigation subsystem. Additionally, method 500 may also consider other factors and vehicle operating conditions to determine whether the external sound is relevant to reporting to the vehicle navigation subsystem. For example, if the vehicle is traveling on a city street where the vehicle may easily change direction, the method 500 may determine that the external sound is relevant to reporting to the navigation subsystem. However, if the vehicle is traveling on a road with restricted egress (e.g., a highway), the method 500 may determine that the external sound is not relevant to reporting to the navigation subsystem. In other examples, method 500 may determine whether the external sound is relevant to reporting to the vehicle navigation subsystem based on the type of external sound. For example, the method 500 may consider sounds from an emergency vehicle to be relevant to notifying the vehicle navigation subsystem, while sounds from a towing trailer are not. In some examples, the vehicle navigation subsystem may be notified of noise sources determined to be within a predetermined distance of the vehicle. If method 500 determines that noise or sounds external to the vehicle are relevant to reporting to the navigation subsystem, the answer is yes and method 500 proceeds to 530. Otherwise, the answer is no, and method 500 proceeds to 508.
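The relevance decision at 506 can be summarized as a predicate over sound type, distance, and road context. A sketch with hypothetical type labels and a hypothetical distance threshold (none of which are specified by the patent):

```python
def relevant_to_navigation(sound_type, distance_m, on_restricted_road,
                           nav_active, max_report_distance_m=150.0):
    """Decide whether an external sound should be reported to the
    navigation subsystem, combining the factors described above:
    navigation must be active, the road must allow direction changes,
    the sound type must matter, and the source must be close enough."""
    if not nav_active or on_restricted_road:
        return False
    if sound_type not in {"emergency_vehicle", "siren"}:
        return False
    return distance_m <= max_report_distance_m
```

For instance, a siren 50 m away on a city street would be reported, while the same siren on a limited-egress highway would not.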

At 530, method 500 adjusts an output of the navigation subsystem in response to the monitored external sound, as discussed in the description of fig. 7. The method 500 proceeds to 508.

At 508, method 500 determines whether the external sound is relevant to adjusting the output of the vehicle audio system. If the external sound is within a predetermined frequency range and power level, the method 500 may determine that the external sound is relevant to adjusting the audio system. Additionally, based on a user selection for external sound notification, the method 500 may determine that external sound originating from an emergency vehicle is relevant to adjusting the vehicle's audio system, while the sound of a towing trailer is not. In other examples, method 500 may determine that both the sound determined to originate from the emergency vehicle and the sound of the towing trailer are relevant to adjusting the audio system of the vehicle. If method 500 determines that the noise or sound external to the vehicle is relevant to reporting to the audio system of the vehicle, the answer is yes and method 500 proceeds to 510. Otherwise, the answer is no, and method 500 proceeds to 540.

At 540, method 500 continues generating audio system output via the speakers according to the selected preferences and sound level. In particular, method 500 generates audio output within the passenger compartment based on selections provided by a vehicle occupant or automatic selections made by the audio system or in-vehicle computing system. Method 500 proceeds to exit.

At 512, method 500 generates a relevant sound and/or an audible verbal cue within the passenger cabin via the speakers in response to the angle between the position of the vehicle and the position of the sound source (e.g., θ in fig. 4A), the distance from the vehicle to the external noise source (e.g., vector 410 in fig. 4A), and the type of external sound determined at 504. For example, method 500 may generate relevant sounds and/or audible verbal cues and reproduce them via the vehicle audio system based on a 1:1 virtual mapping. As an example, method 500 may map an external sound source to a point in virtual sound space based on the location of the sound source and the distance from the vehicle to the external sound source.
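A minimal sketch of such a 1:1 mapping, assuming a vehicle-centered coordinate frame (+y forward, +x right) and a bearing measured clockwise from the vehicle's heading; the function name and conventions are illustrative, not taken from the disclosure:

```python
import math

def map_to_virtual_space(angle_deg, distance_m):
    """Map an external sound source to a point in the vehicle's virtual
    sound space via a 1:1 mapping of its bearing and range.

    angle_deg: bearing of the source relative to the vehicle heading
               (0 = straight ahead, positive = clockwise/right).
    distance_m: range from the vehicle to the source, in meters.
    Returns an (x, y) point in vehicle coordinates: +x right, +y forward.
    """
    theta = math.radians(angle_deg)
    x = distance_m * math.sin(theta)  # lateral offset
    y = distance_m * math.cos(theta)  # forward offset
    return (x, y)
```

A source 10 m straight ahead maps to (0, 10); one 5 m directly to the right maps to (5, 0).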

In one example, when a siren is detected outside the vehicle, method 500 can generate a siren sound in the passenger compartment based on a pre-recorded siren sound. Method 500 may also generate a truck sound in the passenger compartment based on a pre-recorded truck sound when a truck sound is detected outside the vehicle. Likewise, other sounds detected outside the vehicle may be the basis for producing similar sounds in the passenger compartment. Method 500 may adjust the reverberation of the sound generated within the passenger compartment according to the distance between the current vehicle and the source of the external sound, such that a vehicle occupant may perceive that the source of the external sound is approaching or driving away from the current vehicle. Method 500 may adjust the reverberation gain and time according to the distance between the current vehicle and the source of the external sound. Additionally, in some examples, method 500 may output a predetermined verbal prompt in response to the determined type of sound. For example, method 500 may cause the audio system to generate a verbal warning, such as "note that an emergency vehicle is approaching" or "note that a tractor-trailer is approaching." Method 500 may also indicate the distance and the direction from which the source of the external noise is approaching. For example, method 500 may generate verbal warnings such as "note that the emergency vehicle is approaching from 50 meters to the left" or "note that the emergency vehicle is approaching from 100 meters behind".
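One way to scale reverberation gain and decay time with source distance is sketched below; the constants and the 200 m normalization range are illustrative placeholders, not values from the disclosure:

```python
def reverb_params(distance_m, max_distance_m=200.0):
    """Scale reverberation with source distance so a distant source
    sounds diffuse and a nearby source sounds direct.
    All constants here are illustrative placeholders."""
    # Clamp the normalized distance to [0, 1].
    d = max(0.0, min(distance_m / max_distance_m, 1.0))
    wet_gain = 0.1 + 0.8 * d   # more reverberant energy when far away
    decay_s = 0.2 + 1.8 * d    # longer decay time when far away
    return wet_gain, decay_s
```

As the source closes in, both values shrink, which an occupant perceives as the sound becoming drier and closer.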

Method 500 may also adjust and control which speaker outputs sound based on the detected external sound. In particular, method 500 may adjust and control which speaker outputs sound in order to simulate the spatial location of the external sound. For example, as described above, method 500 may map an external sound source to a point in the virtual sound space of the vehicle, and may adjust and control each speaker such that the associated sound and/or audible verbal cues emanate from the same point in the virtual sound space of the vehicle. For example, as shown in figs. 4A-4C and described in their accompanying descriptions, method 500 may produce sound in the passenger compartment via the speaker closest to the origin or source of the external sound. In particular, method 500 may pan the associated sound and/or spoken audio cues to the correct point in the virtual sound space of the vehicle. Additionally, method 500 may adjust the volume of the sound output from the speakers according to the distance between the vehicle and the origin or source of the external sound. For example, if the source of the external sound is approaching the vehicle, the volume of the sound generated in the vehicle based on the external sound may be increased. If the source of the external sound is moving away from the vehicle, the volume may be decreased. Further, the actual total number of speakers that output sounds generated in the vehicle based on the external sound may be adjusted in response to the angle between the vehicle and the external sound source and the distance between them. For example, if the noise-generating vehicle is a relatively long distance from the current vehicle, a single speaker may output sound based on the noise output of the noise-generating vehicle. However, if the noise-generating vehicle is relatively close to the current vehicle, two or more speakers may output sound based on its noise output. Further, if a noise-generating vehicle is approaching the current vehicle from the left side at a first angle, one speaker may output sound based on the first angle. However, if the noise-generating vehicle is approaching the current vehicle from the left side at a second angle, two speakers may output sound based on the second angle.
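A sketch of distance- and angle-dependent speaker selection along these lines; the four-speaker layout, its bearings, and the 50 m near/far threshold are assumptions for illustration only:

```python
# Illustrative speaker layout: name -> bearing in degrees relative to the
# vehicle's forward axis (0 = straight ahead, positive clockwise).
SPEAKERS = {
    "front_left": -45.0,
    "front_right": 45.0,
    "rear_right": 135.0,
    "rear_left": -135.0,
}

def select_speakers(source_angle_deg, distance_m, near_threshold_m=50.0):
    """Pick which speakers render the cue: the single closest speaker for
    a distant source, the two closest for a nearby one."""
    def angular_gap(bearing):
        # Smallest absolute difference between two bearings, in degrees.
        return abs((bearing - source_angle_deg + 180.0) % 360.0 - 180.0)

    ranked = sorted(SPEAKERS, key=lambda name: angular_gap(SPEAKERS[name]))
    count = 2 if distance_m < near_threshold_m else 1
    return ranked[:count]
```

A far source to the left rear selects only the rear left speaker; as the same source closes to within the threshold, the next-closest speaker joins it.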

The method 500 may also adjust the volume of a speaker that outputs sound in the passenger compartment based on external sound according to the sound power level of the external sound detected within the passenger compartment via the microphone. For example, if an external sound source is approaching the host vehicle and a microphone in the vehicle detects a high power level or sound level sound from the external sound source, the method 500 may generate a sound in the passenger compartment that is related to or based on the external sound at a lower volume or power level. However, if an external sound source is approaching the host vehicle and a microphone in the vehicle detects a lower power level or sound level sound from the external sound source, then the method 500 may generate a sound in the passenger compartment that is related to or based on the external sound at a higher volume or power level.
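The inverse relationship between the level detected by the cabin microphone and the rendered level can be sketched as below; the dB thresholds and gain range are illustrative assumptions:

```python
def render_level(detected_db, min_db=30.0, max_db=90.0,
                 min_gain=0.1, max_gain=1.0):
    """Inverse mapping: the louder the external sound already measured
    inside the cabin, the quieter the rendered cue, since the real sound
    is already audible to the occupants. Thresholds are illustrative."""
    d = max(min_db, min(detected_db, max_db))
    frac = (d - min_db) / (max_db - min_db)   # 0 = faint, 1 = loud
    return max_gain - frac * (max_gain - min_gain)
```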

The method 500 may also assign different levels of priority to different sound types as determined at 504. For example, emergency sound types may be assigned a higher priority than trailer sound types. The method 500 may generate sounds that may be associated with higher priority sound types without generating sounds that may be associated with lower priority sound types such that driver confusion may be avoided.
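A minimal sketch of such priority gating, where only the highest-priority detected type is rendered; the priority table is an illustrative assumption:

```python
# Illustrative priority table: higher number = higher priority.
SOUND_PRIORITY = {"emergency": 3, "horn": 2, "tractor_trailer": 1}

def sounds_to_render(detected_types):
    """Render only the highest-priority detected sound type so that
    lower-priority cues do not confuse the driver."""
    if not detected_types:
        return []
    top = max(detected_types, key=lambda t: SOUND_PRIORITY.get(t, 0))
    return [top]
```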

The method 500 may generate a sound related to or associated with the detected external sound until the vehicle occupant confirms that the notification of the external sound source has been received. Further, the method 500 may repeatedly generate sounds until the vehicle occupant confirms that a notification of an external sound source has been received. After the sounds associated with the external sounds are generated in the passenger compartment, method 500 proceeds to exit.

Referring now to FIG. 6, a method for monitoring sounds external to a vehicle is shown. Method 600 may be included as part of the method of fig. 5. Further, the method of fig. 6 may be included in one or more systems described herein (e.g., a sound processor of an external sound) as executable instructions stored in non-transitory memory.

At 602, method 600 monitors the output of an external microphone (e.g., 288 of fig. 2) mounted to the vehicle. In one example, the microphone outputs an analog signal that is sampled via an analog-to-digital converter of the sound processor of the external sound. The sampled signal data may be stored to controller volatile memory. In other examples, the microphone may output digital data that is input to the sound processor of the external sound or to an in-vehicle computing system. Method 600 proceeds to 604.

At 604, method 600 may convert data from the microphone from the time domain to the frequency domain. For example, method 600 may apply a Fourier transform to determine the frequencies present in the signal and the magnitude or power level of the signal at different frequencies. The frequencies and power levels may be stored to controller volatile memory for further processing. Method 600 proceeds to 606.
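For illustration, the time-to-frequency conversion can be sketched with a naive discrete Fourier transform over one microphone frame; a production system would use an optimized FFT routine instead:

```python
import cmath

def dft_magnitudes(samples, sample_rate_hz):
    """Naive DFT of a real-valued microphone frame: returns a list of
    (frequency_hz, magnitude) pairs for the positive-frequency bins."""
    n = len(samples)
    out = []
    for k in range(n // 2):
        acc = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                  for t in range(n))
        out.append((k * sample_rate_hz / n, abs(acc)))
    return out
```

Feeding the routine a pure 1 kHz tone sampled at 8 kHz places the peak magnitude in the 1 kHz bin.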

At 606, the method selects and classifies individual sounds by type from the sampled microphone data. Since different sounds may occur at different frequencies, the frequencies in the microphone data may be indicative of the type of sound source that produces the sounds picked up by the external microphone. For example, an emergency siren may occur at higher frequencies, while a diesel engine sound may occur at lower frequencies. Method 600 may compare the frequencies detected in the microphone data to predetermined frequencies stored in controller memory and associated with particular sound sources. If a frequency or frequency range determined from the microphone data matches a frequency or frequency range stored in controller memory, it may be determined that the external sound is being generated via a known or predetermined sound-generating source. For example, if a frequency that sweeps between 500 and 1500 Hz is detected over a predetermined amount of sampled data (e.g., more than 4 seconds), the frequency may indicate a sweeping siren. In contrast, the frequency of the sound from a diesel engine may be relatively constant over a predetermined time, and the sound of the diesel engine may be classified in this manner. In this way, a particular sound may be assigned a type (e.g., emergency, tractor-trailer, child, reversing vehicle, etc.). Method 600 may identify and store more than one sound source at a time in this manner. Method 600 proceeds to 608.
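A sketch of this range-matching classification, assuming per-frame dominant frequencies have already been extracted from the spectrum; the frequency bands and the sweep/constancy thresholds are illustrative assumptions:

```python
# Illustrative frequency signatures: type -> (low_hz, high_hz).
SIGNATURES = {
    "emergency_siren": (500.0, 1500.0),
    "diesel_engine": (20.0, 200.0),
}

def classify(dominant_freqs_hz):
    """Classify a sequence of per-frame dominant frequencies against
    stored ranges: a tone sweeping within the siren band suggests a
    siren, a near-constant low-frequency tone suggests a diesel engine."""
    types = []
    swing = max(dominant_freqs_hz) - min(dominant_freqs_hz)
    lo, hi = SIGNATURES["emergency_siren"]
    if all(lo <= f <= hi for f in dominant_freqs_hz) and swing > 200.0:
        types.append("emergency_siren")   # sweeping tone in the siren band
    lo, hi = SIGNATURES["diesel_engine"]
    if all(lo <= f <= hi for f in dominant_freqs_hz) and swing < 20.0:
        types.append("diesel_engine")     # near-constant low-frequency tone
    return types
```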

At 608, method 600 determines an angle between the current vehicle and the source of the external sound. In one example, method 600 determines a phase change of a sound frequency between two or more microphones to determine the angle between the current vehicle and the source of the external sound. Further, method 600 may determine a distance between the current vehicle and the source of the external sound based on a phase difference between sounds captured via two or more external microphones. Method 600 proceeds to 610.
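The angle estimate from the inter-microphone delay can be sketched with the standard far-field two-microphone model; the 343 m/s speed of sound and the broadside angle convention are assumptions, not details from the disclosure:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def bearing_from_delay(delay_s, mic_spacing_m):
    """Estimate the bearing of a far-field source from the arrival-time
    difference between two microphones a known distance apart.
    delay_s > 0 means the sound reached the right microphone first.
    Returns the angle in degrees from the array's broadside axis."""
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # guard against noise pushing |ratio| > 1
    return math.degrees(math.asin(ratio))
```

Zero delay means the source lies broadside to the array; the maximum delay (spacing divided by the speed of sound) means it lies along the microphone axis.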

At 610, method 600 outputs an angle and distance between the current vehicle and the source of the external sound. The method 600 may also indicate the type of sound source (e.g., emergency vehicle, siren, horn, tractor trailer, etc.) or the type of sound source associated with the source of the external sound. In some cases, the method 600 may output angle, distance, and source type data for a plurality of sounds as determined from the external microphone data to other systems and processors. Method 600 proceeds to exit.

Referring now to FIG. 7, a method for adjusting navigation system output based on external sounds is shown. Method 700 may be included as part of the method of fig. 5. Further, the method of fig. 7 may be included as executable instructions stored in non-transitory memory in one or more systems (e.g., navigation subsystems) described herein.

At 702, method 700 may determine a current geographic location of a current vehicle via a global positioning system and/or a geographic map stored in a controller memory. Additionally, method 700 determines a current travel route based on a requested destination entered via a vehicle occupant or an autonomous driver. The current travel route may be based on a shortest distance, a shortest travel time, or other requirements. The current travel route may be a travel route that is not based on sounds outside the current vehicle.

When the travel route includes an upcoming right turn, method 700 may request that the human driver of the vehicle turn right via an audible verbal command output through front right speaker 315. Similarly, when the travel route includes an upcoming left turn, method 700 may request that the human driver turn left via an audible verbal command output through front left speaker 313. Thus, method 700 maps the request for an upcoming right turn to front right speaker 315 and the request for an upcoming left turn to front left speaker 313. Method 700 proceeds to 704.
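This left/right mapping can be sketched as a simple lookup; the speaker identifiers echo the 313 and 315 references in the text but are otherwise illustrative:

```python
# Illustrative mapping of navigation turn prompts to speakers; the names
# echo the front left (313) and front right (315) speakers in the text.
TURN_SPEAKER = {"left": "front_left_313", "right": "front_right_315"}

def speaker_for_turn(direction):
    """Route a spoken turn prompt to the speaker on the matching side."""
    return TURN_SPEAKER[direction]
```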

At 704, method 700 determines whether there is an exit on the current driving route that is within a predetermined distance of the vehicle (e.g., between the vehicle and a source of external sounds). If so, the answer is yes and method 700 proceeds to 706. Otherwise, the answer is no, and method 700 proceeds to 720.

At 720, method 700 maintains a display of the current driving route on a display in the vehicle. Method 700 proceeds to exit.

At 706, method 700 determines an alternate travel route based on the angle to the externally detected sound source and the requested destination. For example, if the current travel route continues directly in front of the vehicle and the external sound source is determined to be directly in front of the vehicle, the alternate travel route may instruct the vehicle driver to turn right, then left, then right again to direct the driver around the sound source. Method 700 proceeds to 708.

At 708, method 700 determines whether the driver or vehicle occupant accepts the alternate travel route. If so, the answer is yes and method 700 proceeds to 710. Otherwise, the answer is no, and method 700 proceeds to 720.

At 710, method 700 displays the alternate travel route via the in-vehicle display. Method 700 proceeds to exit.

Thus, the method of fig. 5-7 provides a method for generating sound in a vehicle, the method comprising: sound is generated inside the vehicle via one or more speakers according to an angle between a location of the vehicle and a source of sound outside the vehicle. The method includes wherein the generating sound includes generating sound indicative of a source of sound external to the vehicle. The method includes wherein generating the sound includes generating a spoken sound indicative of a direction of the sound and a type of sound associated with a source of the sound external to the vehicle. The method also includes receiving the angle from the sound processor to an in-vehicle computing system and producing sound via the in-vehicle computing system. The method also includes adjusting which of the one or more speakers output the generated sound as an angle between the vehicle and a source of sound external to the vehicle changes. The method includes wherein generating the sound includes applying reverberation to the signal according to a distance between the vehicle and a source of the sound external to the vehicle. The method also includes generating the sound in response to one or more attributes of the sound generated via the source of the sound external to the vehicle. The method also includes adjusting one or more attributes of other sounds produced via the one or more speakers. The method includes wherein the one or more attributes of the other sounds produced via the one or more speakers include a volume of the other sounds produced via the one or more speakers.

The method of fig. 5-7 also provides a method for generating sound in a vehicle, the method comprising: the plurality of speakers are adjusted to produce sound inside the vehicle according to an angle between a location of the vehicle and a source of sound outside the vehicle. The method also includes adjusting a property of the generated sound in response to a property of the sound external to the vehicle sensed within the vehicle. The method includes wherein the attribute of the generated sound is a first volume or sound power level, and wherein the attribute of the sound external to the vehicle is a second volume or power level. The method includes wherein adjusting which of the plurality of speakers produces sound inside the vehicle includes increasing an actual total number of speakers included in the plurality of speakers that produce sound. The method also includes adjusting the sound generated within the vehicle in response to the type of source generating the sound external to the vehicle. The method further comprises the following steps: the method includes mapping a first requested steering direction generated via a navigation system to a first speaker, mapping a second requested steering direction generated via the navigation system to a second speaker, and mapping a notification of an external sound to a mapping including the first speaker, the second speaker, and a plurality of additional speakers.

The description of the embodiments has been presented for purposes of illustration and description. Suitable modifications and variations to the embodiments may be made in light of the above description or may be acquired from practicing the methods. The method may be performed by executing stored instructions with one or more logic devices (e.g., processors) in conjunction with one or more additional hardware elements, such as storage devices, memory, image sensor/lens systems, light sensors, hardware network interfaces/antennas, switches, actuators, clock circuits, etc. The described methods and associated actions may be performed in a variety of orders, in addition to the orders described herein, in parallel, and/or simultaneously. Furthermore, the described method may be repeatedly performed. The described system is exemplary in nature and may include additional elements and/or omit elements. The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various systems and configurations, and other features, functions, and/or properties disclosed.

As used in this application, an element or step recited in the singular and proceeded with the word "a" or "an" should be understood as not excluding plural said elements or steps, unless such exclusion is stated. Furthermore, references to "one embodiment" or "an example" of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features. The terms "first," "second," and "third," etc. are used merely as labels, and are not intended to impose numerical requirements or a particular order of placement on their objects. The following claims particularly point out subject matter from the above disclosure that is regarded as novel and non-obvious.
