Remote inference of sound frequencies for determining head-related transfer functions of a user of a head-mounted device

Document No. 1895407, published 2021-11-26.

This technology, "Remote inference of sound frequencies for determining head-related transfer functions of a user of a head-mounted device," was created by Morteza Khaleghimeybodi and Pablo Francisco Faundez Hoffmann on 2020-04-21. Abstract: A head-mounted device includes a frame (105) and an audio system. The audio system comprises a microphone assembly (120) positioned in a detection region (125) on the frame (105), the detection region being outside an ear of a user wearing the head-mounted device and within a threshold distance from an ear canal of the ear, the microphone assembly (120) being configured to detect an audio signal emanating from an audio source, wherein the audio signal detected at the detection region (125) is within a threshold degree of similarity of a sound pressure wave at the ear canal of the user; and an audio controller configured to determine a set of Head Related Transfer Functions (HRTFs) based in part on the detected audio signals.

1. A head-mounted device, comprising:

a frame; and

an audio system, comprising:

a microphone assembly positioned on the frame in a detection region that is outside an ear of a user wearing the head-mounted device and within a threshold distance from an ear canal of the ear, the microphone assembly configured to detect audio signals emanating from an audio source in a local region, wherein the audio signals detected at the detection region are within a threshold degree of similarity of sound pressure waves at the ear canal of the user; and

an audio controller configured to determine a set of Head Related Transfer Functions (HRTFs) based in part on the detected audio signals.

2. The headset of claim 1, wherein the microphone assembly comprises a plurality of microphones; and preferably at least one of the plurality of microphones is located on the frame at a position other than the detection region.

3. The headset of claim 1 or claim 2, wherein the threshold distance is at most 3 inches.

4. The headset of any one of the preceding claims, wherein the audio source is a speaker that is part of the audio system; and preferably the speaker is located on a frame of the head-mounted device.

5. The headset of any one of the preceding claims, wherein the audio source is a transducer of a cartilage conduction system; and/or wherein the audio source is external to and separate from the head-mounted device and the audio signal describes ambient sound in a local area of the head-mounted device.

6. The headset of any one of the preceding claims, wherein the frequency of the audio signal is less than or equal to 2 kHz.

7. The headset of any one of the preceding claims, wherein the audio controller is configured to:

estimate a direction of arrival (DoA) of the detected sound relative to a position of the head-mounted device within the local area; and

update HRTFs associated with the audio system for frequencies above 2 kHz based on the DoA estimate.

8. A method, comprising:

detecting, via a microphone assembly located within a detection region on a frame of a headset, an audio signal emanating from an audio source in a local region, wherein the detection region is outside an ear of a user wearing the headset and within a threshold distance from an ear canal of the user, and wherein the audio signal detected at the detection region is within a threshold degree of similarity of a sound pressure wave at the ear canal; and

determining, via an audio controller, a set of Head Related Transfer Functions (HRTFs) based in part on the detected audio signal.

9. The method of claim 8, wherein the headset comprises an audio system, and wherein the audio source is a speaker that is part of the audio system.

10. The method of claim 8 or claim 9, wherein the frequency of the audio signal is less than or equal to 2 kHz; and/or wherein the audio source is a transducer of a cartilage conduction system.

11. The method of any of claims 8 to 10, wherein the audio signal describes ambient sound in a local area of the user.

12. The method of any of claims 8 to 11, further comprising:

estimating a direction of arrival (DoA) of the detected sound relative to a position of the headset within the local area; and

updating, based on the DoA estimate, HRTFs associated with the audio system for frequencies above 2 kHz.

13. A non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:

detecting, via a microphone assembly located within a detection region on a frame of a headset, an audio signal emanating from an audio source in a local region, wherein the detection region is outside an ear of a user wearing the headset and within a threshold distance from an ear canal of the user, and wherein the audio signal detected at the detection region is within a threshold degree of similarity of a sound pressure wave at the ear canal; and

determining, via an audio controller, a set of Head Related Transfer Functions (HRTFs) based in part on the detected audio signal.

14. The non-transitory computer-readable medium of claim 13, wherein the frequency of the audio signal is less than or equal to 2 kHz; and/or wherein the microphone assembly comprises a plurality of microphones.

15. The non-transitory computer-readable medium of claim 13 or claim 14, wherein the audio controller is configured to:

estimate a direction of arrival (DoA) of the detected sound relative to a position of the headset within the local area; and

update HRTFs associated with the audio system for frequencies above 2 kHz based on the DoA estimate.

Background

The present disclosure relates generally to determination of Head Related Transfer Functions (HRTFs), and in particular to remote inference of sound frequencies for determining HRTFs for a user of a head mounted device (headset).

The sound perceived at the two ears may be different depending on at least one of the direction of the sound, the position of the sound source relative to each ear, the anatomy of the user's head and/or body, and the surrounding environment of the room in which the sound is perceived. Humans can determine the location of a sound source by comparing the sound perceived by each ear. In one type of "spatial sound" system, a plurality of speakers reproduce directional aspects of sound using HRTFs. HRTFs represent the sound propagation from a sound source in free field to a human ear. HRTFs encode the directional information of sound sources in their interaural time and intensity differences and in their spectral response. HRTFs vary from person to person, and a personalized set of HRTFs enables a user to experience superior spatial sound quality when audio content is delivered to that user.
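For reference, an HRTF is commonly written as a direction- and frequency-dependent ratio of ear pressure to free-field pressure. The notation below is a standard formulation from the literature, not a definition taken from this disclosure:

```latex
H_{L}(f,\theta,\phi) = \frac{P_{L}(f,\theta,\phi)}{P_{\mathrm{ff}}(f)}, \qquad
H_{R}(f,\theta,\phi) = \frac{P_{R}(f,\theta,\phi)}{P_{\mathrm{ff}}(f)}
```

where P_L and P_R are the sound pressures at the left and right ear canal entrances for a source at azimuth theta and elevation phi, and P_ff is the free-field pressure that would exist at the head-center position with the listener absent.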

A calibration system for determining HRTFs may typically include a microphone placed in the ear canal of a user. By measuring the audio signal in the ear canal in response to sound sources in a local area, the HRTF can be determined and customized for the user. However, this is not a comfortable or convenient user experience.

SUMMARY

An audio system is described for remotely inferring low sound frequencies used in determining HRTFs of a wearer of a head-mounted device. The audio system is configured to generate and/or customize a set of HRTFs for a user of the head-mounted device. HRTFs may be used to generate audio content for a user of a head-mounted device. According to some embodiments, the head-mounted device is an artificial reality head-mounted device.

The audio system includes a microphone assembly located in a detection region on a frame of the head-mounted device. The detection region is outside an ear of a user wearing the head-mounted device and within a threshold distance from an ear canal. The microphone assembly is configured to detect an audio signal emanating from an audio source. The audio signal detected at the detection region is within a threshold similarity of the sound pressure wave at the ear canal of the user. Further, the audio system includes an audio controller configured to determine a set of Head Related Transfer Functions (HRTFs) based in part on the detected audio signals.

In some embodiments, an audio system performs a method. The method includes detecting, by a microphone assembly located within a detection region on a headset frame, an audio signal emanating from an audio source. The detection region is outside an ear of a user wearing the head-mounted device and within a threshold distance from an ear canal of the user, and the audio signal detected at the detection region is within a threshold similarity of the sound pressure wave at the ear canal. The method also includes determining, by an audio controller, a set of HRTFs based in part on the detected audio signal.

In some embodiments, it is preferred to provide a head-mounted device comprising: a frame; and an audio system, the audio system comprising: a microphone assembly positioned in a detection region on the frame, the detection region being outside an ear of a user wearing the head-mounted device and within a threshold distance from an ear canal of the ear, the microphone assembly configured to detect audio signals emanating from an audio source in a local region, wherein the audio signals detected at the detection region are within a threshold degree of similarity of a sound pressure wave at the ear canal of the user; and an audio controller configured to determine a set of Head Related Transfer Functions (HRTFs) based in part on the detected audio signals.

The microphone assembly may include a plurality of microphones.

In some embodiments, at least one microphone of the plurality of microphones is located on the frame at a location other than the detection area.

In some embodiments, the threshold distance is at most 3 inches.

In some embodiments, the audio source is a speaker that is part of an audio system.

In some embodiments, the speaker is located on a frame of the head-mounted device.

In some embodiments, the audio source is a transducer of a cartilage conduction system.

The audio source may be external to the head mounted device and may be separate from the head mounted device, the audio signal describing ambient sound of a local area of the head mounted device.

In some embodiments, the frequency of the audio signal is less than or equal to 2 kHz.

In some embodiments, the audio controller is configured to: estimate a direction of arrival (DoA) of the detected sound relative to a position of the head-mounted device within the local area; and update HRTFs associated with the audio system for frequencies above 2 kHz based on the DoA estimate.

A method may be provided, comprising: detecting, via a microphone assembly located within a detection region on a frame of a headset, audio signals emanating from audio sources in a local region, wherein the detection region is outside an ear of a user wearing the headset and within a threshold distance from an ear canal of the user, and wherein the audio signals detected at the detection region are within a threshold similarity of the sound pressure waves at the ear canal; and determining, via an audio controller, a set of Head Related Transfer Functions (HRTFs) based in part on the detected audio signals.

In some embodiments, the head mounted device includes an audio system, wherein the audio source is a speaker that is part of the audio system.

In some embodiments, the frequency of the audio signal is less than or equal to 2 kHz.

The audio source may be a transducer of a cartilage conduction system.

In some embodiments, the audio signal describes ambient sound in a local area of the user.

In some embodiments, the method preferably further comprises: estimating a direction of arrival (DoA) of the detected sound relative to a position of the head-mounted device within the local area; and updating HRTFs associated with the audio system for frequencies above 2 kHz based on the DoA estimate.

In some embodiments, there is preferably provided a non-transitory computer-readable medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: detecting, via a microphone assembly located within a detection region on a headset frame, an audio signal emanating from an audio source in a local region, wherein the detection region is outside an ear of a user wearing the headset and within a threshold distance from an ear canal of the user, and wherein the audio signal detected at the detection region is within a threshold similarity of the sound pressure wave at the ear canal; and determining, via an audio controller, a set of Head Related Transfer Functions (HRTFs) based in part on the detected audio signals.

In some embodiments, the frequency of the audio signal is less than or equal to 2 kHz.

In some embodiments, the microphone assembly includes a plurality of microphones.

In some embodiments, the audio controller is preferably configured to: estimate a direction of arrival (DoA) of the detected sound relative to a position of the head-mounted device within the local area; and update HRTFs associated with the audio system for frequencies above 2 kHz based on the DoA estimate.

Features described herein as suitable for incorporation in one or more embodiments are to be understood as teachings of the disclosure and are therefore suitable for incorporation in any embodiment of the invention.

Brief Description of Drawings

Fig. 1 is an example illustrating a headset including an audio system in accordance with one or more embodiments.

Fig. 2 is an example illustrating a portion of a headset including an acoustic sensor in accordance with one or more embodiments.

Fig. 3 is a block diagram of an audio system in accordance with one or more embodiments.

Fig. 4 is a graph illustrating a similarity ratio of sound pressure at the ear canal entrance to sound pressure in the detection region as a function of direction and frequency in accordance with one or more embodiments.

Fig. 5 is a flow diagram illustrating a process for customizing a set of Head Related Transfer Functions (HRTFs) for a user using a head mounted device, in accordance with one or more embodiments.

Fig. 6 is a system environment of a headset including an audio system in accordance with one or more embodiments.

The figures depict various embodiments for purposes of illustration only. One skilled in the art will readily recognize from the following discussion that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.

Detailed Description

Overview

The human pinna acts like an individualized acoustic filter that shapes the frequency response of an incoming sound according to the direction of the sound. For humans, this function is crucial in 3D sound localization. Therefore, it is important to collect the sound pressure at the entrance of the ear canal, where all localization cues can be accurately captured. However, it is generally undesirable to have a microphone at the entrance of the ear canal due to, for example, industrial design considerations. Various embodiments of an audio system that infers an acoustic pressure at an entrance of an ear canal based on the acoustic pressure detected at a location remote from the entrance of the ear canal are discussed herein. The audio system uses the detected sound pressures to determine a Head Related Transfer Function (HRTF) for a wearer of the head mounted device. The audio system presents audio content to the user using the determined HRTFs.

The audio system detects sound (i.e., sound pressure) and generates one or more Head Related Transfer Functions (HRTFs) for the user. In some embodiments, an audio system includes a microphone assembly including a plurality of acoustic sensors and a controller. Each acoustic sensor is configured to detect sound within a localized area around the microphone assembly. At least some of the plurality of acoustic sensors are coupled to a headset configured to be worn by a user, and at least one acoustic sensor for each ear of the user is located within a detection region on a frame of the headset that is within a threshold distance from an ear canal entrance of the respective ear. One or more audio sources within the local area emit audio signals that are detected by an acoustic sensor on the head-mounted device. For each detection region, a first frequency band (e.g., at or below 2 kHz) of the audio signal detected by the acoustic sensor in the detection region is used to infer, for the first frequency band, the acoustic pressure at the entrance of the ear canal associated with that detection region. The first frequency band typically corresponds to relatively low/medium audio frequencies (e.g., 2 kHz or lower). The audio signal in the first frequency band detected at the detection region is within (e.g., substantially the same as) a threshold similarity to the sound pressure wave of the first frequency band at the entrance of the user's ear canal. This relationship occurs, for example, because low/intermediate frequency sound pressure waves have less directional dependence than higher frequency sound pressure waves. For audio signals outside the first frequency band (e.g., above 2 kHz), the directional dependence increases and the similarity between the audio signal detected at the acoustic sensor and the corresponding pressure wave at the entrance of the ear canal is smaller (i.e., the error increases). The controller may use, for example, a calibration, a template of higher frequency HRTFs, etc., to account for the increased error for frequencies outside the first frequency band. The controller may generate one or more HRTFs using the detected audio signals. The controller may then instruct the speaker assembly to present audio content to the user using the generated HRTFs.
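As a rough sketch of the band split described above (assuming a 2 kHz crossover and hypothetical helper names; the disclosure itself does not prescribe an implementation), the detected signal can be separated so that only the first frequency band is treated as a proxy for the ear-canal pressure:

```python
import numpy as np
from scipy import signal

def split_first_band(x: np.ndarray, fs: int, crossover_hz: float = 2000.0):
    """Split a detected audio signal into a low band (<= crossover) and a
    high band (> crossover). Only the low band is treated as a proxy for
    the sound pressure at the ear canal entrance."""
    sos_lo = signal.butter(4, crossover_hz, btype="low", fs=fs, output="sos")
    sos_hi = signal.butter(4, crossover_hz, btype="high", fs=fs, output="sos")
    low_band = signal.sosfilt(sos_lo, x)
    high_band = signal.sosfilt(sos_hi, x)
    return low_band, high_band

# The low band inferred at the detection region feeds low-frequency HRTF
# estimation directly; the high band requires DoA estimation, templates,
# or additional sensors, as described elsewhere in this disclosure.
```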

Embodiments of the present disclosure may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some way before being presented to a user, and may include, for example, Virtual Reality (VR), Augmented Reality (AR), Mixed Reality (MR), hybrid reality, or some combination and/or derivative thereof. The artificial reality content may include fully generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, and any of them may be presented in a single channel or multiple channels (e.g., stereoscopic video that produces a three-dimensional effect for the viewer). Further, in some embodiments, the artificial reality may also be associated with an application, product, accessory, service, or some combination thereof, that is used, for example, to create content in the artificial reality and/or is otherwise used in the artificial reality (e.g., to perform an activity in the artificial reality). An artificial reality system that provides artificial reality content may be implemented on a variety of platforms, including a head-mounted device connected to a host computer system, a standalone head-mounted device, a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.

Head mounted device equipment configuration

Fig. 1 is an example illustrating a head-mounted device 100 including an audio system in accordance with one or more embodiments. The head mounted device 100 presents media to a user. In one embodiment, the head mounted device 100 may be a Near Eye Display (NED). Examples of media presented by the head mounted device 100 include one or more images, video, audio, or some combination thereof. The head-mounted device 100 may include a frame 105, one or more lenses 110, a sensor apparatus 115, and components of an audio system. Although fig. 1 shows components of the headset 100 in an example location on the headset 100, these components may be located elsewhere on the headset 100, on a peripheral device paired with the headset 100, or some combination of the two locations.

The head mounted device 100 may correct or enhance the vision of the user, protect the eyes of the user, or provide images to the user. The head-mounted device 100 may be eyeglasses to correct visual defects of the user. The head-mounted device 100 may be sunglasses that protect the user's eyes from sunlight. The head-mounted device 100 may be safety glasses that protect the user's eyes from impact. The head mounted device 100 may be a night vision device or infrared goggles to enhance the user's vision at night. The head mounted device 100 may be a near-eye display that generates VR, AR, or MR content for a user. Alternatively, the head mounted device 100 may not include the lens 110 and may be the frame 105 with an audio system that provides audio (e.g., music, radio, podcast) to the user.

The frame 105 includes a front portion that holds the one or more lenses 110 and end pieces that attach to the user. The front of the frame 105 rests on top of the nose of the user. The end pieces (e.g., temples) are portions of the frame 105 that hold the headset 100 in place on the user (e.g., each end piece extends over a respective ear of the user). The length of the end pieces can be adjusted to suit different users. The end pieces may also include portions that curl behind the user's ears (e.g., temple tips, ear pieces).

The one or more lenses 110 provide or transmit light to a user wearing the head-mounted device 100. The lens 110 may be a prescription lens (e.g., single vision, bifocal, and trifocal or progressive lenses) to help correct the user's vision deficiencies. The prescription lens transmits ambient light to the user wearing the head-mounted device 100. The transmitted ambient light may be altered by the prescription lens to correct defects in the user's vision. The one or more lenses 110 may be polarizing or tinted lenses to protect the user's eyes from sunlight. The one or more lenses 110 may be one or more waveguides that are part of a waveguide display, where the image light is coupled to the user's eye through an end or edge of the waveguide. The one or more lenses 110 may include an electronic display for providing image light, and may also include an optics block for magnifying the image light from the electronic display. The one or more lenses 110 are held by the front of the frame 105 of the head mounted device 100.

In some embodiments, the headset 100 may include a Depth Camera Assembly (DCA) that captures data describing depth information for a local area around the headset 100. In one embodiment, the DCA may include a structured light projector, an imaging device, and a controller. The captured data may be images, captured by the imaging device, of structured light projected onto the local area by the structured light projector. In one embodiment, the DCA may include two or more cameras oriented to capture portions of the local area in stereo, and a controller. The captured data may be images of the local area captured in stereo by the two or more cameras. The controller computes depth information for the local area using the captured data. Based on the depth information, the controller determines absolute position information of the head-mounted device 100 within the local area. The DCA may be integrated with the headset 100 or may be located in the local area external to the headset 100. In the latter embodiment, the controller of the DCA may transmit the depth information to the audio system.

The sensor device 115 generates one or more measurement signals in response to the motion of the headset 100. The sensor device 115 may be located on a portion of the frame 105 of the headset 100. The sensor device 115 may include a position sensor, an Inertial Measurement Unit (IMU), or both. Some embodiments of the headset 100 may or may not include a sensor device 115 or may include more than one sensor device 115. In embodiments where the sensor device 115 includes an IMU, the IMU generates IMU data based on measurement signals from the sensor device 115. Examples of the sensor device 115 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor to detect motion, one type of sensor for error correction of the IMU, or some combination thereof. The sensor device 115 may be located external to the IMU, internal to the IMU, or some combination thereof.

Based on the one or more measurement signals, the sensor device 115 estimates a current position of the headset 100 relative to an initial position of the headset 100. The estimated position may include a position of the head mounted device 100 and/or an orientation of the head mounted device 100 or a user wearing the head mounted device 100, or some combination thereof. The orientation may correspond to the position of each ear relative to a reference point. In some embodiments, the sensor device 115 uses the depth information and/or absolute position information from the DCA to estimate the current position of the headset 100. The sensor device 115 may include multiple accelerometers to measure translational motion (forward/backward, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, roll). In some embodiments, the IMU quickly samples the measurement signals and calculates an estimated position of the headset 100 from the sampled data. For example, the IMU integrates the measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated position of a reference point on the headset 100. Alternatively, the IMU provides the sampled measurement signals to the console, which determines IMU data. The reference point is a point that may be used to describe the position of the headset 100. While the reference point may be defined generally as a point in space, in practice the reference point is defined as a point within the head mounted device 100.
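The double integration described above can be illustrated with a short numerical sketch. This is a simplified, drift-prone dead-reckoning example under stated assumptions (accelerations already rotated into a common frame), not the IMU's actual implementation:

```python
import numpy as np

def dead_reckon(accel: np.ndarray, dt: float,
                v0: np.ndarray = None, p0: np.ndarray = None):
    """Integrate accelerometer samples (N x 3) once to velocity and again
    to position, illustrating the IMU position estimate described above."""
    v0 = np.zeros(3) if v0 is None else v0
    p0 = np.zeros(3) if p0 is None else p0
    velocity = v0 + np.cumsum(accel * dt, axis=0)   # first integration
    position = p0 + np.cumsum(velocity * dt, axis=0)  # second integration
    return velocity, position
```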

The audio system detects sound and generates one or more HRTFs for the user. HRTFs characterize how a user receives sound from a point in space. One or more HRTFs may be associated with a user wearing the head mounted device 100. The audio system of the head mounted device 100 includes a microphone assembly, a speaker assembly, and a controller 135. Additional details regarding the audio system are discussed with respect to fig. 3.

The microphone assembly detects sound in a local area around the microphone assembly. The microphone assembly includes a plurality of acoustic sensors 120. The acoustic sensor 120 is a sensor that detects a change in air pressure due to an acoustic wave. Each acoustic sensor 120 is configured to detect sound and convert the detected sound into an electronic format (analog or digital). The acoustic sensor 120 may be an acoustic wave sensor, a microphone, a sound transducer, or similar sensor suitable for detecting sound. The microphone assembly includes at least two acoustic sensors 120, each of which is located within a respective detection zone 125 on the frame 105. Each detection region 125 is within a threshold distance from a respective entrance of the user's ear canal. As shown, the detection regions 125 are on the frame 105, but in other embodiments they include regions that are not on the frame 105. Although only two acoustic sensors 120 are shown, in other embodiments, the microphone array includes additional acoustic sensors. Additional acoustic sensors may be used to provide better direction of arrival (DoA) estimation for the audio signal. Further, the location of each additional acoustic sensor of the microphone assembly may vary. Additional acoustic sensors may be located within one or both detection zones 125, elsewhere on frame 105, or some combination thereof. For example, additional acoustic sensors may be positioned along the length of the temple, across the bridge, above or below the lens 110, or some combination thereof. The acoustic sensors of the microphone array may be oriented such that the microphone assembly is capable of detecting sound in a wide range of directions around the user wearing the head mounted device 100.

The microphone assembly detects sound in a local area around the microphone assembly. The local area is the environment surrounding the head-mounted device 100. For example, the local area may be a room in which the user wearing the head-mounted device 100 is located, or the user wearing the head-mounted device 100 may be outside and the local area is the outdoor area in which the microphone assembly is able to detect sound. The detected sound may be an uncontrolled sound or a controlled sound. Uncontrolled sound is sound that is not controlled by the audio system and occurs in the local area. An example of uncontrolled sound may be naturally occurring ambient noise. In such a configuration, the audio system may calibrate the headset 100 using uncontrolled sounds detected by the audio system. Controlled sound is sound that is controlled by the audio system. An example of controlled sound may be one or more signals output by an external system such as a speaker, a speaker assembly, a calibration system, or some combination thereof. While the headset 100 may be calibrated using uncontrolled sound, in some embodiments, an external system may be used to calibrate the headset 100 during the calibration process. Each detected sound (uncontrolled and controlled) may be associated with a frequency, an amplitude, a duration, or some combination thereof.

The detected audio signal may generally be divided into a first frequency band and one or more high frequency bands. The first frequency band generally corresponds to relatively low and possibly moderate acoustic frequencies. For example, the first frequency band may be 0-2kHz and the one or more high frequency bands cover frequencies in excess of 2 kHz. For each detection region 125, a first frequency band of the audio signal detected by the acoustic sensor 120 in the detection region 125 is used to infer the sound pressure at the respective entrance of the ear canal for the first frequency band. The audio signal in the first frequency band detected at the detection region is within a threshold similarity to the sound pressure wave of the first frequency band at the entrance of the user's ear canal. The threshold similarity may be such that they are substantially the same pressure waveform over the first frequency band (e.g., less than 1dB difference, and/or within a Just Noticeable Difference (JND) threshold if perception is considered). This relationship occurs, for example, because low/mid frequency sound pressure waves have less directional dependence than higher frequency sound pressure waves.

The controller 135 processes information from the microphone assembly describing sounds detected by the microphone assembly. The information associated with each detected sound may include the frequency, amplitude, and/or duration of the detected sound. For each detected sound, the controller 135 performs a DoA estimation. The DoA estimate is an estimated direction of arrival of the detected sound at the acoustic sensors 120 of the microphone assembly. If sound is detected by at least two acoustic sensors of the microphone assembly, the controller 135 may use the known positional relationships of the acoustic sensors and the DoA estimate from each acoustic sensor to estimate, for example by triangulation, the source location of the detected sound. The accuracy of the source location estimation may increase as the number of acoustic sensors that detect sound increases and/or as the distance between acoustic sensors that detect sound increases.
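One conventional way to estimate a direction of arrival from a pair of acoustic sensors with known spacing is the cross-correlation lag (time difference of arrival) between the two signals. The sketch below is illustrative only and is not necessarily the method used by the controller 135; it assumes a far-field source and a free-field propagation model:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value in air at room temperature

def doa_from_pair(x1: np.ndarray, x2: np.ndarray, fs: int,
                  mic_spacing_m: float) -> float:
    """Estimate a bearing (degrees from broadside) from the
    cross-correlation lag between two equal-length microphone signals."""
    corr = np.correlate(x1, x2, mode="full")
    lag = np.argmax(corr) - (len(x2) - 1)          # lag in samples
    tdoa = lag / fs                                # lag in seconds
    sin_theta = np.clip(SPEED_OF_SOUND * tdoa / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```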

In some embodiments, the controller 135 populates the audio data set with information. The information may include detected sounds and parameters associated with each detected sound. Example parameters may include frequency, amplitude, duration, DoA estimation, source location, or some combination thereof. Each audio data set may correspond to a different source position relative to the headset and include one or more sounds having that source position. The audio data set may be associated with one or more HRTFs for the source location. One or more HRTFs may be stored in a data set. In alternative embodiments, each audio data set may correspond to several source locations relative to the headset 100 and include one or more sounds for each source location. For example, source locations that are relatively close to each other may be grouped together. When the microphone assembly detects sound, the controller 135 may populate the audio data set with information. When performing DoA estimation or determining a source location for each detected sound, the controller 135 may further populate an audio data set for each detected sound.
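The audio data set described above can be pictured as a simple record structure. The field names below are hypothetical and are shown only to make the grouping of detected sounds by source position concrete:

```python
import numpy as np
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class DetectedSound:
    """Parameters the controller may store for each detected sound."""
    samples: np.ndarray
    frequency_hz: float
    amplitude_db: float
    duration_s: float
    doa_deg: Optional[Tuple[float, float]] = None          # (azimuth, elevation)
    source_location: Optional[Tuple[float, float, float]] = None

@dataclass
class AudioDataSet:
    """Sounds grouped by source position, optionally with associated HRTFs."""
    source_position: Tuple[float, float, float]
    sounds: List[DetectedSound] = field(default_factory=list)
    hrtfs: List[np.ndarray] = field(default_factory=list)
```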

In some embodiments, controller 135 selects the detected sound on which to perform the DoA estimation. The controller 135 may select the detected sound based on parameters associated with each detected sound stored in the audio data set. The controller 135 may evaluate the stored parameters associated with each detected sound and determine whether one or more of the stored parameters satisfy the corresponding parameter condition. For example, a parameter condition may be satisfied if the parameter is above or below a threshold or within a target range. The controller 135 performs DoA estimation on the detected sound if the parameter condition is satisfied. For example, the controller 135 may perform DoA estimation on detected sounds having frequencies within a frequency range, amplitudes above a threshold amplitude, durations below a threshold duration, other similar variations, or some combination thereof. The parameter conditions may be set by a user of the audio system based on historical data, based on an analysis of information in the audio data set (e.g., evaluating the collected parameter information and setting an average), or some combination thereof. The controller 135 may create elements in the audio set to store the DoA estimate and/or source location of the detected sound. In some embodiments, the controller 135 may update the elements in the audio set if the data already exists.
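Selecting detected sounds whose stored parameters satisfy the parameter conditions might look like the following filter over the DetectedSound records sketched above; the threshold values are illustrative assumptions, not values from the disclosure:

```python
def select_for_doa(sounds, freq_range=(100.0, 8000.0),
                   min_amplitude_db=30.0, max_duration_s=2.0):
    """Return only the detected sounds whose stored parameters satisfy the
    (illustrative) parameter conditions, so DoA estimation runs on them."""
    return [s for s in sounds
            if freq_range[0] <= s.frequency_hz <= freq_range[1]
            and s.amplitude_db >= min_amplitude_db
            and s.duration_s <= max_duration_s]
```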

In some embodiments, the controller 135 may receive the position information of the headset 100 from a system external to the headset 100. The position information includes the position of the head-mounted device 100 and the orientation of the head-mounted device 100 or the head of the user wearing the head-mounted device 100. The location information may be defined relative to a reference point. The position information may be used to generate and/or customize HRTFs for the user, including determining the relative position of sound sources in the local area. Examples of external systems include an imaging component, a console (e.g., as described in fig. 6), a simultaneous localization and mapping (SLAM) system, a depth camera component, a structured light system, or other suitable systems. In some embodiments, the head-mounted device 100 may include sensors that may be used for SLAM calculations, which may be performed in whole or in part by the controller 135. The controller 135 may receive location information from the system continuously or at random or designated intervals. In other embodiments, the controller 135 receives the position information of the head-mounted device 100 using a system coupled to the head-mounted device 100. For example, a depth camera component coupled to the head-mounted device 100 may be used to provide position information to the controller 135.

Based on the parameters of the detected sound, the controller 135 generates one or more HRTFs associated with the audio system. An HRTF characterizes how the ear receives sound from a point in space. Because human anatomy (e.g., ear shape, shoulders, etc.) affects sound as it propagates to the ear, each ear of a person (and each person) has a unique HRTF for a particular source location relative to the person. For example, in fig. 1, controller 135 generates at least one HRTF for each ear. The HRTFs include HRTFs generated using portions of the audio signal in a first frequency band, the HRTFs corresponding to frequencies in the first frequency band. Higher frequency HRTFs may be generated using a plurality of acoustic sensors (which may include acoustic sensor 120) that provide directional information, using acoustic sensors placed in the ear canal of the user, using acoustic sensors placed elsewhere on the frame than acoustic sensor 120, using a template of higher frequency HRTFs, or some combination thereof. In this manner, controller 135 generates and/or updates a customized set of HRTFs for the user. Controller 135 uses the set of customized HRTFs to present audio content to the user. For example, a customized HRTF can be used to create audio content that includes sound that appears to come from a particular point in space. In some embodiments, controller 135 may update one or more pre-existing HRTFs based on a DoA estimate of each detected sound. As the position of the head-mounted device 100 within the local area changes, the controller 135 may generate one or more new HRTFs or update one or more pre-existing HRTFs accordingly.

Fig. 2 is an example illustrating a portion of a headset including an acoustic sensor in accordance with one or more embodiments. The headset 200 may be an embodiment of the headset 100. The head-mounted device 200 includes an acoustic sensor 210, which may be an embodiment of the acoustic sensor 120. According to some embodiments, the acoustic sensors 210 are microphones, each microphone being located at a detection region 230 on a portion of the frame 220 of the head mounted device 200, and the detection regions 230 are embodiments of the detection region 125. Although only one ear 240 is shown in fig. 2, according to some embodiments, the portion of the head mounted device 200 corresponding to the other ear 240 of the user also includes the same configuration shown in fig. 2. The head-mounted device 200 may have a different acoustic sensor configuration than that shown in fig. 2. For example, in some embodiments, there are a greater number of acoustic sensors 210 located in the detection region 230. As shown in fig. 2, a portion of the frame 220 of the head mounted device 200 is positioned behind the pinna of each ear 240 to secure the head mounted device 200 to the user.

The acoustic sensor 210 is located in the detection region 230 outside the entrance 250 to the user's ear canal. A first frequency band of the audio signal (e.g., at or below 2 kHz) detected by the acoustic sensor 210 in the detection region is used to infer the sound pressure wave at the ear canal entrance 250. The audio signal in the first frequency band detected at the detection region 230 is within a threshold degree of similarity (e.g., substantially the same) to the sound pressure wave of the first frequency band at the ear canal entrance 250. This relationship occurs, for example, because low/intermediate frequency sound pressure waves have less directional dependence than high frequency sound pressure waves. For audio signals outside the first frequency band (e.g., above 2 kHz), the directional dependence increases and the similarity between the audio signal detected at the acoustic sensor and the corresponding pressure wave at the entrance of the ear canal is smaller (i.e., the error increases). For simplicity, the detection region 230 is shown on the frame 220; however, the detection region 230 may extend to regions that are not on the frame 220 but are within the threshold distance (e.g., closer to the ear canal entrance 250). In some embodiments, the detection region 230 is located within a threshold distance from the front of the helix of the ear 240.

As described above, the threshold distance (e.g., 3 inches or less) may be a distance that the low frequency audio signals measured within the detection zone are within a threshold degree of similarity of the low frequency sound pressure waves at the ear canal entrance 250. This threshold similarity enables low frequency pressure waves at the ear canal entrance 250 to be inferred without the need for placing a microphone in the ear canal of the user. The threshold similarity may be such that they are substantially the same pressure waveform over the first frequency band (e.g., less than a 1dB difference, and/or within a JND threshold).
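A simple way to check the stated similarity criterion, assuming an ear-canal reference signal is available (for example during offline validation) and assuming the 1 dB threshold over the first frequency band, is to compare magnitude spectra:

```python
import numpy as np

def within_threshold_similarity(p_detect: np.ndarray, p_canal: np.ndarray,
                                fs: int, band_hz=(0.0, 2000.0),
                                max_diff_db: float = 1.0) -> bool:
    """Compare magnitude spectra of the detection-region signal and an
    ear-canal reference over the first frequency band; return True if the
    largest level difference stays below the (assumed) 1 dB threshold."""
    n = min(len(p_detect), len(p_canal))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    mag_detect = np.abs(np.fft.rfft(p_detect[:n]))[mask]
    mag_canal = np.abs(np.fft.rfft(p_canal[:n]))[mask]
    diff_db = 20.0 * np.log10((mag_detect + 1e-12) / (mag_canal + 1e-12))
    return bool(np.max(np.abs(diff_db)) < max_diff_db)
```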

The portion of the audio signal in the first frequency band may be used to accurately and remotely infer the acoustic pressure at the ear canal entrance 250. The inferred sound pressure wave at the user's ear canal entrance 250 is used to generate and/or customize a unique HRTF for each ear of the user for frequencies in the first frequency band.

The configuration of the acoustic sensors 210 of the microphone assembly may vary. Although the head-mounted device 200 is shown in fig. 2 as having one acoustic sensor 210 for each ear 240 of the user, the number of acoustic sensors 210 may be increased. Increasing the number of acoustic sensors 210 may increase the amount of audio information collected as well as the sensitivity and/or accuracy of the audio information. For example, increasing the number of acoustic sensors 210 in the detection region 230 may improve calibration, including generating and/or customizing HRTFs for the user based on an inference of sound pressure waves within the first frequency band at the ear canal entrance 250. According to some embodiments, additional acoustic sensors 210 located on the frame 220 outside the detection region 230 are used to generate and/or customize higher frequency HRTFs for the user. In a further embodiment, the additional acoustic sensors 210 are part of an acoustic sensor array for performing DoA estimation to generate and/or customize higher frequency HRTFs.

In other embodiments, the portion of the audio signal detected by the acoustic sensor 210 may also be used to collect information for frequencies above the first frequency band. For example, the frequencies above the first frequency band may be above 2 kHz. As described above, for frequencies above the first frequency band, the directional dependence increases and the similarity between the audio signal detected at the acoustic sensor 210 and the corresponding pressure wave at the ear canal entrance 250 is smaller (i.e. the error increases). In some embodiments, the increase in error may be offset by using data from additional acoustic sensors. Additional acoustic sensors may be placed anywhere on the frame 220 and, in some embodiments, also within the detection region 230. The greater number of acoustic sensors allows for increased accuracy of the DOA analysis, which may help to counteract the directional dependence associated with higher frequencies.

Overview of Audio System

Fig. 3 is a block diagram of an audio system 300 in accordance with one or more embodiments. The audio system of fig. 1 may be an embodiment of the audio system 300. The audio system 300 detects sounds to produce one or more HRTFs for a user. The audio system 300 may then use one or more HRTFs to generate audio content for the user. In the embodiment of fig. 3, audio system 300 includes a microphone assembly 310, a controller 320, and a speaker assembly 330. Some embodiments of audio system 300 have different components than those described herein. Similarly, in some cases, functionality may be distributed among components in a manner different than that described herein. For example, some or all of the controller 320 may be located on a server or console remote from the headset.

The microphone assembly 310 detects sound in a local area around the microphone assembly 310. Microphone assembly 310 may include a plurality of acoustic sensors that each detect changes in air pressure of sound waves and convert the detected sound into an electronic format (analog or digital). The plurality of acoustic sensors includes at least one acoustic sensor in each detection area associated with each ear of the user. The plurality of acoustic sensors may include an embodiment of acoustic sensor 120. The plurality of acoustic sensors may be located on a head-mounted device (e.g., head-mounted device 100), on a user, or some combination thereof. As described above, the detected sound may be an uncontrolled sound or a controlled sound. Each detected sound may be associated with audio information, such as a frequency, amplitude, duration, or some combination thereof.

The speaker assembly 330 plays audio content according to instructions from the controller 320. Speaker assembly 330 may include an embodiment of the speaker 130 shown in fig. 1. A speaker may be, for example, a moving coil transducer, a piezoelectric transducer, some other device that generates acoustic pressure waves using an electrical signal, or some combination thereof. In some embodiments, the speaker assembly 330 also includes speakers that cover each ear (e.g., headphones, earbuds, etc.). In other embodiments, the speaker assembly 330 does not include any speakers that would obscure the user's ears. In some embodiments, the speaker assembly 330 includes a speaker that transmits audio content to the user using a conduction method other than air conduction (e.g., bone conduction, cartilage conduction, or tragus conduction). Additional details regarding audio sources using conduction methods other than air conduction can be found in U.S. patent application nos. 15/680,836, 15/702,680, and 15/967,924, all of which are hereby incorporated by reference in their entirety.

The controller 320 controls the components of the audio system 300. The controller 320 processes the information from the microphone assembly 310 to determine a set of HRTFs that are customized for the user. The controller 320 can instruct the speaker assembly 330 to render the audio content using the set of HRTFs. Controller 320 may be an embodiment of controller 135. In the embodiment of fig. 3, controller 320 includes HRTF customization module 340, calibration module 345, data storage 350, and audio content engine 360. However, in other embodiments, the controller 320 may include different and/or additional components. Similarly, in some cases, functionality may be distributed among components in a manner different than that described herein. For example, some or all of the functions of the controller 320 may be performed by a console (e.g., as shown in fig. 6).

The data storage 350 stores data generated and/or used by the controller 320. According to some embodiments, the data may include audio signals detected by the microphone component 310, audio content to be played by the speaker component 330, HRTFs generated and/or customized by the HRTF customization module 340, other data related to the audio system 300, or some combination thereof. The data store 350 may include a data storage device. In some embodiments, the data storage device may be coupled to a frame of the headset. In other embodiments, the data storage device is external to the headset. In some embodiments, the data store 350 is part of a remote database that the controller 320 accesses via network communication.

According to some embodiments, the HRTF customization module 340 performs DoA estimation on detected sounds at frequencies above a first frequency band (e.g., above 2 kHz). The DoA estimate is an estimated direction of arrival of detected sound at the acoustic sensors of the microphone assembly 310. If sound is detected by at least two acoustic sensors of the microphone assembly, the controller 320 may estimate the source location of the detected sound using the positional relationship of the acoustic sensors and the DoA estimate from each acoustic sensor, for example, by triangulation. The DoA estimate for each detected sound may be represented as a vector between the estimated source location of the detected sound and the location of the microphone assembly 310 within the local region. The estimated source location may be a relative location of the source location in the local region with respect to the location of the microphone assembly 310. Additional details of DoA estimation may be found, for example, in U.S. patent application No. 16/015,879, which is hereby incorporated by reference in its entirety.
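Triangulating a source location from DoA estimates at two sensor positions can be sketched in two dimensions as the intersection of two bearing rays. This is an illustrative geometric example, not the controller's actual solver:

```python
import numpy as np

def triangulate_2d(p1, theta1_deg, p2, theta2_deg):
    """Intersect two DoA rays (2-D, shared frame) to estimate a source
    position. p1, p2 are sensor positions; theta is the estimated bearing."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d1 = np.array([np.cos(np.radians(theta1_deg)), np.sin(np.radians(theta1_deg))])
    d2 = np.array([np.cos(np.radians(theta2_deg)), np.sin(np.radians(theta2_deg))])
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1, t2 in the least-squares sense.
    A = np.column_stack((d1, -d2))
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return p1 + t[0] * d1
```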

The location of the microphone assembly 310 may be determined by one or more sensors on the headset with the microphone assembly 310. In some embodiments, if the absolute position of the microphone assembly 310 in the local area is known, the controller 320 may determine the absolute position of the source location. The location of the microphone assembly 310 may be received from an external system (e.g., an imaging component, an AR or VR console, a SLAM system, a depth camera component, a structured light system, etc.). The external system may create a virtual model of the local area, in which the local area and the location of the microphone assembly 310 are mapped. The received position information may include a position and/or orientation of the microphone assembly in the mapped local area. The controller 320 may update the mapping of the local area with the determined source location of the detected sound. The controller 320 may receive location information from the external system continuously or at random or designated intervals. In some embodiments, controller 320 selects the detected sound on which to perform the DoA estimation.

The HRTF customization module 340 generates and/or customizes one or more HRTFs. An HRTF characterizes how a person's ear receives sound from a point in space. Because human anatomy (e.g., ear shape, shoulders, etc.) affects sound as it travels to a person's ears, each ear of a person (and each person) has a unique HRTF for a particular source location relative to the person. The HRTF customization module 340 may generate and/or update an HRTF associated with frequencies in the first frequency band using portions of an audio signal detected by an acoustic sensor in a detection region. The HRTF customization module 340 may generate and/or update an HRTF associated with frequencies above the first frequency band using an audio signal captured by the microphone assembly, a template associated with HRTFs for frequencies above the first frequency band, or some combination thereof. In some embodiments, HRTF customization module 340 generates and/or updates an HRTF associated with frequencies above the first frequency band using audio signals captured by microphones of microphone assembly 310 that are located at a different location than the acoustic sensor 210 shown in fig. 2. In some embodiments, HRTF customization module 340 uses machine learning techniques to generate and/or customize personalized HRTFs for a user. For example, a machine learning model may be trained to determine the direction of a sound source based on the audio signals detected by the microphone assembly 310. In other embodiments, the machine learning model is trained to determine the direction of a sound source producing sounds at frequencies above the first frequency band based on the audio signals detected by the microphone assembly 310. In some embodiments, the machine learning model is trained with audio signals captured by the microphone assembly 310 and training HRTFs generated by measuring the audio signals with a microphone placed in the ear canal of the user.
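One plausible way to stitch together the low-band response inferred at the detection region with a template (or DoA-informed) high-band response is a simple crossover, sketched below. The disclosure does not prescribe this particular combination; it is shown only to make the two-band assembly concrete:

```python
import numpy as np

def combine_hrtf(measured_low: np.ndarray, template_high: np.ndarray,
                 freqs_hz: np.ndarray, crossover_hz: float = 2000.0) -> np.ndarray:
    """Assemble a per-ear HRTF magnitude response: below the crossover, use
    the response inferred from the detection-region measurement; above it,
    fall back to a template (or DoA-informed) response."""
    return np.where(freqs_hz <= crossover_hz, measured_low, template_high)
```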

The machine learning model may include any number of machine learning algorithms. Machine learning models that may be used include linear and/or logistic regression, classification and regression trees, k-means clustering, vector quantization, and the like. In some embodiments, the machine learning model includes a deterministic method that has been trained with reinforcement learning (thereby creating a reinforcement learning model).

HRTF customization module 340 may generate multiple HRTFs for a single person, where each HRTF may be associated with a different source location, a different location of the person wearing microphone assembly 310, or some combination thereof. As an example, the HRTF customization module 340 may generate two HRTFs for a user at a particular position and orientation in a local region of the user's head relative to a single source position. If the user turns his or her head in a different direction, the HRTF customization module 340 may generate two new HRTFs for the user at a particular location and new orientation, or the HRTF customization module 340 may update two pre-existing HRTFs. Thus, HRTF customization module 340 generates HRTFs for different source locations, different locations of microphone component 310 in a local region, or some combination thereof.

The calibration module 345 calibrates the audio system 300 for generating (and/or updating) the customized HRTFs. The calibration step may include instructing the speaker assembly 330 and/or an external speaker to produce controlled sounds with predetermined timing that occur at different orientations relative to the microphone assembly 310. The calibration module 345 may instruct the microphone assembly 310 to detect audio signals emitted by the speaker assembly 330 and/or external speakers, uncontrolled audio signals emitted by audio sources in the local area, or some combination thereof. The audio signals may have particular frequencies and be emitted by audio sources that are in different relative positions with respect to the microphone assembly 310. In some embodiments, one or more template HRTFs are customized based on audio signals detected by the microphone assembly 310 during the calibration process.

In some embodiments, the calibration module 345 calibrates the audio system 300 in response to a level of coherence between an audio signal emitted by the speaker assembly 330 and a measured audio signal detected by the microphone assembly 310 being above a threshold level of coherence. For a transmitted audio signal having a frequency within the first frequency band, the calibration module 345 calibrates the audio system 300 for the first frequency band in response to the degree of coherence between the transmitted audio signal and the corresponding measured audio signal being above a threshold degree of coherence. For a transmitted audio signal having a higher frequency, the calibration module 345 calibrates the audio system 300 in response to the degree of coherence between the transmitted audio signal and the measured audio signal being above the threshold degree of coherence. In this case, the calibration module 345 calibrates only the transfer function between the speaker assembly 330 and the microphone assembly 310.
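The coherence gate described above can be sketched with a standard magnitude-squared coherence estimate. The 0.9 threshold and the averaging over the band are assumptions for illustration, not values taken from the disclosure:

```python
import numpy as np
from scipy import signal

def band_coherence_ok(emitted: np.ndarray, measured: np.ndarray, fs: int,
                      band_hz=(0.0, 2000.0), threshold: float = 0.9) -> bool:
    """Compute magnitude-squared coherence between the emitted and measured
    signals and check that its mean over the band of interest exceeds an
    (assumed) threshold before calibration data are accepted."""
    freqs, coh = signal.coherence(emitted, measured, fs=fs, nperseg=1024)
    mask = (freqs >= band_hz[0]) & (freqs <= band_hz[1])
    return bool(np.mean(coh[mask]) >= threshold)
```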

The audio content engine 360 generates an audio characterization configuration using the customized HRTFs. The audio characterization configuration is a function used by the audio system 300 to synthesize binaural sound that appears to come from a particular point in space. The audio content engine 360 may, for example, fit an interpolation function to the HRTFs (e.g., a set of spherical harmonics) such that any given direction in space is mapped to an HRTF. Alternatively, the audio content engine 360 may generate a look-up table that maps different directions in space to the closest HRTF. The speaker assembly 330 can present audio content (e.g., surround sound) using the audio characterization configuration. In some embodiments, the audio content engine 360 instructs the speaker assembly 330 to present the audio content according to the audio characterization configuration.
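A minimal version of the look-up-table approach, mapping a requested direction to the closest stored HRTF (represented here as head-related impulse responses) and convolving to synthesize binaural audio, might look like the following. The data layout and function names are hypothetical:

```python
import numpy as np

def nearest_hrtf(direction_deg, hrtf_table):
    """hrtf_table: list of ((azimuth, elevation), (hrir_left, hrir_right)).
    Return the HRIR pair whose direction is closest to the requested one
    (simple squared-angle distance; ignores azimuth wrap-around)."""
    az, el = direction_deg
    entry = min(hrtf_table,
                key=lambda item: (item[0][0] - az) ** 2 + (item[0][1] - el) ** 2)
    return entry[1]

def render_binaural(mono: np.ndarray, direction_deg, hrtf_table):
    """Convolve a mono signal with the left/right HRIRs for the requested
    direction to synthesize a two-channel binaural signal."""
    hrir_l, hrir_r = nearest_hrtf(direction_deg, hrtf_table)
    left = np.convolve(mono, hrir_l)
    right = np.convolve(mono, hrir_r)
    return np.stack([left, right])
```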

Example data

Fig. 4 is a graph 400 illustrating a similarity ratio of sound pressure at an ear canal entrance to sound pressure in a detection region as a function of direction and frequency in accordance with one or more embodiments. The graph includes a curve 410, a curve 420, a curve 430, and a curve 440. Curve 410 corresponds to an audio source at a location corresponding to 0° azimuth and 0° elevation (in front of the user) in a spherical coordinate system. Curve 420 corresponds to an audio source at a position corresponding to 45° azimuth and 0° elevation. Curve 430 corresponds to an audio source at a location corresponding to 90° azimuth and 45° elevation. Curve 440 corresponds to an audio source at a location corresponding to 180° azimuth and 0° elevation. The horizontal axis is frequency (Hz) and the vertical axis is the ratio in decibels (dB). Thus, if the sound pressure at the ear canal entrance is substantially the same as the sound pressure in the detection region, the ratio of the two values is about 1, resulting in a value of 0 dB (the logarithm of 1 is zero).
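In equation form, the quantity plotted in fig. 4 can be expressed as a level ratio. This restates the description above; the notation itself is not taken from the disclosure:

```latex
R(f,\theta,\phi) = 20\,\log_{10}
\frac{\lvert P_{\mathrm{canal}}(f,\theta,\phi) \rvert}
     {\lvert P_{\mathrm{detect}}(f,\theta,\phi) \rvert}\ \ [\mathrm{dB}]
```

so that R is approximately 0 dB wherever the two pressures are substantially the same, as observed below about 2 kHz for all plotted directions.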

As shown in fig. 4, for different locations of an audio source, the audio signal detected by the acoustic sensor is within a threshold degree of similarity to the audio signal detected in the ear canal for frequencies in a first frequency band (e.g., 0-2 kHz). Thus, the portion of the audio signal in the first frequency band detected by the acoustic sensor in the detection area may be used to infer the sound pressure wave at the entrance of the ear canal of the user. Since the wavelength of the audio signal in the first frequency band is large, the portion of the audio signal in the first frequency band measured at the detection region is not significantly affected by small features of the helix and/or other parts of the ear anatomy. Thus, the portion of the audio signal in the first frequency band measured at the detection area is within a threshold similarity to the sound pressure wave in the ear canal of the user.
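A minimal sketch of how the first-band portion of the detected signal might be isolated before serving as a proxy for the ear-canal pressure, assuming a 2 kHz low-pass cutoff and a scipy-based filter; the filter order and the 48 kHz sampling rate are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def first_band_portion(detected, fs, cutoff_hz=2000.0, order=4):
        # Keep only the 0-2 kHz portion of the detection-region signal.
        sos = butter(order, cutoff_hz, btype="low", fs=fs, output="sos")
        return sosfiltfilt(sos, detected)

    fs = 48_000
    t = np.arange(fs) / fs
    detected = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 6000 * t)
    ear_canal_estimate = first_band_portion(detected, fs)  # ~500 Hz component survives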

As shown in fig. 4, for frequencies above about 2 kHz, curves 410, 420, 430, and 440 begin to diverge substantially from one another (recall that each curve is associated with a different direction). The divergence of the curves is due to the increase in directional dependence with increasing frequency.

Head Related Transfer Function (HRTF) personalization

Fig. 5 is a flow diagram illustrating a process for customizing a set of Head Related Transfer Functions (HRTFs) for a user using a head mounted device, in accordance with one or more embodiments. In one embodiment, the process of fig. 5 is performed by components of audio system 300. In other embodiments, other entities (e.g., consoles) may perform some or all of the steps of the process. Likewise, embodiments may include different and/or additional steps, or perform the steps in a different order.

The audio system 300 detects 510 an audio signal emanating from an audio source. The audio system 300 detects audio signals using a microphone assembly located within a detection area on the headset frame. The detection region is outside an ear of a user wearing the head-mounted device and within a threshold distance from an ear canal of the user. Some or all of the detected signals are within a first frequency band (e.g., 0-2 kHz). The portion of the audio signal within the first frequency band detected at the detection region is within a threshold similarity to a sound pressure wave at the ear canal at the same frequency band.

The audio system 300 determines 520 a set of HRTFs based in part on the detected audio signals. The set of HRTFs may be determined using a controller. At least some of the HRTFs are determined using an inferred sound pressure at the entrance to the ear canal for a first frequency band. For HRTFs associated with frequencies above the first frequency band, the audio system 300 may use, for example, template HRTFs, audio signals captured by additional acoustic sensors of a microphone array, sound field decomposition, machine learning, and so forth. In some embodiments, audio system 300 uses DoA estimation to determine HRTFs associated with frequencies above the first frequency band.
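One plausible way to combine the two sources of information listed above is sketched below, assuming the measured low-band response and a template HRTF are available on a common frequency grid; the hard crossover at 2 kHz, the function name, and the placeholder spectra are illustrative assumptions, not the disclosure's method.

    import numpy as np

    def combine_hrtf(measured_low_band, template_hrtf, freqs, crossover_hz=2000.0):
        # Complex spectra on a common grid `freqs`: use the measured response
        # below the crossover and the template above it.
        return np.where(freqs <= crossover_hz, measured_low_band, template_hrtf)

    freqs = np.linspace(0.0, 24_000.0, 1024)
    measured = np.ones_like(freqs, dtype=complex)        # placeholder low-band data
    template = 0.5 * np.ones_like(freqs, dtype=complex)  # placeholder template HRTF
    hrtf = combine_hrtf(measured, template, freqs)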

The audio system 300 renders 530 the audio content using the set of HRTFs. As described above with respect to fig. 3, the audio system 300 generates an audio characterization configuration using the determined HRTFs. The audio system 300 presents audio content to the user using the audio characterization configuration and the speaker assembly 330.
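A minimal sketch of binaural rendering with a selected HRIR pair, assuming time-domain impulse responses and a scipy-based convolution; this is one common rendering approach, not necessarily the disclosure's.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        # Convolve the mono source with each ear's impulse response.
        left = fftconvolve(mono, hrir_left, mode="full")
        right = fftconvolve(mono, hrir_right, mode="full")
        return np.stack([left, right], axis=0)  # (2, N) stereo buffer

    # Usage with the lookup sketched earlier (placeholder HRIR data is assumed):
    # hrir_l, hrir_r = lookup.nearest(azimuth_deg=45.0, elevation_deg=0.0)
    # stereo = render_binaural(mono_signal, hrir_l, hrir_r)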

As the user wearing the head-mounted device moves within the local area, the audio system may generate one or more new HRTFs or update one or more pre-existing acoustic transfer functions accordingly. The process 500 may be repeated continuously as the user moves through the local area, or the process 500 may be initiated when sound is detected by the microphone assembly.

Example System Environment

Fig. 6 is a system environment of a headset including an audio system in accordance with one or more embodiments. The system 600 may operate in an artificial reality environment. The system 600 shown in fig. 6 includes an input/output (I/O) interface 610 and a headset 605 coupled to a console 615. The head mounted device 605 may be an embodiment of the head mounted device 100. Although fig. 6 illustrates an example system 600 including one headset 605 and one I/O interface 610, in other embodiments any number of these components may be included in the system 600. For example, there may be multiple headsets 605, each headset having an associated I/O interface 610, each headset 605 and I/O interface 610 communicating with the console 615. In alternative configurations, different and/or additional components may be included in system 600. In addition, in some embodiments, the functionality described in connection with one or more of the components shown in fig. 6 may be distributed among the components in a manner different than that described in connection with fig. 6. For example, some or all of the functionality of console 615 is provided by headset 605.

In some embodiments, the head-mounted device 605 may correct or enhance the vision of the user, protect the user's eyes, or provide images to the user. The head-mounted device 605 may be glasses to correct a user's vision deficiencies. The head-mounted device 605 may be sunglasses that protect the user's eyes from sunlight. The head-mounted device 605 may be safety glasses that protect the user's eyes from impact. The head mounted device 605 may be a night vision device or infrared goggles to enhance the user's vision at night. Alternatively, the head-mounted device 605 may not include a lens and may simply be a frame with an audio system 620 that provides audio (e.g., music, radio, podcast) to the user.

In some embodiments, the head-mounted device 605 may be a head-mounted display that presents content to a user, the content including a view of a physical, real-world environment enhanced with computer-generated elements (e.g., two-dimensional (2D) or three-dimensional (3D) images, 2D or 3D video, sound, etc.). In some embodiments, the presented content includes audio presented via the audio system 620, which receives audio information from the headset 605, the console 615, or both, and presents audio data based on the audio information. In some embodiments, the head-mounted device 605 presents virtual content to the user based in part on the real environment surrounding the user. For example, the virtual content may be presented to a user of the eyewear device. The user may be physically in a room, and virtual walls and a virtual floor of the room may be rendered as part of the virtual content. In the embodiment of fig. 6, the headset 605 includes an audio system 620, an electronic display 625, an optics block 630, a position sensor 635, a depth camera assembly (DCA) 640, and an inertial measurement unit (IMU) 645. Some embodiments of the headset 605 have different components than those described in connection with fig. 6. Additionally, in other embodiments, the functionality provided by the various components described in conjunction with fig. 6 may be distributed differently among the components of the headset 605 or captured in a separate component remote from the headset 605.

The audio system 620 detects sounds to produce one or more HRTFs for the user. The audio system 620 may then use the one or more HRTFs to generate audio content for the user. The audio system 620 may be an embodiment of the audio system 300. As described above with respect to fig. 3, the audio system 620 may include a microphone assembly, a controller, and a speaker assembly, among other components. The microphone assembly detects sound in a local area around the microphone assembly. The plurality of acoustic sensors may be located on a headset (e.g., headset 100), on the user (e.g., within the user's ear canal), on a neckband, or some combination thereof. According to some embodiments, each of at least two acoustic sensors is located at a detection region within a threshold distance from the entrance of each ear canal of the user. The detected sound may be an uncontrolled sound or a controlled sound. The controller may perform DoA estimation on high-frequency sounds above 2 kHz detected by the microphone assembly. In some embodiments, the controller generates and/or updates one or more HRTFs associated with the source location of a detected sound based in part on the DoA estimate of the detected higher-frequency sound and parameters associated with the detected sound. The controller also generates and/or updates one or more HRTFs based at least in part on the detected low-frequency audio signals measured by the acoustic sensors at the detection region. The controller may generate instructions for the speaker assembly to emit audio content that appears to come from several different points in space. Note that in some embodiments, some or all of the controller is part of the console 615.
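The disclosure does not specify the DoA algorithm; one common approach for a pair of acoustic sensors is GCC-PHAT, sketched below. The sensor spacing and the speed of sound are illustrative assumptions.

    import numpy as np

    def gcc_phat_delay(sig_a, sig_b, fs):
        # Time delay of sig_a relative to sig_b via phase-transform-weighted
        # cross-correlation (GCC-PHAT).
        n = len(sig_a) + len(sig_b)
        spec_a, spec_b = np.fft.rfft(sig_a, n=n), np.fft.rfft(sig_b, n=n)
        cross = spec_a * np.conj(spec_b)
        cross /= np.maximum(np.abs(cross), 1e-12)  # PHAT weighting
        cc = np.fft.irfft(cross, n=n)
        shift = int(np.argmax(np.abs(cc)))
        if shift > n // 2:
            shift -= n  # wrap negative lags
        return shift / fs

    def doa_azimuth_deg(delay_s, sensor_spacing_m=0.15, speed_of_sound=343.0):
        # Far-field approximation: sin(theta) = c * delay / spacing.
        sin_theta = np.clip(delay_s * speed_of_sound / sensor_spacing_m, -1.0, 1.0)
        return float(np.degrees(np.arcsin(sin_theta)))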

The electronic display 625 displays 2D or 3D images to the user according to the data received from the console 615. In various embodiments, the electronic display 625 includes a single electronic display or multiple electronic displays (e.g., a display for each eye of the user). Examples of electronic display 625 include: a Liquid Crystal Display (LCD), an Organic Light Emitting Diode (OLED) display, an active matrix organic light emitting diode display (AMOLED), some other display, or some combination thereof.

The optics block 630 magnifies image light received from the electronic display 625, corrects optical errors associated with the image light, and presents the corrected image light to a user of the headset 605. The electronic display 625 and the optics block 630 may together be an embodiment of the lens 110. In various embodiments, the optics block 630 includes one or more optical elements. Example optical elements included in the optics block 630 include: an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, a reflective surface, or any other suitable optical element that affects image light. Further, the optics block 630 may include combinations of different optical elements. In some embodiments, one or more optical elements in the optics block 630 may have one or more coatings, such as a partially reflective coating or an anti-reflective coating.

Magnification and focusing of the image light by the optics block 630 allow the electronic display 625 to be physically smaller, lighter, and to consume less power than larger displays. Further, the magnification may increase the field of view of the content presented by the electronic display 625. For example, the displayed content may be presented using nearly all of the user's field of view (e.g., approximately 110 degrees diagonal) and, in some cases, all of it. Additionally, in some embodiments, the amount of magnification may be adjusted by adding or removing optical elements.

In some embodiments, the optics block 630 may be designed to correct one or more types of optical errors. Examples of optical errors include barrel or pincushion distortion, longitudinal chromatic aberration, and lateral chromatic aberration. Other types of optical errors may include spherical aberration, errors due to lens field curvature, astigmatism, or any other type of optical error. In some embodiments, content provided to the electronic display 625 for display is pre-distorted, and the optics block 630 corrects the distortion when it receives image light generated based on that content from the electronic display 625.

The DCA 640 captures data describing depth information for the local area around the headset 605. In one embodiment, the DCA 640 may include a structured light projector, an imaging device, and a controller. The captured data may be images, captured by the imaging device, of structured light projected onto the local area by the structured light projector. In another embodiment, the DCA 640 may include two or more cameras and a controller, the cameras oriented to capture portions of the local area in stereo. The captured data may be images of the local area captured in stereo by the two or more cameras. The controller computes depth information for the local area using the captured data. Based on the depth information, the controller determines absolute position information of the head-mounted device 605 within the local area. The DCA 640 may be integrated with the headset 605 or may be positioned within the local area external to the headset 605. In the latter embodiment, the controller of the DCA 640 may transmit the depth information to the controller of the audio system 620.
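For the stereo configuration, depth typically follows from the disparity between rectified views as focal length times baseline divided by disparity; the sketch below uses an illustrative focal length and baseline, which are assumptions rather than values from the disclosure.

    def stereo_depth_m(disparity_px, focal_length_px=800.0, baseline_m=0.06):
        # Depth = focal length * baseline / disparity for rectified stereo views.
        if disparity_px <= 0:
            return float("inf")
        return focal_length_px * baseline_m / disparity_px

    print(stereo_depth_m(24.0))  # 2.0 m for the illustrative parameters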

The IMU 645 is an electronic device that generates data indicative of the position of the headset 605 based on measurement signals received from one or more position sensors 635. The one or more position sensors 635 may be an embodiment of the sensor device 115. The position sensor 635 generates one or more measurement signals in response to the movement of the headset 605. Examples of position sensor 635 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor to detect motion, one type of sensor for error correction of IMU 645, or some combination thereof. Position sensor 635 may be located outside of IMU 645, inside IMU 645, or some combination of the two.

Based on one or more measurement signals from the one or more position sensors 635, the IMU 645 generates data indicating an estimated current position of the headset 605 relative to an initial position of the headset 605. For example, the position sensors 635 may include multiple accelerometers to measure translational motion (forward/backward, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, and roll). In some embodiments, the IMU 645 rapidly samples the measurement signals and calculates an estimated current position of the headset 605 from the sampled data. For example, the IMU 645 integrates the measurement signals received from the accelerometers over time to estimate a velocity vector, and integrates the velocity vector over time to determine an estimated current position of a reference point on the headset 605. Alternatively, the IMU 645 provides the sampled measurement signals to the console 615, which processes the data to reduce error. The reference point is a point that may be used to describe the position of the headset 605. The reference point may generally be defined as a point in space or a location related to the orientation and position of the headset 605.
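The double integration described above can be sketched as follows, assuming world-frame accelerations and a fixed sample interval; drift handling and sensor fusion are omitted, and the sample values are illustrative.

    import numpy as np

    def integrate_position(accel_samples, dt, v0=None, p0=None):
        # accel_samples: (N, 3) accelerations of the reference point in the world frame.
        v = np.zeros(3) if v0 is None else np.asarray(v0, dtype=float)
        p = np.zeros(3) if p0 is None else np.asarray(p0, dtype=float)
        for a in accel_samples:
            v = v + np.asarray(a, dtype=float) * dt  # accumulate the velocity vector
            p = p + v * dt                           # accumulate the reference-point position
        return v, p

    # One second of constant 1 m/s^2 forward acceleration sampled at 1 kHz.
    samples = np.tile([1.0, 0.0, 0.0], (1000, 1))
    velocity, position = integrate_position(samples, dt=1e-3)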

IMU 645 receives one or more parameters from console 615. As discussed further below, one or more parameters are used to keep track of the headset 605. Based on the received parameters, IMU 645 may adjust one or more IMU parameters (e.g., sampling rate). In some embodiments, the data from the DCA 640 causes the IMU 645 to update the initial position of the reference point so that it corresponds to the next position of the reference point. Updating the initial position of the reference point to the next calibrated position of the reference point helps to reduce the cumulative error associated with the estimated current position of the IMU 645. The accumulated error (also referred to as drift error) causes the estimated position of the reference point to "drift" away from the actual position of the reference point over time. In some embodiments of the headset 605, the IMU 645 may be a dedicated hardware component. In other embodiments, IMU 645 may be a software component implemented in one or more processors.

The I/O interface 610 is a device that allows a user to send action requests to, and receive responses from, the console 615. An action request is a request to perform a particular action. For example, an action request may be an instruction to start or end the capture of image or video data, an instruction to start or end the production of sound by the audio system 620, an instruction to start or end a calibration process of the headset 605, or an instruction to perform a particular action within an application. The I/O interface 610 may include one or more input devices. Example input devices include: a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the action requests to the console 615. An action request received by the I/O interface 610 is communicated to the console 615, which performs an action corresponding to the action request. In some embodiments, as further described above, the I/O interface 610 includes an IMU 645 that captures calibration data indicating an estimated position of the I/O interface 610 relative to an initial position of the I/O interface 610. In some embodiments, the I/O interface 610 may provide haptic feedback to the user in accordance with instructions received from the console 615. For example, haptic feedback is provided when an action request is received, or the console 615 communicates instructions to the I/O interface 610 causing the I/O interface 610 to generate haptic feedback when the console 615 performs an action.

The console 615 provides content to the head-mounted device 605 for processing in accordance with information received from one or more of the head-mounted device 605 and the I/O interface 610. In the example shown in fig. 6, the console 615 includes application storage 650, a tracking module 655, and an engine 660. Some embodiments of console 615 have different modules or components than those described in conjunction with fig. 6. Similarly, the functionality described further below may be distributed among components of the console 615 in a manner different from that described in connection with FIG. 6.

The application store 650 stores one or more applications for execution by the console 615. An application is a set of instructions that, when executed by a processor, generates content for presentation to a user. The application-generated content may be responsive to input received from the user via movement of the headset 605 or the I/O interface 610. Examples of applications include: a gaming application, a conferencing application, a video playback application, a calibration process, or other suitable application.

The tracking module 655 calibrates the system environment 600 using one or more calibration parameters, and may adjust the one or more calibration parameters to reduce errors in the position determination of the head-mounted device 605 or the I/O interface 610. The calibration performed by the tracking module 655 may also take into account information received from the IMU 645 in the headset 605 and/or the IMU 645 included in the I/O interface 610. Additionally, if tracking of the headset 605 is lost, the tracking module 655 may recalibrate some or all of the system environment 600.

The tracking module 655 uses information from one or more sensor devices 635, IMU 645, or some combination thereof, to track movement of the headset 605 or I/O interface 610. For example, the tracking module 655 determines the location of a reference point of the headset 605 in a map of local areas based on information from the headset 605. The tracking module 655 may also determine the location of a reference point of the headset 605 or a reference point of the I/O interface 610 using data from the IMU 645 indicating the location of the headset 605 or using data from the IMU 645 included in the I/O interface 610 indicating the location of the I/O interface 610, respectively. Additionally, in some embodiments, the tracking module 655 may use the partial data from the IMU 645 indicating the location of the headset 605 to predict a future position of the headset 605. The tracking module 655 provides the estimated or predicted future location of the head mounted device 605 or the I/O interface 610 to the engine 660.
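A constant-velocity extrapolation is one simple way the tracking module could predict a future position from IMU-derived state; the prediction horizon and the state values below are illustrative assumptions.

    import numpy as np

    def predict_future_position(position, velocity, horizon_s=0.02):
        # Constant-velocity extrapolation over a short prediction horizon.
        return np.asarray(position, dtype=float) + np.asarray(velocity, dtype=float) * horizon_s

    predicted = predict_future_position([0.0, 0.0, 1.5], [0.2, 0.0, 0.0])
    # -> [0.004, 0.0, 1.5]: 4 mm ahead along x for a 20 ms horizon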

Engine 660 also executes applications within system environment 600 and receives position information, acceleration information, velocity information, predicted future positions, audio information, or some combination thereof, for headset 605 from tracking module 655. Based on the received information, the engine 660 determines the content to provide to the headset 605 for presentation to the user. For example, if the received information indicates that the user is looking to the left, the engine 660 generates content for the headset 605 that reflects the user's movement in the virtual environment or in an environment that augments the local area with additional content. Further, engine 660 performs an action within an application executing on console 615 in response to an action request received from I/O interface 610 and provides feedback to the user that the action was performed. The feedback provided may be visual or auditory feedback via the headset 605, or tactile feedback via the I/O interface 610.

Additional configuration information

Embodiments according to the invention are particularly disclosed in the accompanying claims directed to a head-mounted apparatus, a method and a storage medium, wherein any feature mentioned in one claim category (e.g. head-mounted apparatus) may also be claimed in another claim category (e.g. method, storage medium, system and computer program product). The dependencies or back-references in the appended claims are chosen for formal reasons only. However, any subject matter resulting from an intentional back-reference (especially multiple references) to any preceding claim may also be claimed, such that any combination of a claim and its features is disclosed and may be claimed, irrespective of the dependencies chosen in the appended claims. The subject matter which can be claimed comprises not only the combination of features as set forth in the appended claims, but also any other combination of features in the claims, wherein each feature mentioned in the claims can be combined with any other feature or combination of other features in the claims. Furthermore, any of the embodiments and features described or depicted herein may be claimed in a separate claim and/or in any combination with any of the embodiments or features described or depicted herein or in any combination with any of the features of the appended claims.

In one embodiment, a head-mounted device may include: a frame; and an audio system, the audio system comprising: a microphone assembly positioned on the frame in a detection region that is outside an ear of a user wearing the head-mounted device and within a threshold distance from an ear canal of the ear, the microphone assembly configured to detect audio signals emanating from an audio source in a local region, wherein the audio signals detected at the detection region are within a threshold degree of similarity of a sound pressure wave at the ear canal of the user; and an audio controller configured to determine a set of Head Related Transfer Functions (HRTFs) based in part on the detected audio signals.

The microphone assembly may include a plurality of microphones.

In one embodiment, the head-mounted device may include at least one microphone of the plurality of microphones located on the frame at a location other than the detection area.

The threshold distance may be up to 3 inches.

The audio source may be a speaker that is part of an audio system.

The speaker may be located on a frame of the headset.

The audio source may be a transducer of the cartilage conduction system.

The audio source may be external to and separate from the head-mounted device, and the audio signal may describe ambient sound in a localized area of the head-mounted device.

The frequency of the audio signal may be less than or equal to 2 kHz.

The audio controller may be configured to:

estimating a direction of arrival (DoA) of the detected sound relative to a position of the head-mounted device within the local area; and

updating, based on the DoA estimation, HRTFs related to the audio system for frequencies above 2 kHz.

In an embodiment, a method may include:

detecting, via a microphone assembly located within a detection region on a frame of the headset, an audio signal emanating from an audio source in a local area, wherein the detection region is outside an ear of a user wearing the headset and within a threshold distance from an ear canal of the user, and wherein the audio signal detected at the detection region is within a threshold degree of similarity of a sound pressure wave at the ear canal; and

a set of Head Related Transfer Functions (HRTFs) is determined via an audio controller based in part on the detected audio signals.

The head-mounted device may include an audio system and the audio source may be a speaker that is part of the audio system.

The frequency of the audio signal may be less than or equal to 2 kHz.

The audio source may be a transducer of the cartilage conduction system.

The audio signal may describe ambient sound in a local area of the user.

In one embodiment, a method may comprise:

estimating a direction of arrival (DoA) of the detected sound relative to a position of the head-mounted device within the local area; and

updating, based on the DoA estimation, HRTFs related to the audio system for frequencies above 2 kHz.

In one embodiment, a non-transitory computer-readable medium may store instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising:

detecting, via a microphone assembly located within a detection region on a frame of the headset, an audio signal emanating from an audio source in a local area, wherein the detection region is outside an ear of a user wearing the headset and within a threshold distance from an ear canal of the user, and wherein the audio signal detected at the detection region is within a threshold degree of similarity of a sound pressure wave at the ear canal; and

a set of Head Related Transfer Functions (HRTFs) is determined via an audio controller based in part on the detected audio signals.

The frequency of the audio signal may be less than or equal to 2 kHz.

The microphone assembly may include a plurality of microphones.

The audio controller may be configured to:

estimating a direction of arrival (DoA) of the detected sound relative to a position of the head-mounted device within the local area; and

updating, based on the DoA estimation, HRTFs related to the audio system for frequencies above 2 kHz.

In an embodiment, one or more computer-readable non-transitory storage media may embody software that is operable when executed to perform a method according to or within any of the embodiments described above.

In an embodiment, a system may include: one or more processors; and at least one memory coupled to the processor and comprising instructions executable by the processor, the processor being operable when executing the instructions to perform a method according to or within any of the embodiments described above.

In an embodiment, a computer program product, preferably comprising a computer-readable non-transitory storage medium, which when executed on a data processing system, is operable to perform a method according to or within any of the embodiments described above.

The foregoing description of the embodiments of the disclosure has been presented for the purposes of illustration; it is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. One skilled in the relevant art will recognize that many modifications and variations are possible in light of the above disclosure.

Some portions of the present description describe embodiments of the disclosure in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Moreover, it has also proven convenient at times, to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combination thereof.

Any of the steps, operations, or processes described herein may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code, the computer program code executable by a computer processor for performing any or all of the steps, operations, or processes described.

Embodiments of the present disclosure may also relate to apparatuses for performing the operations herein. The apparatus may be specially constructed for the required purposes, and/or it may comprise a general purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of medium suitable for storing electronic instructions, which may be coupled to a computer system bus. Moreover, any computing system referred to in the specification may include a single processor, or may be an architecture that employs a multi-processor design to increase computing power.

Embodiments of the present disclosure may also relate to products produced by the computing processes described herein. Such products may include information derived from computing processes, where the information is stored on non-transitory, tangible computer-readable storage media and may include any embodiment of a computer program product or other combination of data described herein.

Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the inventive subject matter. It is therefore intended that the scope of the disclosure be limited not by this detailed description, but rather by any claims that issue on an application based thereupon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the disclosure, which is set forth in the following claims.
