Computing device, computer readable medium, and method executed by computing device
Note: This technology, "Computing device, computer readable medium, and method executed by computing device," was created by Timothy Sheen on 2015-09-08. Its main content is as follows. A computing device, a computer-readable medium having instructions stored thereon, and a method performed by the computing device are disclosed. The computing device includes one or more processors and a computer-readable medium storing instructions that, when executed by the processors, cause the computing device to: maintain a database of representative acoustic characteristics; receive, from a particular network device, data indicating one or more characteristics of the particular network device, an identification of a playback device, and data indicative of audio detected by the particular network device while the playback device plays a calibration tone; identify, in the database of representative acoustic characteristics, a representative acoustic characteristic corresponding to the one or more characteristics based on at least one characteristic of the particular network device; determine, based on the identified representative acoustic characteristic, the identification of the playback device, and the data indicative of the detected audio, an audio processing algorithm for adjusting an audio output of the playback device; and cause the audio output of the playback device to be adjusted by the audio processing algorithm.
1. A computing device, comprising:
one or more processors; and
a computer-readable medium having instructions stored thereon, which when executed by the one or more processors, cause the computing device to perform functions comprising:
maintaining a database of representative acoustic characteristics, wherein each representative acoustic characteristic corresponds to a respective plurality of network devices, and wherein each network device of each respective plurality of network devices shares one or more characteristics;
receiving data from a particular network device indicating: (i) one or more characteristics of the particular network device; (ii) an identification of a playback device; and (iii) data indicative of audio detected by the particular network device while the playback device is playing a calibration tone;
identifying, in the database of representative acoustic characteristics, the representative acoustic characteristics corresponding to the one or more characteristics of the particular network device based on at least one of the one or more characteristics;
based on (i) the identified representative acoustic characteristics, (ii) the identification of the playback device, and (iii) the data indicative of the detected audio, determining an audio processing algorithm to adjust an audio output of the playback device; and
causing the audio output of the playback device to be adjusted by the audio processing algorithm.
2. The computing device of claim 1, wherein maintaining the database of representative acoustic characteristics comprises:
receiving, from a plurality of network devices sharing one or more common characteristics, respective data indicative of microphone acoustic characteristics of each network device;
determining a representative acoustic characteristic based on the received data indicative of the microphone acoustic characteristics of each network device; and
storing data indicative of the representative acoustic characteristic in the database of representative acoustic characteristics in association with the one or more common characteristics.
3. The computing device of claim 1, wherein causing the audio output of the playback device to be adjusted by the audio processing algorithm comprises:
sending data indicative of the audio processing algorithm to the particular network device.
4. The computing device of claim 1, wherein causing the audio output of the playback device to be adjusted by the audio processing algorithm comprises:
transmitting data indicative of the audio processing algorithm to the playback device.
5. The computing device of claim 1, wherein determining the audio processing algorithm comprises:
determining a frequency response based on data indicative of audio detected by the particular network device with the playback device playing the calibration tone; and
determining the audio processing algorithm based on the determined frequency response.
6. The computing device of claim 1, wherein the functions further comprise:
transmitting the calibration tone to the playback device prior to receiving data indicative of audio detected by the particular network device with the playback device playing the calibration tone.
7. The computing device of claim 1, wherein each network device of each respective plurality of network devices includes a respective particular model of microphone, wherein the one or more characteristics of the particular network device include the particular model of microphone, and wherein identifying the representative acoustic characteristics corresponding to the one or more characteristics comprises:
identifying, in the database of representative acoustic characteristics, a representative acoustic characteristic corresponding to the particular model of microphone.
8. The computing device of claim 1, wherein causing the audio output of the playback device to be adjusted by the audio processing algorithm comprises:
causing the audio output of the playback device to be adjusted by the audio processing algorithm to have predetermined audio characteristics indicative of a desired audio playback quality.
9. A computer-readable medium having stored thereon instructions that, when executed by one or more processors of a computing device, cause the computing device to perform functions comprising:
maintaining a database of representative acoustic characteristics, wherein each representative acoustic characteristic corresponds to a respective plurality of network devices, and wherein each network device of each respective plurality of network devices shares one or more characteristics;
receiving data from a particular network device indicating: (i) one or more characteristics of the particular network device; (ii) an identification of a playback device; and (iii) data indicative of audio detected by the particular network device while the playback device is playing a calibration tone;
identifying, in the database of representative acoustic characteristics, the representative acoustic characteristics corresponding to the one or more characteristics of the particular network device based on at least one of the one or more characteristics;
based on (i) the identified representative acoustic characteristics, (ii) the identification of the playback device, and (iii) the data indicative of the detected audio, determining an audio processing algorithm to adjust an audio output of the playback device; and
causing the audio output of the playback device to be adjusted by the audio processing algorithm.
10. The computer-readable medium of claim 9, wherein maintaining the database of representative acoustic characteristics comprises:
receiving, from a plurality of network devices sharing one or more common characteristics, respective data indicative of microphone acoustic characteristics of each network device;
determining a representative acoustic characteristic based on the received data indicative of the microphone acoustic characteristics of each network device; and
storing data indicative of the representative acoustic characteristic in the database of representative acoustic characteristics in association with the one or more common characteristics.
11. The computer-readable medium of claim 9, wherein causing the audio output of the playback device to be adjusted by the audio processing algorithm comprises:
sending data indicative of the audio processing algorithm to the particular network device.
12. The computer-readable medium of claim 9, wherein causing the audio output of the playback device to be adjusted by the audio processing algorithm comprises:
transmitting data indicative of the audio processing algorithm to the playback device.
13. The computer-readable medium of claim 9, wherein determining the audio processing algorithm comprises:
determining a frequency response based on data indicative of audio detected by the particular network device with the playback device playing the calibration tone; and
determining the audio processing algorithm based on the determined frequency response.
14. The computer-readable medium of claim 9, wherein the functions further comprise:
transmitting the calibration tone to the playback device prior to receiving data indicative of audio detected by the particular network device with the playback device playing the calibration tone.
15. A method performed by a computing device, the method comprising:
maintaining a database of representative acoustic characteristics, wherein each representative acoustic characteristic corresponds to a respective plurality of network devices, and wherein each network device of each respective plurality of network devices shares one or more characteristics;
receiving data from a particular network device indicating: (i) one or more characteristics of the particular network device; (ii) an identification of a playback device; and (iii) data indicative of audio detected by the particular network device while the playback device is playing a calibration tone;
identifying, in the database of representative acoustic characteristics, the representative acoustic characteristics corresponding to the one or more characteristics of the particular network device based on at least one of the one or more characteristics;
based on (i) the identified representative acoustic characteristics, (ii) the identification of the playback device, and (iii) the data indicative of the detected audio, determining an audio processing algorithm to adjust an audio output of the playback device; and
causing the audio output of the playback device to be adjusted by the audio processing algorithm.
16. The method of claim 15, wherein causing the audio output of the playback device to be adjusted by the audio processing algorithm comprises:
sending data indicative of the audio processing algorithm to the particular network device.
17. The method of claim 15, wherein causing the audio output of the playback device to be adjusted by the audio processing algorithm comprises:
transmitting data indicative of the audio processing algorithm to the playback device.
18. The method of claim 15, wherein determining the audio processing algorithm comprises:
determining a frequency response based on data indicative of audio detected by the particular network device with the playback device playing the calibration tone; and
determining the audio processing algorithm based on the determined frequency response.
19. The method of claim 15, further comprising:
transmitting the calibration tone to the playback device prior to receiving data indicative of audio detected by the particular network device with the playback device playing the calibration tone.
20. The method of claim 15, wherein each network device of each respective plurality of network devices includes a respective particular model of microphone, wherein the one or more characteristics of the particular network device include the particular model of microphone, and wherein identifying the representative acoustic characteristics corresponding to the one or more characteristics comprises:
identifying, in the database of representative acoustic characteristics, a representative acoustic characteristic corresponding to the particular model of microphone.
Technical Field
The present disclosure relates to consumer products, and more particularly, to methods, systems, products, features, services, and other elements related to media playback or some aspect thereof.
Background
In 2003, when the options for accessing and listening to digital audio in an out-loud setting were limited, SONOS, Inc. filed one of its first patent applications, entitled "Method for Synchronizing Audio Playback between Multiple Networked Devices," and began offering a media playback system for sale in 2005. The Sonos Wireless HiFi System enables people to experience music from many sources via one or more networked playback devices. Through a software control application installed on a smartphone, tablet, or computer, a person can play the music he or she wants in any room that has a networked playback device. Additionally, using the controller, for example, different songs can be streamed to each room with a playback device, rooms can be grouped together for synchronous playback, or the same song can be heard in all rooms simultaneously.
Given the growing interest in digital media, there remains a need to develop consumer accessible technologies to further enhance the listening experience.
Drawings
The features, aspects, and advantages of the presently disclosed technology may be better understood with regard to the following description, appended claims, and accompanying drawings where:
FIG. 1 illustrates an example media playback system configuration in which certain embodiments may be implemented;
FIG. 2 shows a functional block diagram of an example playback device;
FIG. 3 shows a functional block diagram of an example control device;
FIG. 4 illustrates an example controller interface;
FIG. 5 illustrates an example flow chart of a first method for calibrating a playback device;
FIG. 6 illustrates an example playback environment within which a playback device may be calibrated;
FIG. 7 illustrates an example flow chart of a second method for calibrating a playback device;
FIG. 8 illustrates an example flow chart of a third method for calibrating a playback device;
FIG. 9 illustrates an example flow chart of a first method for calibrating a microphone;
FIG. 10 shows an example arrangement for microphone calibration; and
FIG. 11 illustrates an example flow diagram of a second method for calibrating a microphone.
The drawings are for purposes of illustrating example embodiments, and it is to be understood that the invention is not limited to the arrangements and instrumentality shown in the drawings.
Detailed Description
I. Overview
Calibrating one or more playback devices in a playback environment using a microphone may involve acoustic characteristics of the microphone. However, in some cases, the acoustic characteristics of the microphone of the network device used to calibrate the one or more playback devices may not be known.
Examples discussed herein relate to calibrating a microphone of a network device based on audio signals detected by the microphone of the network device when the network device is placed within a predetermined physical range of the microphone of the playback device.
In one example, the functionality of the calibration may be coordinated and performed, at least in part, by the network device. In one case, the network device may be a mobile device with a built-in microphone. The network device may also be a controller device for controlling one or more playback devices.
The microphone of the network device may detect the first audio signal when the network device is placed within a predetermined physical range of the microphone of the playback device. In one example, the location within the predetermined physical range of the microphone of the playback device may be one of: a position above the playback device, a position behind the playback device, a position to the side of the playback device, a position in front of the playback device, etc.
The network device may also receive data indicative of a second audio signal detected by a microphone of the playback device. Both the first audio signal and the second audio signal may include a portion corresponding to a third audio signal played by one or more playback devices. The one or more playback devices may include the playback device having the microphone within a predetermined physical range of which the network device is placed. The first audio signal and the second audio signal may be detected by the respective microphones simultaneously or at different times. The data indicative of the second audio signal may be received by the network device before or after the microphone of the network device detects the first audio signal.
The network device may then identify a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal, and thus apply the determined microphone calibration algorithm when performing a function associated with the playback device, such as a calibration function.
In another example, the functionality of the calibration may be coordinated and at least partially performed by a computing device, such as a server in communication with the playback device and/or the network device.
The computing device may receive, from the network device, data indicative of a first audio signal detected by a microphone of the playback device when the network device is placed within a predetermined physical range of the microphone. The computing device may also receive data indicative of a second audio signal detected by a microphone of the playback device. The computing device may then identify a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal. In one case, the computing device may then apply the determined microphone calibration algorithm when performing functions associated with the network device and the playback device, such as calibration functions. In one case, the computing device may also transmit data indicative of the determined microphone calibration algorithm to the network device for application by the network device in performing functions associated with the playback device.
In one case, identifying the microphone calibration algorithm may include accessing a database of microphone calibration algorithms and microphone acoustic characteristics to identify the microphone calibration algorithm based on the microphone acoustic characteristics of the microphone of the network device. The microphone acoustic characteristic may be determined based on the data indicative of the first audio signal and the data indicative of the second audio signal.
In another case, identifying the microphone calibration algorithm may include calculating the microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal. For example, the microphone calibration algorithm may be calculated such that audio content detected by the microphone of the network device, once the algorithm is applied, has standardized audio characteristics. For instance, if the microphone acoustic characteristics include a low sensitivity at a particular frequency, the microphone calibration algorithm may account for the low sensitivity by, for example, amplifying audio content detected by the microphone at the particular frequency.
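As an illustration of this comparison-based calculation, the following sketch (a hypothetical example; all function names, the FFT size, and the spectral-ratio approach are assumptions for illustration, not details of this disclosure) derives a per-frequency calibration curve by comparing the magnitude spectrum of the signal detected by the network device's microphone against that detected by the playback device's reference microphone, then applies it to newly detected audio:

```python
import numpy as np

def calibration_curve(first_signal, second_signal, n_fft=1024, eps=1e-12):
    """Estimate a per-frequency-bin gain that compensates the network
    device's microphone so its response matches the reference microphone.

    first_signal:  samples detected by the network device's microphone
    second_signal: samples of the same played audio detected by the
                   playback device's (reference) microphone
    """
    ref = np.abs(np.fft.rfft(second_signal, n_fft))
    dev = np.abs(np.fft.rfft(first_signal, n_fft))
    # Where the device microphone is less sensitive (dev < ref), the
    # gain exceeds 1, amplifying detected audio at that frequency.
    return ref / np.maximum(dev, eps)

def apply_calibration(detected, curve, n_fft=1024):
    """Apply the calibration curve to newly detected audio samples."""
    spectrum = np.fft.rfft(detected, n_fft)
    return np.fft.irfft(spectrum * curve, n_fft)
```

In this sketch a microphone that detects a tone at half the reference level yields a gain of 2 at that tone's frequency, matching the "amplify at the low-sensitivity frequency" behavior described above.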
As described above, calibration of a microphone of a network device may be initiated when the microphone of the network device is used to perform a function, such as a calibration function associated with one or more playback devices, but the acoustic characteristics of the microphone or a microphone calibration algorithm corresponding to the microphone are not available. Thus, calibration of the microphone may be initiated by the device performing the calibration function associated with the one or more playback devices.
As also described above, the network device may be a controller device for controlling one or more playback devices. Thus, in one case, calibration of the microphone of the network device may be initiated when the controller device is set to control one or more playback devices. Other examples are also possible.
In one example, the association between the determined calibration algorithm and one or more characteristics, such as the model of the network device, may be stored as an entry in a database of microphone calibration algorithms. The microphone calibration algorithm may then be identified and applied when another network device has at least one of the one or more characteristics of the network device.
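A minimal in-memory stand-in for that database behavior might look like the following (the key structure and function names are illustrative assumptions; a real implementation would use persistent, server-side storage):

```python
# Maps a device characteristic (e.g. device model, microphone model)
# to a previously determined microphone calibration algorithm.
calibration_db = {}

def store_calibration(characteristics, algorithm):
    """Store an association between a determined calibration algorithm
    and one or more characteristics of the calibrated network device."""
    for characteristic in characteristics:
        calibration_db[characteristic] = algorithm

def lookup_calibration(characteristics):
    """Return a stored algorithm if this device shares at least one
    characteristic with a previously calibrated device, else None."""
    for characteristic in characteristics:
        if characteristic in calibration_db:
            return calibration_db[characteristic]
    return None
```

This mirrors the flow described above: once one network device of a given model has been calibrated, another device of the same model can reuse the stored algorithm without repeating the calibration.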
As described above, the present discussion relates to calibrating a microphone of a network device based on audio signals detected by the microphone of the network device when the network device is placed within a predetermined physical range of a microphone of the playback device. In one aspect, a network device is provided. The network device includes: a microphone; a processor; and a memory storing instructions executable by the processor to cause the network device to perform the following functions. The functions include: detecting, by the microphone, a second audio signal while (i) the playback device is playing a first audio signal and (ii) the network device is moving from a first physical location to a second physical location; identifying an audio processing algorithm based on data indicative of the second audio signal; and transmitting data indicative of the identified audio processing algorithm to the playback device.
In another aspect, a playback device is provided. The playback device includes a processor and a memory storing instructions executable by the processor to cause the playback device to perform the following functions. The functions include: playing the first audio signal; receiving, from a network device, data indicative of a second audio signal detected by a microphone of the network device as the network device moves from a first physical location to a second physical location within a playback environment; identifying an audio processing algorithm based on the data indicative of the second audio signal; and applying the identified audio processing algorithm when playing the audio content in the playback environment.
In another aspect, a non-transitory computer-readable medium is provided. The non-transitory computer readable medium stores instructions executable by a computing device to cause the computing device to perform the following functions. The functions include: receiving, from a network device, data indicative of an audio signal detected by a microphone of the network device as the network device moves from a first physical location to a second physical location within a playback environment; identifying an audio processing algorithm based on data indicative of the detected audio signal; and transmitting data indicative of the audio processing algorithm to a playback device in the playback environment.
In another aspect, a network device is provided. The network device includes a microphone, a processor, and a memory having instructions stored thereon that are executable by the processor to cause the network device to perform the following functions. The functions include: detecting, by the microphone of the network device, a first audio signal when the network device is placed within a predetermined physical range of a microphone of the playback device; receiving data indicative of a second audio signal detected by the microphone of the playback device; identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and applying the microphone calibration algorithm when performing a calibration function associated with the playback device.
In another aspect, a computing device is provided. The computing device includes a processor and a memory storing instructions executable by the processor to cause the computing device to perform the following functions. The functions include: receiving, from a network device, data indicative of a first audio signal detected by a microphone of the network device when the network device is placed within a predetermined physical range of a microphone of the playback device; receiving data indicative of a second audio signal detected by the microphone of the playback device; identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and applying the microphone calibration algorithm when performing calibration functions associated with the network device and the playback device.
In another aspect, a non-transitory computer-readable medium is provided. The non-transitory computer readable medium stores instructions executable by a computing device to cause the computing device to perform the following functions. The functions include: receiving, from a network device, data indicative of a first audio signal detected by a microphone of the network device when the network device is placed within a predetermined physical range of the microphone of the playback device; receiving data indicative of a second audio signal detected by a microphone of a playback device; identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and causing an association between the determined microphone calibration algorithm and one or more characteristics of a microphone of the network device to be stored in a database.
While some examples described herein may refer to functions performed by a given actor, e.g., "user," and/or other entity, it should be understood that this is for illustration purposes only. The claims should not be construed as requiring any such example actor to take action unless explicitly required by the claim's own language. One of ordinary skill in the art will appreciate that the present disclosure includes many other embodiments.
II. Example Operating Environment
Fig. 1 illustrates an example configuration of a media playback system in which one or more of the embodiments disclosed herein can be practiced or implemented.
Additional discussion regarding the different components of the example media playback system, and how the different components may interact to provide a user with a media experience, can be found in the following sections.
a. Example playback device
Fig. 2 shows a functional block diagram of an example playback device 200, which may be configured as one or more of the playback devices 102-124 of the media playback system shown in Fig. 1.
In one example, the processor 202 may be a clock-driven computing component configured to process input data according to instructions stored in the memory.
The particular functionality may include the playback device 200 playing back audio content in synchrony with one or more other playback devices. During synchronous playback, a listener will preferably not be able to perceive time-delay differences between playback of the audio content by the playback device 200 and playback by the one or more other playback devices. U.S. Patent No. 8,234,395, entitled "System and method for synchronizing operations among a plurality of independently clocked digital data processing devices," provides in more detail some examples of audio playback synchronization among playback devices, and is hereby incorporated by reference.
The
The audio processing component 208 may include one or more of the following: a digital-to-analog converter (DAC), an analog-to-digital converter (ADC), an audio pre-processing component, an audio enhancement component, or a Digital Signal Processor (DSP), etc. In one implementation, one or more of the audio processing components 208 may be a sub-component of the processor 202. In one example, the audio processing component 208 may process and/or intentionally alter audio content to produce an audio signal. The resulting audio signal may then be provided to an audio amplifier 210 for amplification and playback through a speaker 212. In particular, the audio amplifier 210 may include a device configured to amplify an audio signal to a level for driving one or more of the speakers 212. The speaker 212 may include a separate transducer (e.g., a "driver") or a complete speaker system including a housing with one or more drivers. The particular drivers of the speaker 212 may include, for example, a subwoofer (e.g., for low frequencies), a midrange driver (e.g., for mid-range frequencies), and/or a tweeter (e.g., for high frequencies). In some cases, each transducer of the one or more speakers 212 may be driven by a separate corresponding audio amplifier of the audio amplifier 210. In addition to generating analog signals for playback by the playback device 200, the audio processing component 208 may be configured to process audio content to be sent to one or more other playback devices for playback.
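The chain described above, a processing stage that intentionally alters the content followed by an amplifier stage that raises the signal to a level for driving a speaker, can be sketched as follows. This is a toy illustration under stated assumptions: the broadband EQ gain stands in for the DSP components, the scalar amplifier gain for the audio amplifier 210, and the hard clip for a safety limit; none of these names come from the disclosure.

```python
def process_audio(samples, eq_gain=1.0, amp_gain=2.0, limit=1.0):
    """Toy two-stage chain: a DSP stage (here, a simple broadband EQ
    gain) followed by an amplifier stage, with hard clipping so the
    output never exceeds the drivable level."""
    processed = [s * eq_gain for s in samples]     # DSP stage
    amplified = [s * amp_gain for s in processed]  # amplifier stage
    return [max(-limit, min(limit, s)) for s in amplified]
```

A real implementation would replace the broadband gain with per-band filters (e.g., the audio processing algorithm determined during calibration), but the stage ordering is the point of the sketch.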
Audio content to be processed and/or played back by the playback device 200 may be received from an external source, for example, via an audio line-in connection (e.g., an auto-detecting 3.5mm audio line-in connection) or the network interface 214.
Microphone 220 may include an audio sensor configured to convert detected sound into an electrical signal. The electrical signals may be processed by the audio processing component 208 and/or the processor 202. The microphone 220 may be positioned at one or more locations on the playback device 200 in one or more orientations. Microphone 220 may be configured to detect sound in one or more frequency ranges. In one case, one or more of the microphones 220 may be configured to detect sound within a frequency range of audio that the playback device 200 is capable of presenting. In another case, one or more of the microphones 220 may be configured to detect sounds in a frequency range audible to a human. Other examples are also possible.
The network interface 214 may be configured to facilitate data flow between the playback device 200 and one or more other devices on a data network. Likewise, the playback device 200 may be configured to receive audio content over a data network from one or more other playback devices in communication with the playback device 200, a network device within a local area network, or a source of audio content over a wide area network such as the internet. In one example, audio content and other signals transmitted and received by the playback device 200 may be transmitted in the form of digital packet data that includes an Internet Protocol (IP) based source address and an IP based destination address. In this case, the network interface 214 may be configured to parse the digital packet data so that the playback device 200 properly receives and processes the data destined for the playback device 200.
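The parsing behavior described above, keeping only the packet data destined for this device, can be sketched as follows. This is a hypothetical illustration; the `Packet` fields and addressing scheme are assumptions, not details of the network interface 214.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Minimal stand-in for IP-based digital packet data."""
    source: str       # IP-based source address
    destination: str  # IP-based destination address
    payload: bytes

def receive_for_device(packets, device_address):
    """Parse incoming packets and keep only the payloads whose
    destination address matches this playback device, so the device
    properly receives and processes the data destined for it."""
    return [p.payload for p in packets if p.destination == device_address]
```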
As shown, the network interface 214 may include one or more wireless interfaces and/or one or more wired interfaces.
In one example, the playback device 200 can be paired with one other playback device to play two separate audio components of audio content. For example, the playback device 200 may be configured to play a left channel audio component, while other playback devices may be configured to play a right channel audio component, thereby creating or enhancing a stereo effect of the audio content. Paired playback devices (also referred to as "bound playback devices") can also play audio content in synchronization with other playback devices.
In another example, the playback device 200 may be consolidated with one or more other playback devices to form a single, consolidated playback device. Because a consolidated playback device may have additional speaker drivers through which audio content may be rendered, the consolidated playback device may be configured to process and reproduce sound differently than an unconsolidated playback device or paired playback devices. For instance, if the playback device 200 is a playback device designed to render low-frequency audio content (i.e., a subwoofer), the playback device 200 may be consolidated with a playback device designed to render full-frequency audio content. In such a case, the full-frequency playback device, when consolidated with the low-frequency playback device 200, may be configured to render only the mid-frequency and high-frequency components of the audio content, while the low-frequency playback device 200 renders the low-frequency component. The consolidated playback device may further be paired with a single playback device or with another consolidated playback device.
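The low/high band split described above can be illustrated with a simple crossover sketch. This is only an illustrative assumption: a one-pole low-pass filter stands in for real crossover filtering (actual products would use steeper filters), the subwoofer receives the low band, and the full-range device receives the complement.

```python
import math

def crossover_split(samples, sample_rate=44100, cutoff=120.0):
    """Split audio into a low band (for the subwoofer) and the
    complementary band (for the full-range playback device) using a
    one-pole low-pass filter."""
    rc = 1.0 / (2 * math.pi * cutoff)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    low, rest, prev = [], [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)  # one-pole low-pass state
        low.append(prev)
        rest.append(s - prev)             # complement: mid + high band
    return low, rest
```

By construction the two bands sum back to the original signal, which is the property that lets the two devices jointly render the full audio content.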
For example, SONOS, Inc. currently offers (or has offered) for sale certain playback devices including a "PLAY:1," "PLAY:3," "PLAY:5," "PLAYBAR," "CONNECT:AMP," "CONNECT," and "SUB." Additionally or alternatively, any other past, present, and/or future playback device may be used to implement the playback devices of the example embodiments disclosed herein. Additionally, it should be understood that a playback device is not limited to the example shown in FIG. 2 or to the SONOS product offerings. For example, a playback device may include a wired or wireless headphone. In another example, a playback device may include or interact with a docking station for a personal mobile media playback device. In yet another example, a playback device may be integrated into another device or component, such as a television, a lighting fixture, or some other device for indoor or outdoor use.
b.Example playback zone configuration
Referring again to the
As shown in fig. 1, the balcony, dining room, kitchen, bathroom, office, and bedroom zones each have one playback device, while the living room and master bedroom zones each have a plurality of playback devices. In the living room area,
In one example, one or more playback zones in the environment of fig. 1 may each be playing different audio content. For example, a user may be grilling on a balcony area and listening to hip-hop music being played by the
As set forth above, the zone configuration of the
Further, different playback zones of the
c.Example control device
Fig. 3 shows a functional block diagram of an example control device 300, which example control device 300 may be configured to be one or both of the
The processor 302 may be configured to perform functions related to facilitating user access, control, and configuration of the
Microphone 310 may include an audio sensor configured to convert detected sound into an electrical signal, which may be processed by the processor 302. In one case, if the control device 300 is a device that may also be used as a means for voice communication or voice recording, one or more of the microphones 310 may facilitate those functions as well. For example, one or more of the microphones may be configured to detect sounds in a frequency range that a human is capable of producing and/or a frequency range that is audible to humans. Other examples are also possible.
In one example, the
Playback device control commands, such as volume control and audio playback control, may also be communicated from the control device 300 to the playback device via the
The
The playback control zone 410 can include selectable (e.g., by touch or by use of a cursor) icons for causing playback devices in a selected playback zone or zone group to play or pause, fast forward, rewind, skip to the next track, skip to the previous track, enter/exit shuffle mode, enter/exit repeat mode, and enter/exit cross-fade mode. The playback control zone 410 may also include selectable icons for modifying equalization settings, playback volume, and the like.
The playback zone 420 may include a representation of a playback zone in the
For example, as shown, a "group" icon may be provided within each of the graphical representations of the playback zones. The "group" icon provided within the graphical representation of a particular zone may be selectable to bring up options for selecting one or more other zones of the media playback system to be grouped with the particular zone. Once grouped, playback devices in the zones that have been grouped with the particular zone will be configured to play audio content in synchrony with the one or more playback devices in the particular zone. Similarly, a "group" icon may be provided within the graphical representation of a zone group. In this case, the "group" icon may be selectable to bring up options for deselecting one or more zones of the zone group to be removed from the zone group. Other interactions and implementations for grouping and ungrouping zones via a user interface such as the user interface 400 are also possible. The representations of the playback zones in the playback zone 420 may be dynamically updated as playback zone or zone group configurations are modified.
The playback status zone 430 can include a graphical representation of the audio content that is currently being played, was previously played, or is scheduled to be played next in the selected playback zone or zone group. The selected playback zone or zone group may be visually distinguished on the user interface, such as within the playback zone 420 and/or the playback status zone 430. The graphical representation may include the track title, artist name, album year, track length, and other relevant information useful for the user to know when controlling the media playback system via the user interface 400.
The playback queue zone 440 may include a graphical representation of audio content in a playback queue associated with the selected playback zone or zone group. In some implementations, each playback zone or zone group may be associated with a playback queue containing information corresponding to zero or more audio items for playback by the playback zone or zone group. For example, each audio item in the playback queue may include a Uniform Resource Identifier (URI), a Uniform Resource Locator (URL), or some other identifier that a playback device in the playback zone or zone group may use to find and/or retrieve the audio item from a local audio content source or a networked audio content source, for playback by the playback device.
In one example, a playlist may be added to the playback queue, in which case information corresponding to each audio item in the playlist may be added to the playback queue. In another example, the audio items in the playback queue may be saved as a playlist. In yet another example, the playback queue may be empty, or populated but "not in use," when the playback zone or zone group is continuously playing streaming audio content, such as an internet radio station that may continue to play until otherwise stopped, rather than playing discrete audio items having playback durations. In an alternative embodiment, when a playback zone or zone group is playing internet radio and/or other streaming audio content items, the playback queue may include those items and be "in use." Other examples are also possible.
When playback zones or zone groups are "grouped" or "ungrouped," the playback queues associated with the affected playback zones or zone groups may be cleared or re-associated. For example, if a first playback zone including a first playback queue is grouped with a second playback zone including a second playback queue, the created zone group may have an associated playback queue that is initially empty, that includes the audio items from the first playback queue (e.g., if the second playback zone was added to the first playback zone), that includes the audio items from the second playback queue (e.g., if the first playback zone was added to the second playback zone), or that includes a combination of audio items from both the first and second playback queues. Subsequently, if the created zone group is ungrouped, the resulting first playback zone may be re-associated with the previous first playback queue, or be associated with a new playback queue that is empty or that includes the audio items from the playback queue associated with the created zone group before the created zone group was ungrouped. Similarly, the resulting second playback zone may be re-associated with the previous second playback queue, or be associated with a new playback queue that is empty or that includes the audio items from the playback queue associated with the created zone group before the created zone group was ungrouped. Other examples are also possible.
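The grouping and ungrouping behavior described above can be sketched in a few lines of code. This is a hedged illustration only: the names `PlaybackZone`, `group`, and `ungroup` are hypothetical, and only one of the described queue-merging options (the group adopting the first zone's queue) is shown.

```python
class PlaybackZone:
    """Minimal model of a playback zone with an associated playback queue."""

    def __init__(self, name, queue=None):
        self.name = name
        self.queue = list(queue or [])       # current queue of audio-item URIs
        self.saved_queue = list(self.queue)  # remembered for re-association


def group(first, second):
    """Group `second` with `first`; the new zone group adopts the first
    zone's queue (one of the options described above)."""
    first.saved_queue = list(first.queue)
    second.saved_queue = list(second.queue)
    shared = list(first.queue)  # could instead be second's items, or a combination
    first.queue = shared
    second.queue = shared


def ungroup(zone):
    """Re-associate the zone with its previous playback queue."""
    zone.queue = list(zone.saved_queue)
```

For example, grouping a dining-room zone into a living-room zone leaves both zones playing the living room's queue; ungrouping the dining-room zone restores its earlier queue.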
Referring again to the user interface 400 of fig. 4, the graphical representation of the audio content in the playback queue region 440 may include the track title, artist name, track length, and other relevant information associated with the audio content in the playback queue. In one example, the graphical representation of the audio content may optionally bring up further selectable icons for managing and/or manipulating the playback queue and/or the audio content represented in the playback queue. For example, the represented audio content may be removed from the playback queue, may be moved to a different location in the playback queue, or may be selected to be played immediately, or may be selected to be played after any audio content currently being played, and so forth. The playback queue associated with a playback zone or zone group may be stored in memory on one or more playback devices in the playback zone or zone group, or may be stored in memory on playback devices not in the playback zone or zone group, and/or may be stored in memory on some other designated device.
The audio content source section 450 may include a graphical representation of a selectable audio content source from which audio content may be retrieved and from which the retrieved audio content may be played by a selected playback zone or group of zones. A discussion of audio content sources may be found in the following sections.
d.Example audio content sources
As previously described, one or more playback devices in a zone or group of zones may be configured to retrieve audio content for playback from various available audio content sources (e.g., according to a corresponding URI or URL of the audio content). In one example, the playback device may retrieve audio content directly from a corresponding audio content source (e.g., a line-in connection). In another example, audio content may be provided to a playback device over a network via one or more other playback devices or network devices.
Example audio content sources may include: a media playback system such as a memory of one or more playback devices in the
In some implementations, the audio content source can be added to or removed from a media playback system, such as the
The above discussion relating to playback devices, controller devices, playback zone configurations, and media content sources provides only a few examples of operating environments in which the functions and methods described below may be implemented. Other operating environments and configurations of media playback systems, playback devices, and network devices not explicitly described herein are also applicable and suitable for implementation of the described functions and methods.
Calibrating playback devices of a playback environment
As described above, examples described herein relate to calibrating one or more playback devices of a playback environment based on audio signals detected by microphones of network devices as the network devices move around within the playback environment.
In one example, calibration of the playback device may be initiated when the playback device is first set up, or when the playback device has been moved to a new location. For example, where the playback device has been moved to a new location, calibration of the playback device may be initiated based on a detection of the movement (e.g., via a Global Positioning System (GPS), one or more accelerometers, or a change in wireless signal strength, among others) or based on user input indicating that the playback device has moved to the new location (e.g., a change in the name of a playback zone associated with the playback device).
In another example, calibration of the playback device may be initiated by a controller device (e.g., a network device). For example, a user may access a controller interface of the playback device to initiate calibration of the playback device. In one case, the user may access the controller interface and select the playback device (or a group of playback devices that includes the playback device) to be calibrated. In some cases, a calibration interface may be provided as part of a controller interface of the playback device to enable a user to initiate playback device calibration. Other examples are also possible.
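Taken together, the initiation conditions above amount to a simple decision rule. The sketch below is illustrative only; the event keys are assumed names, not part of any interface described in this disclosure.

```python
def should_initiate_calibration(event):
    """Decide whether playback-device calibration should be initiated.

    `event` is a dict describing what just happened; all keys used here
    are hypothetical labels for the triggers described above.
    """
    return any((
        event.get("first_setup", False),     # device set up for the first time
        event.get("moved", False),           # e.g. GPS, accelerometers, or a
                                             # change in wireless signal strength
        event.get("zone_renamed", False),    # user input implying a new location
        event.get("user_requested", False),  # initiated via a controller interface
    ))
```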
a.First example method of calibrating one or more playback devices
Fig. 5 illustrates an example flow diagram of a
Additionally, for the
In one example,
To help illustrate the
Referring again to
In one example, the first audio signal may be a test signal or measurement signal representing audio content that can be played by the playback device during regular use by the user. Thus, the first audio signal may comprise audio content having a frequency substantially covering a renderable frequency range or a human-audible frequency range of the
For purposes of illustration, the network device may be
In another example, the
Assuming that the second audio signal is detected by the microphone of the
In one example, both the first physical location and the second physical location may be within the
Assuming that the second audio signal is detected while the
In one example, movement of the
In one example, the first audio signal may have a predetermined duration (e.g., about 30 seconds), and the detection of the audio signal by the microphone of the
In one example, the
At
In one example, the second audio signal detected by the microphone of the
In one case, the microphone of the
Assume that the audio signal output by the microphone of the network device 602 is x(t). The output x(t) may then be mathematically represented as:

x(t)=s(t)⊗hm(t) (1)

where ⊗ represents the mathematical function of convolution, s(t) is the second audio signal detected by the microphone, and hm(t) is an acoustic characteristic of the microphone. Thus, the second audio signal s(t) detected by the microphone may be determined based on the signal x(t) output from the microphone and the acoustic characteristic hm(t) of the microphone. For example, a calibration algorithm such as hm^-1(t) may be applied to the audio signal output from the microphone of the network device 602. In one example, the acoustic characteristic hm(t) of the microphone of the network device 602 may be known. For example, a database of microphone acoustic characteristics and corresponding network device models and/or network device microphone models may be available. In another example, the acoustic characteristic hm(t) of the microphone of the network device 602 may be unknown. In this case, the playback device, e.g.,
In one example, identifying an audio processing algorithm may include: determining, based on the data indicative of the second audio signal, a frequency response associated with playback of the first audio signal, and identifying the audio processing algorithm based on the determined frequency response.
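Where the acoustic characteristic hm(t) of the microphone is known, applying a calibration such as hm^-1(t) can be sketched as a frequency-domain deconvolution. The following is a minimal NumPy illustration under a discrete, circular-convolution assumption; it is not the disclosed implementation, and the function name is hypothetical.

```python
import numpy as np


def recover_detected_signal(x, h_m, eps=1e-12):
    """Estimate the detected signal s(t) from the microphone output
    x(t) = s(t) ⊗ h_m(t), by dividing out the microphone's frequency
    response; `eps` guards against division by near-zero bins."""
    n = len(x)
    X = np.fft.rfft(x, n)
    H = np.fft.rfft(h_m, n)
    return np.fft.irfft(X / (H + eps), n)
```

In practice the inverse filter would need regularization where the microphone response is weak; the `eps` term stands in for that here.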
Assuming that the
In one example, an audio processing algorithm may then be identified based on the average frequency response. In one case, the audio processing algorithm can be determined such that application of the audio processing algorithm by the
In one example, the predetermined audio characteristic may be an audio frequency equalization that is considered good-sounding. In one case, the predetermined audio characteristic may include an equalization that is substantially even across the renderable frequency range of the playback device. In another case, the predetermined audio characteristic may include an equalization that is considered pleasing to a typical listener. In a further case, the predetermined audio characteristic may include a frequency response that is considered suitable for a particular music genre.
In either case, the
In one example, the relationship between the first audio signal f(t) played by the playback device and the second audio signal s(t) detected by the microphone of the network device may be mathematically represented as:

s(t)=f(t)⊗hpe(t) (2)

where ⊗ again represents convolution, and hpe(t) represents the acoustic characteristics of audio content played by the playback device 604 (at a location along the path 608) in the
Application of the audio processing algorithm p(t) to the second audio signal s(t) may then yield audio content z(t) having the predetermined audio characteristic:

z(t)=s(t)⊗p(t) (3)

Thus, the audio processing algorithm p(t) can be described mathematically as:

p(t)=z(t)⊗s^-1(t) (4)
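The relationship in equations (3) and (4) can be illustrated per frequency bin: average the magnitude responses detected along the path, then take the ratio of the desired response to that average. This NumPy sketch is a hedged illustration; the function name, the magnitude-only treatment, and the `eps` guard are assumptions rather than the disclosed method.

```python
import numpy as np


def correction_response(detected_mags, target_mag, eps=1e-9):
    """Per-frequency correction |P(f)| = |Z(f)| / |S(f)|.

    detected_mags: magnitude responses |S(f)| measured at points along the
                   path through the playback environment.
    target_mag:    desired magnitude response |Z(f)|, e.g. a flat equalization.
    """
    s_avg = np.mean(np.asarray(detected_mags), axis=0)  # average frequency response
    return np.asarray(target_mag) / (s_avg + eps)
```

A bin where the averaged detection is 6 dB low thus receives a 6 dB boost, moving the playback device's output toward the predetermined audio characteristic.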
in some cases, identifying the audio processing algorithm may include the
At
In one example, the data indicative of the identified audio processing algorithm may include one or more parameters of the identified audio processing algorithm. In another example, the database of audio processing algorithms can be accessible by the playback device. In this case, the data indicative of the identified audio processing algorithm may point to an entry in the database corresponding to the identified audio processing algorithm.
In some cases, if the
While the above discussion generally refers to calibrating a single playback device, one of ordinary skill in the art will appreciate that similar functions may also be performed to calibrate multiple playback devices, either individually or as a group. For example, the
In one example, the first audio signal and the third audio signal may be substantially the same and/or played simultaneously. In another example, the first audio signal and the third audio signal may be orthogonal, or otherwise distinguishable. For example, the
In either case, the second audio signal detected by the microphone of the
In an example, a first audio processing algorithm can be identified for application by the
In one example, upon initially identifying the audio processing algorithm, the
In some cases, a user may activate or deactivate the identified audio processing algorithm for a certain period of time. In one example, this may allow the user more time to evaluate whether to have the
As described above, the
b.Second example method for calibrating one or more playback devices
Fig. 7 illustrates an example flow diagram of a
In one example,
At
In one example, the first audio signal may be substantially similar to the first audio signal discussed above in connection with
At
In one case, when the microphone of the
At
At
As described above, the
c.Third example method for calibrating one or more playback devices
Fig. 8 illustrates an example flow diagram of a
In one example,
As shown in fig. 8,
At
At
At
As described above, the
In some cases, two or more network devices may be used to calibrate one or more playback devices, either individually or collectively. For example, two or more network devices may detect audio signals played by the one or more playback devices as the network devices move around a playback environment. For example, one network device may move around an area where a first user regularly listens to audio content played by the one or more playback devices, while another network device may move around an area where a second user regularly listens to such audio content. In this case, the processing algorithm may be determined based on the audio signals detected by both of the two or more network devices.
Further, in some cases, a processing algorithm may be determined for each of the two or more network devices based on the audio signals detected as the respective network device traverses a different path within the playback environment. Then, if a particular one of the network devices is used to initiate playback of audio content by the one or more playback devices, the processing algorithm determined based on the audio signals detected as that particular network device traversed the playback environment may be applied. Other examples are also possible.
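One plausible way to realize this per-device behavior is a simple mapping from the calibrating network device to its determined processing algorithm. The structure and names below are assumptions for illustration; the disclosure does not specify a data structure.

```python
# Maps each calibrating network device to the processing algorithm determined
# from the path that device traversed through the playback environment.
algorithms_by_device = {}


def store_algorithm(device_id, algorithm):
    """Record the processing algorithm determined for a network device."""
    algorithms_by_device[device_id] = algorithm


def algorithm_for_playback(initiating_device_id, default=None):
    """Select the algorithm to apply when this device initiates playback."""
    return algorithms_by_device.get(initiating_device_id, default)
```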
Calibrating a network device microphone using a playback device microphone
As discussed above in connection with figs. 5-8, calibrating a playback device in a playback environment may depend on knowing the acoustic characteristics of, and/or a calibration algorithm for, the microphone of the network device used for the calibration. In some cases, however, those acoustic characteristics and/or the calibration algorithm may be unknown.
Examples discussed in this section include calibrating a microphone of the network device based on audio signals detected by the microphone of the network device when the network device is placed within a predetermined physical range of the microphone of the playback device.
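A plausible sketch of that idea: with the two devices co-located, treat the playback device's microphone (whose characteristics are known or already compensated) as the reference, and derive the network microphone's calibration as the per-frequency ratio of the reference spectrum to the network microphone's spectrum. This NumPy-based sketch is illustrative only; the disclosure does not specify this implementation, and the names are hypothetical.

```python
import numpy as np


def microphone_calibration(network_mic_signal, reference_mic_signal, eps=1e-9):
    """Per-frequency calibration for the network-device microphone.

    Both signals capture the same audio (the network device is within a
    predetermined physical range of the playback device's microphone), so
    the ratio of the reference spectrum to the network microphone's
    spectrum estimates the filter correcting the network microphone.
    """
    N = np.fft.rfft(np.asarray(network_mic_signal, dtype=float))
    M = np.fft.rfft(np.asarray(reference_mic_signal, dtype=float))
    return M / (N + eps)
```

Applying the returned calibration to the network microphone's spectrum (bin-wise multiplication) would, under these assumptions, match it to the reference.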
a.First example method for calibrating a network device microphone
Fig. 9 illustrates an example flow diagram of a first method for calibrating a network device microphone. The
In one example,
To help illustrate the
The
The
In one example, the
In one example, calibration of a microphone of
Referring again to
Thus, depending on the position of the
In one example, the
In one example, the first audio signal detected by the microphone of the
In one example, the third audio signal played by one or
Once the
Then, one or more of the
At
In another example, the second audio signal may be detected by the
In this case, one or more of the
In one example, the
At
Assume that the second audio signal detected by the
similarly, assume that the first audio signal detected by the microphone of
as described above, assuming that the first audio signal f (t) detected by the microphone of the
Thus, due to the data n (t) indicative of the first audio signal, the data m (t) indicative of the second audio signal and the data h of the acoustic properties of the
In one example, the microphone calibration algorithm for the microphone of the
In some cases, identifying the microphone calibration algorithm may include the
At
In one example,
The database may be populated with a plurality of entries of microphone calibration algorithms and/or associations between microphone calibration algorithms and one or more characteristics of microphones of the network device. As described above, the
The database may be accessed by other network devices, the computing device including
In some cases, the microphone calibration algorithms determined for the same model of network device or microphone may vary, due to variations in microphone production and manufacturing quality control as well as variations in the calibration process (e.g., potential inconsistencies in where the network device is placed during calibration). In such cases, a representative microphone calibration algorithm may be determined from the varying microphone calibration algorithms. For example, the representative microphone calibration algorithm may be an average of the varying microphone calibration algorithms. In one case, each time a calibration is performed for a microphone of a particular model of network device, the entry in the database for that particular model of network device may be updated with an updated representative calibration algorithm.
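Maintaining such a representative entry can be as simple as keeping a running average per model. The sketch below is hedged: the database layout and names are assumptions, not the disclosed design.

```python
# model identifier -> (number of calibrations seen, representative values)
calibration_db = {}


def record_calibration(model, calibration):
    """Fold a newly determined calibration into the model's running average."""
    count, rep = calibration_db.get(model, (0, [0.0] * len(calibration)))
    new_count = count + 1
    # incremental mean: weight the old representative by its sample count
    new_rep = [(r * count + c) / new_count for r, c in zip(rep, calibration)]
    calibration_db[model] = (new_count, new_rep)
    return new_rep
```

Each new calibration for the same model nudges the stored entry toward the population mean, smoothing out unit-to-unit and placement variation.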
As described above,
In some cases,
b.Second example method for calibrating a network device microphone
Fig. 11 illustrates an example flow diagram of a second method for calibrating a network device microphone. The
In one example,
At
At
At
At
For example, in this case, rather than being applied by the
As described in connection with
Based on the data indicative of the detected audio signal and the data indicative of the second audio signal, a second microphone calibration algorithm is identified and an association between the determined second microphone calibration algorithm and one or more characteristics of a microphone of the second network device is stored in a database. The
As also described in connection with
In one such case, for example, if the second network device is the same model as
As described above, the
Conclusion
The above description discloses, among other things, various example systems, methods, apparatus, and articles of manufacture including components, such as firmware and/or software, executed on hardware. It should be understood that these examples are illustrative only and should not be considered as limiting. For example, it is contemplated that any or all of these firmware, hardware, and/or software aspects or components could be embodied exclusively in hardware, exclusively in software, exclusively in firmware, or in any combination of hardware, software, and/or firmware. Accordingly, the examples provided are not the only way to implement such systems, methods, apparatus, and/or articles of manufacture.
The following examples set forth additional or alternative aspects of the disclosure. The device in any of the following examples may be a component of any of the devices described herein or any configuration of the devices described herein.
(feature 1) a network device comprising:
a microphone; a processor; and a memory storing instructions executable by the processor to cause the network device to perform functions comprising:
detecting, by a microphone of the network device, a first audio signal when the network device is placed within a predetermined physical range of the microphone of the playback device;
receiving data indicative of a second audio signal detected by a microphone of the playback device;
identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and
the microphone calibration algorithm is applied when performing a calibration function associated with the playback device.
(feature 2) the network device of feature 1, wherein the functions further comprise storing an association between the determined microphone calibration algorithm and one or more characteristics of a microphone of the network device in a database.
(feature 3) the network device according to any one of features 1 and 2, wherein the microphone of the playback device detects the second audio signal when the microphone of the network device detects the first audio signal.
(feature 4) the network device of any one of features 1 to 3, wherein the functions further comprise causing one or more playback devices to play a third audio signal upon detecting the first audio signal, wherein the first audio signal and the second audio signal each include a portion corresponding to the third audio signal.
(feature 5) the network device of feature 4, wherein the one or more playback devices include the playback device.
(feature 6) the network device of any of features 1-5, wherein the functions further comprise receiving an input to calibrate a microphone of the network device prior to detecting the first audio signal.
(feature 7) the network device of any of features 1-6, wherein the functions further comprise, prior to detecting the first audio signal, providing a graphical representation on a graphical interface indicating that the network device is to be placed within a predetermined physical range of a microphone of the playback device.
(feature 8) the network device of any of features 1-7, wherein the functions further comprise determining that the network device is placed within a predetermined physical range of a microphone of the playback device prior to detecting the first audio signal.
(feature 9) the network device of any of features 1 to 8, wherein identifying the microphone calibration algorithm comprises:
transmitting data indicative of the first audio signal to a computing device; and
receiving the microphone calibration algorithm from the computing device.
(feature 10) a computing device comprising:
a processor; and a memory storing instructions executable by the processor to cause the computing device to perform functions comprising:
receiving, from a network device, data indicative of a first audio signal detected by a microphone of the network device when the network device is placed within a predetermined physical range of a microphone of a playback device;
receiving data indicative of a second audio signal detected by a microphone of the playback device;
identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and
the microphone calibration algorithm is applied when performing calibration functions associated with the network device and the playback device.
(feature 11) the computing device of feature 10, wherein the functions further comprise sending data indicative of the microphone calibration algorithm to the network device.
(feature 12) the computing device of any of features 10 and 11, wherein the functions further comprise storing an association between the determined microphone calibration algorithm and one or more characteristics of a microphone of the network device in a database.
(feature 13) the computing device of any of features 10-12, wherein the network device is a first network device, wherein the microphone calibration algorithm is a first microphone calibration algorithm, and wherein the functions further comprise:
receiving, from a second network device, data indicative of a third audio signal detected by a microphone of the second network device when the second network device is placed within a predetermined physical range of the microphone of the playback device;
identifying a second microphone calibration algorithm based on the data indicative of the third audio signal and the data indicative of the second audio signal; and
causing an association between the determined second microphone calibration algorithm and one or more characteristics of a microphone of the second network device to be stored in a database.
(feature 14) the computing device of feature 13, wherein the functions further comprise sending data indicative of the second microphone calibration algorithm to the second network device.
(feature 15) the computing device of any of features 13 and 14, wherein the functions further comprise:
determining that the microphone of the first network device and the microphone of the second network device are substantially the same;
responsively determining a third microphone calibration algorithm based on the first microphone calibration algorithm and the second microphone calibration algorithm; and
causing an association between the determined third microphone calibration algorithm and one or more characteristics of the microphone of the first network device to be stored in the database.
(feature 16) a non-transitory computer-readable medium having stored thereon instructions executable by a computing device to cause the computing device to perform functions comprising:
receiving, from a network device, data indicative of a first audio signal detected by a microphone of the network device when the network device is placed within a predetermined physical range of a microphone of a playback device;
receiving data indicative of a second audio signal detected by a microphone of the playback device;
identifying a microphone calibration algorithm based on the data indicative of the first audio signal and the data indicative of the second audio signal; and
causing an association between the determined microphone calibration algorithm and one or more characteristics of a microphone of the network device to be stored in a database.
(feature 17) the non-transitory computer-readable medium of feature 16, wherein the functions further comprise transmitting data indicative of the microphone calibration algorithm to the network device.
(feature 18) the non-transitory computer-readable medium of any of features 16 and 17, wherein receiving data indicative of the second audio signal detected by the microphone of the playback device includes receiving data indicative of the second audio signal detected by the microphone of the playback device from the playback device.
(feature 19) the non-transitory computer-readable medium of any of features 16 to 18, wherein receiving data indicative of the second audio signal detected by the microphone of the playback device includes receiving data indicative of the second audio signal detected by the microphone of the playback device from the network device.
(feature 20) the non-transitory computer-readable medium of any of features 16-19, wherein the functions further comprise causing one or more playback devices to play a third audio signal prior to receiving the data indicative of the first audio signal, wherein the first audio signal includes a portion corresponding to the third audio signal.
(feature 21) a computing device comprising:
a processor; and a memory storing instructions executable by the processor to cause the computing device to perform functions comprising:
identifying, within a database of microphone acoustic characteristics, acoustic characteristics of a microphone on a network device that correspond to particular characteristics of the network device; and
calibrating a playback device based on at least the identified acoustic characteristics of the microphone.
(feature 22) the computing device of feature 21, wherein the functions further comprise maintaining a database of microphone acoustic characteristics.
(feature 23) the computing device of any of features 21 and 22, wherein identifying the acoustic characteristics of the microphone comprises:
sending, to a server maintaining a database of microphone acoustic characteristics, data indicative of characteristics of the network device and a query for acoustic characteristics corresponding to the characteristics of the network device; and
receiving data from the server indicating the queried acoustic characteristics of the microphone.
(feature 24) the computing device of any of features 21-23, wherein identifying the acoustic characteristics of the microphone comprises:
identifying a particular model of the microphone that corresponds to a characteristic of the network device;
sending, to a server maintaining a database of microphone acoustic characteristics, data indicative of the particular model of the microphone and a query for acoustic characteristics corresponding to the particular model; and
receiving data from the server indicating the queried acoustic characteristics of the microphone.
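Features 23 and 24 describe querying a server-maintained database of microphone acoustic characteristics, either by device characteristics or by microphone model. A hypothetical server-side lookup might be sketched as follows (the database name, model identifiers, and values are illustrative assumptions only):

```python
# Hypothetical sketch of the server-side database described above: a mapping
# from microphone model identifiers to stored acoustic characteristics
# (here, per-band gain values chosen purely for illustration).
MIC_CHARACTERISTICS_DB = {
    "mic-model-a": [1.0, 0.9, 0.8],
    "mic-model-b": [1.0, 1.1, 1.2],
}

def query_acoustic_characteristics(model_id):
    """Return the stored acoustic characteristic for a microphone model,
    or None when the model is not in the database."""
    return MIC_CHARACTERISTICS_DB.get(model_id)
```

In feature 24, the computing device would first resolve the network device's characteristics to a microphone model, then issue such a query and receive the stored characteristic in response.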
(feature 25) the computing device of any of features 21-24, wherein calibrating the playback device comprises:
determining an audio processing algorithm based on the identified acoustic characteristics of the microphone; and
causing the playback device to apply the audio processing algorithm while playing media content.
(feature 26) the computing device of feature 25, wherein the functions further comprise:
receiving (i) first data indicative of a first audio signal and (ii) second data indicative of a second audio signal detected by a microphone of the network device when the playback device is playing the first audio signal, and
wherein determining the audio processing algorithm comprises:
determining the audio processing algorithm further based on the first audio signal and the second audio signal.
(feature 27) the computing device of any of features 25 and 26, wherein the audio processing algorithm includes an inversion of the identified acoustic characteristic of the microphone, and wherein causing the playback device to apply the audio processing algorithm while playing media content includes modifying the played media content by an inverse function of the identified acoustic characteristic of the microphone.
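Feature 27 describes modifying played media content by an inverse function of the microphone's identified acoustic characteristic. A minimal frequency-domain sketch of that inversion (function name and FFT framing are assumptions; the patent does not specify an implementation) could look like:

```python
import numpy as np

def apply_inverse_characteristic(media, mic_response, n_fft=1024):
    """Illustrative sketch: compensate for a microphone's known per-bin
    frequency response by filtering media content with its inverse."""
    spectrum = np.fft.rfft(media, n=n_fft)
    eps = 1e-12  # avoid division by zero where the response is null
    # Modify the content by the inverse function of the acoustic characteristic.
    corrected = spectrum / (mic_response + eps)
    return np.fft.irfft(corrected, n=n_fft)
```

With a flat (all-ones) response the media passes through unchanged; with a non-flat response, the audio output is pre-adjusted so that measurements taken through that microphone are not biased by its coloration.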
(feature 28) a network device, comprising:
a microphone; a processor; and a memory storing instructions executable by the processor to cause the network device to perform functions comprising:
identifying, within a database of microphone acoustic characteristics, acoustic characteristics of the microphone that correspond to particular characteristics of the network device; and
calibrating a playback device based on at least the identified acoustic characteristics of the microphone.
(feature 29) the network device of feature 28 wherein the functions further comprise maintaining a database of microphone acoustic characteristics.
(feature 30) the network device of any of features 28 and 29, wherein identifying the acoustic characteristics of the microphone comprises:
sending, to a server maintaining a database of microphone acoustic characteristics, data indicative of characteristics of the network device and a query for acoustic characteristics corresponding to the characteristics of the network device; and
receiving data from the server indicating the queried acoustic characteristics of the microphone.
(feature 31) the network device of any of features 28 to 30, wherein identifying the acoustic characteristics of the microphone comprises:
identifying a particular model of the microphone that corresponds to a characteristic of the network device;
sending, to a server maintaining a database of microphone acoustic characteristics, data indicative of the particular model of the microphone and a query for acoustic characteristics corresponding to the particular model; and
receiving data from the server indicating the queried acoustic characteristics of the microphone.
(feature 32) the network device of any of features 28 to 31, wherein calibrating the playback device comprises:
determining an audio processing algorithm based on the identified acoustic characteristics of the microphone; and
causing the playback device to apply the audio processing algorithm while playing media content.
(feature 33) the network device of feature 32, wherein the functions further comprise:
receiving (i) first data indicative of a first audio signal and (ii) second data indicative of a second audio signal detected by the microphone while the playback device is playing the first audio signal, and
wherein determining the audio processing algorithm comprises:
determining the audio processing algorithm further based on the first audio signal and the second audio signal.
(feature 34) the network device of any of features 32 and 33, wherein the audio processing algorithm comprises an inversion of the identified acoustic characteristic of the microphone, and wherein causing the playback device to apply the audio processing algorithm while playing media content comprises modifying the played media content by an inverse function of the identified acoustic characteristic of the microphone.
(feature 35) a playback device comprising:
a processor; and a memory storing instructions executable by the processor to cause the playback device to perform functions comprising:
identifying, within a database of microphone acoustic characteristics, acoustic characteristics of a microphone on a network device that correspond to particular characteristics of the network device; and
calibrating the playback device based on at least the identified acoustic characteristics of the microphone.
(feature 36) the playback device of feature 35, wherein the functions further comprise maintaining a database of microphone acoustic characteristics.
(feature 37) the playback device of any one of features 35 and 36, wherein identifying the acoustic characteristic of the microphone comprises:
transmitting, to a server maintaining a database of microphone acoustic characteristics, data indicative of characteristics of the playback device and a query for acoustic characteristics corresponding to the characteristics of the playback device; and
receiving data from the server indicating the queried acoustic characteristics of the microphone.
(feature 38) the playback device of any of features 35 to 37, wherein identifying the acoustic characteristic of the microphone comprises:
identifying a particular model of the microphone that corresponds to a characteristic of the playback device;
sending, to a server maintaining a database of microphone acoustic characteristics, data indicative of the particular model of the microphone and a query for acoustic characteristics corresponding to the particular model; and
receiving data from the server indicating the queried acoustic characteristics of the microphone.
(feature 39) the playback device of any of features 35 to 38, wherein calibrating the playback device comprises:
determining an audio processing algorithm based on the identified acoustic characteristics of the microphone; and
applying the audio processing algorithm while playing media content.
(feature 40) the playback device of feature 39, wherein the functions further comprise:
playing the first audio signal;
receiving data indicative of a second audio signal detected by a microphone of the network device while playing the first audio signal, and
wherein determining the audio processing algorithm comprises:
determining the audio processing algorithm further based on the first audio signal and the second audio signal.
In addition, the present invention may be configured as follows:
scheme 1. a method comprising:
identifying a microphone calibration algorithm based on:
first data indicative of a first audio signal detected by a microphone of a playback device when a network device is placed within a predetermined physical range of the microphone;
second data indicative of a second audio signal detected by a microphone of the playback device; and
applying the microphone calibration algorithm when performing a calibration function associated with the playback device.
Scheme 2. the method of scheme 1, further comprising storing an association between the identified microphone calibration algorithm and one or more characteristics of a microphone of the network device in a database.
Scheme 3. the method of scheme 1 or 2, wherein the microphone of the playback device detects the second audio signal when the microphone of the network device detects the first audio signal.
Scheme 4. the method of any of schemes 1-3, further comprising causing one or more playback devices to play a third audio signal upon detecting the first audio signal, wherein at least one of the first audio signal and the second audio signal includes a portion corresponding to the third audio signal.
Scheme 5. the method of scheme 4, wherein the one or more playback devices comprise the playback device.
Scheme 6. the method of any preceding scheme, further comprising, prior to detecting the first audio signal, at least one of:
receiving an input to calibrate a microphone of the network device;
causing a graphical representation to be provided on a graphical interface of the network device indicating that the network device is to be placed within a predetermined physical range of a microphone of the playback device; and
determining that the network device is placed within a predetermined physical range of a microphone of the playback device.
Scheme 7. the method of any preceding scheme, wherein identifying the microphone calibration algorithm comprises:
transmitting data indicative of the first audio signal to a computing device; and
receiving the microphone calibration algorithm from the computing device.
Scheme 8. the method according to any preceding scheme, wherein,
the network device is a first network device, wherein the microphone calibration algorithm is a first microphone calibration algorithm;
the method further comprises the following steps:
identifying a second microphone calibration algorithm based on the data indicative of the second audio signal and data received from a second network device indicative of a third audio signal detected by a microphone of the playback device when the second network device is placed within a predetermined physical range of the microphone; and
causing an association between the identified second microphone calibration algorithm and one or more characteristics of a microphone of the second network device to be stored in a database.
Scheme 9. the method of scheme 8, further comprising transmitting data indicative of the second microphone calibration algorithm to the second network device.
Scheme 10. the method of scheme 8 or 9, further comprising:
determining that the microphone of the first network device and the microphone of the second network device are substantially the same;
responsively determining a third microphone calibration algorithm based on the first microphone calibration algorithm and the second microphone calibration algorithm; and
causing an association between the determined third microphone calibration algorithm and one or more characteristics of the microphone of the first network device to be stored in the database.
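Scheme 10 derives a third calibration algorithm from two calibrations of substantially identical microphones. The combination rule is not specified; one natural and purely illustrative choice (an assumption, not the claimed method) is to average the two per-frequency calibration curves:

```python
import numpy as np

def combine_calibrations(cal_a, cal_b):
    """Illustrative sketch: when two network devices have substantially the
    same microphone, derive a third calibration curve by averaging the two
    existing per-frequency calibration curves. The averaging choice is an
    assumption for illustration only."""
    return (np.asarray(cal_a, dtype=float) + np.asarray(cal_b, dtype=float)) / 2.0
```

The combined curve would then be stored in the database against the shared microphone characteristics, smoothing out unit-to-unit measurement noise.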
Scheme 11. the method of any preceding scheme, wherein the data indicative of the second audio signal is received from one of:
the playback device; and
the network device.
Scheme 12. a network device or computing device configured to perform a method according to any preceding scheme.
Scheme 13. the computing device of scheme 12, further configured to receive data indicative of a first audio signal detected by a microphone of the network device and to receive data indicative of a second audio signal detected by a microphone of the playback device.
Scheme 14. the computing device of scheme 12 or 13, further configured to transmit data indicative of the microphone calibration algorithm to the network device.
Scheme 15. a method comprising:
identifying, within a database of microphone acoustic characteristics, an acoustic characteristic of a microphone on a network device, wherein the acoustic characteristic corresponds to a particular characteristic of the network device; and
calibrating a playback device based on at least the identified acoustic characteristics of the microphone.
Scheme 16. the method of scheme 15, further comprising maintaining a database of microphone acoustic characteristics.
Scheme 17. the method of scheme 15 or 16, wherein identifying the acoustic characteristic of the microphone comprises:
sending, to a server maintaining a database of microphone acoustic characteristics, data indicative of characteristics of the network device and a query for acoustic characteristics corresponding to the characteristics of the network device; and
receiving data from the server indicating the queried acoustic characteristics of the microphone.
Scheme 18. the method of one of schemes 15 to 17, wherein identifying the acoustic characteristics of the microphone comprises:
identifying a particular model of the microphone that corresponds to a characteristic of the network device;
sending, to a server maintaining a database of microphone acoustic characteristics, data indicative of the particular model of the microphone and a query for acoustic characteristics corresponding to the particular model; and
receiving data from the server indicating the queried acoustic characteristics of the microphone.
Scheme 19. the method of one of schemes 15 to 18, wherein calibrating the playback device comprises:
determining an audio processing algorithm based on the identified acoustic characteristics of the microphone; and
causing the playback device to apply the audio processing algorithm while playing media content.
Scheme 20. the method of scheme 19 in combination with one of schemes 1 to 14, further comprising:
receiving (i) first data indicative of a first audio signal played by the playback device and (ii) second data indicative of a second audio signal detected by a microphone of the network device while the playback device is playing the first audio signal, and
wherein determining the audio processing algorithm comprises determining the audio processing algorithm further based on the first audio signal and the second audio signal.
Scheme 21. the method of scheme 19 or 20, wherein,
the audio processing algorithm comprises an inversion of the identified acoustic characteristic of the microphone; and
causing the playback device to apply the audio processing algorithm while playing media content includes modifying the played media content by an inverse function of the identified acoustic characteristic of the microphone.
Scheme 22. one of the following devices:
a computing device comprising a microphone and configured to perform the method of any of schemes 15-21;
a network device comprising a microphone and configured to perform the method of any of schemes 15-21; and
a playback device configured to perform the method of any of schemes 1-5 or 7.
Furthermore, references herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one example embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are different or alternative embodiments mutually exclusive of other embodiments. Likewise, those skilled in the art will explicitly and implicitly understand that the embodiments described herein may be combined with other embodiments.
The description is presented primarily in terms of illustrative environments, systems, processes, steps, logic blocks, processes, and other symbolic representations that are directly or indirectly analogous to the operation of data processing devices coupled to a network. These process descriptions and representations are generally used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. Numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will be understood by those skilled in the art that certain embodiments of the present disclosure may be practiced without the specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the embodiments. Accordingly, the scope of the present disclosure is defined by the appended claims rather than by the foregoing description of the embodiments.
When any of the appended claims are read to cover a purely software and/or firmware implementation, at least one unit in at least one example is hereby expressly defined to include a tangible, non-transitory medium such as a memory, DVD, CD, Blu-ray, etc. storing the software and/or firmware.