Transmitting audio to various channels using application location information

Document No.: 441205    Publication date: 2021-12-24

Abstract: Methods and apparatus for panning audio may include receiving audio data from an application that is opened on at least one of a plurality of display devices in communication with a computer device. The methods and apparatus may include selecting a set of speakers from a plurality of speakers of the plurality of display devices to receive the audio data in response to speaker location information and application location information having a current location for the application. The methods and apparatus may include receiving updated application location information having a new location for the application, and selecting a new set of speakers from the plurality of speakers to receive the audio data in response to the new location for the application. The methods and apparatus may include transitioning the audio data from the set of speakers to the new set of speakers.

1. A computer device, comprising:

a memory for storing data and instructions;

at least one processor configured to communicate with the memory; and

an operating system configured to communicate with the memory and the at least one processor, wherein the operating system is operable to:

receive audio data from an application that is opened on at least one of a plurality of display devices in communication with the computer device;

select a set of speakers from a plurality of speakers of the plurality of display devices to receive the audio data in response to speaker location information for the plurality of speakers and application location information having a current location for the application;

receive updated application location information having a new location for the application;

select a new set of speakers from the plurality of speakers to receive the audio data in response to the new location for the application and the speaker location information; and

transition the audio data from the set of speakers to the new set of speakers.

2. The computer device of claim 1, wherein the application location information identifies the at least one display device on which the application is located and the current location identifies a location of the application on the at least one display device, or

wherein the speaker location information identifies a static orientation of each speaker of the plurality of speakers on the plurality of display devices.

3. The computer device of claim 1, wherein the operating system is further operable to:

determine a weight to be applied to each speaker in the set of speakers and each speaker in the new set of speakers, wherein the weight controls an output level for the audio data; and

transition the audio data from the set of speakers to the new set of speakers by decreasing the weight for each speaker in the set of speakers and increasing the weight for each speaker in the new set of speakers.

4. The computer device of claim 3, wherein the weight is determined in response to at least one of a distance from a static orientation of each speaker in the set of speakers to the current location of the application or a distance from a static orientation of each speaker in the new set of speakers to the new location of the application.

5. The computer device of claim 1, wherein the operating system is further operable to: select the set of speakers in response to the speaker location information for the set of speakers indicating that at least one speaker of the set of speakers is located on the at least one display device or that the set of speakers is located within a predetermined radius of the current location of the application.

6. The computer device of claim 1, wherein the new location is on a different display device of the plurality of display devices and the new set of speakers is located on the different display device.

7. The computer device of claim 1, wherein the set of speakers and the new set of speakers output the audio data.

8. A method for panning audio across a plurality of speakers, comprising:

receiving, at an operating system executing on a computer device, audio data from an application that is opened on at least one display device of a plurality of display devices in communication with the computer device;

selecting a set of speakers from the plurality of speakers of the plurality of display devices to receive the audio data in response to speaker location information for the plurality of speakers and application location information having a current location for the application;

receiving updated application location information having a new location for the application;

selecting a new set of speakers from the plurality of speakers to receive the audio data in response to the new location for the application and the speaker location information; and

transitioning the audio data from the set of speakers to the new set of speakers.

9. The method of claim 8, wherein the application location information identifies the at least one display device on which the application is located and the current location identifies a location of the application on the at least one display device, or

wherein the speaker location information identifies a static orientation of each speaker of the plurality of speakers on the plurality of display devices.

10. The method of claim 8, wherein the method further comprises:

determining a weight applied to each speaker in the set of speakers and each speaker in the new set of speakers, wherein the weight controls an output level for the audio data, and

wherein the audio data is transitioned from the set of speakers to the new set of speakers by decreasing the weight for each speaker in the set of speakers and increasing the weight for each speaker in the new set of speakers.

11. The method of claim 10, wherein the weight is determined in response to at least one of a distance from a static orientation of each speaker in the set of speakers to the current location of the application or a distance from a static orientation of each speaker in the new set of speakers to the new location of the application.

12. The method of claim 8, wherein the set of speakers is selected in response to the speaker location information for the set of speakers indicating that at least one speaker of the set of speakers is located on the at least one display device or that the set of speakers is located within a predetermined radius of the current location of the application.

13. The method of claim 8, wherein the new location is on a different display device of the plurality of display devices and the new set of speakers is located on the different display device.

14. The method of claim 8, wherein the set of speakers is selected from the plurality of speakers in response to auxiliary information and the new set of speakers is selected from the plurality of speakers in response to updated auxiliary information.

15. A computer-readable medium storing instructions executable by a computer device, comprising:

at least one instruction for causing the computer device to receive audio data from an application that is opened on at least one display device of a plurality of display devices in communication with the computer device;

at least one instruction for causing the computer device to select a set of speakers from a plurality of speakers of the plurality of display devices to receive the audio data in response to speaker location information for the plurality of speakers and application location information having a current location for the application;

at least one instruction for causing the computer device to receive updated application location information having a new location for the application;

at least one instruction for causing the computer device to select a new set of speakers from the plurality of speakers to receive the audio data in response to the new location for the application and the speaker location information; and

at least one instruction for causing the computer device to transition the audio data from the set of speakers to the new set of speakers.

Background

The present disclosure relates to transmitting audio across multiple speakers.

When multiple devices are in communication with a computer device, there may be multiple speakers that may output audio for an application. Thus, depending on where the application is located, it may be necessary to select a speaker of the device to output audio. Furthermore, as new applications are opened, the selected speakers may need to be updated for the various applications.

Accordingly, there is a need in the art for improvements in transmitting audio across multiple speakers.

Summary

The following presents a simplified summary of one or more implementations of the disclosure in order to provide a basic understanding of such implementations. This summary is not an extensive overview of all contemplated implementations, and is intended to neither identify key or essential elements of all implementations, nor delineate the scope of any or all implementations. Its sole purpose is to present some concepts of one or more implementations of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.

One example implementation relates to a computer device. The computer device may include a memory for storing data and instructions, at least one processor configured to communicate with the memory, and an operating system configured to communicate with the memory and the processor, wherein the operating system is operable to: receive audio data from an application that is opened on at least one of a plurality of display devices in communication with the computer device; select a set of speakers from a plurality of speakers of the plurality of display devices to receive the audio data in response to speaker location information for the plurality of speakers and application location information having a current location for the application; receive updated application location information having a new location for the application; select a new set of speakers from the plurality of speakers to receive the audio data in response to the new location for the application and the speaker location information; and transition the audio data from the set of speakers to the new set of speakers.

Another example implementation relates to a method for panning audio across a plurality of speakers. The method may include receiving, at an operating system executing on a computer device, audio data from an application, the application being opened on at least one display device of a plurality of display devices in communication with the computer device. The method may include selecting a set of speakers from a plurality of speakers of the plurality of display devices to receive the audio data in response to speaker location information for the plurality of speakers and application location information having a current location for the application. The method may include receiving updated application location information having a new location for the application. The method may include selecting a new set of speakers from the plurality of speakers to receive the audio data in response to the new location for the application and the speaker location information. The method may include transitioning the audio data from the set of speakers to the new set of speakers.

Another example implementation relates to a computer-readable medium storing instructions executable by a computer device. The computer-readable medium may include at least one instruction for causing a computer device to receive audio data from an application that is opened on at least one of a plurality of display devices in communication with the computer device. The computer-readable medium may include at least one instruction for causing the computer device to select a set of speakers from a plurality of speakers to receive the audio data in response to speaker location information for the plurality of speakers and application location information having a current location for the application. The computer-readable medium may include at least one instruction for causing the computer device to receive updated application location information having a new location for the application. The computer-readable medium may include at least one instruction for causing the computer device to select a new set of speakers from the plurality of speakers to receive the audio data in response to the new location for the application and the speaker location information. The computer-readable medium may include at least one instruction for causing the computer device to transition the audio data from the set of speakers to the new set of speakers.

Additional advantages and novel features will be set forth in part in the description which follows and in part will become apparent to those skilled in the art upon examination of the following or may be learned by practice of the disclosure.

Drawings

In the drawings:

FIG. 1 is a schematic block diagram of an example computer device in communication with a plurality of display devices in accordance with implementations of the present disclosure;

FIG. 2 is an example of a speaker array according to an implementation of the present disclosure;

FIG. 3 is an example of selecting speakers to output audio data for two applications located on multiple display devices according to an implementation of the present disclosure;

FIG. 4 is an example of panning audio to a new speaker set when an application moves to a new location in accordance with an implementation of the present disclosure;

FIG. 5 is an example method flow for panning audio across multiple speakers according to an implementation of the present disclosure; and

FIG. 6 is a schematic block diagram of an example device in accordance with implementations of the present disclosure.

Detailed Description

The present disclosure relates to devices and methods for panning audio for one or more applications across multiple speakers as the location of the applications changes. Panning audio may include causing the audio output for an application to follow the movement of the application as its position changes. Thus, as the location of the application changes, different speakers may be selected to output audio for the application. The devices and methods may include one or more display devices in communication with a computer device. The display devices may communicate with the computer device via Universal Serial Bus (USB), Bluetooth, and/or other network types. Each display device may have at least one display and corresponding audio input and/or audio output. A display device may be any type of display, monitor, visual presentation device, computer device, and/or physical panel capable of presenting information, capturing audio, and/or emitting audio. Each display device may include any number of channels (e.g., speakers and/or microphones) for capturing audio or emitting audio. Each speaker and/or microphone of a display device may correspond to any number of channels. For example, a speaker and/or microphone may support two-channel stereo sound having a left channel and a right channel. An audio stack on the computer device may receive information regarding the number of display devices in communication with the computer device and the number of speakers on each display device. The audio stack may also receive speaker position information for each speaker, which provides a physical position for the speaker that may correspond to a static orientation of the speaker.

On a particular display device, there may be multiple windows and/or virtual panels corresponding to the applications. The location of the application may be shared with an audio stack on the computer device. The audio stack may determine a direction for panning audio to speakers on a particular display device in response to a combination of the application location information, the display device information, and the speaker location information.

For example, when the media application plays audio and video, the audio for the media application may be output by speakers on the display device where the media application is currently located. The user may drag and/or expand the window size of the application to a new location so that the application may span multiple display devices. As the application moves to a new location (e.g., to another display device or to another virtual panel or window of the same display device), the audio may be panned to a different set of speakers corresponding to the new location of the application. The audio stack may apply weights to determine the amount of audio to output via the corresponding speaker. As the application moves, the audio stack may decrease the amount of audio output by speakers near the current location of the application (e.g., by decreasing the weight) and increase the amount of audio output by speakers near the new location of the application (e.g., by increasing the weight).
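The weight adjustment described above behaves like a crossfade between speaker sets. The following sketch is illustrative only; the linear ramp and the function names are assumptions, not the implementation described in this disclosure:

```python
def crossfade_weights(steps: int):
    """Yield (old_weight, new_weight) pairs that ramp the current
    speaker set down and the new speaker set up in equal increments."""
    for i in range(steps + 1):
        new_w = i / steps       # rises from 0.0 to 1.0
        yield 1.0 - new_w, new_w

def apply_weight(samples, weight):
    """Scale audio samples by a speaker's weight (output level)."""
    return [s * weight for s in samples]
```

For example, with `steps=4` the weight of the old speaker set falls 1.0, 0.75, 0.5, 0.25, 0.0 while the weight of the new set rises correspondingly.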

Additionally, the audio stack may determine a direction for panning audio to speakers on a particular display in response to other auxiliary information, such as, but not limited to, a user position that triggers an audio direction change. For example, when the user is located in a room, audio may be output by speakers on a display device in the room where the user is located. As the user moves to a different location (e.g., to a new room and/or to a different region of the room), the audio may pan to a different set of speakers corresponding to the user's new location. In another example, the audio stack may select the set of speakers having the shortest distance to the user location relative to the other speakers.

In this way, the methods and devices may allow applications to be distributed across multiple displays, bringing an entirely new dimension to collaboration and efficient work. The methods and devices can intelligently move audio across the speaker configuration by deciding internally where to send the audio, so that the user and/or application does not have to make any decisions.

Referring now to fig. 1, a system 100 for use with panning audio may include a computer device 102 in communication with a plurality of display devices 106, 108, 110, 112 via a wired or wireless network 104. For example, the display devices 106, 108, 110, 112 may communicate with the computer device 102 via a Universal Serial Bus (USB) or other network type. The plurality of display devices 106, 108, 110, 112 may be any type of display, monitor, visual presentation device, computer device, and/or physical panel capable of presenting information, capturing audio, and/or emitting audio. Further, each display device 106, 108, 110, 112 may include any number of channels for capturing audio and/or emitting audio. Each speaker and/or microphone of the display devices 106, 108, 110, 112 may correspond to any number of channels.

Multiple display devices 106, 108, 110, 112 may be combined together and represented as a single audio endpoint, such that the application 10 may be unaware of the multiple display devices 106, 108, 110, 112 in communication with the computer device 102. The application 10 executing on the computer device 102 may be used on any of the display devices 106, 108, 110, 112. The application 10 may be distributed across multiple display devices 106, 108, 110, 112, rather than being limited to a single display device 106, 108, 110, 112. On a particular display device 106, 108, 110, 112, there may be multiple windows and/or virtual panels that may correspond to one or more applications 10. For example, the application 10 may have a graphical user interface (UI) corresponding to a window on the display device 106, 108, 110, 112. In this manner, a completely new dimension of collaboration and efficient work can be created using the system 100.

Computer device 102 may include any mobile or stationary computer device that may be connected to a network. For example, the computer device 102 may be a computer device such as a desktop or laptop computer or tablet computer, an internet of things (IoT) device, a cellular telephone, a gaming device, a mixed reality or virtual reality device, a music device, a television, a navigation system, a camera, a Personal Digital Assistant (PDA) or handheld device, or any other computer device that has wired and/or wireless connection capability with one or more other devices.

Computer device 102 may include an operating system 111 that may be executed by processor 42 and/or memory 44. Memory 44 of computer device 102 may be configured to store data and/or computer-executable instructions defined and/or associated with operating system 111, and processor 42 may execute such data and/or instructions to instantiate operating system 111. Examples of memory 44 may include, but are not limited to, types of memory usable by a computer, such as Random Access Memory (RAM), Read Only Memory (ROM), magnetic tape, magnetic disk, optical disk, volatile memory, non-volatile memory, and any combination thereof. Examples of processor 42 may include, but are not limited to, any processor specially programmed as described herein, including a controller, microcontroller, Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), system on chip (SoC), or other programmable logic or state machine.

The operating system 111 can include a setup component 14 that can initialize and/or configure the system 100. For example, the setup component 14 may determine the total number of display devices 18 in communication with the computer device 102. The total number of display devices 18 may be predetermined by a user of the computer device 102, and/or user input indicating the total number of display devices 18 of the system 100 may be received via a user interface, such as a display on the computer device 102. The total number of display devices 18 may be increased and/or decreased as display devices are added to and/or removed from the system 100.

The setup component 14 may also determine the total number of speakers 20 in the system 100. Each display device 106, 108, 110, 112 may provide information regarding the number of speakers on the display device 106, 108, 110, 112, as well as corresponding speaker location information 22 for each speaker on the display device 106, 108, 110, 112. For example, the setup component 14 may receive hardware information from each of the display devices 106, 108, 110, 112 identifying how many speakers the display device 106, 108, 110, 112 may have, the speaker location information 22, how many channels the display device 106, 108, 110, 112 may support, whether the display device 106, 108, 110, 112 may be a high speed device or a super speed device, how many audio rendering endpoints the display device 106, 108, 110, 112 may have, how many capture endpoints the display device 106, 108, 110, 112 may have, and/or any hardware loopback points on the display device 106, 108, 110, 112. The setup component 14 may capture hardware information of the display devices 106, 108, 110, 112 and may construct a topology for the system 100 in response to information learned from the display devices 106, 108, 110, 112.

The setup component 14 may determine the total number of speakers 20 in the system by summing the number of speakers on each display device 106, 108, 110, 112. For example, the display device 106 may include speakers 33, 34 that support two channels (e.g., a left channel and a right channel). The display device 108 may include speakers 35, 36 supporting two channels (e.g., a right channel and a left channel). The display device 110 may include speakers 37, 38 supporting two channels (e.g., a right channel and a left channel). The display device 112 may include speakers 39, 40 supporting two channels (e.g., a right channel and a left channel). As such, the total number of speakers 20 may be eight.

The setup component 14 may also determine speaker location information 22 identifying the physical location of the speakers 33, 34, 35, 36, 37, 38, 39, 40. The speaker location information 22 may correspond to a static orientation of the speakers 33, 34, 35, 36, 37, 38, 39, 40 on each of the display devices 106, 108, 110, 112. As such, the speaker location information 22 may indicate on which display device 106, 108, 110, 112 each speaker 33, 34, 35, 36, 37, 38, 39, 40 is located and/or an area of that location (e.g., top (left/center/right), middle (left/center/right), bottom (left/center/right)). For example, the speaker location information 22 for the speaker 33 of the display device 106 may indicate that the speaker 33 is located at the top left of the display device 106. The speaker location information 22 for the speaker 34 of the display device 106 may indicate that the speaker 34 is located at the top right of the display device 106.

The static orientation of the speaker location information 22 may also incorporate any rotation of the display devices 106, 108, 110, 112. For example, if the display device 106 is rotated vertically such that the speaker 33 is now at the bottom left and the speaker 34 is now at the top left of the display device 106, the speaker location information 22 for the speaker 33 may be updated to indicate the bottom left of the display device 106, and the speaker location information 22 for the speaker 34 may be updated to indicate the top left of the display device 106.
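The rotation example above can be modeled as remapping region labels. The region encoding and the counter-clockwise mapping below are hypothetical; the disclosure does not specify a data format:

```python
# Hypothetical (vertical, horizontal) region labels for speaker placement.
# A 90-degree counter-clockwise rotation sends the top-left corner to the
# bottom-left and the top-right corner to the top-left -- matching the
# example where speaker 33 (top left) moves to the bottom left and
# speaker 34 (top right) moves to the top left.
ROTATE_CCW_90 = {
    ("top", "left"): ("bottom", "left"),
    ("top", "right"): ("top", "left"),
    ("bottom", "right"): ("top", "right"),
    ("bottom", "left"): ("bottom", "right"),
}

def update_speaker_region(region):
    """Return the region label after rotating the display 90 degrees CCW."""
    return ROTATE_CCW_90[region]
```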

Further, the setup component 14 may determine a device ordering 21 that imposes an order on the display devices 106, 108, 110, 112. For example, the ordering may designate the display device 106 as a first display device, the display device 108 as a second display device, the display device 110 as a third display device, and the display device 112 as a fourth display device. The device ordering 21 may be used to impose an order on the speakers 33, 34, 35, 36, 37, 38, 39, 40 of the different display devices 106, 108, 110, 112.

The operating system 111 may also include an application location manager 17 that may provide application location information 16 for each application 10. The application location information 16 may indicate one or more display devices 106, 108, 110, 112 on which the application 10 is currently located. Further, on a particular display device 106, 108, 110, 112, there may be multiple windows and/or virtual panels that may correspond to one or more applications 10. For example, the application 10 may have a graphical user interface (UI) corresponding to a window on the display device 106, 108, 110, 112. The application location information 16 may also include coordinates from, for example, a Cartesian coordinate system that provide pixel locations of the current location 19 of the application 10 on the display devices 106, 108, 110, 112. As such, the application location information 16 may indicate a window and/or virtual panel on the display device 106, 108, 110, 112 on which the application 10 is located, and may also provide a pixel location of the current location 19 of the application 10 on the display device 106, 108, 110, 112. When one or more applications 10 change location to a new location 23, the application location manager 17 may track the movement of the applications 10 and may update the application location information 16 with the new location 23 of the application 10 in response to the change in location.
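The application location information could be represented as a small record that the application location manager updates on movement. The field names and the coordinate convention below are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class AppLocation:
    """Which display(s) an application occupies and where, in pixels."""
    app_id: int
    display_ids: List[int]
    pixel_pos: Tuple[int, int]   # assumed: top-left corner of the window

def move_app(loc: AppLocation, new_displays, new_pos):
    """Update the location information when the application moves,
    e.g., when it is dragged onto (or across) another display."""
    loc.display_ids = list(new_displays)
    loc.pixel_pos = tuple(new_pos)
    return loc
```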

The operating system 111 may also include an audio stack 24, which may receive audio data 12 from one or more applications 10 and may select speakers 30 on one or more display devices 106, 108, 110, 112 to output the audio data 12. The audio stack 24 may receive information indicating the number of applications 10 open on the display devices 106, 108, 110, 112 and the corresponding application location information 16 for each application 10. Further, the audio stack 24 may receive display device information for the display devices in communication with the computer device 102, including, but not limited to, the total number of display devices 18, the total number of speakers 20, the speaker location information 22, and the device ordering 21. The audio stack 24 may use the combination of the application location information 16 and the speaker location information 22 to select the speakers 30 to receive the audio data 12. In addition, the audio stack 24 may use other auxiliary information 29 (such as, but not limited to, user location) to select the speakers 30 to receive the audio data 12. The selected speakers 30 may be a subset of the total number of speakers 20, or may be all of the total number of speakers 20.

The audio stack 24 may maintain a relationship between the total number of speakers 20 in the system and the corresponding speaker location information 22. For example, the audio stack 24 may create a speaker array 26 that combines the speakers 33, 34, 35, 36, 37, 38, 39, 40 from each display device 106, 108, 110, 112 into an array of aggregate speakers 27, using the device ordering 21 information to impose an order on the combined speakers. The aggregate speakers 27 may be the total number of speakers 20 or a subset of the total number of speakers 20 in the system 100.

Further, the speaker array 26 may include corresponding speaker location information 22 for the aggregate speaker 27. The speaker array 26 may be dynamically updated as the number of speakers in the system 100 increases or decreases. In addition, the speaker array 26 may be dynamically updated as the speaker position information 22 changes (e.g., the display device is rotated to a different orientation). Thus, the speaker array 26 may be used to maintain an association between the speakers 33, 34, 35, 36, 37, 38, 39, 40 of each display device 106, 108, 110, 112 and the corresponding speaker location information 22.
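A minimal sketch of building the speaker array with the device ordering imposed might look as follows; the data shapes are assumed for illustration:

```python
def build_speaker_array(displays_in_order):
    """Flatten per-display speaker lists into one aggregate array.
    `displays_in_order` is a list of (display_id, speakers) pairs already
    sorted by the device ordering; each speaker is (speaker_id, location)."""
    aggregate = []
    for display_id, speakers in displays_in_order:
        for speaker_id, location in speakers:
            aggregate.append(
                {"speaker": speaker_id, "display": display_id, "location": location}
            )
    return aggregate
```

Using the example topology of four two-speaker display devices, the array holds eight entries in display order, and rebuilding it on any topology or orientation change keeps the association between speakers and their location information current.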

The audio stack 24 may use a combination of the application location information 16 and the display device information to select one or more speakers 30 of the display devices 106, 108, 110, 112 to output the audio data 12 for the application 10. The audio stack 24 may use the speaker array 26 to select a subset of speakers 30 from the aggregate speakers 27 located near the current location 19 of the application 10. The selected speakers 30 may be located near the current location 19 of the application 10 when their physical location is on the same display device 106, 108, 110, 112 as the application 10. Further, the selected speakers 30 may be located near the current location 19 of the application 10 when the distance from the physical location of each speaker in the subset is within a predefined radius of the application 10. For example, the selected speakers 30 may have the shortest distance to the current location 19 of the application 10 relative to the distances of the other speakers.
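The same-display and radius rules can be sketched as follows. Representing speaker and application positions as 2D coordinates in a shared desktop space is an assumption for illustration; the disclosure does not prescribe a coordinate system:

```python
import math

def select_speakers(speaker_array, app_display, app_pos, radius):
    """Select speakers that are on the application's display or whose
    physical position lies within `radius` of the application."""
    selected = []
    for entry in speaker_array:
        same_display = entry["display"] == app_display
        dist = math.hypot(entry["pos"][0] - app_pos[0],
                          entry["pos"][1] - app_pos[1])
        if same_display or dist <= radius:
            selected.append(entry["speaker"])
    return selected
```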

In addition, the audio stack 24 may also use other auxiliary information 29 to select one or more speakers 30 of the display devices 106, 108, 110, 112 to output audio data 12 for the application 10. The auxiliary information 29 may include, but is not limited to, user position information, changes in the speaker position information 22 (e.g., when the display device is rotated), and/or any other trigger that may modify the direction of audio output. For example, the selected speaker 30 may be closest to the user location rather than the application location so that audio may be output proximate to the user. The audio stack 24 may use other auxiliary information 29 in conjunction with the speaker position information 22 and/or the application position information 16.

In one example, when the application 10 located on the display device 108 plays audio and the user is located near the speaker 34 of the display device 106 and the speaker 35 of the display device 108, the audio stack 24 may select the speakers 34, 35 as the selected speakers 30 to output audio for the application 10 because the speakers 34, 35 are closest to the user's location.

In another example, when the media application 10 located on the display device 106 plays audio and video, the audio for the media application 10 may be output by the selected speakers 30 (e.g., speakers 33, 34) of the display device 106 where the media application 10 is currently located. The user may drag and/or expand the window size of the application 10 such that the application 10 may span multiple display devices (e.g., display devices 106, 108). When the application 10 moves to a new location 23 (e.g., onto another display device 108), the audio data 12 may be panned to a new selected set of speakers 31 (e.g., speakers 33, 34, 35, 36) corresponding to the new location 23 of the application 10.

The audio stack 24 may receive updated application location information 16 with a new location 23 for the application 10. The updated application location information 16 may identify the display device(s) 106, 108, 110, 112 for the new location 23 of the application 10. For example, the application 10 may move to a new location 23 on the same display device 106, 108, 110, 112. Further, the application 10 may move to a new location 23 on a different display device 106, 108, 110, 112 and/or across multiple display devices 106, 108, 110, 112. The updated application location information 16 may also include one or more coordinates from, for example, a cartesian coordinate system, which provide the pixel location of the new location 23 of the application 10 on the display device 106, 108, 110, 112.

The audio stack 24 may use the speaker array 26 to select a new set 31 of speakers from the aggregate speakers 27 located near the new location 23 of the application 10. When the physical location of the new speaker set 31 is located on the same display device 106, 108, 110, 112 as the application 10, the new speaker set 31 may be located near the new location 23 of the application 10. Further, the new set of speakers 31 may be located near the new location 23 of the application 10 when the distance from the physical location of each speaker in the new set of speakers 31 is within a predefined radius of the application 10.

The audio stack 24 may apply the weights 28 to determine the amount of audio data 12 to be output via the corresponding selected speaker 30. The weights 28 may be determined in response to the distance of the speaker from the current location 19 of the application 10. Speakers that are closer to the current location 19 of the application 10 may receive a higher weight 28 relative to speakers that are farther from the current location 19 of the application 10. For example, the weight 28 may indicate an amount of volume to output from the selected speaker 30, where a lower number may result in a lower volume and a higher number may result in a higher volume. As the application 10 moves to the new location 23, the audio stack 24 may reduce the weights 28 applied to the selected speakers 30 near the current location 19 of the application 10 so that the amount of audio data 12 output by the selected speakers 30 may be reduced. The audio stack 24 may increase the weights 28 applied to the selected speakers 30 near the new location 23 so that the amount of audio data 12 output by the new set of selected speakers 30 near the new location 23 of the application 10 may be increased. In this manner, the audio stack 24 may ensure that the output of the audio data 12 follows the movement of the application 10 to the new location 23.
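The distance-based weighting described above might be sketched as follows, assuming an inverse-distance scheme. The text only requires that closer speakers receive higher weights 28; the exact falloff and the normalization are illustrative assumptions.

```python
import math

def speaker_weights(app_pos, speaker_positions):
    """Map each selected speaker to a volume weight.

    `speaker_positions` maps a speaker id to its (x, y) physical
    location; inverse-distance weighting is one plausible scheme.
    """
    raw = {sid: 1.0 / (1.0 + math.dist(app_pos, pos))  # closer -> larger
           for sid, pos in speaker_positions.items()}
    total = sum(raw.values())
    # Normalize so the weights across the selected set sum to 1.
    return {sid: w / total for sid, w in raw.items()}
```

With this sketch, a speaker at the application's location would receive the highest share of the output volume, and the shares shift smoothly as the application's coordinates change.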

The audio stack 24 can intelligently pan the audio to the speaker configuration by internally deciding where to send the audio as the application moves position, so that the user and/or application does not have to make any decisions. As such, audio for an application may pan to different speaker configurations as the application moves, and the application may be distributed across multiple displays, bringing new dimensions to collaboration and efficient work.

Referring now to fig. 2, an example speaker array 26 of aggregate speakers 27 is shown. For example, the speaker array 26 may combine the speakers 33, 34, 35, 36, 37, 38, 39, 40 into a single array of aggregate speakers 27. Speakers 33, 34, 35, 36, 37, 38, 39, 40 may be grouped together using device ordering 21 information. The device ordering 21 may impose an order on the display devices 106, 108, 110, 112. For example, display device 106 may be a first display device, display device 108 may be a second display device, display device 110 may be a third display device, and display device 112 may be a fourth display device. As such, speakers 33, 34, 35, 36, 37, 38, 39, 40 may be combined using the order of device ordering 21.

For example, the speakers 33, 34 of the display device 106 may be placed first into the array of aggregate speakers 27, followed by the speakers 35, 36 of the display device 108. Next, the speakers 37, 38 of the display device 110 may be placed into the array of aggregate speakers 27. The last speakers may be the speakers 39, 40 of the display device 112. As such, the aggregate speakers 27 may include all speakers in communication with the computer device 102, placed in sequence.
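The ordered aggregation described above can be sketched as a simple concatenation; the function name and data shapes are illustrative assumptions.

```python
def build_speaker_array(device_order, speakers_by_device):
    """Concatenate per-device speaker lists into one aggregate array,
    preserving the order imposed by the device ordering information.
    Identifiers here are illustrative, not from any real API."""
    array = []
    for device_id in device_order:
        array.extend(speakers_by_device.get(device_id, []))
    return array
```

For the four display devices in the example, the resulting array holds the eight speakers in device order, which is what lets the audio stack index from a speaker back to its display device and location.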

Further, the speaker array 26 may store an association between each speaker 33, 34, 35, 36, 37, 38, 39, 40 and the corresponding speaker location information 22. The speaker array 26 may be dynamically updated with any changes in the speaker location information 22 and/or changes in the total number of speakers 20 (fig. 1) in the system 100 (fig. 1). For example, if the display devices 106, 108, 110, 112 are rotated to different orientations, the speaker array 26 may be updated with new speaker position information 22 in response to the rotation. The speaker array 26 may be used to select a subset of speakers from aggregate speakers 27 located near the application 10 (fig. 1) to receive audio data 12 (fig. 1) for the application 10.

Referring now to fig. 3, an example of selecting speakers to output audio data for two applications (application A 302 and application B 310) located on multiple display devices 106, 108, 110 is shown. This example may be discussed below with reference to the architecture of fig. 1.

Application A 302 may be open on both the display device 106 and the display device 108. The application location information 16 for application A 302 may indicate that application A 302 is located on both the display device 106 and the display device 108. Further, the application location information 16 for application A 302 may include one or more coordinates 304 indicating the current location 19 of application A 302 on the display device 106 and one or more coordinates 306 indicating the current location 19 of application A 302 on the display device 108.

The speakers 34, 35, 36 of the display device 106 and the display device 108 may be selected by the audio stack 24 to output any audio data 12 for application A 302. The audio stack 24 may receive the application location information 16 for application A 302 and may use the application location information 16 to determine that application A 302 is on the display device 106. For example, the audio stack 24 may use the speaker array 26 to identify that the speakers 33, 34 are located on the display device 106 and may use the speaker array 26 to access the corresponding speaker location information 22 for the speakers 33, 34.

The audio stack 24 may use the speaker location information 22 for the speakers 33, 34 to determine a distance 310 of the speaker 34 to the current location 19 of application A 302 on the display device 106 and a distance 312 of the speaker 33 to the current location 19 of application A 302 on the display device 106. The audio stack 24 may compare the distance 310 to the distance 312 and may determine that the speaker 34 is closer to the current location 19 of application A 302 relative to the speaker 33. In response to the comparison of the distances 310, 312, the audio stack 24 may select the speaker 34 of the display device 106 to output any audio data 12 for application A 302. As such, the audio stack 24 may transmit the audio data 12 for application A 302 to the speaker 34 for output, and the speaker 33 may not receive the audio data 12 for application A 302.

In addition, the audio stack 24 may use the application location information 16 to determine that application A 302 is also located on the display device 108. The audio stack 24 may use the speaker array 26 to determine that the speakers 35, 36 are located on the display device 108 and may use the speaker array 26 to access the corresponding speaker location information 22 for the speakers 35, 36. The audio stack 24 may use the speaker location information 22 to determine a distance 316 of the speaker 36 to the current location 19 of application A 302 on the display device 108 and a distance 314 of the speaker 35 to the current location 19 of application A 302 on the display device 108. The audio stack 24 may compare the distance 314 to the distance 316 and may determine that the distance 314 and the distance 316 are similar distances from the current location 19 of application A 302. As such, in response to the comparison of the distances 314, 316, the audio stack 24 may select both speakers 35, 36 to output any audio data 12 for application A 302. The audio stack 24 may transmit the audio data 12 for application A 302 to the speakers 35, 36 for output.

Further, application B 310 may be opened on the display device 110. The application location information 16 for application B 310 may indicate that application B 310 is located on the display device 110. Further, the application location information 16 for application B 310 may include one or more coordinates 320 indicating the current location 19 of application B 310 on the display device 110.

The audio stack 24 may receive the application location information 16 for application B 310 and may use the application location information 16 to determine that application B 310 is on the display device 110. The audio stack 24 may use the speaker array 26 to identify that the speakers 37, 38 are located on the display device 110, and the audio stack 24 may use the speaker array 26 to access the corresponding speaker location information 22 for the speakers 37, 38.

The audio stack 24 may use the speaker location information 22 for the speakers 37, 38 to determine a distance 318 of the speaker 37 to the current location 19 of application B 310 and a distance 320 of the speaker 38 to the current location 19 of application B 310. The audio stack 24 may compare the distance 318 to the distance 320 and may determine that the distance 318 and the distance 320 are similar distances from the current location 19 of application B 310. As such, in response to the comparison of the distances 318, 320, the audio stack 24 may select both speakers 37, 38 to output any audio data 12 for application B 310. The audio stack 24 may transmit the audio data 12 for application B 310 to the speakers 37, 38 for output.

The audio stack 24 may determine that the display device 112 does not currently have an open application 10 and the audio stack 24 may not transmit audio data 12 to the speakers 39, 40 of the display device 112.

As such, the current locations 19 of application A 302 and application B 310 may be used to select speakers to output the audio data 12 for application A 302 and application B 310.

Referring now to fig. 4, an example of panning audio to a new set of speakers 31 (fig. 1) when application C 402 moves from the current location 19 (fig. 1) to the new location 23 (fig. 1) is shown. This example may be discussed below with reference to the architecture of fig. 1.

Application C 402 may be opened on the display device 106. The application location information 16 for application C 402 may indicate that application C 402 is located on the display device 106. Further, the application location information 16 for application C 402 may include one or more coordinates P1 404 indicating the current location 19 of application C 402 on the display device 106.

The audio stack 24 may receive the application location information 16 for application C 402 and may use the application location information 16 and the speaker array 26 to identify that the speakers 33, 34 are located on the display device 106. The audio stack 24 may use the speaker location information 22 for the speakers 33, 34 to determine a distance 408 of the speaker 33 to the current location 19 of application C 402 and a distance 410 of the speaker 34 to the current location 19 of application C 402. The audio stack 24 may compare the distance 408 and the distance 410 to the current location 19 of application C 402 and may determine that the distance 408 and the distance 410 are within a predetermined radius of the current location 19 of application C 402. As such, the audio stack 24 may select both speakers 33, 34 for outputting the audio data 12. Additionally, in response to the comparison of the distances 408, 410 to the current location 19, when outputting any audio data 12 for application C 402, the audio stack 24 may apply the same weights 28 for outputting the audio data 12 to both speakers 33, 34. For example, the weights 28 may control the amount by which the audio data 12 may be output from the speakers 33, 34. Since the speakers 33, 34 may have the same weight 28, the amount of audio data 12 output from the speakers 33, 34 may be the same. The audio stack 24 may transmit the audio data 12 for application C 402 along with the weights 28 to the speakers 33, 34 for output.

The user may move application C 402 from the current location 19 on the display device 106 to the new location 23 on the display device 108. The audio stack 24 may receive updated application location information 16 for application C 402 having the new location 23. For example, one or more coordinates P2 406 may indicate the new location 23 of application C 402 on the display device 108.

When application C 402 moves from the current location 19 (e.g., coordinate P1 404) on the display device 106 to the new location 23 (e.g., coordinate P2 406) on the display device 108, the audio stack 24 may pan the audio data 12 to follow the movement of application C 402. For example, the audio stack 24 may determine that the speakers 33, 34 are at greater distances 408, 410 from application C 402 and may begin to reduce the weight 28 of the audio data 12 sent to the speakers 33, 34. As the weight 28 decreases, the amount of output from the speakers 33, 34 may also decrease. When application C 402 is removed from the display device 106, the audio stack 24 may reduce the weight 28 to zero. As such, when application C 402 moves to the display device 108, the speakers 33, 34 may no longer output audio data 12 for application C 402.
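The gradual weight ramp described in the example above can be sketched as a cross-fade between the old and new speaker sets. The linear ramp and the names are illustrative assumptions; any monotonic fade would produce the described behavior.

```python
def crossfade_weights(old_weights, new_weights, progress):
    """Blend per-speaker weights as an application moves between
    locations (progress 0.0 = fully at the old location, 1.0 = fully
    at the new one)."""
    # Ramp the old set down toward zero...
    blended = {sid: (1.0 - progress) * w for sid, w in old_weights.items()}
    # ...while ramping the new set up, summing where a speaker is in both sets.
    for sid, w in new_weights.items():
        blended[sid] = blended.get(sid, 0.0) + progress * w
    return blended
```

At `progress = 1.0` the old speakers reach a weight of zero and stop outputting audio, matching the behavior described for the speakers 33, 34 once the application has fully moved to the display device 108.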

The audio stack 24 may receive the updated application location information 16 for application C 402 and may determine that the new location 23 for application C 402 is on the display device 108. One or more coordinates P2 406 may indicate the location on the display device 108 where application C 402 is located. The audio stack 24 may use the speaker array 26 to identify that the speakers 35, 36 are located on the display device 108. As application C 402 moves to the display device 108, the audio stack 24 may begin to output the audio data 12 for application C 402 through the speakers 35, 36 of the display device 108.

The audio stack 24 may use the speaker location information 22 for the speakers 35, 36 to determine a distance 412 of the speaker 35 to the new location 23 of application C 402 and a distance 414 of the speaker 36 to the new location 23 of application C 402. The audio stack 24 may compare the distance 412 to the distance 414 and may determine that the distance 412 is closer to the new location 23 of application C 402 relative to the distance 414. In response to the comparison of the distances 412, 414, the audio stack 24 may apply a higher weight 28 to the speaker 35 relative to the weight 28 provided to the speaker 36 for outputting the audio data 12. As such, the amount of audio data 12 output from the speaker 35 may be higher relative to the amount of audio data 12 output from the speaker 36. The audio stack 24 may transmit the audio data 12 for application C 402 along with the weights 28 to the speakers 35, 36 for output.

Further, since application C 402 is not located on the display devices 110, 112, the audio stack 24 may not transmit the audio data 12 for application C 402 to the display devices 110 and 112. Alternatively, the audio stack 24 may transmit the audio data 12 for application C 402 with a weight 28 of zero to the display device 110 and the display device 112, such that the display devices 110, 112 may not output the audio data 12 because application C 402 is not located on them. As such, the speakers 37, 38, 39, 40 may not output the audio data 12 for application C 402.

As the application moves to a new location, the audio output for the application may smoothly transition from a set of speakers physically located near the current location of the application to a new set of speakers physically located near the new location so that the audio output may follow the movement of the application.

Referring now to fig. 5, an example method 500 may be used by the computer device 102 (fig. 1) to pan audio across multiple speakers 33, 34, 35, 36, 37, 38, 39, 40 (fig. 1). The actions of method 500 may be discussed below with reference to the architecture of FIG. 1.

At 502, method 500 may include receiving audio data from an application on at least one display device of a plurality of display devices in communication with a computer device. The audio stack 24 may receive audio data 12 from one or more applications 10 and may select speakers 30 on one or more display devices 106, 108, 110, 112 to output the audio data 12. Multiple display devices 106, 108, 110, 112 may be combined together and represented as a single audio endpoint, such that the application 10 may be unaware of the multiple display devices 106, 108, 110, 112 in communication with the computer device 102. The application 10 executing on the computer device 102 may be used on any of the display devices 106, 108, 110, 112. The application 10 may be distributed across multiple display devices 106, 108, 110, 112 without being limited to a single display device 106, 108, 110, 112. In this way, a completely new dimension can be created for collaboration and efficient work.

At 504, the method 500 may include selecting a set of speakers from a plurality of speakers of a display device to receive audio data in response to the speaker location information for the plurality of speakers and the application location information. The audio stack 24 may receive information indicating the number of applications 10 open on the display devices 106, 108, 110, 112 and corresponding application location information 16 for each application 10.

The application location information 16 may indicate one or more display devices 106, 108, 110, 112 on which the application 10 is currently located. Further, on a particular display device 106, 108, 110, 112, there may be multiple windows and/or virtual panels that may correspond to one or more applications 10. For example, the application 10 may have a graphical user interface (UI) corresponding to a window on the display device 106, 108, 110, 112. The application location information 16 may also include coordinates from, for example, a Cartesian coordinate system that provide pixel locations of the current location 19 of the application 10 on the display devices 106, 108, 110, 112. As such, the application location information 16 may indicate a window and/or virtual panel on the display device 106, 108, 110, 112 on which the application 10 is located, and may also provide a pixel location of the current location 19 of the application 10 on the display device 106, 108, 110, 112.

Further, the audio stack 24 may receive display device information including, but not limited to, the total number of display devices 18 in communication with the computer device 102, the total number of speakers 20 in the system, the speaker location information 22, and any device ordering 21 information. For example, the speaker location information 22 may identify the physical locations of the speakers 33, 34, 35, 36, 37, 38, 39, 40, which may correspond to the static orientation of the speakers on each of the display devices 106, 108, 110, 112. As such, the speaker location information 22 may indicate which display device 106, 108, 110, 112 the speakers 33, 34, 35, 36, 37, 38, 39, 40 are physically located on, and/or an area of such location (e.g., top (left/center/right), middle (left/center/right), bottom (left/center/right)).
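One way to represent such speaker location information is a small record per speaker; the field names and the region vocabulary below are illustrative assumptions, not from any platform API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpeakerLocation:
    """Static placement of one speaker: its display device and the
    area of that display where it is physically located."""
    display_id: int
    vertical: str    # "top" | "middle" | "bottom"
    horizontal: str  # "left" | "center" | "right"

# E.g., a speaker on the bottom-left of the display device 106:
loc = SpeakerLocation(display_id=106, vertical="bottom", horizontal="left")
```

A record like this, kept per entry in the speaker array, is enough for the audio stack to resolve which display a speaker belongs to and roughly where on that display it sits.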

The audio stack 24 may maintain a relationship between the total number of speakers 20 in the system and the corresponding speaker location information 22. For example, the audio stack 24 may create a speaker array 26 that combines the speakers 33, 34, 35, 36, 37, 38, 39, 40 from each display device 106, 108, 110, 112 into an array of aggregate speakers 27 using the order from the device ordering 21 information. The array of aggregate speakers 27 may include the total number of speakers 20 in the system 100, with the order imposed using the device ordering 21 information.

The audio stack 24 may use a combination of the application position information 16 and the speaker position information 22 to select speakers 30 to receive the audio data 12. The audio stack 24 may use the speaker array 26 to select a set of speakers from the aggregate speakers 27 located near the current location 19 of the application 10. When the physical location of the set of speakers is on the same display device 106, 108, 110, 112 as the application 10, the set of speakers may be located near the current location 19 of the application 10. Further, the set of speakers may be located near the current location 19 of the application 10 when the distance from the physical location of each speaker in the set of speakers is within a predefined radius of the application 10. For example, the selected speaker set 30 may be within a predefined radius of the application 10.

The selected speakers 30 may be a subset of the total number of speakers 20. In addition, the selected speakers 30 may be the total number of speakers 20. For example, the user may expand the application 10 to cover all display devices 106, 108, 110, 112. The audio stack 24 may determine that all speakers 33, 34, 35, 36, 37, 38, 39, 40 are located near the application 10. As such, the selected speakers 30 may include all of the speakers in the system.

In addition, other auxiliary information 29 may also be used to select one or more speakers 30 of the display devices 106, 108, 110, 112 to output the audio data 12 for the application 10. The auxiliary information 29 may include, but is not limited to, user location information, changes to the speaker location information 22 (e.g., when the display device is rotated), and/or any other trigger that may modify the direction of audio output. For example, the selected speaker 30 may be closest to the user location rather than the application location so that audio may be output close to the user.

At 506, method 500 may include transmitting audio data to the set of speakers. The audio stack 24 may transmit the audio data 12 for the application 10 to the selected speaker set 30 for output. In addition, the audio stack 24 may transmit the weights 28 to the selected speakers 30. The audio stack 24 may apply the weights 28 to determine the amount of audio data 12 to be output via the corresponding, selected speaker 30. The weights 28 may be determined in response to the distance of the speaker from the current location 19 of the application 10. Speakers that are closer to the current location 19 of the application 10 may receive a higher weight 28 relative to speakers that are farther from the current location 19 of the application 10. For example, the weight 28 may indicate an amount of volume to output from the selected speaker 30, where a lower number may result in a lower volume and a higher number may result in a higher volume.

At 508, the method 500 may include receiving updated application location information with a new location for the application. For example, the user may drag and/or expand the window size of the application 10 such that the application 10 may span multiple display devices 106, 108, 110, 112 and/or move to a new location 23 (e.g., a different location on the current display device and/or a different display device). When one or more applications 10 change location to a new location 23, the application location manager 17 may track the movement of the application(s) 10 and may update the application location information 16 in response to the change in location. The audio stack 24 may receive the updated application location information 16 with the new location 23 for the application 10. The updated application location information 16 may identify one or more display devices 106, 108, 110, 112 for the new location 23 of the application 10. Further, the updated application location information 16 may include coordinates of pixel locations of the application on one or more display devices 106, 108, 110, 112.

The audio stack 24 may also receive updated ancillary information 29. For example, as the user moves to a different location (e.g., to a different room or a different area of the same room), the audio stack 24 may receive updated ancillary information 29 for the user's location. Another example may include: if the user rotates the display devices 106, 108, 110, 112 to different orientations, the audio stack 24 may receive updated speaker position information 22.

At 510, method 500 may include selecting a new set of speakers from the plurality of speakers to receive audio data in response to the new location and the speaker location information. The audio stack 24 may use the speaker array 26 to select a new set 31 of speakers from the aggregate speakers 27 located near the new location 23 of the application 10. For example, the audio stack 24 may use the application location information 16 to identify one or more display devices 106, 108, 110, 112 for the new location 23 of the application 10. The audio stack 24 may compare the speaker location information 22 for each of the speakers 33, 34, 35, 36, 37, 38, 39, 40 on the identified display devices 106, 108, 110, 112 using the speaker array 26 to determine a new set of speakers 31 near the new location 23 of the application 10.

When the physical location of the new speaker set 31 is located on the same display device 106, 108, 110, 112 as the application 10, the new speaker set 31 may be located near the new location 23 of the application 10. Further, the new set of speakers 31 may be located near the new location 23 of the application 10 when the distance from the physical location of each speaker in the new set of speakers 31 is within a predefined radius of the application 10.

The audio stack 24 may also use the updated auxiliary information 29 to select a new set of speakers 31. For example, when the user moves to a different room, the audio stack 24 may use the speaker array 26 to compare the speaker location information 22 with the user location information to determine the new set of speakers 31 in the same room as the user. Another example may include the audio stack 24 using the speaker array 26 to select a new speaker set 31, the new speaker set 31 having the shortest distance to the user location relative to the other speakers.

At 512, the method 500 may include converting the audio data from the speaker set to the new speaker set 31. When the application 10 moves to the new location 23, the audio stack 24 may pan the audio data 12 to the new set of speakers 31 near the new location 23 for the application 10. For example, the audio stack 24 may reduce the weights 28 applied to the selected speakers 30 near the current location 19 of the application 10 so that the amount of audio data 12 output by the selected speaker set 30 may be reduced. Further, the audio stack 24 may increase the weights 28 applied to the new set of speakers 31 near the new location 23 so that the amount of audio data 12 output by the new set of speakers 31 near the new location 23 of the application 10 may be increased. In this manner, the audio stack 24 may ensure that the output of the audio data 12 follows the movement of the application 10 to the new location 23.

The audio stack 24 can intelligently pan the audio to the speaker configuration by internally deciding where to send the audio as the application moves position, so that the user and/or application does not have to make any decisions. As such, the audio of the application may pan to different speaker configurations as the application moves, and the application may be distributed across multiple displays, thereby bringing an entirely new dimension to collaboration and efficient work.

Referring now to FIG. 6, an example computer 600 that may be configured as the computer device 102 in accordance with implementations includes additional component details as compared to FIG. 1. In one example, the computer 600 may include a processor 42 for performing processing functions associated with one or more of the components and functions described herein. The processor 42 may include a single set or multiple sets of processors or multi-core processors. Further, the processor 42 may be implemented as an integrated processing system and/or a distributed processing system.

The computer 600 may also include memory 44, such as for storing local versions of applications executed by the processor 42. The memory 44 may include types of memory usable by a computer, such as Random Access Memory (RAM), Read Only Memory (ROM), magnetic tape, magnetic disk, optical disk, volatile memory, non-volatile memory, and any combination thereof. Additionally, the processor 42 may include and execute an operating system 111 (FIG. 1).

Further, the computer 600 may include a communications component 46, the communications component 46 providing for establishing and maintaining communications with one or more parties using the hardware, software, and services described herein. The communications component 46 may carry communications between components on the computer device 102, as well as between the computer device 102 and external devices, such as devices located on a communications network and/or devices connected serially or locally to the computer device 102. For example, the communications component 46 may include one or more buses, and may also include transmit chain components and receive chain components associated with transmitters and receivers, respectively, operable to interface with external devices.

Further, computer 600 may include data store 48, data store 48 may be any suitable combination of hardware and/or software, data store 48 providing mass storage of information, databases, and programs used in connection with implementations described herein. For example, the data repository 48 may be a data repository for the application 10, the settings component 14, the application location manager 17, and/or the audio stack 24.

The computer 600 may also include a user interface component 50, the user interface component 50 being operable to receive input from a user of the computer device 102 and further operable to generate output for presentation to the user. User interface component 50 may include one or more input devices, including but not limited to a keyboard, a keypad, a mouse, a display (which may be a touch-sensitive display, for example), navigation keys, function keys, a microphone, a voice recognition component, any other mechanism capable of receiving input from a user, or any combination thereof. Further, user interface component 50 may include one or more output devices, including but not limited to a display, a speaker, a haptic feedback mechanism, a printer, any other mechanism capable of presenting an output to a user, or any combination thereof.

In an implementation, the user interface component 50 may transmit and/or receive messages corresponding to the operation of the application 10, the settings component 14, the application location manager 17, and/or the audio stack 24. Further, the processor 42 may execute the application 10, the settings component 14, the application location manager 17, and/or the audio stack 24, which may be stored in the memory 44 or the data store 48.
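The arrangement described above — processor 42 executing components whose code is held in memory 44 or data store 48 — can be sketched as a minimal, purely illustrative model. The class and method names below are hypothetical and form no part of the claims:

```python
from dataclasses import dataclass, field


@dataclass
class DataStore:
    # Stands in for data store 48: mass storage of programs and settings.
    records: dict = field(default_factory=dict)

    def save(self, name, program):
        self.records[name] = program


@dataclass
class ComputerDevice:
    # Stands in for computer 600: a processor that executes components
    # retrieved from its data store (or memory).
    store: DataStore = field(default_factory=DataStore)
    running: list = field(default_factory=list)

    def execute(self, name):
        # The processor can only execute a component that has been stored.
        if name in self.store.records:
            self.running.append(name)
            return True
        return False


device = ComputerDevice()
device.store.save("application_location_manager", object())
device.execute("application_location_manager")  # executes the stored component
```

This only illustrates the storage/execution relationship among the numbered components; an actual implementation would involve operating-system process management rather than a dictionary lookup.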

As used in this application, the terms "component," "system," and the like are intended to include a computer-related entity, such as but not limited to hardware, firmware, a combination of hardware and software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computer device and the computer device may be a component. One or more components can reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. In addition, these components can execute from various computer readable media having various data structures stored thereon. The components may communicate by way of local and/or remote processes such as in accordance with a signal having one or more data packets, such as data from one component interacting with another component in a local system, distributed system, and/or across a network such as the internet with other systems by way of the signal.

Furthermore, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or". That is, unless specified otherwise, or clear from context, the phrase "X employs A or B" is intended to mean any of the natural inclusive permutations. That is, the phrase "X employs A or B" is satisfied in any of the following cases: X employs A; X employs B; or X employs both A and B. In addition, the articles "a" and "an" as used in this application and the appended claims should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a singular form.

Various implementations or features have been presented in terms of systems that may include a number of devices, components, modules, and the like. It is to be understood and appreciated that the various systems may include additional devices, components, modules, etc., and/or may not include all of the devices, components, modules, etc. discussed in connection with the figures. A combination of these approaches may also be used.

The various illustrative logics, logical blocks, and acts of a method described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Further, at least one processor may include one or more components operable to perform one or more of the steps and/or actions described above.

Further, the steps and/or actions of a method or algorithm described in connection with the implementations disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. Further, in some implementations, the processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal. Additionally, in some implementations, the steps and/or actions of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a machine readable medium and/or computer readable medium, which may be incorporated into a computer program product.

In one or more implementations, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, includes Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

While implementations of the disclosure have been described in connection with examples thereof, those skilled in the art will appreciate that variations and modifications may be made to the above implementations without departing from the scope thereof. Other implementations will be apparent to those skilled in the art from consideration of the specification or from practice of the examples disclosed herein.
