Sound field dependent rendering

Document No.: 246615  Publication date: 2021-11-12

Note: This technology, Sound field dependent rendering, was created by M-V Laitinen, J Vilkamo and L Laaksonen on 2020-03-19. Its main content is as follows: An apparatus comprising means configured to: obtain at least two audio signals; determine the type of the at least two audio signals; and, based on the determined type of the at least two audio signals, process the at least two audio signals to be configured to be rendered.

1. An apparatus comprising means configured to:

obtaining at least two audio signals;

determining a type of the at least two audio signals;

processing the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals.

2. The apparatus of claim 1, wherein the at least two audio signals are one of:

a transmission audio signal; and

a previously processed audio signal.

3. The apparatus of claim 1 or 2, wherein the means is configured to: at least one parameter associated with the at least two audio signals is obtained.

4. The apparatus of claim 3, wherein the means is configured to: determining the type of the at least two audio signals based on the at least one parameter associated with the at least two audio signals.

5. The apparatus of claim 4, wherein the means configured to determine the type of the at least two audio signals based on the at least one parameter is configured to perform one of:

extracting and decoding at least one type signal from the at least one parameter; and

when the at least one parameter is representative of a spatial audio aspect associated with the at least two audio signals, analyzing the at least one parameter to determine the type of the at least two audio signals.

6. The apparatus of claim 5, wherein the means configured to analyze the at least one parameter to determine the type of the at least two audio signals is configured to:

determining a wideband left or right channel to total energy ratio based on the at least two audio signals;

determining a high frequency left or right channel to total energy ratio based on the at least two audio signals;

determining a sum-to-total energy ratio based on the at least two audio signals;

determining a difference to target energy ratio based on the at least two audio signals; and

determining the type of the at least two audio signals based on at least one of: the wideband left or right channel to total energy ratio; the high frequency left or right channel to total energy ratio based on the at least two audio signals; the sum-to-total energy ratio based on the at least two audio signals; and the difference to target energy ratio.

7. The apparatus of any of claims 1-6, wherein the means is configured to: determining at least one type parameter associated with the type of the at least one audio signal.

8. The apparatus of claim 7, wherein the means configured to process the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals is configured to: converting the at least two audio signals based on the at least one type parameter associated with the type of the at least two audio signals.

9. The apparatus of any of claims 1-8, wherein the type of the at least two audio signals comprises at least one of:

a capture microphone arrangement;

a capture microphone separation distance;

capture microphone parameters;

a transmission channel identifier;

a spaced audio signal type;

a down-mix audio signal type;

a coincident audio signal type; and

a transmission channel arrangement.

10. The apparatus according to any one of claims 1 to 9, wherein the means configured to process the at least two audio signals is configured to perform one of:

converting the at least two audio signals into a panoramic surround sound audio signal representation;

converting the at least two audio signals into a multi-channel audio signal representation; and

down-mixing the at least two audio signals to fewer audio signals.

11. The apparatus of any of claims 1 to 10, wherein the means configured to process the at least two audio signals is configured to: generating at least one prototype signal based on the at least two audio signals and the type of the at least two audio signals.

12. A method, comprising:

obtaining at least two audio signals;

determining a type of the at least two audio signals;

processing the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals.

13. The method of claim 12, wherein the at least two audio signals are one of:

a transmission audio signal; and

a previously processed audio signal.

14. The method of claim 12 or 13, further comprising: at least one parameter associated with the at least two audio signals is obtained.

15. The method of claim 14, wherein determining the type of the at least two audio signals comprises: determining the type of the at least two audio signals based on the at least one parameter associated with the at least two audio signals.

16. An apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to:

obtaining at least two audio signals;

determining a type of the at least two audio signals;

processing the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals.

17. The apparatus of claim 16, wherein the at least two audio signals are one of:

a transmission audio signal; and

a previously processed audio signal.

18. An apparatus according to claim 16 or 17, wherein the apparatus is caused to: at least one parameter associated with the at least two audio signals is obtained.

19. The apparatus of claim 18, wherein the apparatus is caused to: determining the type of the at least two audio signals based on the at least one parameter associated with the at least two audio signals.

20. The apparatus of claim 19, wherein the apparatus caused to determine the type of the at least two audio signals based on the at least one parameter is further caused to perform one of:

extracting and decoding at least one type signal from the at least one parameter; and

when the at least one parameter is representative of a spatial audio aspect associated with the at least two audio signals, analyzing the at least one parameter to determine the type of the at least two audio signals.

21. The apparatus of claim 20, wherein the apparatus caused to analyze the at least one parameter to determine the type of the at least two audio signals is further caused to:

determining a wideband left or right channel to total energy ratio based on the at least two audio signals;

determining a high frequency left or right channel to total energy ratio based on the at least two audio signals;

determining a sum-to-total energy ratio based on the at least two audio signals;

determining a difference to target energy ratio based on the at least two audio signals; and

determining the type of the at least two audio signals based on at least one of: the wideband left or right channel to total energy ratio; the high frequency left or right channel to total energy ratio based on the at least two audio signals; the sum-to-total energy ratio based on the at least two audio signals; and the difference to target energy ratio.

22. An apparatus according to any one of claims 16 to 21, wherein the apparatus is caused to: determining at least one type parameter associated with the type of the at least one audio signal.

23. An apparatus according to any one of claims 16 to 22, wherein the apparatus caused to process the at least two audio signals is further caused to perform one of:

converting the at least two audio signals into a panoramic surround sound audio signal representation;

converting the at least two audio signals into a multi-channel audio signal representation; and

down-mixing the at least two audio signals to fewer audio signals.

24. An apparatus according to any one of claims 16-23, wherein the apparatus caused to process the at least two audio signals is further caused to: generating at least one prototype signal based on the at least two audio signals and the type of the at least two audio signals.

25. The apparatus of any of claims 16 to 24, wherein the apparatus caused to process the at least two audio signals to be rendered is further caused to: converting the at least two audio signals based on the at least one type parameter associated with the type of the at least two audio signals.

Technical Field

This application relates to apparatus and methods for audio representation and rendering related to sound fields, but not exclusively to apparatus and methods for audio representation for audio decoders.

Background

Immersive audio codecs are being implemented to support a large number of operating points ranging from low bit rate operation to transparency. An example of such a codec is the Immersive Voice and Audio Services (IVAS) codec, which is designed to be suitable for use on communication networks such as 3GPP 4G/5G networks, including use in immersive services such as, for example, immersive voice and audio for Virtual Reality (VR). The audio codec is intended to handle the encoding, decoding and rendering of speech, music and general audio. It is also contemplated to support channel-based audio and scene-based audio input, including spatial information about sound fields and sound sources. Codecs are also expected to operate with low latency to enable conversational services and support high error robustness under various transmission conditions.

The input signal may be presented to the IVAS encoder in one of a number of supported formats (and in some allowed format combinations). For example, a mono audio signal (without metadata) may be encoded using an Enhanced Voice Service (EVS) encoder. Other input formats may use the IVAS coding tool. At least some of the inputs may use Metadata Assisted Spatial Audio (MASA) tools or any suitable spatial metadata based scheme. This is a parametric spatial audio format suitable for spatial audio processing. Parametric spatial audio processing is the field of audio signal processing that uses a set of parameters to describe spatial aspects of a sound (or sound scene). For example, in parametric spatial audio capture from a microphone array, estimating a set of parameters from the microphone array signal, such as the direction of sound in a frequency band, and the ratio of the directivity to the non-directional portion of the captured sound in the frequency band, is a typical and efficient option. As is well known, these parameters describe well the perceptual spatial characteristics of the captured sound at the location of the microphone array. These parameters may be used accordingly in the synthesis of spatial sound for headphones, speakers, or other formats such as panoramic surround sound (Ambisonics).

For example, there may be two channels (stereo) of audio signals and spatial metadata. The spatial metadata may define parameters such as: a Direction index, describing the direction of arrival of sound for a time-frequency parameter interval; a Direct-to-total energy ratio, describing the energy ratio for the direction index (i.e., the time-frequency subframe); a Spread coherence, describing the energy spread for the direction index (i.e., the time-frequency subframe); a Diffuse-to-total energy ratio, describing the energy ratio of non-directional sound over the surrounding directions; a Surround coherence, describing the coherence of the non-directional sound over the surrounding directions; a Remainder-to-total energy ratio, describing the energy ratio of the remainder of the acoustic energy (such as microphone noise) so that the energy ratios sum to one; and a Distance, describing on a logarithmic scale the distance in meters of the sound originating from the direction index (i.e., the time-frequency subframe).
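To make the metadata layout concrete, a minimal sketch of a per-tile container for these parameters follows; the class and field names are illustrative assumptions, not the normative MASA field names or bit layouts:

```python
from dataclasses import dataclass

@dataclass
class MasaTileMetadata:
    """Spatial metadata for one time-frequency tile (illustrative sketch;
    field names are assumptions, not normative MASA field names)."""
    direction_azimuth: float         # radians, arrival direction of sound
    direction_elevation: float       # radians
    direct_to_total_ratio: float     # 0..1, energy ratio of the directional part
    spread_coherence: float          # 0..1, energy spread around the direction
    diffuse_to_total_ratio: float    # 0..1, non-directional (ambient) energy ratio
    surround_coherence: float        # 0..1, coherence of the ambient part
    remainder_to_total_ratio: float  # 0..1, e.g. microphone noise; ratios sum to 1
    distance: float                  # metres (coded on a logarithmic scale)
```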

The IVAS stream may be decoded and rendered into various output formats, including binaural output, multichannel output, and Ambisonic (FOA/HOA) output. In addition, there may be an interface for external rendering, where the output format may correspond to, for example, the input format.

Since spatial (e.g., MASA) metadata depicts the desired spatial audio perception in a manner that is independent of the output format, any stream with spatial metadata can be flexibly rendered into any of the above-described output formats. However, since MASA streams may originate from various inputs, the transmitted audio signals received by the decoder may have different characteristics. Therefore, the decoder must take these aspects into account in order to be able to produce the best audio quality.

Disclosure of Invention

According to a first aspect, there is provided an apparatus comprising means configured to: obtaining at least two audio signals; determining the type of at least two audio signals; based on the determined type of the at least two audio signals, the at least two audio signals are processed to be configured to be rendered.

The at least two audio signals may be one of: a transmission audio signal; and a previously processed audio signal.

The component may be configured to obtain at least one parameter associated with at least two audio signals.

The component configured to determine the type of the at least two audio signals may be configured to determine the type of the at least two audio signals based on at least one parameter associated with the at least two audio signals.

The component configured to determine the type of the at least two audio signals based on the at least one parameter may be configured to perform one of: extracting and decoding at least one type signal from the at least one parameter; and when the at least one parameter is representative of a spatial audio aspect associated with the at least two audio signals, analyzing the at least one parameter to determine a type of the at least two audio signals.

The component configured to analyze the at least one parameter to determine the type of the at least two audio signals may be configured to: determining a wideband left or right channel to total energy ratio based on the at least two audio signals; determining a high frequency left or right channel to total energy ratio based on the at least two audio signals; determining a sum to total energy ratio based on the at least two audio signals; determining a difference to target energy ratio based on the at least two audio signals; and determining the type of the at least two audio signals based on at least one of: the wideband left or right channel to total energy ratio; the high frequency left or right channel to total energy ratio based on the at least two audio signals; the sum to total energy ratio based on the at least two audio signals; and the difference to target energy ratio.

The component may be configured to determine at least one type parameter associated with a type of the at least one audio signal.

The component configured to process the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals may be configured to convert the at least two audio signals based on at least one type parameter associated with the type of the at least two audio signals.

The types of the at least two audio signals may comprise at least one of: a capture microphone arrangement; a capture microphone separation distance; capture microphone parameters; a transmission channel identifier; a spaced audio signal type; a down-mix audio signal type; a coincident audio signal type; and a transmission channel arrangement.

The component configured to process the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals may be configured to perform one of: converting the at least two audio signals into an ambisonic audio signal representation; converting the at least two audio signals into a multi-channel audio signal representation; and downmixing the at least two audio signals to fewer audio signals.

The component configured to process the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals may be configured to generate at least one prototype signal based on the at least two audio signals and the type of the at least two audio signals.

According to a second aspect, there is provided a method comprising: obtaining at least two audio signals; determining the type of at least two audio signals; based on the determined type of the at least two audio signals, the at least two audio signals are processed to be configured to be rendered.

The at least two audio signals may be one of: a transmission audio signal; and a previously processed audio signal.

The method may further comprise obtaining at least one parameter associated with at least two audio signals.

Determining the type of the at least two audio signals may comprise determining the type of the at least two audio signals based on at least one parameter associated with the at least two audio signals.

Determining the type of the at least two audio signals based on the at least one parameter may comprise one of: extracting and decoding at least one type signal from the at least one parameter; and when the at least one parameter is representative of a spatial audio aspect associated with the at least two audio signals, analyzing the at least one parameter to determine a type of the at least two audio signals.

Analyzing the at least one parameter to determine the type of the at least two audio signals may comprise: determining a wideband left or right channel to total energy ratio based on the at least two audio signals; determining a high frequency left or right channel to total energy ratio based on the at least two audio signals; determining a sum-to-total energy ratio based on the at least two audio signals; determining a difference-to-target energy ratio based on the at least two audio signals; and determining the type of the at least two audio signals based on at least one of: the wideband left or right channel to total energy ratio; the high frequency left or right channel to total energy ratio based on the at least two audio signals; the sum-to-total energy ratio based on the at least two audio signals; and the difference-to-target energy ratio.

The method may further comprise determining at least one type parameter associated with a type of the at least one audio signal.

Processing the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals may further include converting the at least two audio signals based on at least one type parameter associated with the type of the at least two audio signals.

The types of the at least two audio signals may comprise at least one of: a capture microphone arrangement; a capture microphone separation distance; capture microphone parameters; a transmission channel identifier; a spaced audio signal type; a down-mix audio signal type; a coincident audio signal type; and a transmission channel arrangement.

Processing the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals may include one of: converting the at least two audio signals into an ambisonic audio signal representation; converting at least two audio signals into a multi-channel audio signal representation; and downmixing the at least two audio signals to fewer audio signals.

Processing the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals may include generating at least one prototype signal based on the at least two audio signals and the type of the at least two audio signals.

According to a third aspect, there is provided an apparatus comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus at least to: obtaining at least two audio signals; determining the type of at least two audio signals; based on the determined type of the at least two audio signals, the at least two audio signals are processed to be configured to be rendered.

The at least two audio signals may be one of: a transmission audio signal; and a previously processed audio signal.

The apparatus may be configured to obtain at least one parameter associated with at least two audio signals.

The apparatus caused to determine the type of the at least two audio signals may be caused to determine the type of the at least two audio signals based on at least one parameter associated with the at least two audio signals.

The apparatus caused to determine the type of the at least two audio signals based on the at least one parameter may be caused to perform one of: extracting and decoding at least one type signal from the at least one parameter; and when the at least one parameter is representative of a spatial audio aspect associated with the at least two audio signals, analyzing the at least one parameter to determine a type of the at least two audio signals.

The apparatus caused to analyze the at least one parameter to determine the type of the at least two audio signals may be caused to: determining a wideband left or right channel to total energy ratio based on the at least two audio signals; determining a high frequency left or right channel to total energy ratio based on the at least two audio signals; determining a sum-to-total energy ratio based on the at least two audio signals; determining a difference-to-target energy ratio based on the at least two audio signals; and determining the type of the at least two audio signals based on at least one of: the wideband left or right channel to total energy ratio; the high frequency left or right channel to total energy ratio based on the at least two audio signals; the sum-to-total energy ratio based on the at least two audio signals; and the difference-to-target energy ratio.

The apparatus may be caused to determine at least one type parameter associated with a type of the at least one audio signal.

The apparatus caused to process the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals may be caused to convert the at least two audio signals based on at least one type parameter associated with the type of the at least two audio signals.

The types of the at least two audio signals may comprise at least one of: a capture microphone arrangement; a capture microphone separation distance; capture microphone parameters; a transmission channel identifier; a spaced audio signal type; a down-mix audio signal type; a coincident audio signal type; and a transmission channel arrangement.

The apparatus caused to process the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals may be caused to perform one of: converting the at least two audio signals into an ambisonic audio signal representation; converting the at least two audio signals into a multi-channel audio signal representation; and downmixing the at least two audio signals to fewer audio signals.

The apparatus caused to process the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals may be caused to generate at least one prototype signal based on the at least two audio signals and the type of the at least two audio signals.

According to a fourth aspect, there is provided an apparatus comprising: an obtaining circuit configured to obtain at least two audio signals; a determination circuit configured to determine a type of the at least two audio signals; processing circuitry configured to process the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals.

According to a fifth aspect, there is provided a computer program comprising instructions (or a computer readable medium comprising program instructions) for causing an apparatus to at least: obtaining at least two audio signals; determining the type of at least two audio signals; based on the determined type of the at least two audio signals, the at least two audio signals are processed to be configured to be rendered.

According to a sixth aspect, there is provided a non-transitory computer readable medium comprising program instructions for causing an apparatus to at least: obtaining at least two audio signals; determining the type of at least two audio signals; based on the determined type of the at least two audio signals, the at least two audio signals are processed to be configured to be rendered.

According to a seventh aspect, there is provided an apparatus comprising: means for obtaining at least two audio signals; means for determining a type of the at least two audio signals; means for processing the at least two audio signals to be configured to be rendered based on the determined type of the at least two audio signals.

According to an eighth aspect, there is provided a computer readable medium comprising program instructions for causing an apparatus to at least: obtaining at least two audio signals; determining the type of at least two audio signals; based on the determined type of the at least two audio signals, the at least two audio signals are processed to be configured to be rendered.

An apparatus comprising means for performing the acts of the method as described above.

An apparatus configured to perform the actions of the method as described above.

A computer program comprising program instructions for causing a computer to perform the method as described above.

A computer program product stored on a medium may cause an apparatus to perform the methods described herein.

An electronic device may include an apparatus as described herein.

A chipset may include an apparatus as described herein.

Embodiments of the present application aim to address the problems associated with the prior art.

Drawings

For a better understanding of the present application, reference will now be made, by way of example, to the accompanying drawings, in which:

FIG. 1 schematically illustrates a system suitable for implementing an apparatus of some embodiments;

fig. 2 schematically illustrates an example decoder/renderer, in accordance with some embodiments;

FIG. 3 illustrates a flow diagram of the operation of an example decoder/renderer, in accordance with some embodiments;

fig. 4 schematically illustrates an example transmitted audio signal type determiner as illustrated in fig. 2, in accordance with some embodiments;

fig. 5 schematically illustrates a second example transmission audio signal type determiner as illustrated in fig. 2, in accordance with some embodiments;

fig. 6 illustrates a flow diagram of the operation of a second example transmission audio signal type determiner, in accordance with some embodiments;

FIG. 7 schematically illustrates an example metadata assisted spatial audio signal to ambisonics format converter as shown in FIG. 2, in accordance with some embodiments;

FIG. 8 illustrates a flowchart of the operation of an example metadata assisted spatial audio signal to Ambisonics format converter, in accordance with some embodiments;

fig. 9 schematically illustrates a second example decoder/renderer, in accordance with some embodiments;

FIG. 10 illustrates a flow diagram of the operation of another example decoder/renderer, in accordance with some embodiments;

FIG. 11 schematically illustrates an example metadata assisted spatial audio signal to multi-channel audio signal format converter as shown in FIG. 9 in accordance with some embodiments;

FIG. 12 illustrates a flowchart of the operation of an example metadata assisted spatial audio signal to multi-channel audio signal format converter in accordance with some embodiments;

fig. 13 schematically illustrates a third example decoder/renderer, in accordance with some embodiments;

FIG. 14 illustrates a flow diagram of the operation of a third example decoder/renderer, in accordance with some embodiments;

FIG. 15 schematically illustrates an example metadata assisted spatial audio signal down-mixer as shown in FIG. 13, in accordance with some embodiments;

FIG. 16 illustrates a flowchart of the operation of an example metadata assisted spatial audio signal down-mixer, in accordance with some embodiments;

fig. 17 illustrates an example apparatus suitable for implementing the devices shown in fig. 1, 2, 4, 5, 7, 9, 11, 13, and 15.

Detailed Description

Suitable means and possible mechanisms for providing efficient rendering of spatial metadata auxiliary audio signals are described in further detail below.

With respect to fig. 1, an example apparatus and system for enabling audio capture and rendering is shown. The system 100 is shown with an "analysis" section 121 and a "demultiplexer/decoder/synthesizer" section 133. The "analysis" part 121 is the part from receiving the multi-channel loudspeaker signal to the encoding of the metadata and the transmission signal, while the "demultiplexer/decoder/synthesizer" part 133 is the part from the decoding of the encoded metadata and the transmission signal to the rendering of the regenerated signal (e.g. in the form of multi-channel loudspeakers).

The inputs to the system 100 and the "analyze" section 121 are the multi-channel signal 102. Microphone channel signal inputs are described in the examples below, however, in other embodiments, any suitable input (or composite multi-channel) format may be implemented. For example, in some embodiments, the spatial analyzer and the spatial analysis may be implemented external to the encoder. For example, in some embodiments, spatial metadata associated with an audio signal may be provided to an encoder as a separate bitstream. In some embodiments, spatial metadata may be provided as a set of spatial (directional) index values.

The multi-channel signal is passed to a transmission signal generator 103 and an analysis processor 105.

In some embodiments, the transmission signal generator 103 is configured to receive a multi-channel signal, generate an appropriate transmission signal comprising a determined number of channels, and output a transmission signal 104. For example, the transmission signal generator 103 may be configured to generate a two-channel down-mix of the multi-channel signal. The determined number of channels may be any suitable number of channels. In some embodiments, the transmission signal generator is configured to otherwise select or combine the input audio signals to the determined number of channels and output them as transmission signals, e.g., by beamforming techniques.
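For illustration only, a minimal sketch of a transmission signal generator producing a two-channel down-mix from a 5.1 input; the channel order and the down-mix gains are assumptions for the sketch, not values taken from the text:

```python
import numpy as np

def downmix_5_1_to_stereo(x: np.ndarray) -> np.ndarray:
    """x: (6, n_samples) in an assumed order [L, R, C, LFE, Ls, Rs].
    Returns a (2, n_samples) transmission (transport) signal."""
    L, R, C, LFE, Ls, Rs = x
    g = 1.0 / np.sqrt(2.0)      # assumed -3 dB gain for centre and surrounds
    left = L + g * C + g * Ls
    right = R + g * C + g * Rs
    return np.stack([left, right])
```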

In some embodiments, the transmit signal generator 103 is optional, and the multi-channel signal is passed unprocessed to the "encoder/MUX" block 107 in the same manner as the transmit signal in this example.

In some embodiments, the analysis processor 105 is further configured to receive the multi-channel signal and analyze the signal to generate metadata 106 associated with the multi-channel signal and thus the transmission signal 104. The analysis processor 105 may be configured to generate metadata that may include, for each time-frequency analysis interval, a direction parameter 108 and an energy ratio parameter 110 (an example of which is a diffuseness parameter) and a coherence parameter 112. In some embodiments, the direction, energy ratio and coherence parameters may be considered spatial audio parameters. In other words, the spatial audio parameters comprise parameters intended to characterize a sound field created by the multi-channel signal (or typically two or more playback audio signals).

In some embodiments, the generated parameters may differ from frequency band to frequency band. Thus, for example, in band X, all parameters are generated and transmitted, while in band Y, only one of the parameters is generated and transmitted, and further, in band Z, no parameter is generated or transmitted. A practical example of this may be that for some frequency bands, such as the highest frequency band, some parameters are not needed for perceptual reasons. The transmission signal 104 and metadata 106 may be passed to an "encoder/MUX" block 107.

In some embodiments, the spatial audio parameters may be grouped or separated into directional and non-directional (e.g., diffuse) parameters.

The "encoder/MUX" block 107 may be configured to receive the transport (e.g., downmix) signals 104 and generate suitable encoding of these audio signals. . In some embodiments, the "encoder/MUX" may be a computer (running suitable software stored on memory and on at least one processor), or alternatively may be a specific device, for example using an FPGA or ASIC. The encoding may be implemented using any suitable scheme. In addition, the "encoder/MUX" block 107 may be configured to receive metadata and generate information in an encoded or compressed form. In some embodiments, the "encoder/MUX" block 107 may further interleave, multiplex to a single data stream 111, or embed metadata within the encoded downmix signal prior to transmission or storage as indicated by the dashed lines in fig. 1. Multiplexing may be implemented using any suitable scheme.

On the decoder side, the received or retrieved data (stream) may be received by a "demultiplexer/decoder/synthesizer" 133. The "demultiplexer/decoder/synthesizer" 133 may demultiplex the encoded stream and decode the audio signal to obtain a transmission signal. Similarly, the "demultiplexer/decoder/synthesizer" 133 may be configured to receive and decode the encoded metadata. In some embodiments, the "demultiplexer/decoder/synthesizer" 133 may be a computer (running suitable software stored on memory and at least one processor), or alternatively a specific device, for example using an FPGA or ASIC.

The "demultiplexer/decoder/synthesizer" portion 133 of the system 100 may be further configured to recreate the synthesized spatial audio in the form of the multi-channel signal 110 (which may be a multi-channel speaker format, or in some embodiments any suitable output format, such as a binaural signal or Ambisonics signal for headphone listening, depending on the use case) in any suitable format based on the transmission signal and the metadata.

Thus, in summary, first, the system (analysis portion) is configured to receive a multi-channel audio signal.

In turn, the system (analysis portion) is configured to generate a suitable transmission audio signal (e.g., by selecting or down-mixing some of the audio signal channels).

The system is further configured to encode the transmission signal and the metadata for storage/transmission.

Thereafter, the system may store/transmit the encoded transmission and metadata.

The system may acquire/receive encoded transmissions and metadata.

In turn, the system is configured to extract transport and metadata from the encoded transport and metadata parameters, e.g., to demultiplex and decode the encoded transport and metadata parameters.

The system (synthesizing section) is configured to synthesize an output multi-channel audio signal based on the extracted transmission audio signal and metadata. As regards the decoder (synthesis part), it is configured to receive spatial metadata and transmit audio signals, which may be, for example, (a possibly preprocessed version of) a down-mix of 5.1 signals, two spaced microphone signals from a mobile device, or two beam patterns from a coincident microphone array.

The decoder may be configured to render spatial audio (such as Ambisonics) from the spatial metadata and the transport audio signal. This is typically achieved by using one of two methods for rendering spatial audio from such input as follows: linear rendering and parametric rendering.

Linear rendering refers to using some static mixing weights to generate the desired output, assuming processing in frequency bands. Parametric rendering refers to modifying a transmitted audio signal based on spatial metadata to generate a desired output.
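A compact illustration of the distinction just described, assuming band-wise processing of a time-frequency transport signal; the array shapes and the way the adaptive gains are obtained are assumptions for the sketch, not details taken from the text:

```python
import numpy as np

def linear_render(S: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Linear rendering: a static mixing matrix per frequency band.
    S: (n_transport, n_bins, n_frames); M: (n_bins, n_out, n_transport)."""
    return np.einsum('bot,tbn->obn', M, S)

def parametric_render(S: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Parametric rendering: per-tile gains derived from spatial metadata
    modify a prototype of the transport signal (here simply channel 0).
    gains: (n_out, n_bins, n_frames)."""
    prototype = S[0]                      # illustrative prototype choice
    return gains * prototype[np.newaxis]  # (n_out, n_bins, n_frames)
```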

Methods have been proposed to generate Ambisonics from various inputs:

in the case of a transmitted audio signal and spatial metadata from a 5.1 signal, the parameterization process may be used to render Ambisonics;

in the case of transmitted audio signals and spatial metadata from spaced microphones, a combination of linear and parametric processing may also be used;

in the case of the transmitted audio signals and spatial metadata from coincident microphones, a combination of linear and parametric processing may be used.

Thus, there are a number of methods for rendering Ambisonics from various inputs. However, all of these Ambisonic rendering methods assume some input. Some embodiments as discussed below illustrate apparatus and methods to prevent the following problems from occurring.

Using linear rendering, the Y signal, which is the left-right oriented first-order (dipole) signal in Ambisonics, may be created from two coincident opposing cardioids through Y(f) = S0(f) − S1(f), where f is the frequency. As another example, the Y signal may be created from spaced microphones through Y(f) = −i (S0(f) − S1(f)) geq(f), where geq(f) is a frequency-dependent equalizer (which depends on the microphone distance) and i is the imaginary unit. The processing for spaced microphones (including a −90 degree phase shift and frequency-dependent equalization) is different from the processing for coincident microphones, and using the wrong processing technique may result in degraded audio quality.
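A sketch of the two linear Y derivations just described, operating on frequency-domain channel signals; g_eq below is a placeholder equalizer array supplied by the caller, not the equalizer defined in the text:

```python
import numpy as np

def y_from_coincident_cardioids(S0: np.ndarray, S1: np.ndarray) -> np.ndarray:
    """Y(f) = S0(f) - S1(f) for coincident left/right-facing cardioids."""
    return S0 - S1

def y_from_spaced_mics(S0: np.ndarray, S1: np.ndarray,
                       g_eq: np.ndarray) -> np.ndarray:
    """Y(f) = -i * (S0(f) - S1(f)) * g_eq(f) for spaced omni microphones.
    The -90 degree phase shift is the multiplication by -1j; g_eq is a
    frequency-dependent (microphone-distance-dependent) equalizer."""
    return -1j * (S0 - S1) * g_eq
```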

The use of parametric rendering in some rendering schemes requires the use of linear means to generate the "prototype" signal. In turn, these prototype signals are adaptively modified in the time-frequency domain based on spatial metadata. Optimally, the prototype signal should follow the target signal as much as possible, so that the need for the parameterization process is minimized, thereby minimizing possible artifacts from the parameterization process. For example, the prototype signal should contain to a sufficient extent all signal components related to the corresponding output channel.

For example, when rendering an omnidirectional signal W (similar effects exist for other Ambisonic signals), a prototype may be created from a stereo transmission audio signal, for example, in two straightforward ways:

selecting one channel (e.g., left channel); or

summing the two channels.

Which of these is chosen depends largely on the type of the transmission audio signals. If the transmission signals originate from a 5.1 signal, typically the left transmission audio signal contains only left-side content and the right transmission audio signal contains only right-side content (when a conventional down-mix matrix is used). Thus, using only one channel as the prototype would lose the signal content of the other channel, resulting in clearly audible artifacts (in the worst case, no signal at all exists on the selected channel). In this case, therefore, the W prototype is preferably constructed as the sum of the two channels. On the other hand, if the transmission signals originate from spaced microphones, using the sum of the transmission audio signals as the prototype for the W signal may result in severe comb filtering (because of the time delay between the signals). This can lead to artifacts similar to those described above. In this case, it is preferable to select only one of the two channels as the W prototype, at least in the high frequency range.

Therefore, there is no good choice for all transmitted audio signal types.

Thus, applying spatial audio processing designed for one transmitted audio signal type to another transmitted audio signal type using linear and parametric methods is expected to produce significant audio quality degradation.
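A minimal sketch of the type-dependent W prototype choice discussed above; the frequency split and the type labels follow the examples in this description, while the exact rule is an assumption for illustration:

```python
import numpy as np

def w_prototype(S: np.ndarray, signal_type: str, high_band_start: int) -> np.ndarray:
    """S: (2, n_bins, n_frames) time-frequency transmission signals.
    Returns a W (omnidirectional) prototype chosen per transmission signal type."""
    if signal_type == "downmix":
        return S[0] + S[1]                      # sum keeps one-sided content
    if signal_type == "spaced":
        proto = S[0] + S[1]                     # low frequencies: channels are in phase
        proto[high_band_start:] = S[0, high_band_start:]  # one channel avoids comb filtering
        return proto
    return S[0] + S[1]                          # e.g. coincident capture: sum is safe
```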

The concepts discussed in further detail with respect to the following embodiments and examples relate to audio encoding and decoding, wherein a decoder receives at least two transmission audio signals from an encoder. Further, embodiments may be wherein the transmitted audio signals may be of at least two types, e.g. a down-mix of 5.1 signals, spaced microphone signals, or coincident microphone signals. Furthermore, in some embodiments, the apparatus and methods implement a solution that improves the quality of the processing of the transmitted audio signal and provides the determined output (e.g., Ambisonics, 5.1, mono). By determining the type of the transmission audio signal and performing audio processing based on the determined type of the transmission audio signal, quality can be improved.

In some embodiments as discussed in further detail herein, the transmission audio signal type is determined by any one of:

obtaining metadata indicating the transmission audio signal type; or

determining the transmission audio signal type based on the transmission audio signals themselves (and possibly the spatial metadata, if available).

Metadata that specifies the transmission audio signal type may indicate, for example, one of the following cases:

spaced microphones (possibly accompanied by the position of the microphone);

coincident microphones or beams that are effectively similar to coincident microphones (possibly accompanied by directional patterns of microphones);

down-mixing of multi-channel audio signals, such as 5.1.

Determining the transmission audio signal type based on an analysis of the transmission audio signals themselves may be based on comparing the spectral characteristics of frequency bands, or of the signals combined in different ways, with the expected spectral characteristics (based in part on the spatial metadata, if available).

Further, in some embodiments, the processing of the audio signal may include:

rendering an Ambisonic signal;

rendering a multi-channel audio signal (e.g., 5.1); and

the transmission audio signals are down-mixed to a smaller number of audio signals.

Fig. 2 shows a schematic diagram of an example decoder suitable for implementing some embodiments. Example embodiments may be implemented, for example, within a "demultiplexer/decoder/synthesizer" block 133. In this example, the input is a Metadata Assisted Spatial Audio (MASA) stream containing two audio channels and spatial metadata. However, as discussed herein, the input format may be any suitable metadata-assisted spatial audio format.

The (MASA) bitstream is forwarded to the transmit audio signal type determiner 201. The transmission audio signal type determiner 201 is configured to determine a transmission audio signal type 202 and possibly some additional parameters 204 (such as microphone distance) based on the bitstream. The determined parameters are forwarded to MASA to Ambisonic signal converter 203.

The MASA-to-Ambisonic signal converter 203 is configured to receive the bitstream and the transmission audio signal type 202 (and possibly some additional parameters 204) and to convert the MASA stream into an Ambisonic signal based on the determined transmission audio signal type 202 (and possibly some additional parameters 204).

The operation of this example is summarized in the flowchart shown in fig. 3.

As shown in step 301 of fig. 3, the first operation is to receive or obtain a bit stream (MASA stream).

The following operation is to determine the type of audio signal to transmit (and to generate a type signal or indicator and possibly other additional parameters) based on the bitstream, as shown in step 303 of fig. 3.

After the transmit audio signal type has been determined, the next operation is to convert the bit stream (MASA stream) into an Ambisonic signal based on the determined transmit audio signal type, as shown in step 305 of fig. 3.

Fig. 4 shows a schematic diagram of an example transmission audio signal type determiner 201. In this example, the example transmit audio signal type determiner is applicable to the case where the transmit audio signal type is available in a MASA stream.

In this example, the example transmission audio signal type determiner 201 includes a transmission audio signal type extractor 401. The transport audio signal type extractor 401 is configured to receive a bit (MASA) stream and extract (i.e., read and/or decode) a type indicator from the MASA stream. Such information may be obtained, for example, in the "channel audio format" field of the MASA stream. Furthermore, if additional parameters are available, they are also extracted. This information is output from the transmission audio signal type extractor 401. In some embodiments, the transmitted audio signal types may include "spaced", "downmix", "coincidence". In some other embodiments, the transmission audio signal type may include any suitable value.

Fig. 5 shows a schematic diagram of another example transmission audio signal type determiner 201. In this example, the transmission audio signal type cannot be extracted or decoded directly from the MASA stream. Thus, this example estimates or determines the transmit audio signal type from an analysis of the MASA stream. In some embodiments, this determination is based on using a set of estimators/energy comparisons that reveal certain spectral effects of different transmitted audio signal types.

In some embodiments, the transmission audio signal type determiner 201 includes a transmission audio signal and a spatial metadata extractor/decoder 501. The transmission audio signal and spatial metadata extractor/decoder 501 is configured to receive a MASA stream, and extract and/or decode a transmission audio signal and spatial metadata from the MASA stream. The resulting transmission audio signal 502 may be forwarded to a time/frequency converter 503. In addition, the resulting spatial metadata 522 may be forwarded to the difference and target energy comparator 511.

In some embodiments, the transmission audio signal type determiner 201 includes a time/frequency transformer 503. The time/frequency transformer 503 is configured to receive the transmission audio signals 502 and convert them into the time-frequency domain. Suitable transforms include, for example, short-time Fourier transforms (STFTs) and complex-modulated quadrature mirror filterbanks (QMFs). The resulting signal is denoted Si(b,n), where i is a channel index, b is a frequency bin index, and n is a time index. In case the transmission audio signals (from the output of the extractor and/or decoder) are already in the time-frequency domain, this block may be omitted or, alternatively, may involve a transformation from one time-frequency domain representation to another. The T/F domain transmission audio signals 504 may be forwarded to the comparators.
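A sketch of the time/frequency transform step using an STFT; the sampling rate and frame length are placeholder values, and the description equally allows a complex-modulated QMF instead:

```python
import numpy as np
from scipy.signal import stft

def to_time_frequency(transport: np.ndarray, fs: int = 48000) -> np.ndarray:
    """transport: (2, n_samples). Returns S[i, b, n] as in the description:
    i = channel index, b = frequency bin index, n = time (frame) index."""
    _, _, S0 = stft(transport[0], fs=fs, nperseg=960)   # 20 ms frames (assumed)
    _, _, S1 = stft(transport[1], fs=fs, nperseg=960)
    return np.stack([S0, S1])                           # (2, n_bins, n_frames)
```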

In some embodiments, the transmitted audio signal type determiner 201 includes a wideband L/R versus total energy comparator 505. The wideband L/R versus total energy comparator 505 is configured to receive the T/F domain transmitted audio signal 504 and output a wideband L/R versus total energy ratio parameter.

Within the wideband L/R versus total energy comparator 505, the wideband left, right and total energies are calculated: the per-channel energies Eleft,bb(n) and Eright,bb(n) are obtained by summing |S0(b,n)|² and |S1(b,n)|², respectively, over all frequency bins b = 0, …, B−1, and

Etotal,bb(n) = Eleft,bb(n) + Eright,bb(n)

where B is the number of frequency bins. These energies are smoothed, for example, by the following equation:

E′x,bb(n) = a1 Ex,bb(n) + b1 E′x,bb(n−1)

where a1 and b1 are smoothing coefficients (e.g., a1 = 0.01, b1 = 1 − a1). In turn, the wideband L/R versus total energy comparator 505 is configured to select and scale the smaller of the left and right energies:

E′lr,bb(n) = 2 min(E′left,bb(n), E′right,bb(n))

where the multiplier 2 is used to normalize the energy relative to the sum E′total,bb(n) of the two channels.

Further, the wideband L/R versus total energy comparator 505 may generate a wideband L/R versus total energy ratio 506 as E′lr,bb(n) / E′total,bb(n), which is in turn output as the ratio 506.
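A sketch of the wideband L/R-to-total comparison above, using the same recursive smoothing; the frame loop and state handling are an assumed implementation detail:

```python
import numpy as np

def wideband_lr_to_total_ratio(S: np.ndarray, a1: float = 0.01) -> np.ndarray:
    """S: (2, n_bins, n_frames) T/F transmission signals.
    Returns the smoothed ratio E'_lr,bb(n) / E'_total,bb(n) per frame n."""
    b1 = 1.0 - a1
    E = np.sum(np.abs(S) ** 2, axis=1)   # (2, n_frames): wideband energy per channel
    n_frames = E.shape[1]
    ratios = np.zeros(n_frames)
    e_left = e_right = 0.0               # smoothed energies E'_x,bb
    for n in range(n_frames):
        e_left = a1 * E[0, n] + b1 * e_left
        e_right = a1 * E[1, n] + b1 * e_right
        e_total = e_left + e_right
        e_lr = 2.0 * min(e_left, e_right)
        ratios[n] = e_lr / max(e_total, 1e-12)
    return ratios
```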

In some embodiments, the transmitted audio signal type determiner 201 includes a high frequency L/R versus total energy comparator 507. The high frequency L/R versus total energy comparator 507 is configured to receive the T/F domain transmitted audio signal 504 and output a high frequency L/R versus total energy ratio parameter.

Within the high frequency L/R versus total energy comparator 507, the high-band left, right and total energies are calculated: the per-channel energies Eleft,hi(n) and Eright,hi(n) are obtained by summing |S0(b,n)|² and |S1(b,n)|², respectively, over the frequency bins b = B1, …, B−1, and

Etotal,hi(n) = Eleft,hi(n) + Eright,hi(n)

where B1 is the first bin of the high frequency region (its value depends on the applied T/F transform and may correspond, for example, to 6 kHz). These energies are smoothed, for example, by the following equation:

E′x,hi(n) = a2 Ex,hi(n) + b2 E′x,hi(n−1)

where a2 and b2 are smoothing coefficients. At high frequencies, the energy differences may change more rapidly, and thus the smoothing coefficients may be set to provide less smoothing (e.g., a2 = 0.1, b2 = 1 − a2).

Further, the high frequency L/R versus total energy comparator 507 may be configured to select the smaller of the left and right energies and multiply the result by 2:

E′lr,hi(n) = 2 min(E′left,hi(n), E′right,hi(n))

Further, the high frequency L/R versus total energy comparator 507 may generate a high frequency L/R versus total energy ratio 508 as E′lr,hi(n) / E′total,hi(n), which is then output.

In some embodiments, the transmitted audio signal type determiner 201 includes a sum versus total energy comparator 509. The sum-to-total energy comparator 509 is configured to receive the T/F domain transmitted audio signal 504 and output a sum-to-total energy ratio parameter. The sum versus total energy comparator 509 is configured to detect the case where the two channels are out of phase at some frequencies, which is a typical phenomenon of spaced microphone recordings, among other things.

The sum versus total energy comparator 509 is configured to calculate, for each frequency bin, the energy of the sum signal and the total energy:

Esum(b,n) = |S0(b,n) + S1(b,n)|²

Etotal(b,n) = |S0(b,n)|² + |S1(b,n)|²

These energies can be smoothed, for example, by the following equation:

E′x(b,n) = a3 Ex(b,n) + b3 E′x(b,n−1)

where a3 and b3 are smoothing coefficients (e.g., a3 = 0.01, b3 = 1 − a3).

Further, the sum-to-total energy comparator 509 is configured to calculate a minimum sum-to-total energy ratio 510, χ(n), as the minimum over the frequency bins b = 0, …, B2 of E′sum(b,n) / E′total(b,n), where B2 is the highest bin of the frequency region in which this calculation is performed (its value depends on the T/F transform used and may correspond, for example, to 10 kHz).

Further, the sum-to-total energy comparator 509 is configured to output the ratio χ(n) 510.
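A sketch of the sum-to-total comparison, which flags the per-bin phase cancellation typical of spaced-microphone capture; the smoothing state handling is an assumed implementation detail:

```python
import numpy as np

def min_sum_to_total_ratio(S: np.ndarray, B2: int, a3: float = 0.01) -> np.ndarray:
    """S: (2, n_bins, n_frames). Returns chi(n): the minimum over bins 0..B2 of
    the smoothed sum-signal energy divided by the smoothed total energy."""
    b3 = 1.0 - a3
    E_sum = np.abs(S[0] + S[1]) ** 2                 # (n_bins, n_frames)
    E_tot = np.abs(S[0]) ** 2 + np.abs(S[1]) ** 2
    n_bins, n_frames = E_sum.shape
    e_sum_s = np.zeros(n_bins)                       # E'_sum(b, n)
    e_tot_s = np.zeros(n_bins)                       # E'_total(b, n)
    chi = np.zeros(n_frames)
    for n in range(n_frames):
        e_sum_s = a3 * E_sum[:, n] + b3 * e_sum_s
        e_tot_s = a3 * E_tot[:, n] + b3 * e_tot_s
        chi[n] = np.min(e_sum_s[: B2 + 1] / np.maximum(e_tot_s[: B2 + 1], 1e-12))
    return chi
```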

In some embodiments, the transmitted audio signal type determiner 201 includes a difference versus target energy comparator 511. The difference to target energy comparator 511 is configured to receive the T/F domain transmitted audio signal 504 and the spatial metadata 522 and output a difference to target energy ratio parameter 512.

The difference versus target energy comparator 511 is configured to calculate the difference energy of the left and right channels:

Esub(b,n) = |S0(b,n) − S1(b,n)|²

This can be thought of as a "prototype" of the Ambisonic Y signal (the Y signal has a dipole directional pattern, with a positive lobe to the left and a negative lobe to the right) for at least some input signal types.

Further, the difference versus target energy comparator 511 may be configured to calculate the target energy Etarget(b,n) for the Y signal. This is based on estimating, from the spatial metadata, how the total energy should be distributed among the spherical harmonics. For example, in some embodiments, the difference versus target energy comparator 511 is configured to construct a target covariance matrix (channel energies and cross-correlations) based on the spatial metadata and the energy estimates. In some embodiments, however, only the energy of the Y signal is estimated, which is one entry of the target covariance matrix. The target energy Etarget(b,n) for Y consists of two parts:

Etarget(b,n) = Etarget,amb(b,n) + Etarget,dir(b,n)

where Etarget,amb(b,n) is the ambient/non-directional portion of the target energy, which is defined by:

Etarget,amb(b,n) = Etotal(b,n) (1 − r(b,n)) (1 − csur(b,n)) / 3

where r(b,n) is the direct-to-total energy ratio parameter (between 0 and 1) of the spatial metadata and csur(b,n) is the surround coherence parameter (between 0 and 1) of the spatial metadata (surround-coherent sound is not captured by the Y dipole, since in that case the positive and negative lobes cancel each other out). The division by 3 is because an SN3D normalization scheme is assumed for the Ambisonic output, in which case the ambient energy of the Y component is one third of the total omnidirectional energy.

It should be noted that the frequency and/or temporal resolution of the spatial metadata may be lower than that of the signal, so that the same parameter values may apply to several frequency or time indices b, n.

Etarget,dir(b,n) is the directional portion of the energy. To construct it, a spread coherence distribution vector VDISTR,3(b,n) is defined as a function of the spread coherence parameter cspread(b,n) (between 0 and 1) of the spatial metadata.

The difference versus target energy comparator 511 may also be configured to determine a vector θ(b,n) of azimuth values, based on the azimuth value of the spatial metadata in units of radians. With the sin() operation applied entry-wise to the vector, the direct portion of the target energy is:

Etarget,dir(b,n) = sin(θ(b,n))^T VDISTR,3(b,n) Etotal(b,n) r(b,n)

Etarget(b,n) is thus obtained. In some embodiments, these energies may be smoothed, for example, by the following equation:

E′x(b,n) = a4 Ex(b,n) + b4 E′x(b,n−1)

where a4 and b4 are smoothing coefficients (e.g., a4 = 0.0004, b4 = 1 − a4).

Further, the difference to target energy comparator 511 is configured to calculate the difference to target ratio 512 using the energies at the lowest frequency bin, for example as E′sub(0,n) / E′target(0,n), which is then output.
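For illustration, a sketch of the ambient part of the Y target energy and of the difference-to-target comparison; the (1 − r)(1 − c_sur)/3 factorization is a reconstruction inferred from the surrounding description rather than a formula quoted from it, and the directional term is omitted:

```python
import numpy as np

def y_target_ambient(E_total: np.ndarray, r: np.ndarray,
                     c_sur: np.ndarray) -> np.ndarray:
    """Ambient/non-directional part of the Y target energy per (b, n).
    Assumes SN3D normalization (hence the division by 3); the exact
    factorization is inferred from the description, not quoted from it."""
    return E_total * (1.0 - r) * (1.0 - c_sur) / 3.0

def diff_to_target_ratio(E_sub_lowbin: np.ndarray,
                         E_target_lowbin: np.ndarray) -> np.ndarray:
    """Difference-to-target ratio at the lowest frequency bin, per frame."""
    return E_sub_lowbin / np.maximum(E_target_lowbin, 1e-12)
```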

In some embodiments, the transmit audio signal type determiner 201 comprises a transmit audio signal type (based on the estimated metric) determiner 513. The transmit audio signal type determiner 513 is configured to receive the wideband L/R to total energy ratio 506, the high frequency L/R to total energy ratio 508, the minimum sum to total energy ratio 510, and the difference to target ratio 512, and determine the transmit audio signal type based on these received estimated metrics.

The decision may be made in various ways, and the actual implementation may differ in many details, such as the T/F transform used. As one non-limiting example, the transmit audio signal type (based on the estimated metrics) determiner 513 first computes the variation of a spacing metric Ξs(n), which may be accumulated when ν(n) < −3 and is otherwise reset:

Ξs(n) = 0 if ν(n) ≥ −3

Further, the transmit audio signal type (based on the estimated metrics) determiner 513 may be configured to compute the variation of the downmix metrics Ξd1(n) and Ξd2(n) in a corresponding manner:

Ξd1(n) = 0 if ν(n) ≤ 0

Ξd2(n) = 0 if η(n) ≥ −12

where ν(n) and η(n) are metrics derived from the received ratios (for example, expressed in decibels).

In turn, the transmit audio signal type (based on the estimated metrics) determiner 513 may decide, based on these metrics, whether the transmission audio signals originate from spaced microphones or from a down-mix of a surround sound signal (such as 5.1). For example:

If Ξs(n) > 1, T(n) = "spaced"

Otherwise, if Ξd1(n) > 1 ∨ Ξd2(n) > 1, T(n) = "downmix"

Otherwise, T(n) = T(n−1)

In this example, the transmit audio signal type (based on the estimated metrics) determiner 513 does not detect a coincident microphone type. However, in practice, processing according to the T(n) = "downmix" type also generally produces good audio in the case of coincident capture (e.g., cardioids oriented to the left and right).

In turn, the transmit audio signal type (based on the estimated metrics) determiner 513 may be configured to output the transmit audio signal type T(n) as the transmission audio signal type 202. In some embodiments, other parameters 204 may also be output.
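A sketch of the type decision; the counter-style update of the Ξ metrics and the mapping of ν(n) and η(n) onto particular ratios are assumptions, and only the thresholds and the final decision rule follow the description above:

```python
def decide_transport_type(nu: float, eta: float, state: dict) -> str:
    """state holds the running metrics Xi_s, Xi_d1, Xi_d2 and the previous type.
    The increment-style update is an assumed interpretation of 'variation'."""
    state["Xi_s"] = state.get("Xi_s", 0) + 1 if nu < -3 else 0
    state["Xi_d1"] = state.get("Xi_d1", 0) + 1 if nu > 0 else 0
    state["Xi_d2"] = state.get("Xi_d2", 0) + 1 if eta < -12 else 0

    if state["Xi_s"] > 1:
        t = "spaced"
    elif state["Xi_d1"] > 1 or state["Xi_d2"] > 1:
        t = "downmix"
    else:
        t = state.get("type", "downmix")   # keep previous decision T(n-1)
    state["type"] = t
    return t
```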

Fig. 6 summarizes the operation of the apparatus shown in fig. 5. Thus, in some embodiments, the first operation is to extract and/or decode the transport audio signal and metadata from the MASA stream (or bitstream), as shown in step 601 of fig. 6.

The next operation may be to perform a time-frequency domain transform on the transmitted audio signal, as shown in step 603 in fig. 6.

A series of comparisons may then be made. For example, as shown in step 605 of FIG. 6, a wideband L/R to total energy ratio may be generated by comparing the wideband L/R energy value to the total energy value.

For example, as shown in step 607 of FIG. 6, a high frequency L/R to total energy ratio may be generated by comparing the high frequency L/R energy value to the total energy value.

As shown in step 609 of fig. 6, a sum-to-total energy ratio may be generated by comparing the sum energy value to the total energy value.

Further, as shown in step 611 in FIG. 6, a difference to target energy ratio may be generated.

After the metrics have been determined, the method may then determine the transmission audio signal type by analyzing the metric ratios, as shown in step 613 of fig. 6.

FIG. 7 shows an example MASA to Ambisonic converter 203 in more detail. The MASA to Ambisonic converter 203 is configured to receive a MASA stream (bitstream) and a transmission audio signal type 202 and possibly additional parameters 204, and to convert the MASA stream into Ambisonic signals based on the determined transmission audio signal type.

The MASA to Ambisonic converter 203 includes a transport audio signal and spatial metadata extractor/decoder 501, which is configured to receive the MASA stream and output the transport audio signals 502 and the spatial metadata 522 in the same manner as in the transport audio signal type determiner shown in fig. 5 and discussed herein. In some embodiments, the extractor/decoder 501 is the extractor/decoder from the transmitted audio signal type determiner. The resulting transmission audio signals 502 may be forwarded to a time/frequency transformer 503. Furthermore, the resulting spatial metadata 522 may be forwarded to the signal mixer 705.

In some embodiments, the MASA to Ambisonic converter 203 includes a time/frequency transformer 503. The time/frequency transformer 503 is configured to receive the transmission audio signals 502 and convert them into the time-frequency domain. Suitable transforms include, for example, the short-time Fourier transform (STFT) and complex-modulated quadrature mirror filterbanks (QMF). The resulting signals are denoted S_i(b,n), where i is the channel index, b is the frequency bin index, and n is the time index. This block may be omitted if the output of the audio extraction and/or decoding is already in the time-frequency domain, or alternatively it may contain a transform from one time-frequency domain representation to another. The T/F domain transmission audio signals 504 may be forwarded to the prototype signal creator 701. In some embodiments, the time/frequency transformer 503 is the same time/frequency transformer as in the transmit audio signal type determiner.

In some embodiments, the MASA to Ambisonic converter 203 includes a prototype signal creator 701. The prototype signal creator 701 is configured to receive the T/F domain transmission audio signals 504, the transmission audio signal type 202 and possibly the additional parameters 204. The T/F prototype signals 702 may be output to a signal mixer 705 and a decorrelator 703.

In some embodiments, MASA to Ambisonic converter 203 includes a decorrelator 703. The decorrelator 703 is configured to receive the T/F prototype signal 702, apply decorrelation, and output a decorrelated T/F prototype signal 704 to the signal mixer 705. In some embodiments, decorrelator 703 is optional.

In some embodiments, the MASA to Ambisonic converter 203 includes a signal mixer 705. The signal mixer 705 is configured to receive the T/F prototype signals 702, the decorrelated T/F prototype signals and the spatial metadata 522.

The prototype signal creator 701 is configured to generate a prototype signal for each of the spherical harmonic Ambisonic (FOA/HOA) components, based on the transmission audio signal type.

In some embodiments, the prototype signal creator 701 is configured to operate such that:

If T(n) = "spaced", a prototype of the W signal can be created as follows:

W_proto(b,n) = (S_0(b,n) + S_1(b,n)) / 2, b ≤ B_3

W_proto(b,n) = S_0(b,n), b > B_3

In practice, W_proto(b,n) is thus created as an average of the transmitted audio signals at low frequencies (where the signals are approximately in phase and no comb filtering occurs), and by selecting one of the channels at high frequencies. The value of B_3 depends on the T/F transform and on the microphone distance. If the distance is unknown, some default value (e.g., a value corresponding to 1 kHz) may be used.

If T(n) = "downmix" or T(n) = "coincident", a prototype of the W signal can be created as follows:

W_proto(b,n) = S_0(b,n) + S_1(b,n)

W_proto(b,n) is created by summing the transmitted audio signals, since it can be assumed that there is typically no significant delay between the original audio signals with these signal types.
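A non-limiting sketch of the W prototype creation described above; the halving factor used for the low-frequency average in the "spaced" case, and the function name w_prototype, are assumptions made for the example only.

import numpy as np

def w_prototype(S0, S1, transport_type, B3):
    """Omnidirectional (W) prototype from two T/F transport signals.

    S0, S1: complex arrays of shape (num_bins, num_frames).
    "spaced": average at low bins (b <= B3), channel selection above;
    "downmix"/"coincident": plain sum (no significant inter-channel delay assumed).
    """
    if transport_type == "spaced":
        W = S0.copy()
        W[:B3 + 1, :] = 0.5 * (S0[:B3 + 1, :] + S1[:B3 + 1, :])
        return W
    return S0 + S1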

Regarding the Y prototype signals:

If T(n) = "spaced", a prototype of the Y signal can be created as follows:

Y_proto(b,n) = -i (S_0(b,n) - S_1(b,n)) g_eq(b), B_4 < b ≤ B_5

Y_proto(b,n) = S_0(b,n), b > B_5

At intermediate frequencies (between B_4 and B_5), a dipole signal can be created by subtracting the transmitted signals from one another, phase shifting by -90 degrees, and equalizing. It can therefore be a good prototype of the Y signal, especially if the microphone distance is known, so that the equalization coefficients are suitable. This is not feasible at low and high frequencies, where the prototype signal is generated in the same way as the omnidirectional W signal.

If the microphone distance is known accurately, the Y prototype can be used directly as Y at those frequencies (i.e., Y(b,n) = Y_proto(b,n)). If the microphone spacing is unknown, g_eq(b) = 1 may be used.
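A non-limiting sketch of the Y prototype creation for the "spaced" case; the band edges B4 and B5, the per-bin equalization gains g_eq and the function name are placeholders for the example only.

import numpy as np

def y_prototype_spaced(S0, S1, B4, B5, g_eq=None):
    """Y prototype for spaced-microphone transport signals.

    Mid band (B4 < b <= B5): dipole -i*(S0 - S1)*g_eq(b) (subtraction,
    -90 degree phase shift, equalization). Other bins fall back to a
    channel/omnidirectional-style prototype. If the spacing is unknown,
    g_eq(b) = 1 is used.
    """
    num_bins = S0.shape[0]
    if g_eq is None:
        g_eq = np.ones(num_bins)
    Y = S0.copy()                                   # low/high-frequency fallback
    mid = slice(B4 + 1, B5 + 1)
    Y[mid, :] = -1j * (S0[mid, :] - S1[mid, :]) * g_eq[mid, None]
    return Y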

In some embodiments, the signal mixer 705 may apply gain processing in the frequency band (with potential gain smoothing) to correct the energy of W_proto(b,n) to the target energy in that band. The target energy of the omnidirectional signal in a frequency band may be the sum of the transmitted audio signal energies in that band. The result of this processing is the omnidirectional signal W(b,n).

When Y_proto(b,n) cannot be used directly as the Y signal Y(b,n) and the frequency is between B_4 and B_5, adaptive gain processing is performed. This situation is similar to the omnidirectional W case described above: the prototype signal is already a Y dipole, apart from a possibly erroneous spectrum, and the signal mixer performs gain processing on the prototype signal in the frequency band. (In addition, with respect to the Y signal, decorrelation is not required in this particular context.) Gain processing may refer to using the spatial metadata (direction, ratio, other parameters) and an overall signal energy estimate (e.g., the sum of the transmitted signal energies) in a frequency band to determine what the energy of the Y component should be in that band, and then using a gain to correct the energy of the prototype signal in the band to the determined energy, the result being the output Y(b,n).

In the current context of T(n) = "spaced", the aforementioned process of generating Y(b,n) is not valid for all frequencies. With this type of transmission signal, the signal mixer and decorrelator are configured differently according to the frequency, since the prototype signal differs at different frequencies. To illustrate the different kinds of prototype signals, consider a scene where sound arrives from the negative-gain direction of the Y dipole (which has both positive and negative lobes). At intermediate frequencies (between B_4 and B_5), the phase of the Y prototype signal is opposite to the phase of the W prototype signal, as it should be for that direction of the arriving sound. At other frequencies (below B_4 and above B_5), the phase of the prototype Y signal is the same as the phase of the W prototype signal. The synthesis of the appropriate phase (and energy and correlation) is in turn taken into account by the signal mixer and decorrelator at those frequencies.

At low frequencies (below B_4), where the wavelength is large, the phase difference between audio signals captured with spaced microphones (which are typically somewhat close to each other) is small. Thus, for SNR reasons, the prototype signal creator should not be configured to generate the prototype signal in the same way as at the frequencies between B_4 and B_5. Instead, a selected channel or an omnidirectional-type signal is often used as the prototype signal. At high frequencies (above B_5), where the wavelength is small, spatial aliasing severely distorts the beam pattern (if generated as for the frequencies between B_4 and B_5), so that a channel-selection omnidirectional prototype signal is preferably used.

The following describes the signal mixer and decorrelator configurations at these frequencies (below B_4 or above B_5). For a simple example, the spatial metadata parameter set consists of the azimuth angle θ and the ratio r in the frequency band. The gain sin(θ) sqrt(r) is applied to the prototype signal within the signal mixer to generate the Y dipole signal, and the result is the coherent part signal. The prototype signal is also decorrelated (in the decorrelator), and the decorrelated result is received in the signal mixer, where it is multiplied by the factor sqrt(1-r) g_order; the result is the incoherent part signal. According to the known SN3D normalization scheme, the gain g_order is the diffuse-field gain at the order of the spherical harmonic function: for example, for the first order (as in the case of the Y dipole) it is sqrt(1/3), for the second order sqrt(1/5), for the third order sqrt(1/7), and so on. The coherent part signal and the incoherent part signal are then added together. The result is a synthesized Y signal, apart from possible erroneous energy due to possibly erroneous prototype signal energy. Gain processing similar to that applied at the intermediate frequencies (between B_4 and B_5) can be applied to correct the energy in the frequency band to the desired target, and the output is the signal Y(b,n).
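As a non-limiting sketch of the mixing described above for one frequency band outside B4..B5, assuming the azimuth/ratio metadata and a decorrelated copy of the prototype are already available (function and variable names are illustrative):

import numpy as np

def synthesize_y_band(W_proto, W_decorr, azimuth, ratio, order=1):
    """Synthesize the Y component in one band from a prototype and its decorrelated copy.

    W_proto, W_decorr: per-frame complex samples of the prototype and its
    decorrelated version. azimuth: theta(b, n) in radians; ratio: r(b, n) in [0, 1].
    """
    g_order = np.sqrt(1.0 / (2 * order + 1))   # SN3D diffuse-field gain, sqrt(1/3) for dipoles
    coherent = np.sin(azimuth) * np.sqrt(ratio) * W_proto
    incoherent = np.sqrt(1.0 - ratio) * g_order * W_decorr
    return coherent + incoherent               # per-band energy correction would follow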

The above process may be applied for other spherical harmonics, such as the X and Z components, or second- or higher-order components, except that the gain with respect to azimuth (and other possible parameters) depends on the spherical harmonic signal being synthesized. For example, the gain applied to the W prototype to generate the coherent part of the X dipole is cos(θ) sqrt(r). For frequencies other than those between B_4 and B_5, the decorrelation, ratio processing and energy correction may be the same as determined above for the Y component.

Other parameters, such as elevation, spread coherence and surround coherence, may be considered in the above process. The value of the spread coherence parameter can be from 0 to 1. A spread coherence value of 0 represents a point source; in other words, when a multi-speaker system is used to reproduce the audio signal, the sound should be reproduced with as few speakers as possible (e.g., only the center speaker when the direction is the center). As the value of the spread coherence increases, more energy is spread to the speakers around the center speaker, until at the value 0.5 the energy is evenly spread between the center speaker and the neighboring speakers. As the spread coherence value increases above 0.5, the energy in the center speaker decreases, until at the value 1 there is no energy in the center speaker and all the energy is in the neighboring speakers. The value of the surround coherence parameter is from 0 to 1. A value of 1 means that there is coherence between all (or nearly all) speaker channels. A value of 0 means that there is no coherence between any (or even nearly any) of the speaker channels. This is further explained in GB application No 1718341.9 and PCT application PCT/FI 2018/050788.

For example, increased surround coherence can be achieved by reducing the synthesized ambient energy in the spherical harmonic components, and elevation can be taken into account by adding elevation-dependent gains according to the definition of the Ambisonic patterns when generating the coherent part.

If T(n) = "downmix" or T(n) = "coincident", a prototype of the Y signal can be created as follows:

Y_proto(b,n) = S_0(b,n) - S_1(b,n)

In this case, no phase shift is required, since it can be assumed that there is usually no significant delay between the original audio signals with these signal types. With respect to the "mixed signal" block, if T(n) = "coincident", the Y and W prototypes can be used directly for the Y and W outputs, possibly after a gain is applied (according to the actual directivity patterns). If T(n) = "downmix", Y_proto(b,n) and W_proto(b,n) cannot be used directly as Y(b,n) and W(b,n); instead, the energy in the frequency band may need to be corrected to the desired target, as determined for the case T(n) = "spaced" (note that the omnidirectional component has a spatial gain of 1 regardless of the arrival angle of the sound).

For other spherical harmonics (such as X and Z), it is not possible to create prototypes that can replicate the target signal well, because typical downmix signals are oriented on the left-right axis rather than the front-back X-axis or the top-bottom Z-axis. In some embodiments, therefore, the approach is to use a prototype of the omni-directional signal, e.g.,

X_proto(b,n) = W_proto(b,n)

Z_proto(b,n) = W_proto(b,n)

Similarly, W_proto(b,n) is also used for the higher harmonics for the same reason. In this case, the signal mixer and decorrelator may process the signal for these spherical harmonic components in the same way as for T(n) = "spaced".

In some cases, the transmitted audio signal type T(n) may change during audio playback (e.g., due to an actual change in the signal type, or a defect in the automatic type detection). To avoid artifacts due to abrupt changes in type, the prototype signal may be interpolated in some embodiments. This can be achieved, for example, by simply linearly interpolating from a prototype signal according to the old type to a prototype signal according to the new type.
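A minimal sketch of the linear cross-fade mentioned above; the fade length and function name are assumptions made for the example.

import numpy as np

def interpolate_prototypes(proto_old, proto_new, num_frames):
    """Linearly interpolate from the old-type prototype to the new-type prototype.

    proto_old, proto_new: T/F prototypes of shape (num_bins, num_frames).
    The cross-fade runs over the given number of frames to avoid switching artifacts.
    """
    w = np.linspace(0.0, 1.0, num_frames)[None, :]   # weight 0 -> old type, 1 -> new type
    return (1.0 - w) * proto_old + w * proto_new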

The output of the signal mixer is the resulting time-frequency domain Ambisonic signal, which is forwarded to an inverse T/F converter 707.

In some embodiments, MASA-to-Ambisonic signal converter 203 includes an inverse T/F transformer 707 configured to convert the signal to the time domain. Time domain Ambisonic signal 906 is the output from the MASA to Ambisonic converter.

With respect to fig. 8, an overview of the operation of the apparatus shown in fig. 7 is shown.

Thus, in some embodiments, the first operation is to extract and/or decode the transport audio signal and metadata from the MASA stream (or bitstream), as shown in step 801 in fig. 8.

The next operation may be to perform a time-frequency domain transform on the transmitted audio signal, as shown in step 803 in fig. 8.

Further, as shown in step 805 of fig. 8, the method includes creating a prototype audio signal based on the time-frequency domain transmission signal and further based on the transmission audio signal type (and further based on the additional parameters).

As shown in step 807 of fig. 8, in some embodiments, the method includes applying decorrelation to the time-frequency prototype audio signal.

Further, the decorrelated time-frequency prototype audio signal and the time-frequency prototype audio signal may be mixed based on the spatial metadata and the transmission audio signal type, as shown in step 809 in fig. 8.

The mixed signal may then be inverse time-frequency transformed, as shown in step 811 of fig. 8.

Further, as shown in step 813 of fig. 8, a time domain signal may be output.

Fig. 9 shows a schematic diagram of an example decoder suitable for implementing some embodiments. Example embodiments may be implemented, for example, within the example "demultiplexer/decoder/synthesizer" block 133 shown in fig. 1. In this example, the input is a Metadata Assisted Spatial Audio (MASA) stream containing two audio channels and spatial metadata. However, as discussed herein, the input format may be any suitable metadata-assisted spatial audio format.

The (MASA) bitstream is forwarded to the transmit audio signal type determiner 201. The transmission audio signal type determiner 201 is configured to determine a transmission audio signal type 202 and possibly some additional parameters 204 (e.g. microphone distance) based on the bitstream. The determined parameters are forwarded to a MASA-to-multi-channel audio signal converter 903. In some embodiments, the transmission audio signal type determiner 201 is the same transmission audio signal type determiner 201 as described above with respect to fig. 2, or may be a separate instance of the transmission audio signal type determiner 201 that is configured to operate in a similar manner to the transmission audio signal type determiner 201 described above with respect to the example shown in fig. 2.

The MASA-to-multi-channel audio signal converter 903 is configured to receive the bitstream and the transmission audio signal type 202 (and possibly some additional parameters 204) and to convert the MASA stream into a multi-channel audio signal (such as 5.1) based on the determined transmission audio signal type 202 (and possibly some additional parameters 204).

The example operations shown in fig. 9 are summarized in the flowchart shown in fig. 10.

As shown in step 301 of fig. 10, the first operation is to receive or obtain a bit stream (MASA stream).

The next operation is to determine the transmit audio signal type (and to generate a type signal or indicator and possibly other additional parameters) based on the bitstream, as shown in step 303 of fig. 10.

After the transmit audio signal type has been determined, the next operation is to convert the bitstream (MASA stream) into a multi-channel audio signal (such as 5.1) based on the determined transmit audio signal type, as shown in step 305 of fig. 10.

Fig. 11 shows an example MASA-to-multi-channel audio signal converter 903 in more detail. The MASA-to-multi-channel audio signal converter 903 is configured to receive a MASA stream (bitstream), the transmission audio signal type 202 and possibly the additional parameters 204, and to convert the MASA stream into a multi-channel audio signal based on the determined transmission audio signal type.

The MASA-to-multi-channel audio signal converter 903 includes a transport audio signal and spatial metadata extractor/decoder 501, which is configured to receive the MASA stream and output the transport audio signals 502 and the spatial metadata 522 in the same manner as in the transport audio signal type determiner shown in fig. 5 and discussed herein. In some embodiments, the extractor/decoder 501 is the extractor/decoder from the previously described transmitted audio signal type determiner, or is a separate instance of an extractor/decoder. The resulting transmission audio signals 502 may be forwarded to a time/frequency transformer 503. Further, the resulting spatial metadata 522 may be forwarded to the target signal characteristics determiner 1101.

In some embodiments, the MASA-to-multi-channel audio signal converter 903 comprises a time/frequency transformer 503. The time/frequency transformer 503 is configured to receive the transmission audio signals 502 and convert them into the time-frequency domain. Suitable transforms include, for example, the short-time Fourier transform (STFT) and complex-modulated quadrature mirror filterbanks (QMF). The resulting signals are denoted S_i(b,n), where i is the channel index, b is the frequency bin index, and n is the time index. This block may be omitted if the output of the audio extraction and/or decoding is already in the time-frequency domain, or alternatively it may contain a transform from one time-frequency domain representation to another. The T/F domain transmission audio signals 504 may be forwarded to the prototype signal creator 1111. In some embodiments, the time/frequency transformer 503 is the same time/frequency transformer as in the transmit audio signal type determiner or the MASA to Ambisonic converter, or a separate instance.

In some embodiments, the MASA-to-multi-channel audio signal converter 903 comprises a prototype signal creator 1111. The prototype signal creator 1111 is configured to receive the T/F domain transmission audio signal 504, the transmission audio signal type 202 and possibly the additional parameters 204. The T/F prototype signal 1112 may in turn be output to a signal mixer 1105 and a decorrelator 1103.

As an example of the operation with respect to the prototype signal creator 1111, rendering of a 5.1 multi-channel audio signal configuration is described.

In this example, the prototype signal for the left side (left front and left surround) output channel may be created as:

L_f,proto(b,n) = L_s,proto(b,n) = S_0(b,n)

the prototype signal for the right side output (right front and right surround) channel can be created as:

R_f,proto(b,n) = R_s,proto(b,n) = S_1(b,n)

Thus, for output channels on either side of the mid-plane, the prototype signal may directly use the corresponding transmitted audio signal.

For the center output channel, the prototype audio signal should contain energy from both the left and right sides, as it may be used for panning towards either side. Thus, as in the case of Ambisonic rendering, the prototype signal can be created like an omnidirectional channel; in other words, if T(n) = "spaced",

C_proto(b,n) = S_0(b,n), b > B_3

In some embodiments, if T(n) = "downmix" or T(n) = "coincident", the prototype signal creator may generate the prototype center audio channel as:

C_proto(b,n) = S_0(b,n) + S_1(b,n)
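A non-limiting sketch collecting the 5.1 prototype assignments above; the low-frequency averaging of the centre prototype in the "spaced" case mirrors the omnidirectional prototype by assumption, and the channel labels and function name are illustrative only.

import numpy as np

def prototypes_51(S0, S1, transport_type, B3):
    """Prototype signals for a 5.1 target layout from two T/F transport channels."""
    protos = {
        "Lf": S0, "Ls": S0,        # left front / left surround reuse S_0
        "Rf": S1, "Rs": S1,        # right front / right surround reuse S_1
    }
    if transport_type == "spaced":
        C = S0.copy()
        C[:B3 + 1, :] = 0.5 * (S0[:B3 + 1, :] + S1[:B3 + 1, :])   # assumed low-band average
    else:                          # "downmix" or "coincident"
        C = S0 + S1
    protos["C"] = C
    return protos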

in some embodiments, the MASA-to-multi-channel audio signal converter 903 comprises a decorrelator 1103. Decorrelator 1103 is configured to receive T/F prototype signal 1112, apply decorrelation, and output decorrelated T/F prototype signal 1104 to signal mixer 1105. In some embodiments, decorrelator 1103 is optional.

In some embodiments, the MASA-to-multi-channel audio signal converter 903 comprises a target signal property determiner 1101. In some embodiments, the target signal characteristic determiner 1101 is configured to generate a target covariance matrix (target signal characteristics) based on the spatial metadata and the overall estimate of the signal energy in the frequency band. In some embodiments, this energy estimate may be the sum of the transmitted signal energies in the frequency bands. This target covariance matrix (target signal characteristics) determination may be performed in a similar manner as provided by patent application GB 1718341.9.

The target signal characteristics 1102 may in turn be passed to a signal mixer 1105.

In some embodiments, the MASA-to-multi-channel audio signal converter 903 comprises a signal mixer 1105. The signal mixer 1105 is configured to measure the covariance matrix of the prototype signals and to formulate a mixing scheme based on the estimated (prototype signal) covariance matrix and the target covariance matrix. In some embodiments, the mixing scheme may be similar to the scheme described in GB 1718341.9. The mixing scheme is applied to the prototype signals and the decorrelated prototype signals, and the resulting signal has frequency-band characteristics based on the target signal characteristics, in other words, based on the determined target covariance matrix.
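As a non-limiting sketch, the per-band covariance matrix of the prototype signals that the signal mixer measures could be estimated as follows; the array shapes and function name are assumptions made for the example.

import numpy as np

def band_covariance(S, bins):
    """Covariance matrix of a multichannel T/F signal within one frequency band.

    S: complex array of shape (num_channels, num_bins, num_frames).
    bins: indices of the frequency bins belonging to the band.
    """
    X = S[:, bins, :].reshape(S.shape[0], -1)     # stack the band's bins and frames
    return X @ X.conj().T / X.shape[1]            # average outer product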

In some embodiments, the MASA-to-multi-channel audio signal converter 903 comprises an inverse T/F transformer 707 configured to convert the signal to the time domain. The time domain multi-channel audio signal is the output of a MASA to multi-channel audio signal converter.

With respect to fig. 12, an overview of the operation of the apparatus shown in fig. 11 is shown.

Thus, as shown in step 801 in fig. 12, in some embodiments, the first operation is to extract and/or decode the transport audio signal and metadata from the MASA stream (or bitstream).

The next operation may be to perform a time-frequency domain transform on the transmitted audio signal, as shown in step 803 in fig. 12.

Further, as shown in step 1205 in fig. 12, the method includes creating a prototype audio signal based on the time-frequency domain transmission signal and further based on the transmission audio signal type (and further based on the additional parameters).

As shown in step 1207 of fig. 12, in some embodiments, the method includes applying decorrelation to the time-frequency prototype audio signal.

Further, as shown in step 1208 in fig. 12, the target signal characteristics may be determined based on the time-frequency domain transmission audio signal and the spatial metadata (to generate a covariance matrix of the target signal).

As shown in step 1209 of fig. 12, the covariance matrix of the prototype audio signal may be measured.

Further, as shown in step 1210 of fig. 12, the decorrelated time-frequency prototype audio signal and the time-frequency prototype audio signal may be mixed based on the target signal characteristics.

As shown in step 1211 of fig. 12, the mixed signal may be further subjected to inverse time-frequency transform.

Further, as shown in step 1213 of fig. 12, a time domain signal may be output.

Fig. 13 illustrates a schematic diagram of another example decoder suitable for implementing some embodiments. In other embodiments, a similar method may be implemented in devices other than a decoder, for example as part of an encoder. Example embodiments may be implemented, for example, within an (IVAS) "demultiplexer/decoder/synthesizer" block 133 such as that shown in fig. 1. In this example, the input is a Metadata Assisted Spatial Audio (MASA) stream containing two audio channels and spatial metadata. However, as discussed herein, the input format may be any suitable metadata-assisted spatial audio format.

The (MASA) bitstream is forwarded to the transmit audio signal type determiner 201. The transmission audio signal type determiner 201 is configured to determine a transmission audio signal type 202 and possibly some additional parameters 204 (an example of such additional parameters is microphone distance) based on the bitstream. The determined parameters are forwarded to the down-mixer 1303. In some embodiments, the transmission audio signal type determiner 201 is the same as the transmission audio signal type determiner 201 described above, or may be a separate instance of the transmission audio signal type determiner 201 that is configured to operate in a similar manner as the transmission audio signal type determiner 201 described above.

The down-mixer 1303 is configured to receive the bitstream and the transmission audio signal type 202 (and possibly some additional parameters 204) and to down-mix the MASA stream from 2 transmission audio signals into 1 transmission audio signal based on the determined transmission audio signal type 202 (and possibly some additional parameters 204). The resulting MASA stream 1306 is then output.

The operation of the example shown in fig. 13 is summarized in the flowchart shown in fig. 14.

As shown in step 301 of fig. 14, the first operation is to receive or obtain a bit stream (MASA stream).

The next operation is to determine the transmit audio signal type (and to generate a type signal or indicator and possibly other additional parameters) based on the bitstream, as shown in step 303 of fig. 14.

After the transmit audio signal type has been determined, the next operation is to down-mix the MASA stream from the 2 transmit audio signals into 1 transmit audio signal based on the determined transmit audio signal type 202 (and possibly additional parameters 204), as shown in step 1405 in fig. 14.

Fig. 15 shows an example down-mixer 1303 in more detail. The down-mixer 1303 is configured to receive the MASA stream (bitstream), the transmission audio signal type 202 and possibly the additional parameters 204 and to down-mix the two transmission audio signals into one transmission audio signal based on the determined transmission audio signal type.

The down-mixer 1303 includes a transmission audio signal and spatial metadata extractor/decoder 501, which is configured to receive the MASA stream and output the transport audio signals 502 and the spatial metadata 522 in the same manner as in the transport audio signal type determiner discussed herein. In some embodiments, the extractor/decoder 501 is the previously described extractor/decoder or a separate instance of an extractor/decoder. The resulting transmission audio signals 502 may be forwarded to a time/frequency transformer 503. Further, the resulting spatial metadata 522 may be forwarded to a signal multiplexer 1507.

In some embodiments, the down-mixer 1303 includes a time/frequency transformer 503. The time/frequency transformer 503 is configured to receive the transmission audio signals 502 and convert them into the time-frequency domain. Suitable transforms include, for example, the short-time Fourier transform (STFT) and complex-modulated quadrature mirror filterbanks (QMF). The resulting signals are denoted S_i(b,n), where i is the channel index, b is the frequency bin index, and n is the time index. This block may be omitted if the output of the audio extraction and/or decoding is already in the time-frequency domain, or alternatively it may contain a transform from one time-frequency domain representation to another. The T/F domain transmission audio signals 504 may be forwarded to a prototype signal creator 1511. In some embodiments, the time/frequency transformer 503 is the same time/frequency transformer as previously described, or a separate instance.

In some embodiments, the down-mixer 1303 includes a prototype signal creator 1511. The prototype signal creator 1511 is configured to receive the T/F domain transmission audio signals 504, the transmission audio signal type 202 and possibly the additional parameters 204. The T/F prototype signal 1512 may in turn be output to a prototype energy determiner 1503 and a prototype-to-target energy equalizer 1505.

In some embodiments, the prototype signal creator 1511 is configured to create a prototype signal for a mono transmission audio signal using two transmission audio signals based on the received transmission audio signal type. For example, the following formula can be used.

If T(n) = "spaced",

M_proto(b,n) = S_0(b,n)

If T(n) = "downmix" or T(n) = "coincident",

M_proto(b,n) = S_0(b,n) + S_1(b,n)

in some embodiments, the down-mixer 1303 includes a target energy determiner 1501. The target energy determiner 1501 is configured to receive the T/F domain transmitted audio signal 504 and generate a target energy value as the sum of the energies of the transmitted audio signals:

E_target(b,n) = |S_0(b,n)|^2 + |S_1(b,n)|^2

The target energy value may then be passed to the prototype-to-target energy equalizer 1505.

In some embodiments, the down-mixer 1303 includes a prototype energy determiner 1503. The prototype energy determiner 1503 is configured to receive the T/F prototype signal 1512 and determine an energy value, for example:

E_proto(b,n) = |M_proto(b,n)|^2

The prototype energy value may then be passed to the prototype-to-target energy equalizer 1505.

In some embodiments, the down-mixer 1303 includes a prototype-to-target energy equalizer 1505. In some embodiments, the prototype-to-target energy equalizer 1505 is configured to receive the T/F prototype signal 1512, the prototype energy value, and the target energy value. In some embodiments, the equalizer 1505 is configured to first smooth the energies over time, for example using the following equation:

E′_x(b,n) = a_5 E_x(b,n) + b_5 E′_x(b,n-1)

where a_5 and b_5 are smoothing coefficients (e.g., a_5 = 0.1, b_5 = 1 - a_5). In turn, the equalizer 1505 is configured to determine the equalization gain, for example, as the square root of the ratio of the smoothed target energy to the smoothed prototype energy:

g_eq(b,n) = sqrt(E′_target(b,n) / E′_proto(b,n))

These gains may, in turn, be used to equalize the prototype signal, e.g.,

M(b,n) = g_eq(b,n) M_proto(b,n)

the equalized prototype signal is passed to an inverse T/F converter 707.

In some embodiments, the down-mixer 1303 includes an inverse T/F transformer 707 configured to convert the output of the equalizer to the time domain. The time-domain equalized audio signal (mono signal) 1510 is in turn passed to a transmit audio signal and spatial metadata multiplexer 1507.

In some embodiments, the down-mixer 1303 includes a transmit audio signal and spatial metadata multiplexer 1507. The multiplexer 1507 is configured to receive the spatial metadata 522 and the mono audio signal 1510 and multiplex them to regenerate a suitable output format (e.g., a MASA stream with only one transmit audio signal) 1506. In some embodiments, the input mono audio signal takes the form of pulse code modulation (PCM). In such embodiments, the signal may be encoded and multiplexed. In some embodiments, the multiplexing may be omitted and the mono audio signal and the spatial metadata are conveyed directly to the audio encoder.

In some embodiments, the output of the apparatus shown in fig. 15 is a mono PCM audio signal 1510, where the spatial metadata is discarded.

In some embodiments, other parameters may be determined; for example, in some embodiments, when the type is "spaced", the microphone separation distance may be estimated.

With respect to fig. 16, an example operation of the apparatus shown in fig. 15 is shown.

Thus, as shown in step 1601 in fig. 16, in some embodiments, the first operation is to extract and/or decode the transport audio signal and metadata from the MASA stream (or bitstream).

The next operation may be to perform a time-frequency domain transform on the transmitted audio signal, as shown in step 1603 in fig. 16.

Further, as shown in step 1605 in fig. 16, the method includes creating a prototype audio signal based on the time-frequency domain transmission signal and further based on the transmission audio signal type (and further based on the additional parameters).

Further, as shown at step 1604 in fig. 16, in some embodiments, the method is configured to generate, determine or calculate a target energy value based on the transformed transmission audio signal.

Further, as shown in step 1606 in fig. 16, in some embodiments, the method is configured to generate, determine, or calculate a prototype audio signal energy value based on the prototype audio signal.

After the energies have been determined, the method may also equalize the prototype audio signal to match the target audio signal energy, as shown in step 1607 in fig. 16.

The equalized prototype signal (single channel signal) may then be inverse time-frequency domain transformed to generate a time-domain mono signal, as shown in step 1609 in fig. 16.

The time domain mono audio signal may then be (optionally encoded and) multiplexed with spatial metadata, as shown in step 1610 in fig. 16.

Further, as shown in step 1611 of fig. 16, the multiplexed audio signal may be output (as a MASA data stream).

As mentioned above, the illustrated block diagram is only one example of a possible implementation. Other practical implementations may differ from the examples described above. For example, the implementation may be without a separate T/F converter.

Furthermore, in some embodiments, any suitable bitstream utilizing audio channels and (spatial) metadata may be used instead of having an input MASA stream as shown above. Furthermore, in some embodiments, the IVAS codec may be replaced by any other suitable codec (e.g., a codec with an audio channel and an operation mode of spatial metadata).

In some embodiments, the transmitted audio signal type determiner may be used to estimate other parameters besides the transmitted audio signal type. For example, the spacing of the microphones may be estimated. The microphone spacing may be an example of the possible additional parameters 204. In some embodiments, this may be accomplished by inspecting the frequencies of the local maxima and minima of E_sum(b,n) and E_sub(b,n), determining a time delay between the microphones based on these frequencies, and estimating the distance based on the delay and the estimated direction of arrival (available in the spatial metadata). Other methods for estimating the delay between two signals also exist.
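Purely as an assumption-laden illustration of the idea above: if the first local minimum of the summed-signal energy (a comb-filter notch) is at frequency f_notch, the inter-microphone delay can be taken as 1/(2*f_notch), and for a left-right pair the delay relates to the spacing via the direction of arrival. The geometry, the notch model and all names below are assumptions for the example, not part of the described apparatus.

import numpy as np

def estimate_mic_distance(f_notch_hz, azimuth_rad, c=343.0):
    """Rough microphone-spacing estimate from a comb-filter notch and the DOA."""
    tau = 1.0 / (2.0 * f_notch_hz)          # assumed inter-microphone delay in seconds
    sin_az = abs(np.sin(azimuth_rad))
    if sin_az < 1e-3:                       # sound near broadside: delay carries no spacing info
        return None
    return tau * c / sin_az                 # spacing in metres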

With respect to fig. 17, an example electronic device that may be used as an analysis or synthesis device is shown. The device may be any suitable electronic device or apparatus. For example, in some embodiments, device 1700 is a mobile device, a user device, a tablet computer, a computer, an audio playback device, and/or the like.

In some embodiments, the apparatus 1700 includes at least one processor or central processing unit 1707. The processor 1707 may be configured to execute various program code, such as the methods described herein.

In some embodiments, device 1700 includes memory 1711. In some embodiments, at least one processor 1707 is coupled to memory 1711. The memory 1711 may be any suitable storage device. In some embodiments, the memory 1711 includes program code portions for storing program code that may be implemented on the processor 1707. Furthermore, in some embodiments, the memory 1711 may also include a store data portion for storing data (e.g., data that has been or is to be processed according to embodiments described herein). The implemented program code stored in the program code portion and the data stored in the data portion may be retrieved by the processor 1707 via a memory-processor coupling, as desired.

In some embodiments, device 1700 includes a user interface 1705. In some embodiments, a user interface 1705 may be coupled to the processor 1707. In some embodiments, the processor 1707 may control the operation of the user interface 1705 and receive input from the user interface 1705. In some embodiments, user interface 1705 may enable a user to enter commands to device 1700, for example, via a keypad. In some embodiments, user interface 1705 may enable a user to obtain information from device 1700. For example, user interface 1705 may include a display configured to display information from device 1700 to a user. In some embodiments, user interface 1705 may include a touch screen or touch interface, which can both enable information to be input into device 1700 and display information to a user of device 1700. In some embodiments, the user interface 1705 may be a user interface for communicating with a position determiner as described herein.

In some embodiments, device 1700 includes input/output ports 1709. In some embodiments, input/output port 1709 comprises a transceiver. In such embodiments, the transceiver may be coupled to the processor 1707 and configured to enable communication with other apparatuses or electronic devices, e.g., via a wireless communication network. In some embodiments, the transceiver or any suitable transceiver or transmitter and/or receiver components may be configured to communicate with other electronic devices or apparatuses via wired or wired coupling.

The transceiver may communicate with other devices using any suitable known communication protocol. For example, in some embodiments, the transceiver may use a suitable universal mobile telecommunications system (UMTS) protocol, a wireless local area network (WLAN) protocol such as IEEE 802.X, a suitable short-range radio frequency communication protocol such as Bluetooth, or an infrared data communication path (IrDA).

Transceiver input/output port 1709 may be configured to receive signals and, in some embodiments, determine parameters as described herein by using processor 1707 executing suitable code.

In some embodiments, the device 1700 may be used as at least a portion of a synthesis device. The input/output port 1709 may be coupled to any suitable audio output, for example to a multi-channel speaker system and/or headphones (which may be head-tracked or non-tracked headphones), and so on.

In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the invention may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well known that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.

Embodiments of the invention may be implemented by computer software executable by a data processor of a mobile device, such as in a processor entity, or by hardware, or by a combination of software and hardware. Also in this regard it should be noted that any block of the logic flows as in the figures may represent a program step, or an interconnected set of logic circuits, blocks and functions, or a combination of a program step and a logic circuit, block and function. The software may be stored on physical media such as memory chips or memory blocks implemented within the processor, magnetic media such as hard or floppy disks, and optical media such as DVDs and data variant CDs thereof.

The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), gate level circuits, and processors based on a multi-core processor architecture, as non-limiting examples.

Embodiments of the invention may be practiced in various components such as integrated circuit modules. The design of integrated circuits is generally a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.

Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California, may automatically route conductors and locate components on a semiconductor chip using well-established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like), may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.

The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiments of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention, as defined in the appended claims.
